Art for social causes. Art can be used to raise awareness for a large variety of causes. A number of art activities were aimed at raising awareness of autism, cancer, human trafficking, and a variety of other topics, such as ocean conservation, human rights in Darfur, murdered and missing Aboriginal women, elder abuse, and pollution. Trashion, using trash to make fashion, practiced by artists such as Marina DeBris, is one example of using art to raise awareness about pollution.

Art for psychological and healing purposes. Art is also used by art therapists, psychotherapists and clinical psychologists as art therapy. The Diagnostic Drawing Series, for example, is used to determine the personality and emotional functioning of a patient. The end product is not the principal goal in this case; rather, a process of healing through creative acts is sought. The resultant piece of artwork may also offer insight into the troubles experienced by the subject and may suggest suitable approaches to be used in more conventional forms of psychiatric therapy.

Art for propaganda or commercialism. Art is often utilized as a form of propaganda, and thus can be used to subtly influence popular conceptions or mood. In a similar way, art that tries to sell a product also influences mood and emotion. In both cases, the purpose of art here is to subtly manipulate the viewer into a particular emotional or psychological response toward a particular idea or object.

Art as a fitness indicator. It has been argued that the ability of the human brain far exceeds what was needed for survival in the ancestral environment. One evolutionary psychology explanation for this is that the human brain and associated traits (such as artistic ability and creativity) are the human equivalent of the peacock's tail. The purpose of the male peacock's extravagant tail has been argued to be to attract females (see also Fisherian runaway and handicap principle). According to this theory, superior execution of art was evolutionarily important because it attracted mates.

The functions of art described above are not mutually exclusive, as many of them may overlap. For example, art for the purpose of entertainment may also seek to sell a product, i.e. the movie or video game.
Public access

Since ancient times, much of the finest art has represented a deliberate display of wealth or power, often achieved by using massive scale and expensive materials. Much art has been commissioned by political rulers or religious establishments, with more modest versions only available to the most wealthy in society. Nevertheless, there have been many periods where art of very high quality was available, in terms of ownership, across large parts of society, above all in cheap media such as pottery, which persists in the ground, and perishable media such as textiles and wood. In many different cultures, the ceramics of indigenous peoples of the Americas are found in such a wide range of graves that they were clearly not restricted to a social elite, though other forms of art may have been. Reproductive methods such as moulds made mass-production easier, and were used to bring high-quality Ancient Roman pottery and Greek Tanagra figurines to a very wide market. Cylinder seals were both artistic and practical, and very widely used by what can be loosely called the middle class in the Ancient Near East. Once coins were widely used, these also became an art form that reached the widest range of society. Another important innovation came in the 15th century in Europe, when printmaking began with woodcuts, mostly religious, that were often very small and hand-colored, and affordable even by peasants, who glued them to the walls of their homes. Printed books were initially very expensive, but fell steadily in price until by the 19th century even the poorest could afford some with printed illustrations. Popular prints of many different sorts have decorated homes and other places for centuries. In 1661, the city of Basel, in Switzerland, opened the first public museum of art in the world, the Kunstmuseum Basel. Today, its collection is distinguished by an impressively wide historic span, from the early 15th century up to the immediate present.
Its various areas of emphasis give it international standing as one of the most significant museums of its kind. These encompass paintings and drawings by artists active in the Upper Rhine region between 1400 and 1600, and art of the 19th to 21st centuries.

Public buildings and monuments, secular and religious, by their nature normally address the whole of society, including visitors as viewers, and display to the general public has long been an important factor in their design. Egyptian temples are typical in that the largest and most lavish decoration was placed on the parts that could be seen by the general public, rather than the areas seen only by the priests. Many areas of royal palaces, castles and the houses of the social elite were often generally accessible, and large parts of the art collections of such people could often be seen, either by anybody, or by those able to pay a small price, or those wearing the correct clothes, regardless of who they were, as at the Palace of Versailles, where the appropriate extra accessories (silver shoe buckles and a sword) could be hired from shops outside. Special arrangements were made to allow the public to see many royal or private collections placed in galleries, as with the Orleans Collection, mostly housed in a wing of the Palais Royal in Paris, which could be visited for most of the 18th century. In Italy the art tourism of the Grand Tour became a major industry from the Renaissance onwards, and governments and cities made efforts to make their key works accessible. The British Royal Collection remains distinct, but large donations such as the Old Royal Library were made from it to the British Museum, established in 1753. The Uffizi in Florence opened entirely as a gallery in 1765, though this function had been gradually taking the building over from the original civil servants' offices for a long time before.
The building now occupied by the Prado in Madrid was built before the French Revolution for the public display of parts of the royal art collection, and similar royal galleries open to the public existed in Vienna, Munich and other capitals. The opening of the Musée du Louvre during the French Revolution (in 1793) as a public museum for much of the former French royal collection certainly marked an important stage in the development of public access to art, transferring ownership to a republican state, but it was a continuation of trends already well established. Most modern public museums and art education programs for children in schools can be traced back to this impulse to have art available to everyone. However, museums do not only make art available; they also influence the way art is perceived by its audience, as studies have found. Thus, the museum itself is not merely a neutral stage for the presentation of art, but plays an active and vital role in the overall perception of art in modern society. Museums in the United States tend to be gifts from the very rich to the masses. (The Metropolitan Museum of Art in New York City, for example, was created by John Taylor Johnston, a railroad executive whose personal art collection seeded the museum.) But despite all this, at least one of the important functions of art in the 21st century remains that of a marker of wealth and social status.

There have been attempts by artists to create art that cannot be bought by the wealthy as a status object. One of the prime original motivators of much of the art of the late 1960s and 1970s was to create art that could not be bought and sold. It is "necessary to present something more than mere objects", said the major postwar German artist Joseph Beuys. This time period saw the rise of such things as performance art, video art, and conceptual art.
The idea was that if the artwork was a performance that would leave nothing behind, or was simply an idea, it could not be bought and sold. "Democratic precepts revolving around the idea that a work of art is a commodity impelled the aesthetic innovation which germinated in the mid-1960s and was reaped throughout the 1970s. Artists broadly identified under the heading of Conceptual art ... substituting performance and publishing activities for engagement with both the material and materialistic concerns of painted or sculptural form ... [have] endeavored to undermine the art object qua object." In the decades since, these ideas have been somewhat lost as the art market has learned to sell limited edition DVDs of video works, invitations to exclusive performance art pieces, and the objects left over from conceptual pieces. Many of these performances create works that are only understood by the elite who have been educated as to why an idea or video or piece of apparent garbage may be considered art. The marker of status becomes understanding the work instead of necessarily owning it, and the artwork remains an upper-class activity. "With the widespread use of DVD recording technology in the early 2000s, artists, and the gallery system that derives its profits from the sale of artworks, gained an important means of controlling the sale of video and computer artworks in limited editions to collectors."

Controversies

Art has long been controversial, that is to say disliked by some viewers, for a wide variety of reasons, though most pre-modern controversies are dimly recorded, or completely lost to a modern view. Iconoclasm is the destruction of art that is disliked for a variety of reasons, including religious ones. Aniconism is a general dislike of either all figurative images, or often just religious ones, and has been a thread in many major religions.
It has been a crucial factor in the history of Islamic art, where depictions of Muhammad remain especially controversial. Much art has been disliked purely because it depicted or otherwise stood for unpopular rulers, parties or other groups. Artistic conventions have often been conservative and taken very seriously by art critics, though often much less so by a wider public. The iconographic content of art could cause controversy, as with late medieval depictions of the new motif of the Swoon of the Virgin in scenes of the Crucifixion of Jesus. The Last Judgment by Michelangelo was controversial for various reasons, including breaches of decorum through nudity and the Apollo-like pose of Christ. The content of much formal art through history was dictated by the patron or commissioner rather than just the artist, but with the advent of Romanticism, and economic changes in the production of art, the artist's vision became the usual determinant of the content of their art, increasing the incidence of controversies, though often reducing their significance. Strong incentives for perceived originality and publicity also encouraged artists to court controversy. Théodore Géricault's Raft of the Medusa (c. 1820) was in part a political commentary on a recent event. Édouard Manet's Le Déjeuner sur l'Herbe (1863) was considered scandalous not because of the nude woman, but because she is seated next to men fully dressed in the clothing of the time, rather than in robes of the antique world. John Singer Sargent's Madame Pierre Gautreau (Madame X) (1884) caused a controversy over the reddish pink used to color the woman's ear lobe, considered far too suggestive and supposedly ruining the high-society model's reputation. The gradual abandonment of naturalism and the depiction of realistic representations of the visual appearance of subjects in the 19th and 20th centuries led to a rolling controversy lasting for over a century.
In the 20th century, Pablo Picasso's Guernica (1937) used arresting cubist techniques and stark monochromatic oils to depict the harrowing consequences of a contemporary bombing of a small, ancient Basque town. Leon Golub's Interrogation III (1981) depicts a female nude, hooded detainee strapped to a chair, her legs open to reveal her sexual organs, surrounded by two tormentors dressed in everyday clothing. Andres Serrano's Piss Christ (1989) is a photograph of a crucifix, sacred to the Christian religion and representing Christ's sacrifice and final suffering, submerged in a glass of the artist's own urine. The resulting uproar led to comments in the United States Senate about public funding of the arts.

Theory

Before Modernism, aesthetics in Western art was greatly concerned with achieving the appropriate balance between different aspects of realism or truth to nature and the ideal; ideas as to what the appropriate balance is have shifted to and fro over the centuries. This concern is largely absent in other traditions of art. The aesthetic theorist John Ruskin, who championed what he saw as the naturalism of J. M. W. Turner, saw art's role as the communication by artifice of an essential truth that could only be found in nature. The definition and evaluation of art has become especially problematic since the 20th century. Richard Wollheim distinguishes three approaches to assessing the aesthetic value of art: the Realist, whereby aesthetic quality is an absolute value independent of any human view; the Objectivist, whereby it is also an absolute value, but is dependent on general human experience; and the Relativist position, whereby it is not an absolute value, but depends on, and varies with, the human experience of different humans.

Arrival of Modernism

The arrival of Modernism in the late 19th century led to a radical break in the conception of the function of art, and then again in the late 20th century with the advent of postmodernism.
Clement Greenberg's 1960 article "Modernist Painting" defines modern art as "the use of characteristic methods of a discipline to criticize the discipline itself". Greenberg originally applied this idea to the Abstract Expressionist movement and used it as a way to understand and justify flat (non-illusionistic) abstract painting. After Greenberg, several important art theorists emerged, such as Michael Fried, T. J. Clark, Rosalind Krauss, Linda Nochlin and Griselda Pollock, among others. Though only originally intended as a way of understanding a specific set of artists, Greenberg's definition of modern art is important to many of the ideas of art within the various art movements of the 20th century and early 21st century. Pop artists like Andy Warhol became both noteworthy and influential through work including and possibly critiquing popular culture, as well as the art world. Artists of the 1980s, 1990s, and 2000s expanded this technique of self-criticism beyond high art to all cultural image-making, including fashion images, comics, billboards and pornography. Duchamp once proposed that art is any activity of any kind: everything. However, the way that only certain activities are classified today as art is a social construction, and there is evidence that there may be an element of truth to this. In The Invention of Art: A Cultural History, Larry Shiner examines the construction of the modern system of the arts, i.e., fine art. He finds evidence that the older system of the arts, before our modern system of fine art, held art to be any skilled human activity; for example, Ancient Greek society did not possess the term art, but techne. Techne can be understood as neither art nor craft, the reason being that the distinctions of art and craft are historical products that came later on in human history. Techne included painting, sculpting and music, but also cooking, medicine, horsemanship, geometry, carpentry, prophecy, and farming.
New Criticism and the "intentional fallacy"

Following Duchamp during the first half of the 20th century, a significant shift toward general aesthetic theory took place, which attempted to apply aesthetic theory across the various forms of art, including the literary arts and the visual arts. This resulted in the rise of the New Criticism school and debate concerning the intentional fallacy. At issue was the question of whether the aesthetic intentions of the artist in creating the work of art, whatever its specific form, should be associated with the criticism and evaluation of the final product, or whether the work should be evaluated on its own merits, independent of the intentions of the artist.

In "The Origin of the Work of Art", Martin Heidegger describes the essence of art in terms of the concepts of being and truth. He argues that art is not only a way of expressing the element of truth in a culture, but the means of creating it and providing a springboard from which "that which is" can be revealed. Works of art are not merely representations of the way things are, but actually produce a community's shared understanding. Each time a new artwork is added to any culture, the meaning of what it is to exist is inherently changed.

Historically, art and artistic skills and ideas have often been spread through trade. An example of this is the Silk Road, where Hellenistic, Iranian, Indian and Chinese influences could mix. Greco-Buddhist art is one of the most vivid examples of this interaction. The meeting of different cultures and worldviews also influenced artistic creation. Examples of this are the multicultural port metropolis of Trieste at the beginning of the 20th century, where James Joyce met writers from Central Europe, and the artistic development of New York City as a cultural melting pot.

Forms, genres, media, and styles

The creative arts are often divided into more specific categories, typically along perceptually distinguishable lines such as media, genre, style, and form. Art form refers to the elements of art that are independent of its interpretation or significance. It covers the methods adopted by the artist and the physical composition of the artwork, primarily non-semantic aspects of the work (i.e., figurae), such as color, contour, dimension, medium, melody, space, texture, and value. Form may also include visual design principles, such as arrangement, balance, contrast, emphasis, harmony, proportion, proximity, and rhythm. In general, there are three schools of philosophy regarding art, focusing respectively on form, content, and context. Extreme Formalism is the view that all aesthetic properties of art are formal (that is, part of the art form).
Philosophers almost universally reject this view and hold that the properties and aesthetics of art extend beyond materials, techniques, and form. Unfortunately, there is little consensus on terminology for these informal properties. Some authors refer to subject matter and content – i.e., denotations and connotations – while others prefer terms like meaning and significance. Extreme Intentionalism holds that authorial intent plays a decisive role in the meaning of a work of art, conveying the content or essential main idea, while all other interpretations can be discarded. It defines the subject as the persons or idea represented, and the content as the artist's experience of that subject. For example, the composition of Napoleon I on his Imperial Throne is partly borrowed from the Statue of Zeus at Olympia. As evidenced by the title, the subject is Napoleon, and the content is Ingres's representation of Napoleon as "Emperor-God beyond time and space". As with extreme formalism, philosophers typically reject extreme intentionalism, because art may have multiple ambiguous meanings and authorial intent may be unknowable and thus irrelevant. Its restrictive interpretation is "socially unhealthy, philosophically unreal, and politically unwise". Finally, the developing theory of post-structuralism studies art's significance in a cultural context, such as the ideas, emotions, and reactions prompted by a work. The cultural context often reduces to the artist's techniques and intentions, in which case analysis proceeds along lines similar to formalism and intentionalism. However, in other cases historical and material conditions may predominate, such as religious and philosophical convictions, sociopolitical and economic structures, or even climate and geography. Art criticism continues to grow and develop alongside art.

Skill and craft

Art can connote a sense of trained ability or mastery of a medium.
Art can also simply refer to the developed and efficient use of a language to convey meaning with immediacy or depth. Art can be defined as an act of expressing feelings, thoughts, and observations. There is an understanding that is reached with the material as a result of handling it, which facilitates one's thought processes. A common view is that the epithet "art", particularly in its elevated sense, requires a certain level of creative expertise by the artist, whether this be a demonstration of technical ability, an originality in stylistic approach, or a combination of these two. Traditionally skill of execution was viewed as a quality inseparable from art and thus necessary for its success; for Leonardo da Vinci, art, neither more nor less than his other endeavors, was a manifestation of skill. Rembrandt's work, now praised for its ephemeral virtues, was most admired by his contemporaries for its virtuosity. At the turn of the 20th century, the adroit performances of John Singer Sargent were alternately admired and viewed with skepticism for their manual fluency, yet at nearly the same time the artist who would become the era's most recognized and peripatetic iconoclast, Pablo Picasso, was completing a traditional academic training at which he excelled. A common contemporary criticism of some modern art occurs along the lines of objecting to the apparent lack of skill or ability required in the production of the artistic object. In conceptual art, Marcel Duchamp's Fountain is among the first examples of pieces wherein the artist used found objects ("ready-made") and exercised no traditionally recognised set of skills. Tracey Emin's My Bed and Damien Hirst's The Physical Impossibility of Death in the Mind of Someone Living follow this example and also manipulate the mass media. Emin slept (and engaged in other activities) in her bed before placing the result in a gallery as a work of art.
Hirst came up with the conceptual design for the artwork but has left most of the eventual creation of many works to employed artisans. Hirst's celebrity is founded entirely on his ability to produce shocking concepts. The actual production in many conceptual and contemporary works of art is a matter of assembly of found objects. However, there are many modernist and contemporary artists who continue to excel in the skills of drawing and painting and in creating hands-on works of art.

Purpose

Art has had a great number of different functions throughout its history, making its purpose difficult to abstract or quantify to any single concept. This does not imply that the purpose of art is "vague", but that it has had many unique, different reasons for being created. Some of these functions of art are provided in the following outline. The different purposes of art may be grouped according to those that are non-motivated, and those that are motivated (Lévi-Strauss).

Non-motivated functions

The non-motivated purposes of art are those that are integral to being human, transcend the individual, or do not fulfill a specific external purpose. In this sense, art, as creativity, is something humans must do by their very nature (i.e., no other species creates art), and is therefore beyond utility.

Basic human instinct for harmony, balance, rhythm. Art at this level is not an action or an object, but an internal appreciation of balance and harmony (beauty), and therefore an aspect of being human beyond utility.

Imitation, then, is one instinct of our nature. Next, there is the instinct for 'harmony' and rhythm, meters being manifestly sections of rhythm. Persons, therefore, starting with this natural gift developed by degrees their special aptitudes, till their rude improvisations gave birth to Poetry. – Aristotle

Experience of the mysterious. Art provides a way to experience one's self in relation to the universe.
This experience may often come unmotivated, as one appreciates art, music or poetry.

The most beautiful thing we can experience is the mysterious. It is the source of all true art and science. – Albert Einstein

Expression of the imagination. Art provides a means to express the imagination in non-grammatical ways that are not tied to the formality of spoken or written language. Unlike words, which come in sequences, each of which has a definite meaning, art provides a range of forms, symbols and ideas with meanings that are malleable.

Jupiter's eagle [as an example of art] is not, like logical (aesthetic) attributes of an object, the concept of the sublimity and majesty of creation, but rather something else—something that gives the imagination an incentive to spread its flight over a whole host of kindred representations that provoke more thought than admits of expression in a concept determined by words. They furnish an aesthetic idea, which serves the above rational idea as a substitute for logical presentation, but with the proper function, however, of animating the mind by opening out for it a prospect into a field of kindred representations stretching beyond its ken. – Immanuel Kant

Ritualistic and symbolic functions. In many cultures, art is used in rituals, performances and dances as a decoration or symbol. While these often have no specific utilitarian (motivated) purpose, anthropologists know that they often serve a purpose at the level of meaning within a particular culture. This meaning is not furnished by any one individual, but is often the result of many generations of change, and of a cosmological relationship within the culture.

Most scholars who deal with rock paintings or objects recovered from prehistoric contexts that cannot be explained in utilitarian terms and are thus categorized as decorative, ritual or symbolic, are aware of the trap posed by the term 'art'.
– Silva Tomaskova

Motivated functions

Motivated purposes of art refer to intentional, conscious actions on the part of the artist or creator. These may be to bring about political change, to comment on an aspect of society, to convey a specific emotion or mood, to address personal psychology, to illustrate another discipline, to (with commercial arts) sell a product, or simply as a form of communication.

Communication. Art, at its simplest, is a form of communication. As most forms of communication have an intent or goal directed toward another individual, this is a motivated purpose. Illustrative arts, such as scientific illustration, are a form of art as communication. Maps are another example. However, the content need not be scientific. Emotions, moods and feelings are also communicated through art.

[Art is a set of] artefacts or images with symbolic meanings as a means of communication. – Steve Mithen

Art as entertainment. Art may seek to bring about a particular emotion or mood, for the purpose of relaxing or entertaining the viewer. This is often the function of the art industries of motion pictures and video games.

The Avant-Garde. Art for political change. One of the defining functions of early 20th-century art has been to use visual images to bring about political change. Art movements that had this goal—Dadaism, Surrealism, Russian constructivism, and Abstract Expressionism, among others—are collectively referred to as the avant-garde arts.

By contrast, the realistic attitude, inspired by positivism, from Saint Thomas Aquinas to Anatole France, clearly seems to me to be hostile to any intellectual or moral advancement. I loathe it, for it is made up of mediocrity, hate, and dull conceit. It is this attitude which today gives birth to these ridiculous books, these insulting plays.
It constantly feeds on and derives strength from the newspapers and stultifies both science and art by assiduously flattering the lowest of tastes; clarity bordering on stupidity, a dog's life. – André Breton (Surrealism) Art as a "free zone", removed from the action of the social censure. Unlike the avant-garde movements, which wanted to erase cultural differences in order to produce new universal values, contemporary art has enhanced its tolerance towards cultural differences as well as its critical and liberating functions (social inquiry, activism, subversion, deconstruction ...), becoming a more open place for research and experimentation. Art for social inquiry, subversion or anarchy. While similar to art for political change, subversive or deconstructivist art may seek to question aspects of society without any specific political goal. In this case, the function of art may be simply to criticize some aspect of society. Graffiti art and other types of street art are graphics and images that are spray-painted or stencilled on publicly viewable walls, buildings, buses, trains, and bridges, usually without permission. Certain art forms, such as graffiti, may also be illegal when they break laws (in this case vandalism). Art for social causes. Art can be used to raise awareness for a large variety of causes. A number of art activities were aimed at raising awareness of autism, cancer, human trafficking, and a variety of other topics, such as ocean conservation, human rights in Darfur, murdered and missing Aboriginal women, elder abuse, and pollution. Trashion, using trash to make fashion, practiced by artists such as Marina DeBris is one example of using art to raise awareness about pollution. Art for psychological and healing purposes. Art is also used by art therapists, psychotherapists and clinical psychologists as art therapy. The Diagnostic Drawing Series, for example, is used to determine the personality and emotional functioning of a patient. 
The end product is not the principal goal in this case, but rather a process of healing, through creative acts, is sought. The resultant piece of artwork may also offer insight into the troubles experienced by the subject and may suggest suitable approaches to be used in more conventional forms of psychiatric therapy. Art for propaganda, or commercialism. Art is often utilized as a form of propaganda, and thus can be used to subtly influence popular conceptions or mood. In a similar way, art that tries to sell a product also influences mood and emotion. In both cases, the purpose of art here is to subtly manipulate the viewer into a particular emotional or psychological response toward a particular idea or object. Art as a fitness indicator. It has been argued that the ability of the human brain by far exceeds what was needed for survival in the ancestral environment. One evolutionary psychology explanation for this is that the human brain and associated traits (such as artistic ability and creativity) are the human equivalent of the peacock's tail. The purpose of the male peacock's extravagant tail has been argued to be to attract females (see also Fisherian runaway and handicap principle). According to this theory superior execution of art was evolutionarily important because it attracted mates. The functions of art described above are not mutually exclusive, as many of them may overlap. For example, art for the purpose of entertainment may also seek to sell a product, i.e. the movie or video game. Public access Since ancient times, much of the finest art has represented a deliberate display of wealth or power, often achieved by using massive scale and expensive materials. Much art has been commissioned by political rulers or religious establishments, with more modest versions only available to the most wealthy in society. 
Nevertheless, there have been many periods where art of very high quality was available, in terms of ownership, across large parts of society, above all in cheap media such as pottery, which persists in the ground, and perishable media such as textiles and wood. In many different cultures, the ceramics of indigenous peoples of the Americas are found in such a wide range of graves that they were clearly not restricted to a social elite, though other forms of art may have been. Reproductive methods such as moulds made mass-production easier, and were used to bring high-quality Ancient Roman pottery and Greek Tanagra figurines to a very wide market. Cylinder seals were both artistic and practical, and very widely used by what can be loosely called the middle class in the Ancient Near East. Once coins were widely used, these also became an art form that reached the widest range of society. Another important innovation came in the 15th century in Europe, when printmaking began with small woodcuts, mostly religious, that were often very small and hand-colored, and affordable even by peasants who glued them to the walls of their homes. Printed books were initially very expensive, but fell steadily in price until by the 19th century even the poorest could afford some with printed illustrations. Popular prints of many different sorts have decorated homes and other places for centuries. In 1661, the city of Basel, in Switzerland, opened the first public museum of art in the world, the Kunstmuseum Basel. Today, its collection is distinguished by an impressively wide historic span, from the early 15th century up to the immediate present. Its various areas of emphasis give it international standing as one of the most significant museums of its kind. These encompass: paintings and drawings by artists active in the Upper Rhine region between 1400 and 1600, and on the art of the 19th to 21st centuries. 
Public buildings and monuments, secular and religious, by their nature normally address the whole of society, and visitors as viewers, and display to the general public has long been an important factor in their design. Egyptian temples are typical in that the largest and most lavish decoration was placed on the parts that could be seen by the general public, rather than the areas seen only by the priests. Many areas of royal palaces, castles and the houses of the social elite were generally accessible, and large parts of the art collections of such people could often be seen, either by anybody, or by those able to pay a small price, or those wearing the correct clothes, regardless of who they were, as at the Palace of Versailles, where the appropriate extra accessories (silver shoe buckles and a sword) could be hired from shops outside. Special arrangements were made to allow the public to see many royal or private collections placed in galleries, as with the Orleans Collection mostly housed in a wing of the Palais Royal in Paris, which could be visited for most of the 18th century. In Italy the art tourism of the Grand Tour became a major industry from the Renaissance onwards, and governments and cities made efforts to make their key works accessible. The British Royal Collection remains distinct, but large donations such as the Old Royal Library were made from it to the British Museum, established in 1753. The Uffizi in Florence opened entirely as a gallery in 1765, though this function had been gradually taking the building over from the original civil servants' offices for a long time before. The building now occupied by the Prado in Madrid was built before the French Revolution for the public display of parts of the royal art collection, and similar royal galleries open to the public existed in Vienna, Munich and other capitals. 
The opening of the Musée du Louvre during the French Revolution (in 1793) as a public museum for much of the former French royal collection certainly marked an important stage in the development of public access to art, transferring ownership to a republican state.
trilobites or a stem group. The challenge to the status has focused on Agnostina, partly because juveniles of one genus have been found with legs differing dramatically from those of adult trilobites, suggesting they are not members of the lamellipedian clade, of which trilobites are a part. Instead, the limbs of agnostids closely resemble those of stem-group crustaceans, although they lack the proximal endite, which defines that group. They are likely the sister taxon to the crustacean stem lineage and, as such, part of the clade Crustaceomorpha. Other researchers have suggested, based on cladistic analyses of dorsal exoskeletal features, that Eodiscina and Agnostida are closely united, and that Eodiscina descended from the trilobite order Ptychopariida. Ecology Scientists have long debated whether the agnostids lived a pelagic or a benthic lifestyle. Their lack of eyes, a morphology not well-suited for swimming, and their fossils found in association with other benthic trilobites suggest a benthic (bottom-dwelling) mode of life. They are likely to have lived on areas of the ocean floor which received little or no light and fed on detritus which descended from upper layers of the sea to the bottom. However, their wide geographic dispersion in the fossil record is uncharacteristic of benthic animals, suggesting a pelagic existence. The thoracic segment appears to form a hinge between the head and pygidium, allowing for a bivalved, ostracodan-type lifestyle, and the orientation of the thoracic appendages appears ill-suited for benthic living. Recent work suggests that some agnostids were benthic predators, engaging in cannibalism and possibly pack-hunting behavior. They are sometimes preserved within the voids of other organisms, for instance within empty hyolith conchs, within sponges, worm tubes and under the carapaces of bivalved arthropods, presumably in order to hide from predators or strong storm currents, or maybe whilst scavenging for food. In the case of the tapering worm tubes of Selkirkia, trilobites are always found with their heads directed towards the opening of the tube, suggesting that they reversed in; the absence of any moulted carapaces suggests that moulting was not their primary reason for seeking shelter.
The studies supporting this correlation did not control for factors not related to abortion or miscarriage, and hence its causes have not been determined, although multiple possibilities have been suggested. Some purported risks of abortion are promoted primarily by anti-abortion groups but lack scientific support. For example, the question of a link between induced abortion and breast cancer has been investigated extensively. Major medical and scientific bodies (including the WHO, the National Cancer Institute, the American Cancer Society, the Royal College of Obstetricians and Gynaecologists and the American Congress of Obstetricians and Gynecologists) have concluded that abortion does not cause breast cancer. In the past, even illegality has not automatically meant that abortions were unsafe. Referring to the U.S., historian Linda Gordon states: "In fact, illegal abortions in this country have an impressive safety record." According to Rickie Solinger, authors Jerome Bates and Edward Zawadzki describe the case of an illegal abortionist in the eastern U.S. in the early 20th century who was proud of having successfully completed 13,844 abortions without any fatality. In 1870s New York City the famous abortionist/midwife Madame Restell (Anna Trow Lohman) appears to have lost very few women among her more than 100,000 patients—a lower mortality rate than the childbirth mortality rate at the time. In 1936, the prominent professor of obstetrics and gynecology Frederick J. Taussig wrote about the causes of increasing mortality during the years of illegality in the U.S. Mental health Current evidence finds no relationship between most induced abortions and mental health problems other than those expected for any unwanted pregnancy. 
A report by the American Psychological Association concluded that a woman's first abortion is not a threat to mental health when carried out in the first trimester, with such women no more likely to have mental-health problems than those carrying an unwanted pregnancy to term; the mental-health outcome of a woman's second or greater abortion is less certain. Some older reviews concluded that abortion was associated with an increased risk of psychological problems; however, they did not use an appropriate control group. Although some studies show negative mental-health outcomes in women who choose abortions after the first trimester because of fetal abnormalities, more rigorous research would be needed to show this conclusively. Some proposed negative psychological effects of abortion have been referred to by anti-abortion advocates as a separate condition called "post-abortion syndrome", but this is not recognized by medical or psychological professionals in the United States. A long-term study among US women found that about 99% of women felt that they made the right decision five years after they had an abortion. Relief was the primary emotion, with few women feeling sadness or guilt. Social stigma was a main factor predicting negative emotions and regret years later. Unsafe abortion Women seeking an abortion may use unsafe methods, especially when it is legally restricted. They may attempt self-induced abortion or seek the help of a person without proper medical training or facilities. This can lead to severe complications, such as incomplete abortion, sepsis, hemorrhage, and damage to internal organs. Unsafe abortions are a major cause of injury and death among women worldwide. Although data are imprecise, it is estimated that approximately 20 million unsafe abortions are performed annually, with 97% taking place in developing countries. Unsafe abortions are believed to result in millions of injuries. 
Estimates of deaths vary according to methodology, and have ranged from 37,000 to 70,000 in the past decade; deaths from unsafe abortion account for around 13% of all maternal deaths. The World Health Organization believes that mortality has fallen since the 1990s. To reduce the number of unsafe abortions, public health organizations have generally advocated emphasizing the legalization of abortion, training of medical personnel, and ensuring access to reproductive-health services. In response, opponents of abortion point out that abortion bans in no way affect prenatal care for women who choose to carry their fetus to term. The Dublin Declaration on Maternal Health, signed in 2012, notes, "the prohibition of abortion does not affect, in any way, the availability of optimal care to pregnant women." A major factor in whether abortions are performed safely or not is the legal standing of abortion. Countries with restrictive abortion laws have higher rates of unsafe abortion and similar overall abortion rates compared to those where abortion is legal and available. For example, the 1996 legalization of abortion in South Africa had an immediate positive impact on the frequency of abortion-related complications, with abortion-related deaths dropping by more than 90%. Similar reductions in maternal mortality have been observed after other countries have liberalized their abortion laws, such as Romania and Nepal. A 2011 study concluded that in the United States, some state-level anti-abortion laws are correlated with lower rates of abortion in that state. The analysis, however, did not take into account travel to other states without such laws to obtain an abortion. In addition, a lack of access to effective contraception contributes to unsafe abortion. It has been estimated that the incidence of unsafe abortion could be reduced by up to 75% (from 20 million to 5 million annually) if modern family planning and maternal health services were readily available globally. 
Rates of such abortions may be difficult to measure because they can be reported variously as miscarriage, "induced miscarriage", "menstrual regulation", "mini-abortion", and "regulation of a delayed/suspended menstruation". Forty percent of the world's women are able to access therapeutic and elective abortions within gestational limits, while an additional 35 percent have access to legal abortion if they meet certain physical, mental, or socioeconomic criteria. While maternal mortality seldom results from safe abortions, unsafe abortions result in 70,000 deaths and 5 million disabilities per year. Complications of unsafe abortion account for approximately an eighth of maternal mortalities worldwide, though this varies by region. Secondary infertility caused by an unsafe abortion affects an estimated 24 million women. The proportion of abortions performed unsafely increased from 44% to 49% between 1995 and 2008. Health education, access to family planning, and improvements in health care during and after abortion have been proposed to address this phenomenon. Incidence There are two commonly used methods of measuring the incidence of abortion: Abortion rate – number of abortions annually per 1000 women between 15 and 44 years of age (some sources use a range of 15–49) Abortion percentage – number of abortions out of 100 known pregnancies (pregnancies include live births, abortions and miscarriages) In many places, where abortion is illegal or carries a heavy social stigma, medical reporting of abortion is not reliable. For this reason, estimates of the incidence of abortion carry substantial uncertainty. The number of abortions performed worldwide seems to have remained stable in recent years, with 41.6 million having been performed in 2003 and 43.8 million having been performed in 2008. 
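The two incidence measures defined above are simple ratios. As a minimal illustration of the arithmetic (the function names and the counts below are hypothetical, chosen only to make the definitions concrete, not drawn from any cited dataset):

```python
def abortion_rate(abortions: int, women_15_to_44: int) -> float:
    """Abortions per year per 1,000 women of reproductive age (15-44)."""
    return abortions / women_15_to_44 * 1000


def abortion_percentage(abortions: int, live_births: int, miscarriages: int) -> float:
    """Abortions per 100 known pregnancies.

    Known pregnancies = live births + abortions + miscarriages,
    matching the definition given in the text.
    """
    known_pregnancies = live_births + abortions + miscarriages
    return abortions / known_pregnancies * 100


# Hypothetical counts for a small population:
print(abortion_rate(120, 50_000))         # 2.4 per 1,000 women per year
print(abortion_percentage(120, 800, 80))  # 12.0 per 100 known pregnancies
```

Note that the two measures use different denominators (a population of women versus a count of known pregnancies), which is why they can move independently of each other.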
The abortion rate worldwide was 28 per 1000 women per year, though it was 24 per 1000 women per year for developed countries and 29 per 1000 women per year for developing countries. The same 2012 study indicated that in 2008, the estimated abortion percentage of known pregnancies was at 21% worldwide, with 26% in developed countries and 20% in developing countries. On average, the incidence of abortion is similar in countries with restrictive abortion laws and those with more liberal access to abortion. However, restrictive abortion laws are associated with increases in the percentage of abortions performed unsafely. The unsafe abortion rate in developing countries is partly attributable to lack of access to modern contraceptives; according to the Guttmacher Institute, providing access to contraceptives would result in about 14.5 million fewer unsafe abortions and 38,000 fewer deaths from unsafe abortion annually worldwide. The rate of legal, induced abortion varies extensively worldwide. According to a report by Guttmacher Institute researchers, it ranged from 7 per 1000 women per year (Germany and Switzerland) to 30 per 1000 women per year (Estonia) in countries with complete statistics in 2008. The proportion of pregnancies that ended in induced abortion ranged from about 10% (Israel, the Netherlands and Switzerland) to 30% (Estonia) in the same group, though it might be as high as 36% in Hungary and Romania, whose statistics were deemed incomplete. An American study in 2002 concluded that about half of women having abortions were using a form of contraception at the time of becoming pregnant. Inconsistent use was reported by half of those using condoms and three-quarters of those using the birth control pill; 42% of those using condoms reported failure through slipping or breakage. 
The Guttmacher Institute estimated that "most abortions in the United States are obtained by minority women" because minority women "have much higher rates of unintended pregnancy". In 2022, while people of color comprised 44% of the population in Mississippi, 59% of the population in Texas, 42% of the population in Louisiana, and 35% of the population in Alabama, they comprised 80%, 74%, 72%, and 70% of those receiving abortions. The abortion rate may also be expressed as the average number of abortions a woman has during her reproductive years; this is referred to as total abortion rate (TAR). Gestational age and method Abortion rates also vary depending on the stage of pregnancy and the method practiced. In 2003, the Centers for Disease Control and Prevention (CDC) reported that 26% of reported legal induced abortions in the United States were known to have been obtained at less than 6 weeks' gestation, 18% at 7 weeks, 15% at 8 weeks, 18% at 9 through 10 weeks, 10% at 11 through 12 weeks, 6% at 13 through 15 weeks, 4% at 16 through 20 weeks and 1% at more than 21 weeks. 91% of these were classified as having been done by "curettage" (suction-aspiration, dilation and curettage, dilation and evacuation), 8% by "medical" means (mifepristone), >1% by "intrauterine instillation" (saline or prostaglandin), and 1% by "other" (including hysterotomy and hysterectomy). According to the CDC, due to data collection difficulties the data must be viewed as tentative and some fetal deaths reported beyond 20 weeks may be natural deaths erroneously classified as abortions if the removal of the dead fetus is accomplished by the same procedure as an induced abortion. The Guttmacher Institute estimated there were 2,200 intact dilation and extraction procedures in the US during 2000; this accounts for <0.2% of the total number of abortions performed that year. 
Similarly, in England and Wales in 2006, 89% of terminations occurred at or under 12 weeks, 9% between 13 and 19 weeks, and 2% at or over 20 weeks. 64% of those reported were by vacuum aspiration, 6% by D&E, and 30% were medical. There are more second trimester abortions in developing countries such as China, India and Vietnam than in developed countries. Motivation Personal The reasons why women have abortions are diverse and vary across the world. Some of the reasons may include an inability to afford a child, domestic violence, lack of support, feeling they are too young, and the wish to complete education or advance a career. Additional reasons include not being able or willing to raise a child conceived as a result of rape or incest. Societal Some abortions are undergone as the result of societal pressures. These might include the preference for children of a specific sex or race, disapproval of single or early motherhood, stigmatization of people with disabilities, insufficient economic support for families, lack of access to or rejection of contraceptive methods, or efforts toward population control (such as China's one-child policy). These factors can sometimes result in compulsory abortion or sex-selective abortion. Maternal and fetal health An additional factor is maternal health, which was listed as the main reason by about a third of women in 3 of 27 countries and about 7% of women in a further 7 of these 27 countries. In the U.S., the Supreme Court decisions in Roe v. Wade and Doe v. Bolton: "ruled that the state's interest in the life of the fetus became compelling only at the point of viability, defined as the point at which the fetus can survive independently of its mother. Even after the point of viability, the state cannot favor the life of the fetus over the life or health of the pregnant woman. Under the right of privacy, physicians must be free to use their "medical judgment for the preservation of the life or health of the mother." 
On the same day that the Court decided Roe, it also decided Doe v. Bolton, in which the Court defined health very broadly: "The medical judgment may be exercised in the light of all factors—physical, emotional, psychological, familial, and the woman's age—relevant to the well-being of the patient. All these factors may relate to health. This allows the attending physician the room he needs to make his best medical judgment." Public opinion shifted in America following television personality Sherri Finkbine's discovery during her fifth month of pregnancy that she had been exposed to thalidomide. Unable to obtain a legal abortion in the United States, she traveled to Sweden. From 1962 to 1965, an outbreak of German measles left 15,000 babies with severe birth defects. In 1967, the American Medical Association publicly supported liberalization of abortion laws. A National Opinion Research Center poll in 1965 showed 73% supported abortion when the mother's life was at risk, 57% when birth defects were present and 59% for pregnancies resulting from rape or incest. Cancer The rate of cancer during pregnancy is 0.02–1%, and in many cases, cancer of the mother leads to consideration of abortion to protect the life of the mother, or in response to the potential damage that may occur to the fetus during treatment. This is particularly true for cervical cancer, the most common type of which occurs in 1 of every 2,000–13,000 pregnancies, for which initiation of treatment "cannot co-exist with preservation of fetal life (unless neoadjuvant chemotherapy is chosen)". Very early stage cervical cancers (I and IIa) may be treated by radical hysterectomy and pelvic lymph node dissection, radiation therapy, or both, while later stages are treated by radiotherapy. Chemotherapy may be used simultaneously. 
Treatment of breast cancer during pregnancy also involves fetal considerations, because lumpectomy is discouraged in favor of modified radical mastectomy unless late-term pregnancy allows follow-up radiation therapy to be administered after the birth. Exposure to a single chemotherapy drug is estimated to cause a 7.5–17% risk of teratogenic effects on the fetus, with higher risks for multiple drug treatments. Treatment with more than 40 Gy of radiation usually causes spontaneous abortion. Exposure to much lower doses during the first trimester, especially 8 to 15 weeks of development, can cause intellectual disability or microcephaly, and exposure at this or subsequent stages can cause reduced intrauterine growth and birth weight. Exposures above 0.005–0.025 Gy cause a dose-dependent reduction in IQ. It is possible to greatly reduce exposure to radiation with abdominal shielding, depending on how far the area to be irradiated is from the fetus. The process of birth itself may also put the mother at risk. "Vaginal delivery may result in dissemination of neoplastic cells into lymphovascular channels, haemorrhage, cervical laceration and implantation of malignant cells in the episiotomy site, while abdominal delivery may delay the initiation of non-surgical treatment." History and religion Since ancient times abortions have been done using a number of methods, including herbal medicines, sharp tools, physical force, and other traditional techniques. Induced abortion has a long history and can be traced back to civilizations as varied as ancient China (abortifacient knowledge is often attributed to the mythological ruler Shennong), ancient India since its Vedic age, ancient Egypt with its Ebers Papyrus (c. 1550 BCE), and the Roman Empire in the time of Juvenal (c. 200 CE). One of the earliest known artistic representations of abortion is in a bas relief at Angkor Wat (c. 1150). 
Found in a series of friezes that represent judgment after death in Hindu and Buddhist culture, it depicts the technique of abdominal abortion. Some medical scholars and abortion opponents have suggested that the Hippocratic Oath forbade Ancient Greek physicians from performing abortions; other scholars disagree with this interpretation, and state that the medical texts of the Hippocratic Corpus contain descriptions of abortive techniques right alongside the Oath. The physician Scribonius Largus wrote in 43 CE that the Hippocratic Oath prohibits abortion, as did Soranus, although apparently not all doctors adhered to it strictly at the time. According to Soranus' 1st or 2nd century CE work Gynaecology, one party of medical practitioners banished all abortives as required by the Hippocratic Oath; the other party—to which he belonged—was willing to prescribe abortions, but only for the sake of the mother's health. Aristotle, in his treatise on government, Politics (350 BCE), condemns infanticide as a means of population control. He preferred abortion in such cases, with the restriction "[that it] must be practised on it before it has developed sensation and life; for the line between lawful and unlawful abortion will be marked by the fact of having sensation and being alive". In Christianity, Pope Sixtus V (1585–90) was the only Pope before 1869 to declare that abortion is homicide regardless of the stage of pregnancy; his pronouncement of 1588 was reversed three years later by Pope Gregory XIV. Through most of its history the Catholic Church was divided on whether it believed that early abortion was murder, and it did not begin vigorously opposing abortion until the 19th century. Several historians have written that prior to the 19th century most Catholic authors did not regard termination of pregnancy before "quickening" or "ensoulment" as an abortion. From 1750, excommunication became the punishment for abortions. 
Statements made in 1992 in the Catechism of the Catholic Church, the codified summary of the Church's teachings, opposed abortion. A 2014 Guttmacher survey of US abortion patients found that many reported a religious affiliation—24% were Catholic while 30% were Protestant. A 1995 survey reported that Catholic women are as likely as the general population to terminate a pregnancy, Protestants are less likely to do so, and Evangelical Christians are the least likely to do so. Islamic tradition has generally permitted abortion until a point in time when Muslims believe the soul enters the fetus, considered by various theologians to be at conception, 40 days after conception, 120 days after conception, or quickening. However, abortion is heavily restricted or forbidden in areas of high Islamic faith such as the Middle East and North Africa. In Europe and North America, abortion techniques advanced starting in the 17th century. However, the conservatism of most in the medical profession with regards to sexual matters prevented the wide expansion of abortion techniques. Other medical practitioners in addition to some physicians advertised their services, and they were not widely regulated until the 19th century, when the practice (sometimes called restellism) was banned in both the United States and the United Kingdom. Church groups as well as physicians were highly influential in anti-abortion movements. In the US, according to some sources, abortion was more dangerous than childbirth until about 1930, when incremental improvements in abortion procedures relative to childbirth made abortion safer. However, other sources maintain that in the 19th century early abortions under the hygienic conditions in which midwives usually worked were relatively safe. In addition, some commentators have written that, despite improved medical procedures, the period from the 1930s until legalization also saw more zealous enforcement of anti-abortion laws.
much higher rates of maternal morbidity and mortality than D&E or induction abortion. First-trimester procedures can generally be performed using local anesthesia, while second-trimester methods may require deep sedation or general anesthesia. Labor induction abortion In places lacking the necessary medical skill for dilation and extraction, or where preferred by practitioners, an abortion can be induced by first inducing labor and then inducing fetal demise if necessary. This is sometimes called "induced miscarriage". This procedure may be performed from 13 weeks gestation to the third trimester. Although very uncommon in the United States, labor-induced abortion accounts for more than 80% of second-trimester induced abortions in Sweden and other nearby countries. Only limited data are available comparing this method with dilation and extraction. Unlike D&E, labor-induced abortions after 18 weeks may be complicated by the occurrence of brief fetal survival, which may be legally characterized as live birth. For this reason, labor-induced abortion is legally risky in the United States. Other methods Historically, a number of herbs reputed to possess abortifacient properties have been used in folk medicine. Among these are: tansy, pennyroyal, black cohosh, and the now-extinct silphium. In 1978, one woman in Colorado died and another developed organ damage when they attempted to terminate their pregnancies by taking pennyroyal oil. Because the indiscriminate use of herbs as abortifacients can cause serious—even lethal—side effects, such as multiple organ failure, such use is not recommended by physicians. Abortion is sometimes attempted by causing trauma to the abdomen. The degree of force, if severe, can cause serious internal injuries without necessarily succeeding in inducing miscarriage. In Southeast Asia, there is an ancient tradition of attempting abortion through forceful abdominal massage. 
One of the bas reliefs decorating the temple of Angkor Wat in Cambodia depicts a demon performing such an abortion upon a woman who has been sent to the underworld. Reported methods of unsafe, self-induced abortion include misuse of misoprostol and insertion of non-surgical implements such as knitting needles and clothes hangers into the uterus. These and other methods to terminate pregnancy may be called "induced miscarriage". Such methods are rarely used in countries where surgical abortion is legal and available. Safety The health risks of abortion depend principally upon whether the procedure is performed safely or unsafely. The World Health Organization (WHO) defines unsafe abortions as those performed by unskilled individuals, with hazardous equipment, or in unsanitary facilities. Legal abortions performed in the developed world are among the safest procedures in medicine. In the United States as of 2012, abortion was estimated to be about 14 times safer for women than childbirth. CDC estimated in 2019 that US pregnancy-related mortality was 17.2 maternal deaths per 100,000 live births, while the US abortion mortality rate is 0.7 maternal deaths per 100,000 procedures. In the UK, guidelines of the Royal College of Obstetricians and Gynaecologists state that "Women should be advised that abortion is generally safer than continuing a pregnancy to term." Worldwide, on average, abortion is safer than carrying a pregnancy to term. A 2007 study reported that "26% of all pregnancies worldwide are terminated by induced abortion," whereas "deaths from improperly performed [abortion] procedures constitute 13% of maternal mortality globally." In Indonesia in 2000 it was estimated that 2 million pregnancies ended in abortion, 4.5 million pregnancies were carried to term, and 14-16 percent of maternal deaths resulted from abortion. 
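The CDC figures quoted above use different denominators (pregnancy-related deaths per 100,000 live births versus abortion deaths per 100,000 procedures), so dividing one by the other yields only a rough sense of the relative risk; the "about 14 times safer" figure cited earlier comes from a separate 2012 estimate. A minimal sketch of the naive arithmetic, using only the numbers quoted in the text:

```python
# Rates quoted in the text (CDC estimates):
pregnancy_related_deaths_per_100k_births = 17.2  # per 100,000 live births (2019)
abortion_deaths_per_100k_procedures = 0.7        # per 100,000 procedures

# Naive ratio of the two rates. The denominators differ (births vs
# procedures), so this is a rough comparison, not a formal relative risk.
ratio = pregnancy_related_deaths_per_100k_births / abortion_deaths_per_100k_procedures
print(round(ratio, 1))  # 24.6
```

The gap between this naive ratio and the published "14 times" estimate illustrates why such comparisons depend heavily on which years, denominators, and data sources are used.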
In the US from 2000 to 2009, abortion had a mortality rate lower than that of plastic surgery, lower than or similar to that of running a marathon, and about equivalent to that of traveling 760 miles in a passenger car. Five years after seeking abortion services, women who gave birth after being denied an abortion reported worse health than women who had either first or second trimester abortions. The risk of abortion-related mortality increases with gestational age, but remains lower than that of childbirth. Outpatient abortion is as safe from 64 to 70 days' gestation as it is before 63 days. There is little difference in terms of safety and efficacy between medical abortion using a combined regimen of mifepristone and misoprostol and surgical abortion (vacuum aspiration) in early first trimester abortions up to 10 weeks gestation. Medical abortion using the prostaglandin analog misoprostol alone is less effective and more painful than medical abortion using a combined regimen of mifepristone and misoprostol or surgical abortion. Vacuum aspiration in the first trimester is the safest method of surgical abortion, and can be performed in a primary care office, abortion clinic, or hospital. Complications, which are rare, can include uterine perforation, pelvic infection, and retained products of conception requiring a second procedure to evacuate. Infections account for one-third of abortion-related deaths in the United States. The rate of complications of vacuum aspiration abortion in the first trimester is similar regardless of whether the procedure is performed in a hospital, surgical center, or office. Preventive antibiotics (such as doxycycline or metronidazole) are typically given before abortion procedures, as they are believed to substantially reduce the risk of postoperative uterine infection; however, antibiotics are not routinely given with abortion pills. 
The rate of failed procedures does not appear to vary significantly depending on whether the abortion is performed by a doctor or a mid-level practitioner. Complications after second-trimester abortion are similar to those after first-trimester abortion, and depend somewhat on the method chosen. The risk of death from abortion approaches roughly half the risk of death from childbirth the farther along a woman is in pregnancy; from one in a million before 9 weeks gestation to nearly one in ten thousand at 21 weeks or more (as measured from the last menstrual period). It appears that having had a prior surgical uterine evacuation (whether because of induced abortion or treatment of miscarriage) correlates with a small increase in the risk of preterm birth in future pregnancies. 
In 1870s New York City, the famous abortionist/midwife Madame Restell (Anna Trow Lohman) appears to have lost very few women among her more than 100,000 patients, a lower mortality rate than the childbirth mortality rate at the time. In 1936, the prominent professor of obstetrics and gynecology Frederick J. Taussig wrote about the causes of increasing mortality during the years of illegality in the U.S.

Mental health

Current evidence finds no relationship between most induced abortions and mental-health problems other than those expected for any unwanted pregnancy. A report by the American Psychological Association concluded that a woman's first abortion is not a threat to mental health when carried out in the first trimester, with such women no more likely to have mental-health problems than those carrying an unwanted pregnancy to term; the mental-health outcome of a woman's second or greater abortion is less certain. Some older reviews concluded that abortion was associated with an increased risk of psychological problems; however, they did not use an appropriate control group. Although some studies show negative mental-health outcomes in women who choose abortions after the first trimester because of fetal abnormalities, more rigorous research would be needed to show this conclusively. Some proposed negative psychological effects of abortion have been referred to by anti-abortion advocates as a separate condition called "post-abortion syndrome", but this is not recognized by medical or psychological professionals in the United States. A long-term study among US women found that about 99% of women felt they had made the right decision five years after having an abortion. Relief was the primary emotion, with few women feeling sadness or guilt. Social stigma was a main factor predicting negative emotions and regret years later.

Unsafe abortion

Women seeking an abortion may use unsafe methods, especially when it is legally restricted.
They may attempt self-induced abortion or seek the help of a person without proper medical training or facilities. This can lead to severe complications, such as incomplete abortion, sepsis, hemorrhage, and damage to internal organs. Unsafe abortions are a major cause of injury and death among women worldwide. Although data are imprecise, it is estimated that approximately 20 million unsafe abortions are performed annually, with 97% taking place in developing countries. Unsafe abortions are believed to result in millions of injuries. Estimates of deaths vary according to methodology, and have ranged from 37,000 to 70,000 in the past decade; deaths from unsafe abortion account for around 13% of all maternal deaths. The World Health Organization believes that mortality has fallen since the 1990s. To reduce the number of unsafe abortions, public health organizations have generally advocated emphasizing the legalization of abortion, training of medical personnel, and ensuring access to reproductive-health services. In response, opponents of abortion point out that abortion bans in no way affect prenatal care for women who choose to carry their fetus to term. The Dublin Declaration on Maternal Health, signed in 2012, notes, "the prohibition of abortion does not affect, in any way, the availability of optimal care to pregnant women." A major factor in whether abortions are performed safely or not is the legal standing of abortion. Countries with restrictive abortion laws have higher rates of unsafe abortion and similar overall abortion rates compared to those where abortion is legal and available. For example, the 1996 legalization of abortion in South Africa had an immediate positive impact on the frequency of abortion-related complications, with abortion-related deaths dropping by more than 90%. Similar reductions in maternal mortality have been observed after other countries have liberalized their abortion laws, such as Romania and Nepal. 
A 2011 study concluded that in the United States, some state-level anti-abortion laws are correlated with lower rates of abortion in that state. The analysis, however, did not take into account travel to other states without such laws to obtain an abortion. In addition, a lack of access to effective contraception contributes to unsafe abortion. It has been estimated that the incidence of unsafe abortion could be reduced by up to 75% (from 20 million to 5 million annually) if modern family planning and maternal health services were readily available globally. Rates of such abortions may be difficult to measure because they can be reported variously as miscarriage, "induced miscarriage", "menstrual regulation", "mini-abortion", and "regulation of a delayed/suspended menstruation". Forty percent of the world's women are able to access therapeutic and elective abortions within gestational limits, while an additional 35 percent have access to legal abortion if they meet certain physical, mental, or socioeconomic criteria. While maternal mortality seldom results from safe abortions, unsafe abortions result in 70,000 deaths and 5 million disabilities per year. Complications of unsafe abortion account for approximately an eighth of maternal mortalities worldwide, though this varies by region. Secondary infertility caused by an unsafe abortion affects an estimated 24 million women. The proportion of abortions performed unsafely rose from 44% to 49% between 1995 and 2008. Health education, access to family planning, and improvements in health care during and after abortion have been proposed to address this phenomenon.
Incidence

There are two commonly used methods of measuring the incidence of abortion:

Abortion rate – the number of abortions annually per 1000 women between 15 and 44 years of age (some sources use a range of 15–49)
Abortion percentage – the number of abortions out of 100 known pregnancies (pregnancies include live births, abortions, and miscarriages)

In many places where abortion is illegal or carries a heavy social stigma, medical reporting of abortion is not reliable. For this reason, estimates of the incidence of abortion must be made without the certainty that a known standard error would provide. The number of abortions performed worldwide seems to have remained stable in recent years, with 41.6 million performed in 2003 and 43.8 million in 2008. The abortion rate worldwide was 28 per 1000 women per year, though it was 24 per 1000 women per year for developed countries and 29 per 1000 women per year for developing countries. The same 2012 study indicated that in 2008, the estimated abortion percentage of known pregnancies was 21% worldwide, with 26% in developed countries and 20% in developing countries. On average, the incidence of abortion is similar in countries with restrictive abortion laws and those with more liberal access to abortion. However, restrictive abortion laws are associated with increases in the percentage of abortions performed unsafely. The unsafe abortion rate in developing countries is partly attributable to lack of access to modern contraceptives; according to the Guttmacher Institute, providing access to contraceptives would result in about 14.5 million fewer unsafe abortions and 38,000 fewer deaths from unsafe abortion annually worldwide. The rate of legal, induced abortion varies extensively worldwide.
According to a report by Guttmacher Institute employees, it ranged from 7 per 1000 women per year (Germany and Switzerland) to 30 per 1000 women per year (Estonia) in countries with complete statistics in 2008. The proportion of pregnancies that ended in induced abortion ranged from about 10% (Israel, the Netherlands, and Switzerland) to 30% (Estonia) in the same group, though it might be as high as 36% in Hungary and Romania, whose statistics were deemed incomplete. An American study in 2002 concluded that about half of women having abortions were using a form of contraception at the time of becoming pregnant. Inconsistent use was reported by half of those using condoms and three-quarters of those using the birth control pill; 42% of those using condoms reported failure through slipping or breakage. The Guttmacher Institute estimated that "most abortions in the United States are obtained by minority women" because minority women "have much higher rates of unintended pregnancy". In 2022, while people of color comprised 44% of the population in Mississippi, 59% in Texas, 42% in Louisiana, and 35% in Alabama, they accounted for 80%, 74%, 72%, and 70%, respectively, of those receiving abortions. The abortion rate may also be expressed as the average number of abortions a woman has during her reproductive years; this is referred to as the total abortion rate (TAR).

Gestational age and method

Abortion rates also vary depending on the stage of pregnancy and the method practiced. In 2003, the Centers for Disease Control and Prevention (CDC) reported that 26% of reported legal induced abortions in the United States were known to have been obtained at less than 6 weeks' gestation, 18% at 7 weeks, 15% at 8 weeks, 18% at 9 through 10 weeks, 10% at 11 through 12 weeks, 6% at 13 through 15 weeks, 4% at 16 through 20 weeks, and 1% at more than 21 weeks.
91% of these were classified as having been done by "curettage" (suction-aspiration, dilation and curettage, dilation and evacuation), 8% by "medical" means (mifepristone), <1% by "intrauterine instillation" (saline or prostaglandin), and 1% by "other" (including hysterotomy and hysterectomy). According to the CDC, due to data collection difficulties the data must be viewed as tentative, and some fetal deaths reported beyond 20 weeks may be natural deaths erroneously classified as abortions if the removal of the dead fetus is accomplished by the same procedure as an induced abortion. The Guttmacher Institute estimated there were 2,200 intact dilation and extraction procedures in the US during 2000; this accounts for less than 0.2% of the total number of abortions performed that year. Similarly, in England and Wales in 2006, 89% of terminations occurred at or under 12 weeks, 9% between 13 and 19 weeks, and 2% at or over 20 weeks; 64% of those reported were by vacuum aspiration, 6% by D&E, and 30% were medical. There are more second-trimester abortions in developing countries such as China, India, and Vietnam than in developed countries.

Motivation

Personal

The reasons why women have abortions are diverse and vary across the world. Some of the reasons may include an inability to afford a child, domestic violence, lack of support, feeling they are too young, and the wish to complete education or advance a career. Additional reasons include not being able or willing to raise a child conceived as a result of rape or incest.

Societal

Some abortions are undergone as the result of societal pressures. These might include the preference for children of a specific sex or race, disapproval of single or early motherhood, stigmatization of people with disabilities, insufficient economic support for families, lack of access to or rejection of contraceptive methods, or efforts toward population control (such as China's one-child policy).
These factors can sometimes result in compulsory abortion or sex-selective abortion.

Maternal and fetal health

An additional factor is maternal health, which was listed as the main reason by about a third of women in 3 of 27 countries and about 7% of women in a further 7 of these 27 countries. In the U.S., the Supreme Court decisions in Roe v. Wade and Doe v. Bolton "ruled that the state's interest in the life of the fetus became compelling only at the point of viability, defined as the point at which the fetus can survive independently of its mother. Even after the point of viability, the state cannot favor the life of the fetus over the life or health of the pregnant woman. Under the right of privacy, physicians must be free to use their 'medical judgment for the preservation of the life or health of the mother.'" On the same day that the Court decided Roe, it also decided Doe v. Bolton, in which the Court defined health very broadly: "The medical judgment may be exercised in the light of all factors—physical, emotional, psychological, familial, and the woman's age—relevant to the well-being of the patient. All these factors may relate to health. This allows the attending physician the room he needs to make his best medical judgment." Public opinion shifted in America following television personality Sherri Finkbine's discovery during her fifth month of pregnancy that she had been exposed to thalidomide. Unable to obtain a legal abortion in the United States, she traveled to Sweden. From 1962 to 1965, an outbreak of German measles left 15,000 babies with severe birth defects. In 1967, the American Medical Association publicly supported liberalization of abortion laws. A National Opinion Research Center poll in 1965 showed 73% supported abortion when the mother's life was at risk, 57% when birth defects were present, and 59% for pregnancies resulting from rape or incest.
Cancer

The rate of cancer during pregnancy is 0.02–1%, and in many cases, cancer of the mother leads to consideration of abortion to protect the life of the mother, or in response to the potential damage that may occur to the fetus during treatment. This is particularly true for cervical cancer, the most common type of which occurs in 1 of every 2,000–13,000 pregnancies, for which initiation of treatment "cannot co-exist with preservation of fetal life (unless neoadjuvant chemotherapy is chosen)". Very early stage cervical cancers (I and IIa) may be treated by radical hysterectomy and pelvic lymph node dissection, radiation therapy, or both, while later stages are treated by radiotherapy. Chemotherapy may be used simultaneously. Treatment of breast cancer during pregnancy also involves fetal considerations, because lumpectomy is discouraged in favor of modified radical mastectomy unless late-term pregnancy allows follow-up radiation therapy to be administered after the birth. Exposure to a single chemotherapy drug is estimated to cause a 7.5–17% risk of teratogenic effects on the fetus, with higher risks for multiple drug treatments. Treatment with more than 40 Gy of radiation usually causes spontaneous abortion. Exposure to much lower doses during the first trimester, especially 8 to 15 weeks of development, can cause intellectual disability or microcephaly, and exposure at this or subsequent stages can cause reduced intrauterine growth and birth weight. Exposures above 0.005–0.025 Gy cause a dose-dependent reduction in IQ. It is possible to greatly reduce exposure to radiation with abdominal shielding, depending on how far the area to be irradiated is from the fetus.
An abstract is a brief summary of a legal document or of several related legal papers.

Abstract of title

The abstract of title, used in real estate transactions, is the more common form of abstract. An abstract of title lists all the owners of a piece of land, a house, or a building before it came into possession of the present owner. The abstract also records all deeds, wills, mortgages, and other documents that affect ownership of the property. An abstract describes a chain of transfers from owner to owner and any agreements by former owners that are binding on later owners.

Patent law

In the context of patent
and viewed the war as an opportunity to weaken Britain. He initially avoided open conflict, but allowed American ships to take on cargoes in French ports, a technical violation of neutrality. Although public opinion favored the American cause, Finance Minister Turgot argued they did not need French help to gain independence and war was too expensive. Instead, Vergennes persuaded Louis XVI to secretly fund a government front company to purchase munitions for the Patriots, carried in neutral Dutch ships and imported through Sint Eustatius in the Caribbean. Many Americans opposed a French alliance, fearing to "exchange one tyranny for another", but this changed after a series of military setbacks in early 1776. As France had nothing to gain from the colonies reconciling with Britain, Congress had three choices: making peace on British terms, continuing the struggle on their own, or proclaiming independence, guaranteed by France. Although the Declaration of Independence in July 1776 had wide public support, Adams was among those reluctant to pay the price of an alliance with France, and over 20% of Congressmen voted against it. Congress agreed to the treaty with reluctance and, as the war moved in their favor, increasingly lost interest in it. Silas Deane was sent to Paris to begin negotiations with Vergennes, whose key objectives were replacing Britain as the United States' primary commercial and military partner while securing the French West Indies from American expansion. These islands were extremely valuable; in 1772, the value of sugar and coffee produced by Saint-Domingue on its own exceeded that of all American exports combined. Talks progressed slowly until October 1777, when British defeat at Saratoga and their apparent willingness to negotiate peace convinced Vergennes only a permanent alliance could prevent the "disaster" of Anglo-American rapprochement.
Assurances of formal French support allowed Congress to reject the Carlisle Peace Commission and insist on nothing short of complete independence. On February 6, 1778, France and the United States signed the Treaty of Amity and Commerce regulating trade between the two countries, followed by a defensive military alliance against Britain, the Treaty of Alliance. In return for French guarantees of American independence, Congress undertook to defend their interests in the West Indies, while both sides agreed not to make a separate peace; conflict over these provisions would lead to the 1798 to 1800 Quasi-War. Charles III of Spain was invited to join on the same terms but refused, largely due to concerns over the impact of the Revolution on Spanish colonies in the Americas. Spain had complained on multiple occasions about encroachment by American settlers into Louisiana, a problem that could only get worse once the United States replaced Britain. Although Spain ultimately made important contributions to American success, in the Treaty of Aranjuez (1779), Charles agreed only to support France's war with Britain outside America, in return for help in recovering Gibraltar, Menorca and Spanish Florida. The terms were confidential since several conflicted with American aims; for example, the French claimed exclusive control of the Newfoundland cod fisheries, a non-negotiable for colonies like Massachusetts. One less well-known impact of this agreement was the abiding American distrust of 'foreign entanglements'; the US would not sign another treaty until the NATO agreement in 1949. This was because the US had agreed not to make peace without France, while Aranjuez committed France to keep fighting until Spain recovered Gibraltar, effectively making it a condition of US independence without the knowledge of Congress. 
To encourage French participation in the struggle for independence, the US representative in Paris, Silas Deane, promised promotion and command positions to any French officer who joined the Continental Army. Although many proved incompetent, one outstanding exception was Gilbert du Motier, Marquis de Lafayette, whom Congress appointed a major general. In addition to his military ability, Lafayette showed considerable political skill in building support for Washington among his officers and within Congress, liaising with French army and naval commanders, and promoting the Patriot cause in France. When the war started, Britain tried to borrow the Dutch-based Scots Brigade for service in America, but pro-Patriot sentiment led the States General to refuse. Although the Republic was no longer a major power, prior to 1774 they still dominated the European carrying trade, and Dutch merchants made large profits shipping French-supplied munitions to the Patriots. This ended when Britain declared war in December 1780, a conflict that proved disastrous to the Dutch economy. The Dutch were also excluded from the First League of Armed Neutrality, formed by Russia, Sweden and Denmark in March 1780 to protect neutral shipping from being stopped and searched for contraband by Britain and France. The British government failed to take into account the strength of the American merchant marine and support from European countries, which allowed the colonies to import munitions and continue trading with relative impunity. While well aware of this, the North administration delayed placing the Royal Navy on a war footing for cost reasons; this prevented the institution of an effective blockade and restricted them to ineffectual diplomatic protests. Traditional British policy was to employ European land-based allies to divert the opposition, a role filled by Prussia in the Seven Years' War; in 1778, they were diplomatically isolated and faced war on multiple fronts.
Meanwhile, George III had given up on subduing America while Britain had a European war to fight. He did not welcome war with France, but he saw the British victories over France in the Seven Years' War as reason to believe in ultimate victory. Britain could not find a powerful ally among the Great Powers to engage France on the European continent, and subsequently shifted its focus to the Caribbean theater, diverting major military resources away from America. A colleague of Vergennes observed: "For her honour, France had to seize this opportunity to rise from her degradation... If she neglected it, if fear overcame duty, she would add debasement to humiliation, and become an object of contempt to her own century and to all future peoples."

Stalemate in the North

At the end of 1777, Howe resigned and was replaced by Sir Henry Clinton on May 24, 1778; with French entry into the war, he was ordered to consolidate his forces in New York. On June 18, the British departed Philadelphia with the reinvigorated Americans in pursuit; the Battle of Monmouth on June 28 was inconclusive but boosted Patriot morale. Washington had rallied Charles Lee's broken regiments, the Continentals repulsed British bayonet charges, the British rear guard suffered perhaps 50 percent more casualties, and the Americans held the field at the end of the day. That midnight, the newly installed Clinton continued his retreat to New York. A French naval force under Admiral Charles Henri Hector d'Estaing was sent to assist Washington; deciding New York was too formidable a target, in August they launched a combined attack on Newport, with General John Sullivan commanding land forces. The resulting Battle of Rhode Island was indecisive; badly damaged by a storm, the French withdrew to avoid putting their ships at risk. Further activity was limited to British raids on Chestnut Neck and Little Egg Harbor in October.
In July 1779, the Americans captured British positions at Stony Point and Paulus Hook. Clinton unsuccessfully tried to tempt Washington into a decisive engagement by sending General William Tryon to raid Connecticut. In July, a large American naval operation, the Penobscot Expedition, attempted to retake Maine, then part of Massachusetts, but was defeated. Persistent Iroquois raids along the border with Quebec led to the punitive Sullivan Expedition in April 1779, destroying many settlements but failing to stop them. During the winter of 1779–1780, the Continental Army suffered greater hardships than at Valley Forge. Morale was poor, public support fell away in the long war, the Continental dollar was virtually worthless, the army was plagued with supply problems, desertion was common, and mutinies occurred in the Pennsylvania Line and New Jersey Line regiments over the conditions in early 1780. In June 1780, Clinton sent 6,000 men under Wilhelm von Knyphausen to retake New Jersey, but they were halted by local militia at the Battle of Connecticut Farms; although the Americans withdrew, Knyphausen felt he was not strong enough to engage Washington's main force and retreated. A second attempt two weeks later ended in a British defeat at the Battle of Springfield, effectively ending their ambitions in New Jersey. In July, Washington appointed Benedict Arnold commander of West Point; his attempt to betray the fort to the British failed due to incompetent planning, and the plot was revealed when his British contact John André was captured and later executed. Arnold escaped to New York and switched sides, an action justified in a pamphlet addressed "To the Inhabitants of America"; the Patriots condemned his betrayal, while he found himself almost as unpopular with the British. The war to the west of the Appalachians was largely confined to skirmishing and raids. 
In February 1778, an expedition of militia to destroy British military supplies in settlements along the Cuyahoga River was halted by adverse weather. Later in the year, a second campaign was undertaken to seize the Illinois Country from the British. Virginia militia, Canadien settlers, and Indian allies commanded by Colonel George Rogers Clark captured Kaskaskia on July 4 and then secured Vincennes, though Vincennes was recaptured by Quebec Governor Henry Hamilton. In early 1779, the Virginians counterattacked in the siege of Fort Vincennes and took Hamilton prisoner. Clark secured western British Quebec as the American Northwest Territory in the Treaty of Paris concluding the war. On May 25, 1780, British Colonel Henry Bird invaded Kentucky as part of a wider operation to clear American resistance from Quebec to the Gulf coast. The British advance from Pensacola on New Orleans was forestalled by Spanish Governor Gálvez's offensive on Mobile. Simultaneous British attacks were repulsed on St. Louis by the Spanish Lieutenant Governor de Leyba, and on the Virginia county courthouse at Cahokia by Lieutenant Colonel Clark. The British initiative under Bird from Detroit was ended at the rumored approach of Clark. The scale of violence in the Licking River Valley, such as during the Battle of Blue Licks, was extreme "even for frontier standards". It led men of English and German settlements to join Clark's militia when the British and their auxiliaries withdrew to the Great Lakes. The Americans responded with a major offensive along the Mad River in August, which met with some success in the Battle of Piqua but did not end Indian raids. French soldier Augustin de La Balme led a Canadian militia in an attempt to capture Detroit, but they dispersed when Miami natives led by Little Turtle attacked the encamped settlers on November 5.
The war in the west had become a stalemate, with the British garrison sitting in Detroit and the Virginians expanding westward settlements north of the Ohio River in the face of British-allied Indian resistance.

War in the South

The "Southern Strategy" was developed by Lord Germain, based on input from London-based Loyalists like Joseph Galloway. They argued it made no sense to fight the Patriots in the north, where they were strongest, while the New England economy was reliant on trade with Britain, regardless of who governed it. On the other hand, duties on tobacco made the South far more profitable for Britain, while local support meant securing it required small numbers of regular troops. Victory would leave a truncated United States facing British possessions in the south, Canada to the north, and Ohio on their western border; with the Atlantic seaboard controlled by the Royal Navy, Congress would be forced to agree to terms. However, assumptions about the level of Loyalist support proved wildly optimistic. Germain accordingly ordered Augustine Prévost, the British commander in East Florida, to advance into Georgia in December 1778. Lieutenant-Colonel Archibald Campbell, an experienced officer taken prisoner earlier in the war before being exchanged for Ethan Allen, captured Savannah on December 29, 1778. He recruited a Loyalist militia of nearly 1,100, many of whom allegedly joined only after Campbell threatened to confiscate their property. Poor motivation and training made them unreliable troops, as demonstrated in their defeat by Patriot militia at the Battle of Kettle Creek on February 14, 1779, although this was offset by British victory at Brier Creek on March 3. In June, Prévost launched an abortive assault on Charleston before retreating to Savannah, an operation notorious for widespread looting by British troops that enraged both Loyalists and Patriots.
In October, a joint French and American operation under Admiral d'Estaing and General Benjamin Lincoln failed to recapture Savannah. Prévost was replaced by Lord Cornwallis, who assumed responsibility for Germain's strategy; he soon realized estimates of Loyalist support were considerably overstated, and that he needed far larger numbers of regular forces. Reinforced by Clinton, his troops captured Charleston in May 1780, inflicting the most serious Patriot defeat of the war; over 5,000 prisoners were taken and the Continental Army in the south was effectively destroyed. On May 29, Banastre Tarleton's largely Loyalist force defeated an American force of 400 at the Battle of Waxhaws; over 120 were killed, many allegedly after surrendering. Responsibility is disputed: Loyalists claimed Tarleton was shot at while negotiating terms of surrender, but the episode was later used as a recruiting tool by the Patriots. Clinton returned to New York, leaving Cornwallis to oversee the south; despite their success, the two men were left barely on speaking terms, with dire consequences for the future conduct of the war. The Southern strategy depended on local support, but this was undermined by a series of coercive measures. Previously, captured Patriots were sent home after swearing not to take up arms against the king; they were now required to fight their former comrades, while the confiscation of Patriot-owned plantations led formerly neutral "grandees" to side with the Patriots. Skirmishes at Williamson's Plantation, Cedar Springs, Rocky Mount, and Hanging Rock signaled widespread resistance to the new oaths throughout South Carolina. In July, Congress appointed General Horatio Gates commander in the south; he was defeated at the Battle of Camden on August 16, leaving Cornwallis free to enter North Carolina.
Despite battlefield success, the British could not control the countryside and Patriot attacks continued; before moving north, Cornwallis sent Loyalist militia under Major Patrick Ferguson to cover his left flank, leaving their forces too far apart to provide mutual support. In early October, Ferguson was defeated at the Battle of Kings Mountain, dispersing organized Loyalist resistance in the region. Despite this, Cornwallis continued into North Carolina hoping for Loyalist support, while Washington replaced Gates with General Nathanael Greene in December 1780. Greene divided his army, leading his main force southeast pursued by Cornwallis; a detachment was sent southwest under Daniel Morgan, who defeated Tarleton's British Legion at Cowpens on January 17, 1781, nearly eliminating it as a fighting force. The Patriots now held the initiative in the south, with the exception of a raid on Richmond led by Benedict Arnold in January 1781. Greene led Cornwallis on a series of countermarches around North Carolina; by early March, the British were exhausted and short of supplies and Greene felt strong enough to fight the Battle of Guilford Court House on March 15. Although victorious, Cornwallis suffered heavy casualties and retreated to Wilmington, North Carolina seeking supplies and reinforcements. The Patriots now controlled most of the Carolinas and Georgia outside the coastal areas; after a minor reversal at the Battle of Hobkirk's Hill, they recaptured Fort Watson and Fort Motte on April 15. On June 6, Brigadier General Andrew Pickens captured Augusta, leaving the British in Georgia confined to Charleston and Savannah. The assumption Loyalists would do most of the fighting left the British short of troops and battlefield victories came at the cost of losses they could not replace. Despite halting Greene's advance at the Battle of Eutaw Springs on September 8, Cornwallis withdrew to Charleston with little to show for his campaign. 
Western campaign When Spain joined France's war against Britain in 1779, their treaty specifically excluded Spanish military action in North America. However, from the beginning of the war, Bernardo de Gálvez, the Governor of Spanish Louisiana, allowed the Americans to import supplies and munitions into New Orleans, then ship them to Pittsburgh. This provided an alternative transportation route for the Continental Army, bypassing the British blockade of the Atlantic Coast. The trade was organized by Oliver Pollock, a successful merchant in Havana and New Orleans who was appointed US "commercial agent". It also helped support the American campaign in the west; in the 1778 Illinois campaign, militia under General George Rogers Clark cleared the British from what was then part of Quebec, creating Illinois County, Virginia. Despite official neutrality, Gálvez initiated offensive operations against British outposts. First, he cleared the British garrisons at Fort Bute, Baton Rouge, and Natchez, capturing five forts. In doing so, Gálvez opened navigation on the Mississippi River north to the American settlement at Pittsburgh. In 1781, Gálvez and Pollock campaigned east along the Gulf Coast to secure West Florida, including British-held Mobile and Pensacola. The Spanish operations crippled the British supply of armaments to their Indian allies, effectively suspending a military alliance to attack settlers between the Mississippi River and the Appalachian Mountains. British defeat in the United States Clinton spent most of 1781 based in New York City; he failed to construct a coherent operational strategy, partly due to his difficult relationship with Admiral Marriot Arbuthnot. In Charleston, Cornwallis independently developed an aggressive plan for a campaign in Virginia, which he hoped would isolate Greene's army in the Carolinas and cause the collapse of Patriot resistance in the South. 
This was approved by Lord Germain in London, but neither of them informed Clinton. Washington and Rochambeau now discussed their options; the former wanted to attack New York, the latter Virginia, where Cornwallis' forces were less well-established and thus easier to defeat. Washington eventually gave way and Lafayette took a combined Franco-American force into Virginia, but Clinton misinterpreted his movements as preparations for an attack on New York. Concerned by this threat, he instructed Cornwallis to establish a fortified sea base where the Royal Navy could evacuate his troops to help defend New York. When Lafayette entered Virginia, Cornwallis complied with Clinton's orders and withdrew to Yorktown, where he constructed strong defenses and awaited evacuation. An agreement by the Spanish navy to defend the French West Indies allowed Admiral de Grasse to relocate to the Atlantic seaboard, a move Arbuthnot did not anticipate. This provided Lafayette naval support, while the failure of previous combined operations at Newport and Savannah meant their co-ordination was planned more carefully. Despite repeated urging from his subordinates, Cornwallis made no attempt to engage Lafayette before he could establish siege lines. Even worse, expecting to be withdrawn within a few days he abandoned the outer defenses, which were promptly occupied by the besiegers and hastened British defeat. On August 31, a British fleet under Thomas Graves left New York for Yorktown. After landing troops and munitions for the besiegers on August 30, de Grasse had remained in Chesapeake Bay and intercepted him on September 5; although the Battle of the Chesapeake was indecisive in terms of losses, Graves was forced to retreat, leaving Cornwallis isolated. An attempted breakout over the York River at Gloucester Point failed due to bad weather. 
Under heavy bombardment with dwindling supplies, Cornwallis felt his situation was hopeless and on October 16 sent emissaries to Washington to negotiate surrender; after twelve hours of negotiations, these were finalized the next day. Responsibility for defeat was the subject of fierce public debate between Cornwallis, Clinton and Germain. Despite criticism from his junior officers, Cornwallis retained the confidence of his peers and later held a series of senior government positions; Clinton ultimately took most of the blame and spent the rest of his life in obscurity. After Yorktown, American forces supervised the armistice agreed between Washington and Clinton to facilitate the British departure, following the January 1782 act of Parliament forbidding any further British offensive action in North America. British-American negotiations in Paris led to preliminaries signed in November 1782 acknowledging US independence. The Congressional war aim of British withdrawal from the North American territory ceded to the US was completed for the coastal cities in stages. In the South, Generals Greene and Wayne loosely invested the withdrawing British at Savannah and Charleston, observing the final evacuation of British regulars from Charleston on December 14, 1782. Loyalist provincial militias of whites and free blacks, as well as Loyalists with their slaves, were relocated to Nova Scotia and the British Caribbean. Native American allies of the British and some freed blacks were left to escape through the American lines unaided. Washington moved his army to New Windsor on the Hudson River, about sixty miles north of New York City, and there the substance of the American army was furloughed home with officers at half pay until the Treaty of Paris formally ended the war on September 3, 1783. 
At that time, Congress decommissioned the regiments of Washington's Continental Army and began issuing land grants to veterans in the Northwest Territory for their war service. The British occupation of New York City ended on November 25, 1783, with the departure of Clinton's replacement, General Sir Guy Carleton. Strategy and commanders To win their insurrection, the Americans needed to outlast the British will to continue the fight; to restore the empire, the British had to defeat the Continental Army in the early months and compel Congress to dissolve itself. Historian Terry M. Mays identifies three separate types of warfare, the first being a colonial conflict in which objections to Imperial trade regulation were as significant as taxation policy. The second was a civil war with all thirteen states split between Patriots, Loyalists and those who preferred to remain neutral. Particularly in the south, many battles were fought between Patriots and Loyalists with no British involvement, leading to divisions that continued after independence was achieved. The third element was a global war between France, Spain, the Dutch Republic and Britain, with America as one of a number of different theaters. After entering the war in 1778, France provided the Americans money, weapons, soldiers, and naval assistance, while French troops fought under US command in North America. While Spain did not formally join the war in America, it provided access to the Mississippi River and, by capturing British possessions on the Gulf of Mexico, denied bases to the Royal Navy, as well as retaking Menorca and besieging Gibraltar in Europe. Although the Dutch Republic was no longer a major power, prior to 1774 it still dominated the European carrying trade, and Dutch merchants made large profits shipping French-supplied munitions to the Patriots. This ended when Britain declared war in December 1780, and the conflict proved disastrous to the Dutch economy. 
The Dutch were also excluded from the First League of Armed Neutrality, formed by Russia, Sweden and Denmark in March 1780 to protect neutral shipping from being stopped and searched for contraband by Britain and France. While of limited effect, these interventions forced the British to divert men and resources away from North America. American strategy Congress had multiple advantages if the rebellion turned into a protracted war. Their prosperous state populations depended on local production for food and supplies, rather than on imports from their mother country that lay six to twelve weeks away by sail. They were spread across most of the North American Atlantic seaboard, stretching 1,000 miles. Most farms were remote from the seaports, and controlling four or five major ports did not give British armies control over the inland areas. Each state had established internal distribution systems. Each former colony had a long-established system of local militia, combat-tested in support of British regulars thirteen years earlier in the French and Indian War, when together they had stripped France of its North American claims west to the Mississippi River. The state legislatures independently funded and controlled their local militias. In the American Revolution, they trained and provided Continental Line regiments to the regular army, each with its own state officer corps. Motivation was also a major asset: each colonial capital had its own newspapers and printers, and the Patriots had more popular support than the Loyalists. The British hoped that the Loyalists would do much of the fighting, but they fought less than expected. Continental Army When the war began, Congress lacked a professional army or navy, and each colony only maintained local militias. Militiamen were lightly armed, had little training, and usually did not have uniforms. 
Their units served for only a few weeks or months at a time and lacked the training and discipline of more experienced soldiers. Local county militias were reluctant to travel far from home and they were unavailable for extended operations. To compensate for this, Congress established a regular force known as the Continental Army on June 14, 1775, the origin of the modern United States Army, and appointed Washington as commander-in-chief. However, it suffered significantly from the lack of an effective training program and from largely inexperienced officers and sergeants, offset by a few senior officers. Each state legislature appointed officers for both county and state militias and their regimental Continental Line officers; although Washington was required to accept Congressional appointments, he was still permitted to choose and command his own generals, such as Nathanael Greene, his chief of artillery, Henry Knox, and Alexander Hamilton, the chief of staff. One of Washington's most successful recruits to general officer was Baron Friedrich Wilhelm von Steuben, a veteran of the Prussian general staff who wrote the Revolutionary War Drill Manual. The development of the Continental Army was always a work in progress and Washington used both his regulars and state militia throughout the war; when properly employed, the combination allowed them to overwhelm smaller British forces, as at Concord, Boston, Bennington, and Saratoga. Both sides used partisan warfare, but the state militias effectively suppressed Loyalist activity when British regulars were not in the area. Washington designed the overall military strategy of the war in cooperation with Congress, established the principle of civilian supremacy in military affairs, personally recruited his senior officer corps, and kept the states focused on a common goal. For the first three years until after Valley Forge, the Continental Army was largely supplemented by local state militias. 
Initially, Washington employed the inexperienced officers and untrained troops in Fabian strategies rather than risk frontal assaults against Britain's professional soldiers and officers. Over the course of the entire war, Washington lost more battles than he won, but he never surrendered his army, maintaining a fighting force in the face of British field armies and never abandoning the American cause. By prevailing European standards, the armies in America were relatively small, limited by lack of supplies and logistics; the British in particular were constrained by the difficulty of transporting troops across the Atlantic and dependence on local supplies. Washington never directly commanded more than 17,000 men, while the combined Franco-American army at Yorktown was only about 19,000. At the beginning of 1776, Patriot forces consisted of 20,000 men, with two-thirds in the Continental Army and the other third in the various state militias. About 250,000 men served as regulars or as militia for the Revolutionary cause over eight years of wartime, but there were never more than 90,000 under arms at one time. As a whole, American officers never equaled their opponents in tactics and maneuvers, and they lost most of the pitched battles. The great successes at Boston (1776), Saratoga (1777), and Yorktown (1781) were won by trapping the British far from base with a greater number of troops. Nevertheless, after 1778, Washington's army was transformed into a more disciplined and effective force, mostly by Baron von Steuben's training. Immediately after emerging from Valley Forge, the army proved its ability to match the British troops in action at the Battle of Monmouth, where a black Rhode Island regiment fended off a British bayonet attack and then counter-charged, a first for Washington's army. 
Here Washington came to realize that saving entire towns was not necessary, but that preserving his army and keeping the revolutionary spirit alive was more important in the long run. Washington informed Henry Laurens "that the possession of our towns, while we have an army in the field, will avail them little." Although Congress was responsible for the war effort and provided supplies to the troops, Washington took it upon himself to pressure Congress and the state legislatures to provide the essentials of war; there was never enough. Congress evolved in its committee oversight and established the Board of War, which included members of the military. Because the Board of War was also a committee ensnared in its own internal procedures, Congress also created the post of Secretary of War, appointing Major General Benjamin Lincoln to the position in February 1781. Washington worked closely with Lincoln to coordinate civilian and military authorities and took charge of training and supplying the army. Continental Navy During the first summer of the war, Washington began outfitting schooners and other small seagoing vessels to prey on ships supplying the British in Boston. Congress established the Continental Navy on October 13, 1775, and appointed Esek Hopkins as its first commander; for most of the war, it consisted of a handful of small frigates and sloops, supported by numerous privateers. On November 10, 1775, Congress authorized the creation of the Continental Marines, forerunner of the United States Marine Corps. John Paul Jones became the first American naval hero by capturing HMS Drake on April 24, 1778, the first victory for any American military vessel in British waters. The last was by the frigate USS Alliance, commanded by Captain John Barry, which on March 10, 1783, outgunned HMS Sybil in a 45-minute duel while escorting Spanish gold from Havana to Congress. 
After Yorktown, all US Navy ships were sold or given away; it was the first time in America's history that it had no fighting forces on the high seas. Congress primarily commissioned privateers to reduce costs and to take advantage of the large proportion of colonial sailors in the British Empire. In total, some 1,700 privateers successfully captured 2,283 enemy ships, damaging the British effort and enriching themselves with the proceeds from the sale of cargoes and the ships themselves. About 55,000 sailors served aboard American privateers during the war. France At the beginning of the war, the Americans had no major international allies, as most nation-states watched and waited to see how developments would unfold in British North America. Over time, the Continental Army acquitted itself well against British regulars and their German auxiliaries, a fact that became known to all the European great powers. Battles such as the Battle of Bennington, the Battles of Saratoga, and even defeats such as the Battle of Germantown proved decisive in gaining the attention and support of powerful European nations, including France, Spain, and the Dutch Republic; the latter moved from covertly supplying the Americans with weapons and supplies to overtly supporting them. The decisive American victory at Saratoga convinced France, already a long-time rival of Britain, to offer the Americans the Treaty of Amity and Commerce. The two nations also agreed to a defensive Treaty of Alliance to protect their trade and guarantee American independence from Britain. The military alliance was conditional: it would take effect only if Britain initiated war on France to stop it from trading with the US. Spain and the Dutch Republic were invited by both France and the United States to join the treaty, but neither made a formal reply. 
On June 13, 1778, France declared war on Great Britain and invoked the French military alliance with the US, which ensured additional US privateer support for French possessions in the Caribbean. Washington worked closely with the soldiers and ships France sent to America, primarily through Lafayette on his staff. French assistance made the critical contributions required to defeat General Charles Cornwallis at Yorktown in 1781. British strategy The British military had considerable experience of fighting in North America, most recently during the Seven Years' War, which forced France to give up New France in 1763. However, in previous conflicts they had benefited from local logistics, as well as support from the colonial militia, which was not available in the American Revolutionary War. Reinforcements had to come from Europe, and maintaining large armies over such distances was extremely complex; ships could take three months to cross the Atlantic, and orders from London were often outdated by the time they arrived. Prior to the conflict, the colonies were largely autonomous economic and political entities, with no centralized area of ultimate strategic importance. This meant that, unlike in Europe, where the fall of a capital city often ended wars, the war in America continued even after the loss of major settlements such as Philadelphia, the seat of Congress, New York, and Charleston. British power was reliant on the Royal Navy, whose dominance allowed them to resupply their own expeditionary forces while preventing access to enemy ports. However, the majority of the American population was agrarian rather than urban; supported by the French navy and blockade runners based in the Dutch Caribbean, their economy was able to survive. The geographical size of the colonies and limited manpower meant the British could not simultaneously conduct military operations and occupy territory without local support. 
Debate persists over whether their defeat was inevitable; one British statesman described it as "like trying to conquer a map". While Ferling argues Patriot victory was nothing short of a miracle, Ellis suggests the odds always favored the Americans, especially after Howe squandered the chance of a decisive British success in 1776, an "opportunity that would never come again". One American military history speculates that committing an additional 10,000 fresh troops in 1780 would have placed British victory "within the realm of possibility". British Army The expulsion of France from North America in 1763 led to a drastic reduction in British troop levels in the colonies; in 1775, there were only 8,500 regular soldiers among a civilian population of 2.8 million. The bulk of military resources in the Americas were focused on defending the sugar islands in the Caribbean; Jamaica alone generated more revenue than all thirteen American colonies combined. With the end of the Seven Years' War, the permanent army in Britain was also cut back, which resulted in administrative difficulties when the war began a decade later. Over the course of the war, there were four separate British commanders-in-chief, the first of whom was Thomas Gage; appointed in 1763, his initial focus was establishing British rule in former French areas of Canada. Rightly or wrongly, many in London blamed the revolt on his failure to take firm action earlier, and he was relieved after the heavy losses incurred at Bunker Hill. His replacement was Sir William Howe, a member of the Whig faction in Parliament who opposed the policy of coercion advocated by Lord North; Cornwallis, who later surrendered at Yorktown, was one of many senior officers who initially refused to serve in North America. The 1775 campaign showed the British had overestimated the capabilities of their own troops and underestimated the colonial militia, requiring a reassessment of tactics and strategy. 
This failure allowed the Patriots to take the initiative, and British authorities rapidly lost control over every colony. Howe's responsibility is still debated; despite receiving large numbers of reinforcements, Bunker Hill seems to have permanently affected his self-confidence, and a lack of tactical flexibility meant he often failed to follow up opportunities. Many of his decisions were attributed to supply problems, such as the delay in launching the New York campaign and the failure to pursue Washington's beaten army. Having lost the confidence of his subordinates, he was recalled after Burgoyne surrendered at Saratoga. Following the failure of the Carlisle Commission, British policy changed from treating the Patriots as subjects who needed to be reconciled to enemies who had to be defeated. In 1778, Howe was replaced by Sir Henry Clinton, appointed instead of Carleton, who was considered overly cautious. Regarded as an expert on tactics and strategy, like his predecessors Clinton was handicapped by chronic supply issues. As a result, he was largely inactive in 1779 and much of 1780; in October 1780, he warned Germain of "fatal consequences" if matters did not improve. In addition, Clinton's strategy was compromised by conflict with political superiors in London and his colleagues in North America, especially Admiral Mariot Arbuthnot, replaced in early 1781 by Rodney. He was neither notified nor consulted when Germain approved Cornwallis' invasion of the south in 1781, and he delayed sending reinforcements in the belief that the bulk of Washington's army was still outside New York City. After the surrender at Yorktown, Clinton was relieved by Carleton, whose major task was to oversee the evacuation of Loyalists and British troops from Savannah, Charleston, and New York City. German Troops During the 18th century, all states commonly hired foreign soldiers, Britain among them; during the Seven Years' War, foreign troops comprised 10% of the British army and their use caused little debate. 
When it became clear additional troops were needed to suppress the revolt in America, it was decided to employ mercenaries. There were several reasons for this, including public sympathy for the Patriot cause, an historical reluctance to expand the British army, and the time needed to recruit and train new regiments. An alternative source was readily available in the Holy Roman Empire, where many smaller states had a long tradition of renting their armies to the highest bidder. The most important was Hesse-Cassel, known as "the Mercenary State". The first supply agreements were signed by the North administration in late 1775; over the next decade, more than 40,000 Germans fought in North America, Gibraltar, South Africa and India, of whom 30,000 served in the American War. Often generically referred to as "Hessians", they included men from many other states, including Hanover and Brunswick. Sir Henry Clinton recommended recruiting Russian troops, whom he rated very highly, having seen them in action against the Ottomans; however, negotiations with Catherine the Great made little progress. Unlike in previous wars, their use led to intense political debate in Britain, France, and even Germany, where Frederick the Great refused to provide passage through his territories for troops hired for the American war. In March 1776, the agreements were challenged in Parliament by Whigs who objected to "coercion" in general, and to the use of foreign soldiers to subdue "British subjects". The debates were covered in detail by American newspapers, which reprinted key speeches, and in May 1776 they received copies of the treaties themselves, provided by British sympathizers and smuggled into the colonies.
Congress drafted a Petition to the King and organized a boycott of British goods. Despite attempts to achieve a peaceful solution, fighting began with the Battle of Lexington on April 19, 1775 and in June Congress authorized George Washington to create a Continental Army. Although the "coercion policy" advocated by the North ministry was opposed by a faction within Parliament, both sides increasingly viewed conflict as inevitable. The Olive Branch Petition sent by Congress to George III in July 1775 was rejected and in August Parliament declared the colonies to be in a state of rebellion. Following the loss of Boston in March 1776, Sir William Howe, the new British commander-in-chief, launched the New York and New Jersey campaign. He captured New York City in November, before Washington won small but significant victories at Trenton and Princeton, which restored Patriot confidence. In summer 1777, Howe succeeded in taking Philadelphia, but in October a separate force under John Burgoyne was forced to surrender at Saratoga. This victory was crucial in convincing powers like France and Spain an independent United States was a viable entity. France provided the US informal economic and military support from the beginning of the rebellion, and after Saratoga the two countries signed a commercial agreement and a Treaty of Alliance in February 1778. In return for a guarantee of independence, Congress joined France in its global war with Britain and agreed to defend the French West Indies. Spain also allied with France against Britain in the Treaty of Aranjuez (1779), though it did not formally ally with the Americans. Nevertheless, access to ports in Spanish Louisiana allowed the Patriots to import arms and supplies, while the Spanish Gulf Coast campaign deprived the Royal Navy of key bases in the south. This undermined the 1778 strategy devised by Howe's replacement, Sir Henry Clinton, which took the war into the Southern United States. 
Despite some initial success, by September 1781 Cornwallis was besieged by a Franco-American force at Yorktown. After an attempt to resupply the garrison failed, Cornwallis surrendered in October, and although the British wars with France and Spain continued for another two years, this ended fighting in North America. In April 1782, the North ministry was replaced by a new British government which accepted American independence and began negotiating the Treaty of Paris, ratified on September 3, 1783. Prelude to revolution The French and Indian War, part of the wider global conflict known as the Seven Years' War, ended with the 1763 Peace of Paris, which expelled France from its possessions in New France. Acquisition of territories in Atlantic Canada and West Florida, inhabited largely by French- or Spanish-speaking Catholics, led the British authorities to consolidate their hold by populating them with English-speaking settlers. Preventing conflict between settlers and Native American tribes west of the Appalachian Mountains would also avoid the cost of an expensive military occupation. The Proclamation Line of 1763 was designed to achieve these aims by refocusing colonial expansion north into Nova Scotia and south into Florida, with the Mississippi River as the dividing line between British and Spanish possessions in the Americas. Settlement beyond the 1763 limits was tightly restricted, while claims by individual colonies west of this line were rescinded, most significantly those of Virginia and Massachusetts, which argued their boundaries extended from the Atlantic to the Pacific. Ultimately, the vast exchange of territory destabilized existing alliances and trade networks between settlers and Native Americans in the west, while it proved impossible to prevent encroachment beyond the Proclamation Line. 
With the exception of Virginia and others "deprived" of their rights in the western lands, the colonial legislatures generally agreed on the principle of boundaries but disagreed on where to set them, while many settlers resented the restrictions. Since enforcement required permanent garrisons along the frontier, it led to increasingly bitter disputes over who should pay for them. Taxation and legislation Although directly administered by the Crown, acting through a local Governor, the colonies were largely governed by native-born property owners. While external affairs were managed by London, colonial militia were funded locally; with the ending of the French threat in 1763, the legislatures expected less taxation, not more. At the same time, the huge debt incurred by the Seven Years' War and demands from British taxpayers for cuts in government expenditure meant Parliament expected the colonies to fund their own defense. The Grenville ministry of 1763 to 1765 instructed the Royal Navy to intercept smuggled goods and enforce customs duties levied in American ports. The most important was the 1733 Molasses Act; routinely ignored prior to 1763, it had a significant economic impact, since 85% of New England rum exports were manufactured from imported molasses. These measures were followed by the Sugar Act and Stamp Act, which imposed additional taxes on the colonies to pay for defending the western frontier. In July 1765, the Whigs formed the First Rockingham ministry, which repealed the Stamp Act and reduced the tax on foreign molasses to help the New England economy, but re-asserted Parliamentary authority in the Declaratory Act. However, this did little to end the discontent; in 1768, a riot started in Boston when the authorities seized the sloop Liberty on suspicion of smuggling. Tensions escalated further in March 1770 when British troops fired on rock-throwing civilians, killing five in what became known as the Boston Massacre. 
The Massacre coincided with the partial repeal of the Townshend Acts by the Tory-based North Ministry, which came to power in January 1770 and remained in office until 1781. North insisted on retaining the duty on tea to enshrine Parliament's right to tax the colonies; the amount was minor, but this ignored the fact that it was the principle itself which Americans found objectionable. Tensions escalated following the destruction of a customs vessel in the June 1772 Gaspee Affair, then came to a head in 1773. A banking crisis led to the near-collapse of the East India Company, which dominated the British economy; to support it, Parliament passed the Tea Act, giving it a trading monopoly in the Thirteen Colonies. Since most American tea was smuggled by the Dutch, the Act was opposed by those who managed the illegal trade, while being seen as yet another attempt to impose the principle of taxation by Parliament. In December 1773, a group called the Sons of Liberty, disguised as Mohawk natives, dumped 342 crates of tea into Boston Harbor, an event later known as the Boston Tea Party. Parliament responded by passing the so-called Intolerable Acts, aimed specifically at Massachusetts, although many colonists and members of the Whig opposition considered them a threat to liberty in general. This led to increased sympathy for the Patriot cause locally, as well as in Parliament and the London press. Break with the British Crown Over the course of the 18th century, the elected lower houses in the colonial legislatures gradually wrested power from their Royal Governors. Dominated by smaller landowners and merchants, these Assemblies now established ad hoc provincial legislatures, variously called Congresses, Conventions, and Conferences, effectively replacing Royal control. With the exception of Georgia, twelve colonies sent representatives to the First Continental Congress to agree on a unified response to the crisis. 
Many of the delegates feared that an all-out boycott would result in war and sent a Petition to the King calling for the repeal of the Intolerable Acts. However, after some debate, on September 17, 1774, Congress endorsed the Massachusetts Suffolk Resolves and on October 20 passed the Continental Association; based on a draft prepared by the First Virginia Convention in August, this instituted economic sanctions against Britain. While denying its authority over internal American affairs, a faction led by James Duane and future Loyalist Joseph Galloway insisted Congress recognize Parliament's right to regulate colonial trade. Expecting concessions by the North administration, Congress authorized the extralegal committees and conventions of the colonial legislatures to enforce the boycott; this succeeded in reducing British imports by 97% from 1774 to 1775. However, on February 9 Parliament declared Massachusetts to be in a state of rebellion and instituted a blockade of the colony. In July, the Restraining Acts limited colonial trade with the British West Indies and Britain and barred New England ships from the Newfoundland cod fisheries. The increase in tension led to a scramble for control of militia stores, which each Assembly was legally obliged to maintain for defense. On April 19, a British attempt to secure the Concord arsenal culminated in the Battles of Lexington and Concord which began the war. Political reactions After the Patriot victory at Concord, moderates in Congress led by John Dickinson drafted the Olive Branch Petition, offering to accept royal authority in return for George III mediating in the dispute. However, since it was immediately followed by the Declaration of the Causes and Necessity of Taking Up Arms, Colonial Secretary Dartmouth viewed the offer as insincere; he refused to present the petition to the king, which was therefore rejected in early September. 
Although constitutionally correct, since George could not oppose his own government, the refusal disappointed those Americans who hoped he would mediate in the dispute, while the hostility of his language annoyed even Loyalist members of Congress. Combined with the Proclamation of Rebellion, issued on August 23 in response to the Battle of Bunker Hill, it ended hopes of a peaceful settlement. Backed by the Whigs, Parliament initially rejected the imposition of coercive measures by 170 votes, fearing an aggressive policy would simply drive the Americans towards independence. However, by the end of 1774 the collapse of British authority meant both North and George III were convinced war was inevitable. After Boston, Gage halted operations and awaited reinforcements; the Irish Parliament approved the recruitment of new regiments, while allowing Catholics to enlist for the first time. Britain also signed a series of treaties with German states to supply additional troops. Within a year it had an army of over 32,000 men in America, the largest ever sent outside Europe at the time. The employment of German mercenaries and Catholics against people viewed as British citizens was opposed by many in Parliament, as well as by the colonial assemblies; combined with the lack of activity by Gage, it allowed the Patriots to take control of the legislatures. Support for independence was boosted by Thomas Paine's pamphlet Common Sense, which argued for American self-government and was widely reprinted. To draft the Declaration of Independence, Congress appointed the Committee of Five, consisting of Thomas Jefferson, John Adams, Benjamin Franklin, Roger Sherman and Robert Livingston. Identifying the inhabitants of the Thirteen Colonies as "one people", it simultaneously dissolved political links with Britain, while including a long list of alleged violations of "English rights" committed by George III.
On July 2, Congress voted for independence and published the declaration on July 4, which Washington read to his troops in New York City on July 9. At this point, the Revolution ceased to be an internal dispute over trade and tax policies and became a civil war, since each state represented in Congress was engaged in a struggle with Britain, but also split between Patriots and Loyalists. Patriots generally supported independence from Britain and a new national union in Congress, while Loyalists remained faithful to British rule. Estimates of numbers vary, one suggestion being that the population as a whole was split evenly between committed Patriots, committed Loyalists and those who were indifferent. Others calculate the split as 40% Patriot, 40% neutral, 20% Loyalist, but with considerable regional variations. At the onset of the war, Congress realized defeating Britain required foreign alliances and intelligence-gathering. The Committee of Secret Correspondence was formed for "the sole purpose of corresponding with our friends in Great Britain and other parts of the world". From 1775 to 1776, it shared information and built alliances through secret correspondence, as well as employing secret agents in Europe to gather intelligence, conduct undercover operations, analyze foreign publications and initiate Patriot propaganda campaigns. Paine served as secretary, while Silas Deane was instrumental in securing French aid in Paris.

War breaks out

As the American Revolutionary War unfolded in North America, there were two principal campaign theaters within the thirteen states, and a smaller but strategically important one west of the Appalachian Mountains to the Mississippi River and north to the Great Lakes. Full-scale military campaigning began in the states north of Maryland, where fighting was most frequent and severe between 1775 and 1778.
Patriots achieved several strategic victories in the South, the British lost their first army at Saratoga, and the French entered the war as an American ally. In the expanded Northern theater, after wintering at Valley Forge, General Washington observed British operations coming out of New York at the 1778 Battle of Monmouth. He then closed off British initiatives by a series of raids that contained the British army in New York City. The same year, Virginia Colonel George Rogers Clark, supplied by the Spanish and joined by Francophone settlers and their Indian allies, conquered Western Quebec, later the US Northwest Territory. Starting in 1779, the British initiated a southern strategy to begin at Savannah, gather Loyalist support, and reoccupy Patriot-controlled territory north to Chesapeake Bay. Initially the British were successful, and the Americans lost an entire army at the siege of Charleston, which caused a severe setback for Patriots in the region. But then British maneuvering north led to a combined American and French force cornering a second British army at the Battle of Yorktown, and their surrender effectively ended the Revolutionary War.

Early engagements

On April 14, 1775, Thomas Gage, Commander-in-Chief, North America since 1763 and also Governor of Massachusetts from 1774, received orders to take action against the Patriots. He decided to destroy militia ordnance stored at Concord, Massachusetts, and capture John Hancock and Samuel Adams, who were considered the principal instigators of the rebellion. The operation was to begin around midnight on April 19, in the hope of completing it before the Patriots could respond. However, Paul Revere learned of the plan and notified Captain John Parker, commander of the Lexington militia, who prepared to resist the attempted seizure. British troops clashed with colonial forces at Lexington and Concord, suffering around 300 casualties before withdrawing to Boston, which was then besieged by the militia.
In May, 4,500 British reinforcements arrived under Generals William Howe, John Burgoyne, and Sir Henry Clinton. On June 17, they seized the Charlestown Peninsula at the Battle of Bunker Hill, a frontal assault in which they suffered over 1,000 casualties. Dismayed at the costly attack which had gained them little, Gage appealed to London for a larger army to suppress the revolt, but instead was replaced as commander by Howe. On June 14, 1775, Congress took control of Patriot forces outside Boston, and Congressional leader John Adams nominated George Washington as commander-in-chief of the new Continental Army. Washington had previously commanded Virginia militia regiments in the French and Indian War, and on June 16, John Hancock officially proclaimed him "General and Commander in Chief of the army of the United Colonies." Washington assumed command on July 3, preferring to fortify Dorchester Heights outside Boston rather than assault the city. In early March 1776, Colonel Henry Knox arrived with heavy artillery acquired in the Capture of Fort Ticonderoga. Under cover of darkness, on March 5 Washington placed these guns on Dorchester Heights, from where they could fire on the town and on British ships in Boston Harbor. Fearing another Bunker Hill, Howe evacuated the city on March 17 without further loss and sailed to Halifax, Nova Scotia, while Washington moved south to New York City. Beginning in August 1775, American privateers raided towns in Nova Scotia, including Saint John, Charlottetown and Yarmouth. In 1776, John Paul Jones and Jonathan Eddy attacked Canso and Fort Cumberland respectively. British officials in Quebec began negotiating with the Iroquois for their support, while the Americans urged them to maintain neutrality. Aware of Native American leanings toward the British and fearing an Anglo-Indian attack from Canada, Congress authorized an invasion of Quebec in April 1775.
A second American invasion was defeated at the Battle of Quebec on December 31, and after a loose siege the Americans withdrew on May 6, 1776. A failed counter-attack at Trois-Rivières on June 8 ended American operations in Quebec. However, British pursuit was blocked by American ships on Lake Champlain until they were cleared on October 11 at the Battle of Valcour Island. The American troops were forced to withdraw to Fort Ticonderoga, ending the campaign. In November 1776, a Massachusetts-sponsored uprising in Nova Scotia during the Battle of Fort Cumberland was dispersed. The cumulative failures cost the Patriots support in local public opinion, and aggressive anti-Loyalist policies in the New England colonies alienated the Canadians. The Patriots made no further attempts to invade north. In Virginia, an attempt by Governor Lord Dunmore to seize militia stores on April 20 1775 led to an increase in tension, although conflict was avoided for the time being. This changed after the publication of Dunmore's Proclamation on November 7, 1775, promising freedom to any slaves who fled their Patriot masters and agreed to fight for the Crown. British forces were defeated at Great Bridge on December 9 and took refuge on British ships anchored near the port of Norfolk. When the Third Virginia Convention refused to disband its militia or accept martial law, Dunmore ordered the Burning of Norfolk on January 1, 1776. The siege of Savage's Old Fields began on November 19 in South Carolina between Loyalist and Patriot militias, and the Loyalists were subsequently driven out of the colony in the Snow Campaign. Loyalists were recruited in North Carolina to reassert British rule in the South, but they were decisively defeated in the Battle of Moore's Creek Bridge. A British expedition sent to reconquer South Carolina launched an attack on Charleston in the Battle of Sullivan's Island on June 28, 1776, but it failed and left the South under Patriot control until 1780. 
A shortage of gunpowder led Congress to authorize a naval expedition against The Bahamas to secure ordnance stored there. On March 3, 1776, an American squadron landed at Nassau and encountered minimal resistance, confiscating what supplies they could before sailing for home on March 17. A month later, after a brief skirmish with HMS Glasgow, they returned to New London, Connecticut, the base for American naval operations during the Revolution.

British New York counter-offensive

After regrouping at Halifax, Nova Scotia, William Howe was determined to take the fight to the Americans. He sailed for New York in June 1776 and began landing troops on Staten Island near the entrance to New York Harbor on July 2. The Americans rejected Howe's informal attempt to negotiate peace on July 30; Washington knew that an attack on the city was imminent and realized that he needed advance information to deal with disciplined British regular troops. On August 12, 1776, Patriot Thomas Knowlton was given orders to form an elite group for reconnaissance and secret missions. Knowlton's Rangers, which included Nathan Hale, became the Army's first intelligence unit. When Washington was driven off Long Island, he soon realized that he would need more than military might and amateur spies to defeat the British. He was committed to professionalizing military intelligence, and with the aid of Benjamin Tallmadge he launched the six-man Culper spy ring. The efforts of Washington and the Culper Spy Ring substantially increased the effective allocation and deployment of Continental regiments in the field. Over the course of the war, Washington spent more than 10 percent of his total military funds on intelligence operations. Washington split his army into positions on Manhattan Island and across the East River in western Long Island. On August 27 at the Battle of Long Island, Howe outflanked Washington and forced him back to Brooklyn Heights, but did not attempt to encircle Washington's forces.
Through the night of August 28, General Henry Knox bombarded the British. Knowing they were up against overwhelming odds, Washington ordered the assembly of a war council on August 29; all agreed to retreat to Manhattan. Washington quickly had his troops assembled and ferried them across the East River to Manhattan on flat-bottomed freight boats without any losses in men or ordnance, leaving General Thomas Mifflin's regiments as a rearguard. General Howe officially met with a delegation from Congress at the September Staten Island Peace Conference, but it failed to conclude peace as the British delegates only had the authority to offer pardons and could not recognize independence. On September 15, Howe seized control of New York City when the British landed at Kip's Bay and unsuccessfully engaged the Americans at the Battle of Harlem Heights the following day. On October 18 Howe failed to encircle the Americans at the Battle of Pell's Point, and the Americans withdrew. Howe declined to close with Washington's army on October 28 at the Battle of White Plains, and instead attacked a hill that was of no strategic value. Washington's retreat isolated his remaining forces and the British captured Fort Washington on November 16. The British victory there amounted to Washington's most disastrous defeat with the loss of 3,000 prisoners. The remaining American regiments on Long Island fell back four days later. General Sir Henry Clinton wanted to pursue Washington's disorganized army, but he was first required to commit 6,000 troops to capture Newport, Rhode Island to secure the Loyalist port. General Charles Cornwallis pursued Washington, but Howe ordered him to halt, leaving Washington unmolested. The outlook was bleak for the American cause: the reduced army had dwindled to fewer than 5,000 men and would be reduced further when enlistments expired at the end of the year. Popular support wavered, morale declined, and Congress abandoned Philadelphia and moved to Baltimore. 
Loyalist activity surged in the wake of the American defeat, especially in New York state. In London, news of the victorious Long Island campaign was well received, with festivities held in the capital. Public support reached a peak, and King George III awarded the Order of the Bath to Howe. Strategic deficiencies among Patriot forces were evident: Washington divided a numerically weaker army in the face of a stronger one, his inexperienced staff misread the military situation, and American troops fled in the face of enemy fire. The successes led to predictions that the British could win within a year. In the meantime, the British established winter quarters in the New York City area and anticipated renewed campaigning the following spring. Two weeks after Congress withdrew to safer Maryland, Washington crossed the ice-choked Delaware River about 30 miles upriver from Philadelphia on the night of December 25–26, 1776. His approach over frozen trails surprised Hessian Colonel Johann Rall. The Continentals overwhelmed the Hessian garrison at Trenton, New Jersey, and took 900 prisoners. The celebrated victory rescued the American army's flagging morale, gave new hope to the Patriot cause, and dispelled much of the fear of professional Hessian "mercenaries". Cornwallis marched to retake Trenton but was repulsed at the Battle of the Assunpink Creek; on the night of January 2, Washington outmaneuvered Cornwallis and defeated his rearguard in the Battle of Princeton the following day. The two victories helped to convince the French that the Americans were worthy military allies. Washington entered winter quarters from January to May 1777 at Morristown, New Jersey, where he received the Congressional direction to inoculate all Continental troops against smallpox. Although a Forage War between the armies continued until March, Howe did not attempt to attack the Americans over the winter of 1776–1777.
British northern strategy fails The 1776 campaign demonstrated regaining New England would be a prolonged affair, which led to a change in British strategy. This involved isolating the north from the rest of the country by taking control of the Hudson River, allowing them to focus on the south where Loyalist support was believed to be substantial. In December 1776, Howe wrote to the Colonial Secretary Lord Germain, proposing a limited offensive against Philadelphia, while a second force moved down the Hudson from Canada. Germain received this on February 23, 1777, followed a few days later by a memorandum from Burgoyne, then in London on leave. Burgoyne supplied several alternatives, all of which gave him responsibility for the offensive, with Howe remaining on the defensive. The option selected required him to lead the main force south from Montreal down the Hudson Valley, while a detachment under Barry St. Leger moved east from Lake Ontario. The two would meet at Albany, leaving Howe to decide whether to join them. Reasonable in principle, this did not account for the logistical difficulties involved and Burgoyne erroneously assumed Howe would remain on the defensive; Germain's failure to make this clear meant he opted to attack Philadelphia instead. Burgoyne set out on June 14, 1777, with a mixed force of British regulars, German auxiliaries and Canadian militia, and captured Fort Ticonderoga on July 5. As General Horatio Gates retreated, his troops blocked roads, destroyed bridges, dammed streams, and stripped the area of food. This slowed Burgoyne's progress and forced him to send out large foraging expeditions; on one of these, more than 700 British troops were captured at the Battle of Bennington on August 16. St Leger moved east and besieged Fort Stanwix; despite defeating an American relief force at the Battle of Oriskany on August 6, he was abandoned by his Indian allies and withdrew to Quebec on August 22. 
Now isolated and outnumbered by Gates, Burgoyne continued on towards Albany rather than retreating to Fort Ticonderoga, reaching Saratoga on September 13. He asked Clinton for support while constructing defenses around the town. Morale among his troops rapidly declined, and an unsuccessful attempt to break past Gates at the Battle of Freeman's Farm on September 19 resulted in 600 British casualties. When Clinton advised he could not reach them, Burgoyne's subordinates advised retreat; a reconnaissance in force on October 7 was repulsed by Gates at the Battle of Bemis Heights, forcing them back into Saratoga with heavy losses. By October 11, all hope of escape had vanished; persistent rain reduced the camp to a "squalid hell" of mud and starving cattle, supplies were dangerously low, and many of the wounded were in agony. Burgoyne capitulated on October 17; some 6,222 soldiers, including German forces commanded by General Riedesel, surrendered their arms before being taken to Boston, where they were to be transported to England. After securing additional supplies, Howe made another attempt on Philadelphia by landing his troops in Chesapeake Bay on August 24. He now compounded his failure to support Burgoyne by missing repeated opportunities to destroy his opponent, defeating Washington at the Battle of Brandywine on September 11, then allowing him to withdraw in good order. After dispersing an American detachment at Paoli on September 20, Cornwallis occupied Philadelphia on September 26, with the main force of 9,000 under Howe based just to the north at Germantown. Washington attacked them on October 4, but was repulsed. To prevent Howe's forces in Philadelphia being resupplied by sea, the Patriots erected Fort Mifflin and nearby Fort Mercer on the east and west banks of the Delaware respectively, and placed obstacles in the river south of the city.
This was supported by a small flotilla of Continental Navy ships on the Delaware, supplemented by the Pennsylvania State Navy, commanded by John Hazelwood. An attempt by the Royal Navy to take the forts in the October 20 to 22 Battle of Red Bank failed; a second attack captured Fort Mifflin on November 16, while Fort Mercer was abandoned two days later when Cornwallis breached the walls. His supply lines secured, Howe tried to tempt Washington into giving battle, but after inconclusive skirmishing at the Battle of White Marsh from December 5 to 8, he withdrew to Philadelphia for the winter. On December 19, the Americans followed suit and entered winter quarters at Valley Forge; while Washington's domestic opponents contrasted his lack of battlefield success with Gates' victory at Saratoga, foreign observers such as Frederick the Great were equally impressed with Germantown, which demonstrated resilience and determination. Over the winter, poor conditions, supply problems and low morale resulted in 2,000 deaths, with another 3,000 men unfit for duty due to lack of shoes. However, Baron Friedrich Wilhelm von Steuben took the opportunity to introduce Prussian Army drill and infantry tactics to the entire Continental Army; he did this by training "model companies" in each regiment, who then instructed their home units. Despite Valley Forge being only twenty miles away, Howe made no effort to attack their camp, an omission some critics argue could have ended the war.

Foreign intervention

Like his predecessors, French foreign minister Vergennes considered the 1763 Peace a national humiliation and viewed the war as an opportunity to weaken Britain. He initially avoided open conflict, but allowed American ships to take on cargoes in French ports, a technical violation of neutrality. Although public opinion favored the American cause, Finance Minister Turgot argued they did not need French help to gain independence, and that war was too expensive.
Instead, Vergennes persuaded Louis XVI to secretly fund a government front company to purchase munitions for the Patriots, carried in neutral Dutch ships and imported through Sint Eustatius in the Caribbean. Many Americans opposed a French alliance, fearing to "exchange one tyranny for another", but this changed after a series of military setbacks in early 1776. As France had nothing to gain from the colonies reconciling with Britain, Congress had three choices; making peace on British terms, continuing the struggle on their own, or proclaiming independence, guaranteed by France. Although the Declaration of Independence in July 1776 had wide public support, Adams was among those reluctant to pay the price of an alliance with France, and over 20% of Congressmen voted against it. Congress agreed to the treaty with reluctance and as the war moved in their favor increasingly lost interest in it. Silas Deane was sent to Paris to begin negotiations with Vergennes, whose key objectives were replacing Britain as the United States' primary commercial and military partner while securing the French West Indies from American expansion. These islands were extremely valuable; in 1772, the value of sugar and coffee produced by Saint-Domingue on its own exceeded that of all American exports combined. Talks progressed slowly until October 1777, when British defeat at Saratoga and their apparent willingness to negotiate peace convinced Vergennes only a permanent alliance could prevent the "disaster" of Anglo-American rapprochement. Assurances of formal French support allowed Congress to reject the Carlisle Peace Commission and insist on nothing short of complete independence. On February 6, 1778, France and the United States signed the Treaty of Amity and Commerce regulating trade between the two countries, followed by a defensive military alliance against Britain, the Treaty of Alliance. 
In return for French guarantees of American independence, Congress undertook to defend their interests in the West Indies, while both sides agreed not to make a separate peace; conflict over these provisions would lead to the
current and voltage, the ampere can alternatively be expressed in terms of the other units using the relationship I = P/V, and thus 1 A = 1 W/V. Current can be measured by a multimeter, a device that can measure electrical voltage, current, and resistance.

Former definition in the SI

Until 2019, the SI defined the ampere as follows: The ampere is that constant current which, if maintained in two straight parallel conductors of infinite length, of negligible circular cross-section, and placed one metre apart in vacuum, would produce between these conductors a force equal to 2 × 10⁻⁷ newtons per metre of length. Ampère's force law states that there is an attractive or repulsive force between two parallel wires carrying an electric current. This force is used in the formal definition of the ampere. The SI unit of charge, the coulomb, was then defined as "the quantity of electricity carried in 1 second by a current of 1 ampere". Conversely, a current of one ampere is one coulomb of charge going past a given point per second: 1 A = 1 C/s. In general, charge was determined by steady current flowing for a time as Q = It.

Realisation

The standard ampere is most accurately realised using a Kibble balance, but is in practice maintained via Ohm's law from the units of electromotive force and resistance, the volt and the ohm, since the latter two can be tied to physical phenomena that are relatively easy to reproduce, the Josephson effect and the quantum Hall effect, respectively. Techniques to establish the realisation of an ampere have a relative uncertainty of approximately a few parts in 10⁷, and involve realisations of the watt, the ohm and the volt.

See also

Ammeter
Ampacity (current-carrying capacity)
System of Units defines the ampere in terms of other base units by measuring the electromagnetic force between electrical conductors carrying electric current. The earlier CGS system had two definitions of current, one essentially the same as the SI's and the other using electric charge as the base unit, with the unit of charge defined by measuring the force between two charged metal plates. The ampere was then defined as one coulomb of charge per second. In the SI, the unit of charge, the coulomb, is defined as the charge carried by one ampere during one second. New definitions, in terms of invariant constants of nature, specifically the elementary charge, took effect on 20 May 2019.

Definition

The ampere is defined by taking the fixed numerical value of the elementary charge e to be 1.602 176 634 × 10⁻¹⁹ when expressed in the unit C, which is equal to A⋅s, where the second is defined in terms of ∆νCs, the unperturbed ground state hyperfine transition frequency of the caesium-133 atom. The SI unit of charge, the coulomb, "is the quantity of electricity carried in 1 second by a current of 1 ampere". Conversely, a current of one ampere is one coulomb of charge going past a given point per second: 1 A = 1 C/s. In general, charge is determined by steady current flowing for a time as Q = It. Constant, instantaneous and average current are expressed in amperes (as in "the charging current is 1.2 A") and the charge accumulated (or passed through a circuit) over a period of time is expressed in coulombs (as in "the battery charge is ").
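The charge–current relation above (Q = It, with 1 A = 1 C/s) can be sketched in a few lines of code; this is purely illustrative, and the function name is ours, not part of the SI text:

```python
def charge_coulombs(current_amperes: float, seconds: float) -> float:
    """Q = I * t: charge in coulombs transported by a steady current."""
    return current_amperes * seconds

# A steady charging current of 1.2 A flowing for one minute
# transports 1.2 A * 60 s = 72 C of charge.
print(charge_coulombs(1.2, 60))  # 72.0
```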
Methods have been developed for the analysis of algorithms to obtain such quantitative answers (estimates); for example, an algorithm which adds up the elements of a list of n numbers would have a time requirement of O(n), using big O notation. At all times the algorithm only needs to remember two values: the sum of all the elements so far, and its current position in the input list. Therefore, it is said to have a space requirement of O(1), if the space required to store the input numbers is not counted, or O(n) if it is counted. Different algorithms may complete the same task with a different set of instructions in less or more time, space, or 'effort' than others. For example, a binary search algorithm (with cost O(log n)) outperforms a sequential search (cost O(n)) when used for table lookups on sorted lists or arrays.

Formal versus empirical

The analysis and study of algorithms is a discipline of computer science, and is often practiced abstractly without the use of a specific programming language or implementation. In this sense, algorithm analysis resembles other mathematical disciplines in that it focuses on the underlying properties of the algorithm and not on the specifics of any particular implementation. Usually pseudocode is used for analysis as it is the simplest and most general representation. However, ultimately, most algorithms are usually implemented on particular hardware/software platforms and their algorithmic efficiency is eventually put to the test using real code. For the solution of a "one-off" problem, the efficiency of a particular algorithm may not have significant consequences (unless n is extremely large), but for algorithms designed for fast interactive, commercial or long-life scientific usage it may be critical. Scaling from small n to large n frequently exposes inefficient algorithms that are otherwise benign. Empirical testing is useful because it may uncover unexpected interactions that affect performance.
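The O(log n) versus O(n) lookup comparison above can be sketched as follows (a minimal illustration; names and test data are ours):

```python
def sequential_search(items, target):
    """O(n): examine each element in turn until the target is found."""
    for index, value in enumerate(items):
        if value == target:
            return index
    return -1

def binary_search(sorted_items, target):
    """O(log n): halve the search interval each step; input must be sorted."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

data = list(range(0, 1000, 2))  # a sorted table of 500 entries
# Both return the same position; binary search inspects far fewer elements.
assert sequential_search(data, 500) == binary_search(data, 500) == 250
```

On this 500-entry table, binary search needs at most about nine comparisons where the sequential scan may need hundreds, which is the gap the asymptotic costs predict.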
Benchmarks may be used to compare before/after potential improvements to an algorithm after program optimization. Empirical tests cannot replace formal analysis, though, and are not trivial to perform in a fair manner. Execution efficiency To illustrate the potential improvements possible even in well-established algorithms, a recent significant innovation, relating to FFT algorithms (used heavily in the field of image processing), can decrease processing time up to 1,000 times for applications like medical imaging. In general, speed improvements depend on special properties of the problem, which are very common in practical applications. Speedups of this magnitude enable computing devices that make extensive use of image processing (like digital cameras and medical equipment) to consume less power. Classification There are various ways to classify algorithms, each with its own merits. By implementation One way to classify algorithms is by implementation means. Recursion A recursive algorithm is one that invokes (makes reference to) itself repeatedly until a certain condition (also known as termination condition) matches, which is a method common to functional programming. Iterative algorithms use repetitive constructs like loops and sometimes additional data structures like stacks to solve the given problems. Some problems are naturally suited for one implementation or the other. For example, towers of Hanoi is well understood using recursive implementation. Every recursive version has an equivalent (but possibly more or less complex) iterative version, and vice versa. Logical An algorithm may be viewed as controlled logical deduction. This notion may be expressed as: Algorithm = logic + control. The logic component expresses the axioms that may be used in the computation and the control component determines the way in which deduction is applied to the axioms. This is the basis for the logic programming paradigm. 
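The claim in the Recursion paragraph above, that every recursive algorithm has an equivalent iterative version using an explicit stack, can be illustrated with Towers of Hanoi; this is a minimal sketch with names of our choosing:

```python
def hanoi(n, src="A", aux="B", dst="C", moves=None):
    """Recursive Towers of Hanoi: move n disks from src to dst via aux."""
    if moves is None:
        moves = []
    if n > 0:
        hanoi(n - 1, src, dst, aux, moves)   # move n-1 disks out of the way
        moves.append((src, dst))             # move the largest disk
        hanoi(n - 1, aux, src, dst, moves)   # stack the n-1 disks on top
    return moves

def hanoi_iter(n, src="A", aux="B", dst="C"):
    """Same algorithm, made iterative with an explicit stack of pending work."""
    moves, stack = [], [("solve", n, src, aux, dst)]
    while stack:
        op = stack.pop()
        if op[0] == "solve":
            _, k, s, a, d = op
            if k == 1:
                moves.append((s, d))
            elif k > 1:
                # pushed in reverse so they execute in the recursion's order
                stack.append(("solve", k - 1, a, s, d))
                stack.append(("move", s, d))
                stack.append(("solve", k - 1, s, d, a))
        else:
            moves.append((op[1], op[2]))
    return moves

assert len(hanoi(3)) == 2**3 - 1        # 7 moves, the provable minimum
assert hanoi(4) == hanoi_iter(4)        # identical move sequences
```

The iterative version trades the language's call stack for a list of pending operations, which is exactly the transformation the text describes.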
In pure logic programming languages, the control component is fixed and algorithms are specified by supplying only the logic component. The appeal of this approach is the elegant semantics: a change in the axioms produces a well-defined change in the algorithm. Serial, parallel or distributed Algorithms are usually discussed with the assumption that computers execute one instruction of an algorithm at a time. Those computers are sometimes called serial computers. An algorithm designed for such an environment is called a serial algorithm, as opposed to parallel algorithms or distributed algorithms. Parallel algorithms take advantage of computer architectures where several processors can work on a problem at the same time, whereas distributed algorithms utilize multiple machines connected with a computer network. Parallel or distributed algorithms divide the problem into more symmetrical or asymmetrical subproblems and collect the results back together. The resource consumption in such algorithms is not only processor cycles on each processor but also the communication overhead between the processors. Some sorting algorithms can be parallelized efficiently, but their communication overhead is expensive. Iterative algorithms are generally parallelizable. Some problems have no parallel algorithms and are called inherently serial problems. Deterministic or non-deterministic Deterministic algorithms solve the problem with exact decision at every step of the algorithm whereas non-deterministic algorithms solve problems via guessing although typical guesses are made more accurate through the use of heuristics. Exact or approximate While many algorithms reach an exact solution, approximation algorithms seek an approximation that is closer to the true solution. The approximation can be reached by either using a deterministic or a random strategy. Such algorithms have practical value for many hard problems. 
An example of a problem commonly tackled with approximation algorithms is the Knapsack problem: given a set of items, each with some weight and some value, the goal is to pack the knapsack to obtain the maximum total value while keeping the total weight carried no more than some fixed number X. So, the solution must consider weights of items as well as their value. Quantum algorithm Quantum algorithms run on a realistic model of quantum computation. The term is usually used for those algorithms which seem inherently quantum, or use some essential feature of quantum computing such as quantum superposition or quantum entanglement. By design paradigm Another way of classifying algorithms is by their design methodology or paradigm. There are a number of paradigms, each different from the other. Furthermore, each of these categories includes many different types of algorithms. Some common paradigms are: Brute-force or exhaustive search This is the naive method of trying every possible solution to see which is best. Divide and conquer A divide and conquer algorithm repeatedly reduces an instance of a problem to one or more smaller instances of the same problem (usually recursively) until the instances are small enough to solve easily. One example of divide and conquer is merge sorting: the data is divided into segments, each segment is sorted, and in the conquer phase the sorted segments are merged to yield the sorted whole. A simpler variant of divide and conquer is called a decrease and conquer algorithm, which solves an identical subproblem and uses the solution of this subproblem to solve the bigger problem. Divide and conquer divides the problem into multiple subproblems and so the conquer stage is more complex than in decrease and conquer algorithms. An example of a decrease and conquer algorithm is the binary search algorithm. Search and enumeration Many problems (such as playing chess) can be modeled as problems on graphs. 
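The merge-sort instance of divide and conquer described above can be sketched as follows (Python; illustrative, not a reference implementation). The divide step splits the list in half, each half is sorted recursively, and the conquer step merges the two sorted halves.

```python
def merge_sort(data):
    """Divide and conquer: split, sort each half recursively, then merge."""
    if len(data) <= 1:              # small enough to solve directly
        return list(data)
    mid = len(data) // 2
    left = merge_sort(data[:mid])   # divide and recurse
    right = merge_sort(data[mid:])
    # Conquer phase: merge the two sorted halves.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```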
A graph exploration algorithm specifies rules for moving around a graph and is useful for such problems. This category also includes search algorithms, branch and bound enumeration and backtracking. Randomized algorithm Such algorithms make some choices randomly (or pseudo-randomly). They can be very useful in finding approximate solutions for problems where finding exact solutions can be impractical (see heuristic method below). For some of these problems, it is known that the fastest approximations must involve some randomness. Whether randomized algorithms with polynomial time complexity can be the fastest algorithms for some problems is an open question known as the P versus NP problem. There are two large classes of such algorithms: Monte Carlo algorithms return a correct answer with high probability. E.g. RP is the subclass of these that run in polynomial time. Las Vegas algorithms always return the correct answer, but their running time is only probabilistically bounded, e.g. ZPP. Reduction of complexity This technique involves solving a difficult problem by transforming it into a better-known problem for which we have (hopefully) asymptotically optimal algorithms. The goal is to find a reducing algorithm whose complexity is not dominated by that of the resulting reduced algorithm. For example, one selection algorithm for finding the median in an unsorted list involves first sorting the list (the expensive portion) and then pulling out the middle element in the sorted list (the cheap portion). This technique is also known as transform and conquer. Backtracking In this approach, multiple solutions are built incrementally and abandoned when it is determined that they cannot lead to a valid full solution. 
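The transform-and-conquer example above, finding a median by first sorting, is short enough to sketch directly (Python; the function name is ours). Sorting is the expensive O(n log n) portion; picking the middle element of the sorted list is the cheap O(1) portion.

```python
def median_by_sorting(values):
    """Transform and conquer: reduce selection (find the median)
    to the better-known problem of sorting."""
    if not values:
        raise ValueError("median of an empty list is undefined")
    ordered = sorted(values)        # the expensive portion: O(n log n)
    return ordered[len(ordered) // 2]  # the cheap portion: O(1)
```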
Optimization problems For optimization problems there is a more specific classification of algorithms; an algorithm for such problems may fall into one or more of the general categories described above as well as into one of the following: Linear programming When searching for optimal solutions to a linear function bound to linear equality and inequality constraints, the constraints of the problem can be used directly in producing the optimal solutions. There are algorithms that can solve any problem in this category, such as the popular simplex algorithm. Problems that can be solved with linear programming include the maximum flow problem for directed graphs. If a problem additionally requires that one or more of the unknowns must be an integer then it is classified in integer programming. A linear programming algorithm can solve such a problem if it can be proved that all restrictions for integer values are superficial, i.e., the solutions satisfy these restrictions anyway. In the general case, a specialized algorithm or an algorithm that finds approximate solutions is used, depending on the difficulty of the problem. Dynamic programming When a problem shows optimal substructures—meaning the optimal solution to a problem can be constructed from optimal solutions to subproblems—and overlapping subproblems, meaning the same subproblems are used to solve many different problem instances, a quicker approach called dynamic programming avoids recomputing solutions that have already been computed. For example, in the Floyd–Warshall algorithm, the shortest path to a goal from a vertex in a weighted graph can be found by using the shortest path to the goal from all adjacent vertices. Dynamic programming and memoization go together. The main difference between dynamic programming and divide and conquer is that subproblems are more or less independent in divide and conquer, whereas subproblems overlap in dynamic programming. 
The difference between dynamic programming and straightforward recursion is in caching or memoization of recursive calls. When subproblems are independent and there is no repetition, memoization does not help; hence dynamic programming is not a solution for all complex problems. By using memoization or maintaining a table of subproblems already solved, dynamic programming reduces the exponential nature of many problems to polynomial complexity. The greedy method A greedy algorithm is similar to a dynamic programming algorithm in that it works by examining substructures, in this case not of the problem but of a given solution. Such algorithms start with some solution, which may be given or have been constructed in some way, and improve it by making small modifications. For some problems they can find the optimal solution while for others they stop at local optima, that is, at solutions that cannot be improved by the algorithm but are not optimum. The most popular use of greedy algorithms is for finding the minimum spanning tree, where finding the optimal solution is possible with this method. Huffman coding, Kruskal's, Prim's, and Sollin's algorithms are greedy algorithms that can solve this optimization problem. The heuristic method In optimization problems, heuristic algorithms can be used to find a solution close to the optimal solution in cases where finding the optimal solution is impractical. These algorithms work by getting closer and closer to the optimal solution as they progress. In principle, if run for an infinite amount of time, they will find the optimal solution. Their merit is that they can find a solution very close to the optimal solution in a relatively short time. Such algorithms include local search, tabu search, simulated annealing, and genetic algorithms. Some of them, like simulated annealing, are non-deterministic algorithms while others, like tabu search, are deterministic. 
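The caching idea that separates dynamic programming from straightforward recursion can be sketched with the Fibonacci numbers, a standard example of overlapping subproblems (Python; illustrative).

```python
def fib_plain(n):
    """Straightforward recursion: the same subproblems are recomputed
    over and over, giving exponential running time."""
    return n if n < 2 else fib_plain(n - 1) + fib_plain(n - 2)

def fib_memo(n, cache=None):
    """Dynamic programming via memoization: each subproblem is solved
    once and stored in a table, reducing the cost to linear time."""
    if cache is None:
        cache = {}
    if n < 2:
        return n
    if n not in cache:
        cache[n] = fib_memo(n - 1, cache) + fib_memo(n - 2, cache)
    return cache[n]
```

Both functions compute the same values; only the memoized version remains fast as n grows, which is exactly the exponential-to-polynomial reduction described above.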
When a bound on the error of the non-optimal solution is known, the algorithm is further categorized as an approximation algorithm. By field of study Every field of science has its own problems and needs efficient algorithms. Related problems in one field are often studied together. Some example classes are search algorithms, sorting algorithms, merge algorithms, numerical algorithms, graph algorithms, string algorithms, computational geometric algorithms, combinatorial algorithms, medical algorithms, machine learning, cryptography, data compression algorithms and parsing techniques. Fields tend to overlap with each other, and algorithm advances in one field may improve those of other, sometimes completely unrelated, fields. For example, dynamic programming was invented for optimization of resource consumption in industry but is now used in solving a broad range of problems in many fields. By complexity Algorithms can be classified by the amount of time they need to complete compared to their input size: Constant time: if the time needed by the algorithm is the same, regardless of the input size. E.g. an access to an array element. Logarithmic time: if the time is a logarithmic function of the input size. E.g. binary search algorithm. Linear time: if the time is proportional to the input size. E.g. the traverse of a list. Polynomial time: if the time is a power of the input size. E.g. the bubble sort algorithm has quadratic time complexity. Exponential time: if the time is an exponential function of the input size. E.g. Brute-force search. Some problems may have multiple algorithms of differing complexity, while other problems might have no algorithms or no known efficient algorithms. There are also mappings from some problems to other problems. Owing to this, it was found to be more suitable to classify the problems themselves instead of the algorithms into equivalence classes based on the complexity of the best possible algorithms for them. 
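The complexity classes listed above can each be illustrated with a small sketch (Python; the functions are our own illustrations of the examples the text names: array access, binary search, list traversal, and bubble sort).

```python
def constant_time(arr):
    """O(1): an array access does the same work regardless of input size."""
    return arr[0]

def logarithmic_time(sorted_arr, target):
    """O(log n): binary search halves the remaining range at each step."""
    lo, hi = 0, len(sorted_arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_arr[mid] == target:
            return mid
        if sorted_arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

def linear_time(arr):
    """O(n): a traversal touches each element once (here, counting them)."""
    count = 0
    for _ in arr:
        count += 1
    return count

def quadratic_time(arr):
    """O(n^2): bubble sort compares adjacent pairs over repeated passes."""
    a = list(arr)
    for i in range(len(a)):
        for j in range(len(a) - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a
```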
Continuous algorithms The adjective "continuous" when applied to the word "algorithm" can mean: An algorithm operating on data that represents continuous quantities, even though this data is represented by discrete approximations—such algorithms are studied in numerical analysis; or An algorithm in the form of a differential equation that operates continuously on the data, running on an analog computer. Legal issues Algorithms, by themselves, are not usually patentable. In the United States, a claim consisting solely of simple manipulations of abstract concepts, numbers, or signals does not constitute "processes" (USPTO 2006), and hence algorithms are not patentable (as in Gottschalk v. Benson). However practical applications of algorithms are sometimes patentable. For example, in Diamond v. Diehr, the application of a simple feedback algorithm to aid in the curing of synthetic rubber was deemed patentable. The patenting of software is highly controversial, and there are highly criticized patents involving algorithms, especially data compression algorithms, such as Unisys' LZW patent. Additionally, some cryptographic algorithms have export restrictions (see export of cryptography). History: Development of the notion of "algorithm" Ancient Near East The earliest evidence of algorithms is found in the Babylonian mathematics of ancient Mesopotamia (modern Iraq). A Sumerian clay tablet found in Shuruppak near Baghdad and dated to circa 2500 BC described the earliest division algorithm. During the Hammurabi dynasty circa 1800-1600 BC, Babylonian clay tablets described algorithms for computing formulas. Algorithms were also used in Babylonian astronomy. Babylonian clay tablets describe and employ algorithmic procedures to compute the time and place of significant astronomical events. Algorithms for arithmetic are also found in ancient Egyptian mathematics, dating back to the Rhind Mathematical Papyrus circa 1550 BC. 
Algorithms were later used in ancient Hellenistic mathematics. Two examples are the Sieve of Eratosthenes, which was described in the Introduction to Arithmetic by Nicomachus, and the Euclidean algorithm, which was first described in Euclid's Elements (c. 300 BC). Discrete and distinguishable symbols Tally-marks: To keep track of their flocks, their sacks of grain and their money the ancients used tallying: accumulating stones or marks scratched on sticks or making discrete symbols in clay. Through the Babylonian and Egyptian use of marks and symbols, eventually Roman numerals and the abacus evolved (Dilson, p. 16–41). Tally marks appear prominently in unary numeral system arithmetic used in Turing machine and Post–Turing machine computations. Manipulation of symbols as "place holders" for numbers: algebra Muhammad ibn Mūsā al-Khwārizmī, a Persian mathematician, wrote the Al-jabr in the 9th century. The terms "algorism" and "algorithm" are derived from the name al-Khwārizmī, while the term "algebra" is derived from the book Al-jabr. In Europe, the word "algorithm" was originally used to refer to the sets of rules and techniques used by Al-Khwarizmi to solve algebraic equations, before later being generalized to refer to any set of rules or techniques. This eventually culminated in Leibniz's notion of the calculus ratiocinator (ca 1680): Cryptographic algorithms The first cryptographic algorithm for deciphering encrypted code was developed by Al-Kindi, a 9th-century Arab mathematician, in A Manuscript On Deciphering Cryptographic Messages. He gave the first description of cryptanalysis by frequency analysis, the earliest codebreaking algorithm. Mechanical contrivances with discrete states The clock: Bolter credits the invention of the weight-driven clock as "The key invention [of Europe in the Middle Ages]", in particular, the verge escapement that provides us with the tick and tock of a mechanical clock. 
"The accurate automatic machine" led immediately to "mechanical automata" beginning in the 13th century and finally to "computational machines"—the difference engine and analytical engines of Charles Babbage and Countess Ada Lovelace, mid-19th century. Lovelace is credited with the first creation of an algorithm intended for processing on a computer—Babbage's analytical engine, the first device considered a real Turing-complete computer instead of just a calculator—and is sometimes called "history's first programmer" as a result, though a full implementation of Babbage's second device would not be realized until decades after her lifetime. Logical machines 1870 – Stanley Jevons' "logical abacus" and "logical machine": The technical problem was to reduce Boolean equations when presented in a form similar to what is now known as Karnaugh maps. Jevons (1880) describes first a simple "abacus" of "slips of wood furnished with pins, contrived so that any part or class of the [logical] combinations can be picked out mechanically ... More recently, however, I have reduced the system to a completely mechanical form, and have thus embodied the whole of the indirect process of inference in what may be called a Logical Machine" His machine came equipped with "certain moveable wooden rods" and "at the foot are 21 keys like those of a piano [etc.] ...". With this machine he could analyze a "syllogism or any other simple logical argument". This machine he displayed in 1870 before the Fellows of the Royal Society. Another logician John Venn, however, in his 1881 Symbolic Logic, turned a jaundiced eye to this effort: "I have no high estimate myself of the interest or importance of what are sometimes called logical machines ... it does not seem to me that any contrivances at present known or likely to be discovered really deserve the name of logical machines"; see more at Algorithm characterizations. 
But not to be outdone he too presented "a plan somewhat analogous, I apprehend, to Prof. Jevon's abacus ... [And] [a]gain, corresponding to Prof. Jevons's logical machine, the following contrivance may be described. I prefer to call it merely a logical-diagram machine ... but I suppose that it could do very completely all that can be rationally expected of any logical machine". Jacquard loom, Hollerith punch cards, telegraphy and telephony – the electromechanical relay: Bell and Newell (1971) indicate that the Jacquard loom (1801), precursor to Hollerith cards (punch cards, 1887), and "telephone switching technologies" were the roots of a tree leading to the development of the first computers. By the mid-19th century the telegraph, the precursor of the telephone, was in use throughout the world, its discrete and distinguishable encoding of letters as "dots and dashes" a common sound. By the late 19th century the ticker tape (ca 1870s) was in use, as was the use of Hollerith cards in the 1890 U.S. census. Then came the teleprinter (ca. 1910) with its punched-paper use of Baudot code on tape. Telephone-switching networks of electromechanical relays (invented 1835) was behind the work of George Stibitz (1937), the inventor of the digital adding device. As he worked in Bell Laboratories, he observed the "burdensome' use of mechanical calculators with gears. "He went home one evening in 1937 intending to test his idea... When the tinkering was over, Stibitz had constructed a binary adding device". Davis (2000) observes the particular importance of the electromechanical relay (with its two "binary states" open and closed): It was only with the development, beginning in the 1930s, of electromechanical calculators using electrical relays, that machines were built having the scope Babbage had envisioned." 
Mathematics during the 19th century up to the mid-20th century Symbols and rules: In rapid succession, the mathematics of George Boole (1847, 1854), Gottlob Frege (1879), and Giuseppe Peano (1888–1889) reduced arithmetic to a sequence of symbols manipulated by rules. Peano's The principles of arithmetic, presented by a new method (1888) was "the first attempt at an axiomatization of mathematics in a symbolic language". But Heijenoort gives Frege (1879) this kudos: Frege's is "perhaps the most important single work ever written in logic. ... in which we see a " 'formula language', that is a lingua characterica, a language written with special symbols, "for pure thought", that is, free from rhetorical embellishments ... constructed from specific symbols that are manipulated according to definite rules". The work of Frege was further simplified and amplified by Alfred North Whitehead and Bertrand Russell in their Principia Mathematica (1910–1913). The paradoxes: At the same time a number of disturbing paradoxes appeared in the literature, in particular, the Burali-Forti paradox (1897), the Russell paradox (1902–03), and the Richard Paradox. The resultant considerations led to Kurt Gödel's paper (1931)—he specifically cites the paradox of the liar—that completely reduces rules of recursion to numbers. Effective calculability: In an effort to solve the Entscheidungsproblem defined precisely by Hilbert in 1928, mathematicians first set about to define what was meant by an "effective method" or "effective calculation" or "effective calculability" (i.e., a calculation that would succeed). In rapid succession the following appeared: Alonzo Church, Stephen Kleene and J.B. Rosser's λ-calculus a finely honed definition of "general recursion" from the work of Gödel acting on suggestions of Jacques Herbrand (cf. Gödel's Princeton lectures of 1934) and subsequent simplifications by Kleene. 
Church's proof that the Entscheidungsproblem was unsolvable, Emil Post's definition of effective calculability as a worker mindlessly following a list of instructions to move left or right through a sequence of rooms and while there either mark or erase a paper or observe the paper and make a yes-no decision about the next instruction. Alan Turing's proof that the Entscheidungsproblem was unsolvable by use of his "a- [automatic-] machine"—in effect almost identical to Post's "formulation", J. Barkley Rosser's definition of "effective method" in terms of "a machine". Kleene's proposal of a precursor to the "Church thesis" that he called "Thesis I", and a few years later Kleene's renaming his Thesis "Church's Thesis" and proposing "Turing's Thesis". Emil Post (1936) and Alan Turing (1936–37, 1939) Emil Post (1936) described the actions of a "computer" (human being) as follows: "...two concepts are involved: that of a symbol space in which the work leading from problem to answer is to be carried out, and a fixed unalterable set of directions. His symbol space would be "a two-way infinite sequence of spaces or boxes... The problem solver or worker is to move and work in this symbol space, being capable of being in, and operating in but one box at a time.... a box is to admit of but two possible conditions, i.e., being empty or unmarked, and having a single mark in it, say a vertical stroke. "One box is to be singled out and called the starting point. ...a specific problem is to be given in symbolic form by a finite number of boxes [i.e., INPUT] being marked with a stroke. Likewise, the answer [i.e., OUTPUT] is to be given in symbolic form by such a configuration of marked boxes... "A set of directions applicable to a general problem sets up a deterministic process when applied to each specific problem. This process terminates only when it comes to the direction of type (C) [i.e., STOP]". 
See more at Post–Turing machine Alan Turing's work preceded that of Stibitz (1937); it is unknown whether Stibitz knew of the work of Turing. Turing's biographer believed that Turing's use of a typewriter-like model derived from a youthful interest: "Alan had dreamt of inventing typewriters as a boy; Mrs. Turing had a typewriter, and he could well have begun by asking himself what was meant by calling a typewriter 'mechanical'". Given the prevalence of Morse code and telegraphy, ticker tape machines, and teletypewriters we might conjecture that all were influences. Turing—his model of computation is now called a Turing machine—begins, as did Post, with an analysis of a human computer that he whittles down to a simple set of basic motions and "states of mind". But he continues a step further and creates a machine as a model of computation of numbers. "Computing is normally done by writing certain symbols on paper. We may suppose this paper is divided into squares like a child's arithmetic book...I assume then that the computation is carried out on one-dimensional paper, i.e., on a tape divided into squares. I shall also suppose that the number of symbols which may be printed is finite... "The behavior of the computer at any moment is determined by the symbols which he is observing, and his "state of mind" at that moment. We may suppose that there is a bound B to the number of symbols or squares which the computer can observe at one moment. If he wishes to observe more, he must use successive observations. We will also suppose that the number of states of mind which need be taken into account is finite... "Let us imagine that the operations performed by the computer to be split up into 'simple operations' which are so elementary that it is not easy to imagine them further divided." 
Turing's reduction yields the following: "The simple operations must therefore include: "(a) Changes of the symbol on one of the observed squares "(b) Changes of one of the squares observed to another square within L squares of one of the previously observed squares. "It may be that some of these changes necessarily involve a change of state of mind. The most general single operation must, therefore, be taken to be one of the following: "(A) A possible change (a) of symbol together with a possible change of state of mind. "(B) A possible change (b) of observed squares, together with a possible change of state of mind" "We may now construct a machine to do the work of this computer." A few years later, Turing expanded his analysis (thesis, definition) with this forceful expression of it: "A function is said to be "effectively calculable" if its values can be found by some purely mechanical process. Though it is fairly easy to get an intuitive grasp of this idea, it is nevertheless desirable to have some more definite, mathematically expressible definition ... [he discusses the history of the definition pretty much as presented above with respect to Gödel, Herbrand, Kleene, Church, Turing, and Post] ... We may take this statement literally, understanding by a purely mechanical process one which could be carried out by a machine. It is possible to give a mathematical description, in a certain normal form, of the structures of these machines. The development of these ideas leads to the author's definition of a computable function, and to an identification of computability † with effective calculability ... . "† We shall use the expression "computable function" to mean a function calculable by a machine, and we let "effectively calculable" refer to the intuitive idea without particular identification with any one of these definitions". J.B. Rosser (1939) and S.C. Kleene (1943) J. 
Barkley Rosser defined an 'effective [mathematical] method' in the following manner (italicization added): "'Effective method' is used here in the rather special sense of a method each step of which is precisely determined and which is certain to produce the answer in a finite number of steps. With this special meaning, three different precise definitions have been given to date. [his footnote #5; see discussion immediately below]. The simplest of these to state (due to Post and Turing) says essentially that an effective method of solving certain sets of problems exists if one can build a machine which will then solve any problem of the set with no human intervention beyond inserting the question and (later) reading the answer. All three definitions are equivalent, so it doesn't matter which one is used. Moreover, the fact that all three are equivalent is a very strong argument for the correctness of any one." (Rosser 1939:225–226) Rosser's footnote No. 5 references the work of (1) Church and Kleene and their definition of λ-definability, in particular, Church's use of it in his An Unsolvable Problem of Elementary Number Theory (1936); (2) Herbrand and Gödel and their use of recursion, in particular, Gödel's use in his famous paper On Formally Undecidable Propositions of Principia Mathematica and Related Systems I (1931); and (3) Post (1936) and Turing (1936–37) in their mechanism-models of computation. Stephen C. Kleene defined his now-famous "Thesis I", later known as the Church–Turing thesis. But he did this in the following context (boldface in original): "12. Algorithmic theories... In setting up a complete algorithmic theory, what we do is to describe a procedure,
variety of representations possible and one can express a given Turing machine program as a sequence of machine tables (see finite-state machine, state transition table and control table for more), as flowcharts and drakon-charts (see state diagram for more), or as a form of rudimentary machine code or assembly code called "sets of quadruples" (see Turing machine for more). Representations of algorithms can be classed into three accepted levels of Turing machine description, as follows: 1 High-level description "...prose to describe an algorithm, ignoring the implementation details. At this level, we do not need to mention how the machine manages its tape or head." 2 Implementation description "...prose used to define the way the Turing machine uses its head and the way that it stores data on its tape. At this level, we do not give details of states or transition function." 3 Formal description Most detailed, "lowest level", gives the Turing machine's "state table". For an example of the simple algorithm "Add m+n" described in all three levels, see Examples. Design Algorithm design refers to a method or a mathematical process for problem-solving and engineering algorithms. The design of algorithms is part of many solution theories of operation research, such as dynamic programming and divide-and-conquer. Techniques for designing and implementing algorithm designs are also called algorithm design patterns, with examples including the template method pattern and the decorator pattern. One of the most important aspects of algorithm design is resource (run-time, memory usage) efficiency; the big O notation is used to describe e.g. an algorithm's run-time growth as the size of its input increases. 
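The three levels of Turing machine description above can be made concrete with a sketch: a level-3 "formal description" is just a state table, which a few lines of code can execute. Everything here (the simulator, the tape encoding, and the sample machine that appends one stroke to a unary number) is our own illustration, not drawn from the source.

```python
def run_turing_machine(table, tape, state="scan", halt="halt", blank="_"):
    """Minimal Turing machine simulator driven by a formal state table.

    `table` maps (state, read_symbol) -> (write_symbol, move, next_state),
    where move is "L", "R", or "N" (no move)."""
    cells = dict(enumerate(tape))        # sparse tape: position -> symbol
    head = 0
    while state != halt:
        symbol = cells.get(head, blank)
        write, move, state = table[(state, symbol)]
        cells[head] = write
        head += {"L": -1, "R": 1, "N": 0}[move]
    lo, hi = min(cells), max(cells)
    return "".join(cells.get(i, blank) for i in range(lo, hi + 1))

# Level-3 formal description of "add one stroke" in unary notation:
SUCCESSOR = {
    ("scan", "1"): ("1", "R", "scan"),   # skip over the existing strokes
    ("scan", "_"): ("1", "N", "halt"),   # write one more stroke and halt
}
```

The high-level description of the same machine would simply be "move right past the strokes, then write one more"; the state table is the "lowest level" form of that same algorithm.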
Typical steps in the development of algorithms:
Problem definition
Development of a model
Specification of the algorithm
Designing an algorithm
Checking the correctness of the algorithm
Analysis of algorithm
Implementation of algorithm
Program testing
Documentation preparation
Computer algorithms "Elegant" (compact) programs, "good" (fast) programs: The notion of "simplicity and elegance" appears informally in Knuth and precisely in Chaitin: Knuth: " ... we want good algorithms in some loosely defined aesthetic sense. One criterion ... is the length of time taken to perform the algorithm .... Other criteria are adaptability of the algorithm to computers, its simplicity and elegance, etc." Chaitin: " ... a program is 'elegant,' by which I mean that it's the smallest possible program for producing the output that it does" Chaitin prefaces his definition with: "I'll show you can't prove that a program is 'elegant'"—such a proof would solve the Halting problem (ibid). Algorithm versus function computable by an algorithm: For a given function multiple algorithms may exist. This is true even without expanding the instruction set available to the programmer. Rogers observes that "It is ... important to distinguish between the notion of algorithm, i.e. procedure and the notion of function computable by algorithm, i.e. mapping yielded by procedure. The same function may have several different algorithms". Unfortunately, there may be a tradeoff between goodness (speed) and elegance (compactness)—an elegant program may take more steps to complete a computation than one less elegant. An example that uses Euclid's algorithm appears below. Computers (and computors), models of computation: A computer (or human "computor") is a restricted type of machine, a "discrete deterministic mechanical device" that blindly follows its instructions. 
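Rogers's distinction between an algorithm and the function it computes can be sketched with a familiar pair (Python; our own illustration): summing the integers 0 through n by a loop, or by Gauss's closed form. The mapping is the same; the procedures, and their costs, differ.

```python
def sum_to_n_loop(n):
    """One algorithm for the function n -> 0 + 1 + ... + n:
    O(n) additions."""
    total = 0
    for i in range(n + 1):
        total += i
    return total

def sum_to_n_formula(n):
    """A different algorithm computing the same function in O(1),
    using the closed form n(n + 1) / 2."""
    return n * (n + 1) // 2
```

The formula version is both faster and more "elegant" in Chaitin's sense (shorter); in general, as the text notes, speed and compactness can instead trade off against each other.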
Melzak's and Lambek's primitive models reduced this notion to four elements: (i) discrete, distinguishable locations, (ii) discrete, indistinguishable counters (iii) an agent, and (iv) a list of instructions that are effective relative to the capability of the agent. Minsky describes a more congenial variation of Lambek's "abacus" model in his "Very Simple Bases for Computability". Minsky's machine proceeds sequentially through its five (or six, depending on how one counts) instructions unless either a conditional IF-THEN GOTO or an unconditional GOTO changes program flow out of sequence. Besides HALT, Minsky's machine includes three assignment (replacement, substitution) operations: ZERO (e.g. the contents of location replaced by 0: L ← 0), SUCCESSOR (e.g. L ← L+1), and DECREMENT (e.g. L ← L − 1). Rarely must a programmer write "code" with such a limited instruction set. But Minsky shows (as do Melzak and Lambek) that his machine is Turing complete with only four general types of instructions: conditional GOTO, unconditional GOTO, assignment/replacement/substitution, and HALT. However, a few different assignment instructions (e.g. DECREMENT, INCREMENT, and ZERO/CLEAR/EMPTY for a Minsky machine) are also required for Turing-completeness; their exact specification is somewhat up to the designer. The unconditional GOTO is a convenience; it can be constructed by initializing a dedicated location to zero e.g. the instruction " Z ← 0 "; thereafter the instruction IF Z=0 THEN GOTO xxx is unconditional. Simulation of an algorithm: computer (computor) language: Knuth advises the reader that "the best way to learn an algorithm is to try it . . . immediately take pen and paper and work through an example". But what about a simulation or execution of the real thing? The programmer must translate the algorithm into a language that the simulator/computer/computor can effectively execute. 
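Minsky's machine, as summarized above, can be simulated in a few dozen lines. The interpreter and instruction names below are our own encoding of the four general instruction types the text lists (conditional GOTO, unconditional GOTO, assignment, HALT) plus the assignment variants ZERO, INC (successor), and DEC (decrement); the ADD program moves the contents of register 1 into register 0.

```python
def run_counter_machine(program, registers):
    """Interpreter for a Minsky-style counter machine.

    Instructions execute sequentially unless a conditional JZ
    (IF register = 0 THEN GOTO) or an unconditional GOTO changes
    program flow out of sequence."""
    regs = dict(registers)
    pc = 0
    while True:
        op = program[pc]
        if op[0] == "HALT":
            return regs
        if op[0] == "ZERO":                       # L <- 0
            regs[op[1]] = 0
        elif op[0] == "INC":                      # L <- L + 1
            regs[op[1]] = regs.get(op[1], 0) + 1
        elif op[0] == "DEC":                      # L <- L - 1 (floor at 0)
            regs[op[1]] = max(0, regs.get(op[1], 0) - 1)
        elif op[0] == "JZ":                       # IF reg = 0 THEN GOTO addr
            if regs.get(op[1], 0) == 0:
                pc = op[2]
                continue
        elif op[0] == "GOTO":                     # unconditional jump
            pc = op[1]
            continue
        pc += 1

# ADD: repeatedly decrement register 1 and increment register 0.
ADD = [
    ("JZ", 1, 4),   # 0: if r1 = 0 then goto HALT
    ("DEC", 1),     # 1: r1 <- r1 - 1
    ("INC", 0),     # 2: r0 <- r0 + 1
    ("GOTO", 0),    # 3: loop
    ("HALT",),      # 4
]
```

The unconditional-GOTO trick described in the text also works here: zero a spare register once, and `("JZ", spare, addr)` thereafter behaves as an unconditional jump.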
Stone gives an example of this: when computing the roots of a quadratic equation the computor must know how to take a square root. If they don't, then the algorithm, to be effective, must provide a set of rules for extracting a square root. This means that the programmer must know a "language" that is effective relative to the target computing agent (computer/computor). But what model should be used for the simulation? Van Emde Boas observes "even if we base complexity theory on abstract instead of concrete machines, arbitrariness of the choice of a model remains. It is at this point that the notion of simulation enters". When speed is being measured, the instruction set matters. For example, the subprogram in Euclid's algorithm to compute the remainder would execute much faster if the programmer had a "modulus" instruction available rather than just subtraction (or worse: just Minsky's "decrement"). Structured programming, canonical structures: Per the Church–Turing thesis, any algorithm can be computed by a model known to be Turing complete, and per Minsky's demonstrations, Turing completeness requires only four instruction types—conditional GOTO, unconditional GOTO, assignment, HALT. Kemeny and Kurtz observe that, while "undisciplined" use of unconditional GOTOs and conditional IF-THEN GOTOs can result in "spaghetti code", a programmer can write structured programs using only these instructions; on the other hand "it is also possible, and not too hard, to write badly structured programs in a structured language". Tausworthe augments the three Böhm-Jacopini canonical structures: SEQUENCE, IF-THEN-ELSE, and WHILE-DO, with two more: DO-WHILE and CASE. An additional benefit of a structured program is that it lends itself to proofs of correctness using mathematical induction. Canonical flowchart symbols: The graphical aid called a flowchart offers a way to describe and document an algorithm (and the computer program corresponding to it).
Like the program flow of a Minsky machine, a flowchart always starts at the top of a page and proceeds down. Its primary symbols are only four: the directed arrow showing program flow, the rectangle (SEQUENCE, GOTO), the diamond (IF-THEN-ELSE), and the dot (OR-tie). The Böhm–Jacopini canonical structures are made of these primitive shapes. Sub-structures can "nest" in rectangles, but only if a single exit occurs from the superstructure. The symbols, and their use to build the canonical structures are shown in the diagram. Examples Algorithm example One of the simplest algorithms is to find the largest number in a list of numbers of random order. Finding the solution requires looking at every number in the list. From this follows a simple algorithm, which can be stated in a high-level description in English prose, as:

High-level description:
If there are no numbers in the set then there is no highest number.
Assume the first number in the set is the largest number in the set.
For each remaining number in the set: if this number is larger than the current largest number, consider this number to be the largest number in the set.
When there are no numbers left in the set to iterate over, consider the current largest number to be the largest number of the set.

(Quasi-)formal description: Written in prose but much closer to the high-level language of a computer program, the following is the more formal coding of the algorithm in pseudocode or pidgin code:

Input: A list of numbers L.
Output: The largest number in the list L.

if L.size = 0 return null
largest ← L[0]
for each item in L, do
    if item > largest, then
        largest ← item
return largest

Euclid's algorithm In mathematics, the Euclidean algorithm, or Euclid's algorithm, is an efficient method for computing the greatest common divisor (GCD) of two integers (numbers), the largest number that divides them both without a remainder.
It is named after the ancient Greek mathematician Euclid, who first described it in his Elements (c. 300 BC). It is one of the oldest algorithms in common use. It can be used to reduce fractions to their simplest form, and is a part of many other number-theoretic and cryptographic calculations. Euclid poses the problem thus: "Given two numbers not prime to one another, to find their greatest common measure". He defines "A number [to be] a multitude composed of units": a counting number, a positive integer not including zero. To "measure" is to place a shorter measuring length s successively (q times) along longer length l until the remaining portion r is less than the shorter length s. In modern words, remainder r = l − q×s, q being the quotient, or remainder r is the "modulus", the integer-fractional part left over after the division. For Euclid's method to succeed, the starting lengths must satisfy two requirements: (i) the lengths must not be zero, AND (ii) the subtraction must be "proper"; i.e., a test must guarantee that the smaller of the two numbers is subtracted from the larger (or the two can be equal so their subtraction yields zero). Euclid's original proof adds a third requirement: the two lengths must not be prime to one another. Euclid stipulated this so that he could construct a reductio ad absurdum proof that the two numbers' common measure is in fact the greatest. While Nicomachus' algorithm is the same as Euclid's, when the numbers are prime to one another, it yields the number "1" for their common measure. So, to be precise, the following is really Nicomachus' algorithm. Computer language for Euclid's algorithm Only a few instruction types are required to execute Euclid's algorithm—some logical tests (conditional GOTO), unconditional GOTO, assignment (replacement), and subtraction. A location is symbolized by upper case letter(s), e.g. S, A, etc. 
The varying quantity (number) in a location is written in lower case letter(s) and (usually) associated with the location's name. For example, location L at the start might contain the number l = 3009. An inelegant program for Euclid's algorithm The following algorithm is framed as Knuth's four-step version of Euclid's and Nicomachus', but, rather than using division to find the remainder, it uses successive subtractions of the shorter length s from the remaining length r until r is less than s. The high-level description, shown in boldface, is adapted from Knuth 1973:2–4:

INPUT: [Into two locations L and S put the numbers l and s that represent the two lengths]:
  1 INPUT L, S
[Initialize R: make the remaining length r equal to the starting/initial/input length l]:
  2 R ← L
E0: [Ensure r ≥ s.] [Ensure the smaller of the two numbers is in S and the larger in R]:
  3 IF R > S THEN the contents of L is the larger number so skip over the exchange-steps 4, 5 and 6: GOTO step 7 ELSE swap the contents of R and S.
  4 L ← R (this first step is redundant, but is useful for later discussion).
  5 R ← S
  6 S ← L
E1: [Find remainder]: Until the remaining length r in R is less than the shorter length s in S, repeatedly subtract the measuring number s in S from the remaining length r in R.
  7 IF S > R THEN done measuring so GOTO 10 ELSE measure again,
  8 R ← R − S
  9 [Remainder-loop]: GOTO 7.
E2: [Is the remainder zero?]: EITHER (i) the last measure was exact, the remainder in R is zero, and the program can halt, OR (ii) the algorithm must continue: the last measure left a remainder in R less than measuring number in S.
  10 IF R = 0 THEN done so GOTO step 15 ELSE CONTINUE TO step 11,
E3: [Interchange s and r]: The nut of Euclid's algorithm. Use remainder r to measure what was previously smaller number s; L serves as a temporary location.
  11 L ← R
  12 R ← S
  13 S ← L
[Repeat the measuring process]:
  14 GOTO 7
OUTPUT: [Done. S contains the greatest common divisor]:
  15 PRINT S
DONE:
  16 HALT, END, STOP.
An elegant program for Euclid's algorithm The flowchart of "Elegant" can be found at the top of this article. In the (unstructured) BASIC language, the steps are numbered, and the instruction LET [] = [] is the assignment instruction symbolized by ←.

5 REM Euclid's algorithm for greatest common divisor
6 PRINT "Type two integers greater than 0"
10 INPUT A,B
20 IF B=0 THEN GOTO 80
30 IF A > B THEN GOTO 60
40 LET B=B-A
50 GOTO 20
60 LET A=A-B
70 GOTO 20
80 PRINT A
90 END

How "Elegant" works: In place of an outer "Euclid loop", "Elegant" shifts back and forth between two "co-loops", an A > B loop that computes A ← A − B, and a B ≤ A loop that computes B ← B − A. This works because, when at last the minuend M is less than or equal to the subtrahend S (Difference = Minuend − Subtrahend), the minuend can become s (the new measuring length) and the subtrahend can become the new r (the length to be measured); in other words the "sense" of the subtraction reverses. The following version can be used with programming languages from the C-family:

// Euclid's algorithm for greatest common divisor
int euclidAlgorithm (int A, int B) {
    A = abs(A);
    B = abs(B);
    while (B != 0) {
        while (A > B)
            A = A - B;
        B = B - A;
    }
    return A;
}

Testing the Euclid algorithms Does an algorithm do what its author wants it to do? A few test cases usually give some confidence in the core functionality. But tests are not enough. For test cases, one source uses 3009 and 884. Knuth suggested 40902, 24140. Another interesting case is the two relatively prime numbers 14157 and 5950. But "exceptional cases" must be identified and tested. Will "Inelegant" perform properly when R > S, S > R, R = S? Ditto for "Elegant": B > A, A > B, A = B? (Yes to all). What happens when one number is zero, both numbers are zero? ("Inelegant" computes forever in all cases; "Elegant" computes forever when A = 0.) What happens if negative numbers are entered? Fractional numbers? If the input numbers, i.e.
the domain of the function computed by the algorithm/program, is to include only positive integers including zero, then the failures at zero indicate that the algorithm (and the program that instantiates it) is a partial function rather than a total function. A notable failure due to exceptions is the Ariane 5 Flight 501 rocket failure (June 4, 1996). Proof of program correctness by use of mathematical induction: Knuth demonstrates the application of mathematical induction to an "extended" version of Euclid's algorithm, and he proposes "a general method applicable to proving the validity of any algorithm". Tausworthe proposes that a measure of the complexity of a program be the length of its correctness proof. Measuring and improving the Euclid algorithms Elegance (compactness) versus goodness (speed): With only six core instructions, "Elegant" is the clear winner, compared to "Inelegant" at thirteen instructions. However, "Inelegant" is faster (it arrives at HALT in fewer steps). Algorithm analysis indicates why this is the case: "Elegant" does two conditional tests in every subtraction loop, whereas "Inelegant" only does one. As the algorithm (usually) requires many loop-throughs, on average much time is wasted doing a "B = 0?" test that is needed only after the remainder is computed. Can the algorithms be improved?: Once the programmer judges a program "fit" and "effective"—that is, it computes the function intended by its author—then the question becomes, can it be improved? The compactness of "Inelegant" can be improved by the elimination of five steps. But Chaitin proved that compacting an algorithm cannot be automated by a generalized algorithm; rather, it can only be done heuristically; i.e., by exhaustive search (examples to be found at Busy beaver), trial and error, cleverness, insight, application of inductive reasoning, etc. Observe that steps 4, 5 and 6 are repeated in steps 11, 12 and 13. 
Comparison with "Elegant" provides a hint that these steps, together with steps 2 and 3, can be eliminated. This reduces the number of core instructions from thirteen to eight, which makes it "more elegant" than "Elegant", at nine steps. The speed of "Elegant" can be improved by moving the "B=0?" test outside of the two subtraction loops. This change calls for the addition of three instructions (B = 0?, A = 0?, GOTO). Now "Elegant" computes the example-numbers faster; whether this is always the case for any given A, B, and R, S would require a detailed analysis. Algorithmic analysis It is frequently important to know how much of a particular resource (such as time or storage) is theoretically required for a given algorithm. Methods have been developed for the analysis of algorithms to obtain such quantitative answers (estimates); for example, an algorithm which adds up the elements of a list of n numbers would have a time requirement of O(n), using big O notation. At all times the algorithm only needs to remember two values: the sum of all the elements so far, and its current position in the input list. Therefore, it is said to have a space requirement of O(1), if the space required to store the input numbers is not counted, or O(n) if it is counted. Different algorithms may complete the same task with a different set of instructions in less or more time, space, or 'effort' than others. For example, a binary search algorithm (with cost O(log n)) outperforms a sequential search (cost O(n)) when used for table lookups on sorted lists or arrays. Formal versus empirical The analysis and study of algorithms is a discipline of computer science, and is often practiced abstractly without the use of a specific programming language or implementation. In this sense, algorithm analysis resembles other mathematical disciplines in that it focuses on the underlying properties of the algorithm and not on the specifics of any particular implementation.
Usually pseudocode is used for analysis as it is the simplest and most general representation. However, most algorithms are ultimately implemented on particular hardware/software platforms and their algorithmic efficiency is eventually put to the test using real code. For the solution of a "one off" problem, the efficiency of a particular algorithm may not have significant consequences (unless n is extremely large) but for algorithms designed for fast interactive, commercial or long-life scientific usage it may be critical. Scaling from small n to large n frequently exposes inefficient algorithms that are otherwise benign. Empirical testing is useful because it may uncover unexpected interactions that affect performance. Benchmarks may be used to compare an algorithm before and after program optimization. Empirical tests cannot replace formal analysis, though, and are not trivial to perform in a fair manner. Execution efficiency To illustrate the potential improvements possible even in well-established algorithms, a recent significant innovation relating to FFT algorithms (used heavily in the field of image processing) can decrease processing time up to 1,000 times for applications like medical imaging. In general, speed improvements depend on special properties of the problem, which are very common in practical applications. Speedups of this magnitude enable computing devices that make extensive use of image processing (like digital cameras and medical equipment) to consume less power. Classification There are various ways to classify algorithms, each with its own merits. By implementation One way to classify algorithms is by implementation means. Recursion A recursive algorithm is one that invokes (makes reference to) itself repeatedly until a certain condition (known as the termination condition) is met, a method common to functional programming.
Iterative algorithms use repetitive constructs like loops and sometimes additional data structures like stacks to solve the given problems. Some problems are naturally suited for one implementation or the other. For example, the Towers of Hanoi puzzle is well understood using a recursive implementation. Every recursive version has an equivalent (but possibly more or less complex) iterative version, and vice versa. Logical An algorithm may be viewed as controlled logical deduction. This notion may be expressed as: Algorithm = logic + control. The logic component expresses the axioms that may be used in the computation and the control component determines the way in which deduction is applied to the axioms. This is the basis for the logic programming paradigm. In pure logic programming languages, the control component is fixed and algorithms are specified by supplying only the logic component. The appeal of this approach is the elegant semantics: a change in the axioms produces a well-defined change in the algorithm. Serial, parallel or distributed Algorithms are usually discussed with the assumption that computers execute one instruction of an algorithm at a time. Those computers are sometimes called serial computers. An algorithm designed for such an environment is called a serial algorithm, as opposed to parallel algorithms or distributed algorithms. Parallel algorithms take advantage of computer architectures where several processors can work on a problem at the same time, whereas distributed algorithms utilize multiple machines connected with a computer network. Parallel or distributed algorithms divide the problem into more symmetrical or asymmetrical subproblems and collect the results back together. The resource consumption in such algorithms is not only processor cycles on each processor but also the communication overhead between the processors. Some sorting algorithms can be parallelized efficiently, but their communication overhead is expensive.
Iterative algorithms are generally parallelizable. Some problems have no parallel algorithms and are called inherently serial problems. Deterministic or non-deterministic Deterministic algorithms solve the problem with exact decisions at every step of the algorithm, whereas non-deterministic algorithms solve problems via guessing, although typical guesses are made more accurate through the use of heuristics. Exact or approximate While many algorithms reach an exact solution, approximation algorithms seek an approximation that is close to the true solution. The approximation can be reached by either using a deterministic or a random strategy. Such algorithms have practical value for many hard problems. An example of a problem addressed by approximation algorithms is the Knapsack problem: given a set of items, each with a weight and a value, pack the knapsack so as to maximize the total value carried, subject to the constraint that the total weight may not exceed some fixed number X. The solution must therefore consider the weights of the items as well as their values. Quantum algorithm Quantum algorithms run on a realistic model of quantum computation. The term is usually used for those algorithms which seem inherently quantum, or use some essential feature of quantum computing such as quantum superposition or quantum entanglement. By design paradigm Another way of classifying algorithms is by their design methodology or paradigm. There are a number of paradigms, each different from the other. Furthermore, each of these categories includes many different types of algorithms. Some common paradigms are: Brute-force or exhaustive search This is the naive method of trying every possible solution to see which is best. Divide and conquer A divide and conquer algorithm repeatedly reduces an instance of a problem to one or more smaller instances of the same problem (usually recursively) until the instances are small enough to solve easily. One example of divide and conquer is merge sorting.
After the data has been divided into segments, each segment is sorted, and the sorting of the entire data set is obtained in the conquer phase by merging the segments. A simpler variant of divide and conquer is called a decrease and conquer algorithm, which solves an identical subproblem and uses the solution of this subproblem to solve the bigger problem. Divide and conquer divides the problem into multiple subproblems, so the conquer stage is more complex than in decrease and conquer algorithms. An example of a decrease and conquer algorithm is the binary search algorithm. Search and enumeration Many problems (such as playing chess) can be modeled as problems on graphs. A graph exploration algorithm specifies rules for moving around a graph and is useful for such problems. This category also includes search algorithms, branch and bound enumeration, and backtracking. Randomized algorithm Such algorithms make some choices randomly (or pseudo-randomly). They can be very useful in finding approximate solutions for problems where finding exact solutions can be impractical (see heuristic method below). For some of these problems, it is known that the fastest approximations must involve some randomness. Whether randomized algorithms with polynomial time complexity can be the fastest algorithms for some problems is an open question known as the P versus NP problem. There are two large classes of such algorithms: Monte Carlo algorithms return a correct answer with high probability. E.g. RP is the subclass of these that run in polynomial time. Las Vegas algorithms always return the correct answer, but their running time is only probabilistically bounded, e.g. ZPP. Reduction of complexity This technique involves solving a difficult problem by transforming it into a better-known problem for which we have (hopefully) asymptotically optimal algorithms. The goal is to find a reducing algorithm whose complexity is not dominated by that of the resulting reduced algorithm.
For example, one selection algorithm for finding the median in an unsorted list involves first sorting the list (the expensive portion) and then pulling out the middle element in the sorted list (the cheap portion). This technique is also known as transform and conquer. Backtracking In this approach, multiple solutions are built incrementally and abandoned when it is determined that they cannot lead to a valid full solution. Optimization problems For optimization problems there is a more specific classification of algorithms; an algorithm for such problems may fall into one or more of the general categories described above as well as into one of the following: Linear programming When searching for optimal solutions to a linear function subject to linear equality and inequality constraints, the constraints of the problem can be used directly in producing the optimal solutions. There are algorithms that can solve any problem in this category, such as the popular simplex algorithm. Problems that can be solved with linear programming include the maximum flow problem for directed graphs. If a problem additionally requires that one or more of the unknowns must be an integer, then it is classified in integer programming. A linear programming algorithm can solve such a problem if it can be proved that all restrictions for integer values are superficial, i.e., the solutions satisfy these restrictions anyway. In the general case, a specialized algorithm or an algorithm that finds approximate solutions is used, depending on the difficulty of the problem. Dynamic programming When a problem shows optimal substructure—meaning the optimal solution to a problem can be constructed from optimal solutions to subproblems—and overlapping subproblems, meaning the same subproblems are used to solve many different problem instances, a quicker approach called dynamic programming avoids recomputing solutions that have already been computed.
For example, in the Floyd–Warshall algorithm, the shortest path to a goal from a vertex in a weighted graph can be found by using the shortest path to the goal from all adjacent vertices. Dynamic programming and memoization go together. The main difference between dynamic programming and divide and conquer is that subproblems are more or less independent in divide and conquer, whereas subproblems overlap in dynamic programming. The difference between dynamic programming and straightforward recursion is in caching or memoization
during winter and early spring when no other cover exists, and they provide fresh vegetation for animals and birds that feed on them. Although they are often considered to be weeds in gardens, this view is not always warranted, as most of them die when the soil temperature warms up again in early to late spring, when other plants are still dormant and have not yet leafed out. Even though they do not compete directly with cultivated plants, winter annuals are sometimes considered a pest in commercial agriculture, because they can be hosts for insect pests or fungal diseases (such as ovary smut, Microbotryum sp.) which attack crops being cultivated. The fact that they prevent the soil from drying out can also be problematic for commercial agriculture. Molecular genetics In 2008, it was discovered that the inactivation of only two genes in one species of annual plant leads to its conversion into a perennial plant. Researchers deactivated the SOC1 and FUL genes (which control flowering time) of Arabidopsis thaliana. This switch established phenotypes common in perennial plants, such as wood formation. See
period in which they take place vary according to geographical location, and may not correspond to the four traditional seasonal divisions of the year. With respect to the traditional seasons, annual plants are generally categorized into summer annuals and winter annuals. Summer annuals germinate during spring or early summer and mature by autumn of the same year. Winter annuals germinate during the autumn and mature during the spring or summer of the following calendar year. One seed-to-seed life cycle for an annual can occur in as little as a month in some species, though most last several months. Oilseed rape can go from seed to seed in about five weeks under a bank of fluorescent lamps. This style of growing is often used in classrooms for education. Many desert annuals are therophytes, because their seed-to-seed life cycle is only weeks and they spend most of the year as seeds to survive dry conditions. Cultivation In cultivation, many food plants are, or are grown as, annuals,
angiosperms having evolved in parallel. This makes it easier to reconcile molecular clock data that suggests that the angiosperms diverged from the gymnosperms around 320–300 mya. Some more recent studies have used the word anthophyte to describe a group which includes the angiosperms and a variety of fossils (glossopterids, Pentoxylon, Bennettitales,
diesel engine manufacturer Dresser Atlas, a provider of oilfield and factory automation services Tele Atlas, a Dutch mapping company Western Atlas, an oilfield services company Computing and technology Atlas (computer), an early supercomputer, built in the 1960s Atlas (robot), a humanoid robot developed by Boston Dynamics and DARPA ATLAS (software), software flagging naturalized Americans for denaturalization Atlas, a computer used at the Lawrence Livermore National Laboratory in 2006 Abbreviated Test Language for All Systems, or ATLAS, a MILSPEC language for avionics equipment testing Advanced Technology Leisure Application Simulator, or ATLAS, a hydraulic motion simulator used in theme parks ASP.NET AJAX (formerly "Atlas"), a set of ASP.NET extensions ATLAS Transformation Language, programming language Atlas.ti, a qualitative analysis program Automatically Tuned Linear Algebra Software, or ATLAS, Texture atlas, or image sprite sheet UNIVAC 1101, an early American computer, built in the 1950s Science Astronomy Atlas (comet) (C/2019 Y4) Atlas (crater) on the near side of the Moon Atlas (moon), a satellite of Saturn Atlas (star), also designated 27 Tauri, a triple star system in the constellation of Taurus and a member of the Pleiades Advanced Technology Large-Aperture Space Telescope (ATLAST) Advanced Topographic Laser Altimeter System (ATLAS), a space-based lidar instrument on ICESat-2 Asteroid Terrestrial-impact Last Alert System (ATLAS) Mathematics Atlas (manifolds), a set of smooth charts Atlas (topology), a set of charts Smooth atlas Physics Argonne Tandem Linear Accelerator System, or ATLAS, a linear accelerator at the Argonne National Laboratory ATLAS experiment, a particle detector for the Large Hadron Collider at CERN Atomic-terrace low-angle shadowing, or ATLAS, a nanofabrication technique Biology and healthcare Atlas (anatomy), part of the spine Atlas personality, a term used in psychology to describe the personality of someone whose childhood was
characterized by excessive responsibilities Brain atlas, a neuroanatomical map of the brain of a human or other animal Animals and plants Atlas bear Atlas beetle Atlas cedar Atlas moth Atlas pied flycatcher, a bird Atlas turtle Sport Atlas Delmenhorst, a German association football club Atlas F.C., a Mexican professional football club Club Atlético Atlas, an Argentine amateur football club KK Atlas, a former men's professional basketball club based in Belgrade (today's Serbia) Transport Aerospace Atlas (rocket family) SM-65 Atlas intercontinental ballistic missile (ICBM) AeroVelo Atlas, a human-powered helicopter Airbus A400M Atlas, a military aircraft produced 2007–present Armstrong Whitworth Atlas, a British military aeroplane produced 1927–1933 Atlas Air, an American cargo airline Atlas Aircraft, a 1940s aircraft manufacturer Atlas Aircraft Corporation, a South African military aircraft manufacturer Atlas Aviation, an aircraft maintenance firm Atlas Blue, a Moroccan low-cost airline Atlasjet, a Turkish airline Birdman Atlas, an ultralight aircraft HMLAT-303, U.S. Marine Corps helicopter training squadron La Mouette Atlas, a French hang glider design Automotive Atlas (1951 automobile), a French mini-car Atlas (light trucks), a Greek motor vehicle manufacturer Atlas (Pittsburgh automobile), produced 1906–1907 Atlas (Springfield automobile), produced 1907–1913 Atlas, a British van by the Standard Motor Company produced 1958–1962 Atlas Drop Forge Company, a parts subsidiary of REO Motor Car Company Atlas Motor Buggy, an American highwheeler produced in 1909 General Motors Atlas engine Honda Atlas Cars Pakistan, a Pakistani car manufacturer Nissan Atlas, a Japanese light truck Volkswagen Atlas, a sport utility vehicle Geely Atlas, a sport utility vehicle Ships and boats
Atlas Werke, a former German shipbuilding company , the name of several Royal Navy ships ST Atlas, a Swedish tugboat , the name of several U.S. Navy ships Trains Atlas, an 1863–1885 South Devon Railway Dido class locomotive Atlas, a 1927–1962 LMS Royal Scot Class locomotive Atlas Car and Manufacturing Company, a locomotive manufacturer Atlas Model Railroad Other uses Atlas (architecture) ATLAS (simulation) (Army Tactical Level Advanced Simulation), a Thai military system Atlas (storm), which hit the Midwestern United States in October 2013, named by The Weather Channel Agrupación de Trabajadores Latinoamericanos Sindicalistas, or ATLAS, a former Latin American trade union
Patients were told to "spit don't rinse" after toothbrushing as part of a National Health Service campaign in the UK; a fluoride mouthrinse can instead be used at a different time of the day to brushing. In gargling, the head is tilted back, allowing the mouthwash to sit in the back of the mouth while exhaling, causing the liquid to bubble. Gargling is practiced in Japan for perceived prevention of viral infection; a common approach is to gargle with infusions or tea. In some cultures, gargling is usually done in private, typically in a bathroom at a sink, so the liquid can be rinsed away.

Effects

The most commonly used mouthwashes are commercial antiseptics, which are used at home as part of an oral hygiene routine. Mouthwashes combine ingredients to treat a variety of oral conditions. Variations are common, and mouthwash has no standard formulation, so its use and recommendation involve concerns about patient safety. Some manufacturers of mouthwash state that their antiseptic and antiplaque mouthwashes kill the bacterial plaque that causes cavities, gingivitis, and bad breath. It is, however, generally agreed that the use of mouthwash does not eliminate the need for both brushing and flossing. The American Dental Association asserts that regular brushing and proper flossing are enough in most cases, in addition to regular dental check-ups, although it approves many mouthwashes. For many patients, however, the mechanical methods can be tedious and time-consuming, and some local conditions may render them especially difficult. Chemotherapeutic agents, including mouthwashes, can have a key role as adjuncts to daily home care, preventing and controlling supragingival plaque, gingivitis and oral malodor. Minor and transient side effects of mouthwashes are very common, such as taste disturbance, tooth staining, and a sensation of dry mouth. Alcohol-containing mouthwashes may make dry mouth and halitosis worse, as they dry out the mouth.
Soreness, ulceration and redness may sometimes occur (e.g., aphthous stomatitis or allergic contact stomatitis) if the person is allergic or sensitive to mouthwash ingredients such as preservatives, coloring, flavors and fragrances. Such effects might be reduced or eliminated by diluting the mouthwash with water, using a different mouthwash (e.g. saltwater), or forgoing mouthwash entirely. Prescription mouthwashes are used prior to and after oral surgery procedures, such as tooth extraction, or to treat the pain associated with mucositis caused by radiation therapy or chemotherapy. They are also prescribed for aphthous ulcers, other oral ulcers, and other mouth pain. "Magic mouthwashes" are prescription mouthwashes compounded in a pharmacy from a list of ingredients specified by a doctor. Despite a lack of evidence that prescription mouthwashes are more effective in decreasing the pain of oral lesions, many patients and prescribers continue to use them. There has been only one controlled study to evaluate the efficacy of magic mouthwash; it showed no difference in efficacy between the most common magic-mouthwash formulation, on the one hand, and commercial mouthwashes (such as chlorhexidine) or a saline/baking soda solution, on the other. Current guidelines suggest that saline solution is just as effective as magic mouthwash in pain relief and in shortening the healing time of oral mucositis from cancer therapies.

History

The first known references to mouth rinsing are in Ayurveda, for the treatment of gingivitis. Later, in the Greek and Roman periods, mouth rinsing following mechanical cleansing became common among the upper classes, and Hippocrates recommended a mixture of salt, alum, and vinegar. The Jewish Talmud, dating back about 1,800 years, suggests a cure for gum ailments containing "dough water" and olive oil. Before Europeans came to the Americas, Native North American and Mesoamerican cultures used mouthwashes, often made from plants such as Coptis trifolia.
Indeed, Aztec dentistry was more advanced than European dentistry of the age. Peoples of the Americas used salt water mouthwashes for sore throats, and other mouthwashes for problems such as teething and mouth ulcers. Anton van Leeuwenhoek, the famous 17th-century microscopist, discovered living organisms (living, because they were mobile) in deposits on the teeth (what we now call dental plaque). He also found organisms in water from the canal next to his home in Delft. He experimented with samples by adding vinegar or brandy and found that this immediately immobilized or killed the organisms suspended in the water. Next he rinsed his own mouth, and that of somebody else, with a mouthwash containing vinegar or brandy, and found that living organisms remained in the dental plaque. He concluded, correctly, that the mouthwash either did not reach the plaque organisms or was not present long enough to kill them. In 1892, the German Richard Seifert invented the mouthwash product Odol, which was produced by company founder Karl August Lingner (1861–1916) in Dresden. That remained the state of affairs until the late 1960s, when Harald Loe (at the time a professor at the Royal Dental College in Aarhus, Denmark) demonstrated that a chlorhexidine compound could prevent the build-up of dental plaque. The reason for chlorhexidine's effectiveness is that it strongly adheres to surfaces in the mouth and thus remains present in effective concentrations for many hours. Since then, commercial interest in mouthwashes has been intense, and several newer products claim effectiveness in reducing the build-up of dental plaque and the associated severity of gingivitis, in addition to fighting bad breath. Many of these solutions aim to control the volatile sulfur compound (VSC)-creating anaerobic bacteria that live in the mouth and excrete substances that lead to bad breath and an unpleasant mouth taste.
For example, the number of mouthwash variants in the United States has grown from 15 (1970) to 66 (1998) to 113 (2012).

Research

Research in the field of microbiotas shows that only a limited set of microbes cause tooth decay, with most of the bacteria in the human mouth being harmless. Focused attention on cavity-causing bacteria such as Streptococcus mutans has led to research into new mouthwash treatments that prevent these bacteria from initially growing. While current mouthwash treatments must be used with a degree of frequency to prevent these bacteria from regrowing, future treatments could provide a viable long-term solution.

Ingredients

Alcohol

Alcohol is added to mouthwash not to destroy bacteria but to act as a carrier agent for essential active ingredients such as menthol, eucalyptol and thymol, which help to penetrate plaque. Sometimes a significant amount of alcohol (up to 27% vol) is added as a carrier for the flavor and to provide "bite". Because of the alcohol content, it is possible to fail a breathalyzer test after rinsing, although breath alcohol levels return to normal after 10 minutes. In addition, alcohol is a drying agent, which encourages bacterial activity in the mouth, releasing more malodorous volatile sulfur compounds. Therefore, alcohol-containing mouthwash may temporarily worsen halitosis in those who already have it, or indeed be the sole cause of halitosis in other individuals. It has been hypothesized that alcohol in mouthwashes acts as a carcinogen (cancer-inducing agent), though there is no scientific consensus on this. One review suggested such a link; its authors also state that the risk of acquiring oral cancer rises almost five times for users of alcohol-containing mouthwash who neither smoke nor drink (with a higher rate of increase for those who do). In addition, the authors highlight side effects from several mainstream mouthwashes that included dental erosion and accidental poisoning of children.
The review garnered media attention and conflicting opinions from other researchers. Yinka Ebo of Cancer Research UK disputed the findings, concluding that "there is still not enough evidence to suggest that using mouthwash that contains alcohol will increase the risk of mouth cancer". Studies conducted in 1985, 1995, 2003, and 2012 did not support an association between alcohol-containing mouth rinses and oral cancer. Andrew Penman, chief executive of The Cancer Council New South Wales, called for further research on the matter. In a March 2009 brief, the American Dental Association said "the available evidence does not support a connection between oral cancer and alcohol-containing mouthrinse". Many newer brands of mouthwash are alcohol-free, not just in response to consumer concerns about oral cancer, but also to cater for religious groups who abstain from alcohol consumption.

Benzydamine (analgesic)

In painful oral conditions such as aphthous stomatitis, analgesic mouthrinses (e.g. benzydamine mouthwash, or "Difflam") are sometimes used to ease pain, commonly before meals to reduce discomfort while eating.

Benzoic acid

Benzoic acid acts as a buffer.

Betamethasone

Betamethasone is sometimes used as an anti-inflammatory, corticosteroid mouthwash. It may be used for severe inflammatory conditions of the oral mucosa, such as the severe forms of aphthous stomatitis.

Cetylpyridinium chloride (antiseptic, antimalodor)

Cetylpyridinium chloride-containing mouthwash (e.g. 0.05%) is used in some specialized mouthwashes for halitosis. Cetylpyridinium chloride mouthwash has less anti-plaque effect than chlorhexidine and may cause staining of teeth, or sometimes an oral burning sensation or ulceration.

Chlorhexidine digluconate and hexetidine (antiseptic)

Chlorhexidine digluconate is a chemical antiseptic and is used in a 0.12–0.2% solution as a mouthwash. However, there is no evidence to support
Lysimachus of Acarnania. Alexander was raised in the manner of noble Macedonian youths, learning to read, play the lyre, ride, fight, and hunt. When Alexander was ten years old, a trader from Thessaly brought Philip a horse, which he offered to sell for thirteen talents. The horse refused to be mounted, and Philip ordered it away. Alexander, however, detecting the horse's fear of its own shadow, asked to tame the horse, which he eventually managed. Plutarch stated that Philip, overjoyed at this display of courage and ambition, kissed his son tearfully, declaring: "My boy, you must find a kingdom big enough for your ambitions. Macedon is too small for you", and bought the horse for him. Alexander named it Bucephalas, meaning "ox-head". Bucephalas carried Alexander as far as India. When the animal died (of old age at about thirty, according to Plutarch), Alexander named a city after him, Bucephala.

Education

When Alexander was 13, Philip began to search for a tutor, and considered such academics as Isocrates and Speusippus, the latter offering to resign from his stewardship of the Academy to take up the post. In the end, Philip chose Aristotle and provided the Temple of the Nymphs at Mieza as a classroom. In return for teaching Alexander, Philip agreed to rebuild Aristotle's hometown of Stageira, which Philip had razed, and to repopulate it by buying and freeing the ex-citizens who were slaves, or pardoning those who were in exile. Mieza was like a boarding school for Alexander and the children of Macedonian nobles, such as Ptolemy, Hephaistion, and Cassander. Many of these students would become his friends and future generals, and are often known as the "Companions". Aristotle taught Alexander and his companions about medicine, philosophy, morals, religion, logic, and art.
Under Aristotle's tutelage, Alexander developed a passion for the works of Homer, and in particular the Iliad; Aristotle gave him an annotated copy, which Alexander later carried on his campaigns. Alexander was able to quote Euripides from memory. During his youth, Alexander was also acquainted with Persian exiles at the Macedonian court, who received the protection of Philip II for several years as they opposed Artaxerxes III. Among them were Artabazos II and his daughter Barsine, possible future mistress of Alexander, who resided at the Macedonian court from 352 to 342 BC, as well as Amminapes, future satrap of Alexander, or a Persian nobleman named Sisines. This gave the Macedonian court a good knowledge of Persian issues, and may even have influenced some of the innovations in the management of the Macedonian state. The Suda writes that Anaximenes of Lampsacus was one of Alexander's teachers, and that Anaximenes also accompanied Alexander on his campaigns.

Heir of Philip II

Regency and ascent of Macedon

When Alexander was 16, his education under Aristotle ended. Philip II had waged war against the Thracians to the north, which left Alexander in charge as regent and heir apparent. During Philip's absence, the Thracian tribe of Maedi revolted against Macedonia. Alexander responded quickly and drove them from their territory. The territory was colonized, and a city, named Alexandropolis, was founded. Upon Philip's return, Alexander was dispatched with a small force to subdue the revolts in southern Thrace. Campaigning against the Greek city of Perinthus, Alexander reportedly saved his father's life. Meanwhile, the city of Amphissa began to work lands that were sacred to Apollo near Delphi, a sacrilege that gave Philip the opportunity to further intervene in Greek affairs. While Philip was occupied in Thrace, Alexander was ordered to muster an army for a campaign in southern Greece.
Concerned that other Greek states might intervene, Alexander made it look as though he was preparing to attack Illyria instead. During this turmoil, the Illyrians invaded Macedonia, only to be repelled by Alexander. Philip and his army joined his son in 338 BC, and they marched south through Thermopylae, taking it after stubborn resistance from its Theban garrison. They went on to occupy the city of Elatea, only a few days' march from both Athens and Thebes. The Athenians, led by Demosthenes, voted to seek alliance with Thebes against Macedonia. Both Athens and Philip sent embassies to win Thebes's favour, but Athens won the contest. Philip marched on Amphissa (ostensibly acting on the request of the Amphictyonic League), capturing the mercenaries sent there by Demosthenes and accepting the city's surrender. Philip then returned to Elatea, sending a final offer of peace to Athens and Thebes, who both rejected it. As Philip marched south, his opponents blocked him near Chaeronea, Boeotia. During the ensuing Battle of Chaeronea, Philip commanded the right wing and Alexander the left, accompanied by a group of Philip's trusted generals. According to the ancient sources, the two sides fought bitterly for some time. Philip deliberately commanded his troops to retreat, counting on the untested Athenian hoplites to follow, thus breaking their line. Alexander was the first to break the Theban lines, followed by Philip's generals. Having damaged the enemy's cohesion, Philip ordered his troops to press forward and quickly routed them. With the Athenians lost, the Thebans were surrounded; left to fight alone, they were defeated. After the victory at Chaeronea, Philip and Alexander marched unopposed into the Peloponnese, welcomed by all cities; when they reached Sparta, however, they were refused entry, but they did not resort to war.
At Corinth, Philip established a "Hellenic Alliance" (modelled on the old anti-Persian alliance of the Greco-Persian Wars), which included most Greek city-states except Sparta. Philip was then named Hegemon (often translated as "Supreme Commander") of this league (known by modern scholars as the League of Corinth), and announced his plans to attack the Persian Empire.

Exile and return

When Philip returned to Pella, he fell in love with and married Cleopatra Eurydice, the niece of his general Attalus, in 338 BC. The marriage made Alexander's position as heir less secure, since any son of Cleopatra Eurydice would be a fully Macedonian heir, while Alexander was only half-Macedonian. During the wedding banquet, a drunken Attalus publicly prayed to the gods that the union would produce a legitimate heir. In 337 BC, Alexander fled Macedon with his mother, dropping her off with her brother, King Alexander I of Epirus, in Dodona, capital of the Molossians. He continued to Illyria, where he sought refuge with one or more Illyrian kings, perhaps with Glaukias, and was treated as a guest, despite having defeated them in battle a few years before. However, it appears Philip never intended to disown his politically and militarily trained son. Accordingly, Alexander returned to Macedon after six months due to the efforts of a family friend, Demaratus, who mediated between the two parties. In the following year, the Persian satrap (governor) of Caria, Pixodarus, offered his eldest daughter to Alexander's half-brother, Philip Arrhidaeus. Olympias and several of Alexander's friends suggested this showed Philip intended to make Arrhidaeus his heir. Alexander reacted by sending an actor, Thessalus of Corinth, to tell Pixodarus that he should not offer his daughter's hand to an illegitimate son, but instead to Alexander.
When Philip heard of this, he stopped the negotiations and scolded Alexander for wishing to marry the daughter of a Carian, explaining that he wanted a better bride for him. Philip exiled four of Alexander's friends, Harpalus, Nearchus, Ptolemy and Erigyius, and had the Corinthians bring Thessalus to him in chains.

King of Macedon

Accession

In summer 336 BC, while at Aegae attending the wedding of his daughter Cleopatra to Olympias's brother, Alexander I of Epirus, Philip was assassinated by the captain of his bodyguards, Pausanias. As Pausanias tried to escape, he tripped over a vine and was killed by his pursuers, including two of Alexander's companions, Perdiccas and Leonnatus. Alexander was proclaimed king on the spot by the nobles and army at the age of 20.

Consolidation of power

Alexander began his reign by eliminating potential rivals to the throne. He had his cousin, the former Amyntas IV, executed. He also had two Macedonian princes from the region of Lyncestis killed, but spared a third, Alexander Lyncestes. Olympias had Cleopatra Eurydice and Europa, her daughter by Philip, burned alive. When Alexander learned about this, he was furious. Alexander also ordered the murder of Attalus, who was Cleopatra's uncle and in command of the advance guard of the army in Asia Minor. Attalus was at that time corresponding with Demosthenes regarding the possibility of defecting to Athens. Attalus had also severely insulted Alexander, and following Cleopatra's murder, Alexander may have considered him too dangerous to leave alive. Alexander spared Arrhidaeus, who was by all accounts mentally disabled, possibly as a result of poisoning by Olympias. News of Philip's death roused many states into revolt, including Thebes, Athens, Thessaly, and the Thracian tribes north of Macedon. When news of the revolts reached Alexander, he responded quickly. Though advised to use diplomacy, Alexander mustered 3,000 Macedonian cavalry and rode south towards Thessaly.
He found the Thessalian army occupying the pass between Mount Olympus and Mount Ossa, and ordered his men to ride over Mount Ossa. When the Thessalians awoke the next day, they found Alexander in their rear and promptly surrendered, adding their cavalry to Alexander's force. He then continued south towards the Peloponnese. Alexander stopped at Thermopylae, where he was recognized as the leader of the Amphictyonic League before heading south to Corinth. Athens sued for peace and Alexander pardoned the rebels. The famous encounter between Alexander and Diogenes the Cynic occurred during Alexander's stay in Corinth. When Alexander asked Diogenes what he could do for him, the philosopher disdainfully asked Alexander to stand a little to the side, as he was blocking the sunlight. This reply apparently delighted Alexander, who is reported to have said: "But verily, if I were not Alexander, I would like to be Diogenes." At Corinth, Alexander took the title of Hegemon ("leader") and, like Philip, was appointed commander for the coming war against Persia. He also received news of a Thracian uprising.

Balkan campaign

Before crossing to Asia, Alexander wanted to safeguard his northern borders. In the spring of 335 BC, he advanced to suppress several revolts. Starting from Amphipolis, he travelled east into the country of the "Independent Thracians"; and at Mount Haemus, the Macedonian army attacked and defeated the Thracian forces manning the heights. The Macedonians marched into the country of the Triballi, and defeated their army near the Lyginus river (a tributary of the Danube). Alexander then marched for three days to the Danube, encountering the Getae tribe on the opposite shore. Crossing the river at night, he surprised them and forced their army to retreat after the first cavalry skirmish. News then reached Alexander that Cleitus, King of Illyria, and King Glaukias of the Taulantii were in open revolt against his authority.
Marching west into Illyria, Alexander defeated each in turn, forcing the two rulers to flee with their troops. With these victories, he secured his northern frontier. While Alexander campaigned north, the Thebans and Athenians rebelled once again. Alexander immediately headed south. While the other cities again hesitated, Thebes decided to fight. The Theban resistance was ineffective, and Alexander razed the city and divided its territory between the other Boeotian cities. The end of Thebes cowed Athens, leaving all of Greece temporarily at peace. Alexander then set out on his Asian campaign, leaving Antipater as regent.

Conquest of the Persian Empire

Asia Minor

After his victory at the Battle of Chaeronea (338 BC), Philip II began the work of establishing himself as hēgemṓn of a league which, according to Diodorus, was to wage a campaign against the Persians for the sundry grievances Greece had suffered in 480 BC, and to free the Greek cities of the western coast and islands from Achaemenid rule. In 336 he sent Parmenion, with Amyntas, Andromenes and Attalus, and an army of 10,000 men into Anatolia to make preparations for an invasion. At first, all went well. The Greek cities on the western coast of Anatolia revolted, until the news arrived that Philip had been murdered and had been succeeded by his young son Alexander. The Macedonians were demoralized by Philip's death and were subsequently defeated near Magnesia by the Achaemenids under the command of the mercenary Memnon of Rhodes. Taking over the invasion project of Philip II, Alexander's army crossed the Hellespont in 334 BC with approximately 48,100 soldiers, 6,100 cavalry and a fleet of 120 ships with crews numbering 38,000, drawn from Macedon and various Greek city-states, mercenaries, and feudally raised soldiers from Thrace, Paionia, and Illyria. He showed his intent to conquer the entirety of the Persian Empire by throwing a spear into Asian soil and saying he accepted Asia as a gift from the gods.
This also showed Alexander's eagerness to fight, in contrast to his father's preference for diplomacy. After an initial victory against Persian forces at the Battle of the Granicus, Alexander accepted the surrender of the Persian provincial capital and treasury of Sardis; he then proceeded along the Ionian coast, granting autonomy and democracy to the cities. Miletus, held by Achaemenid forces, required a delicate siege operation, with Persian naval forces nearby. Further south, at Halicarnassus, in Caria, Alexander successfully waged his first large-scale siege, eventually forcing his opponents, the mercenary captain Memnon of Rhodes and the Persian satrap of Caria, Orontobates, to withdraw by sea. Alexander left the government of Caria to a member of the Hecatomnid dynasty, Ada, who adopted Alexander. From Halicarnassus, Alexander proceeded into mountainous Lycia and the Pamphylian plain, asserting control over all coastal cities to deny the Persians naval bases. From Pamphylia onwards the coast held no major ports, and Alexander moved inland. At Termessos, Alexander humbled but did not storm the Pisidian city. At the ancient Phrygian capital of Gordium, Alexander "undid" the hitherto unsolvable Gordian Knot, a feat said to await the future "king of Asia". According to the story, Alexander proclaimed that it did not matter how the knot was undone, and hacked it apart with his sword.

The Levant and Syria

In spring 333 BC, Alexander crossed the Taurus into Cilicia. After a long pause due to an illness, he marched on towards Syria. Though outmanoeuvred by Darius's significantly larger army, he marched back to Cilicia, where he defeated Darius at Issus. Darius fled the battle, causing his army to collapse, and left behind his wife, his two daughters, his mother Sisygambis, and a fabulous treasure. He offered a peace treaty that included the lands he had already lost, and a ransom of 10,000 talents for his family.
Alexander replied that since he was now king of Asia, it was he alone who decided territorial divisions. Alexander proceeded to take possession of Syria, and most of the coast of the Levant. In the following year, 332 BC, he was forced to attack Tyre, which he captured after a long and difficult siege. The men of military age were massacred and the women and children sold into slavery. Egypt When Alexander destroyed Tyre, most of the towns on the route to Egypt quickly capitulated. However, Alexander was met with resistance at Gaza. The stronghold was heavily fortified and built on a hill, requiring a siege. When "his engineers pointed out to him that because of the height of the mound it would be impossible... this encouraged Alexander all the more to make the attempt". After three unsuccessful assaults, the stronghold fell, but not before Alexander had received a serious shoulder wound. As in Tyre, men of military age were put to the sword and the women and children were sold into slavery. Alexander advanced on Egypt in late 332 BC, where he was regarded as a liberator; Egypt was only one of a large number of territories taken by Alexander from the Persians. To legitimize taking power and be recognized as the descendant of the long line of pharaohs, Alexander made sacrifices to the gods at Memphis and went to consult the famous oracle of Amun-Ra at the Siwa Oasis in the Libyan desert, where he was pronounced a son of the deity Amun. Henceforth, Alexander often referred to Zeus-Ammon as his true father, and after his death, currency depicted him adorned with the Horns of Ammon as a symbol of his divinity. The Greeks interpreted this message, one that the gods addressed to all pharaohs, as a prophecy. After his trip to Siwa, Alexander was crowned in the temple of Ptah at Memphis. It appears that the Egyptian people did not find it disturbing that he was a foreigner, nor that he was absent for virtually his entire reign. Alexander restored the temples neglected by the Persians and dedicated new monuments to the Egyptian gods. In the temple of Luxor, near Karnak, he built a chapel for the sacred barge. During his stay in Egypt, he founded Alexandria, which would become the prosperous capital of the Ptolemaic Kingdom after his death; control of Egypt passed to Ptolemy I (son of Lagos), the founder of the Ptolemaic Dynasty (305–30 BC), after the death of Alexander. During his brief months in Egypt, he reformed the taxation system on Greek models and organized the military occupation of the country, but, early in 331 BC, he left for Asia in pursuit of the Persians.
Assyria and Babylonia Leaving Egypt in 331 BC, Alexander marched eastward into Achaemenid Assyria in Upper Mesopotamia (now northern Iraq) and defeated Darius again at the Battle of Gaugamela. Darius once more fled the field, and Alexander chased him as far as Arbela. Gaugamela would be the final and decisive encounter between the two. Darius fled over the mountains to Ecbatana (modern Hamadan) while Alexander captured Babylon. The Babylonian astronomical diaries say that "the king of the world, Alexander" sent his scouts with a message to the people of Babylon before entering the city: "I shall not enter your houses". Persia From Babylon, Alexander went to Susa, one of the Achaemenid capitals, and captured its treasury. He sent the bulk of his army to the Persian ceremonial capital of Persepolis via the Persian Royal Road. Alexander himself took selected troops on the direct route to the city. He then stormed the pass of the Persian Gates (in the modern Zagros Mountains), which had been blocked by a Persian army under Ariobarzanes, and then hurried to Persepolis before its garrison could loot the treasury.
On entering Persepolis, Alexander allowed his troops to loot the city for several days. Alexander stayed in Persepolis for five months. During his stay a fire broke out in the eastern palace of Xerxes I and spread to the rest of the city. Possible causes include a drunken accident or deliberate revenge for Xerxes' burning of the Acropolis of Athens during the Second Persian War; Plutarch and Diodorus allege that Alexander's companion, the hetaera Thaïs, instigated and started the fire. As he watched the city burn, Alexander immediately began to regret his decision. Plutarch claims that he ordered his men to put out the fires, but that the flames had already spread to most of the city. Curtius claims that Alexander did not regret his decision until the next morning. Plutarch recounts an anecdote in which Alexander pauses and talks to a fallen statue of Xerxes as if it were a live person: Fall of the Empire and the East Alexander then chased Darius, first into Media, and then Parthia. The Persian king no longer controlled his own destiny, and was taken prisoner by Bessus, his Bactrian satrap and kinsman. As Alexander approached, Bessus had his men fatally stab the Great King and then declared himself Darius's successor as Artaxerxes V, before retreating into Central Asia to launch a guerrilla campaign against Alexander. Alexander buried Darius's remains next to his Achaemenid predecessors in a regal funeral. He claimed that, while dying, Darius had named him as his successor to the Achaemenid throne. The Achaemenid Empire is normally considered to have fallen with Darius. However, as basic forms of community life and the general structure of government were maintained and resuscitated by Alexander under his own rule, he, in the words of the Iranologist Pierre Briant, "may therefore be considered to have acted in many ways as the last of the Achaemenids." Alexander viewed Bessus as a usurper and set out to defeat him.
This campaign, initially against Bessus, turned into a grand tour of central Asia. Alexander founded a series of new cities, all called Alexandria, including modern Kandahar in Afghanistan, and Alexandria Eschate ("The Furthest") in modern Tajikistan. The campaign took Alexander through Media, Parthia, Aria (West Afghanistan), Drangiana, Arachosia (South and Central Afghanistan), Bactria (North and Central Afghanistan), and Scythia. In 329 BC, Spitamenes, who held an undefined position in the satrapy of Sogdiana, betrayed Bessus to Ptolemy, one of Alexander's trusted companions, and Bessus was executed. However, when Alexander was later on the Jaxartes dealing with an incursion by a horse-nomad army, Spitamenes raised Sogdiana in revolt. Alexander personally defeated the Scythians at the Battle of Jaxartes and immediately launched a campaign against Spitamenes, defeating him in the Battle of Gabai. After the defeat, Spitamenes was killed by his own men, who then sued for peace. Problems and plots During this time, Alexander adopted some elements of Persian dress and customs at his court, notably the custom of proskynesis, either a symbolic kissing of the hand, or prostration on the ground, that Persians showed to their social superiors. This was one aspect of Alexander's broad strategy aimed at securing the aid and support of the Iranian upper classes. The Greeks, however, regarded the gesture of proskynesis as the province of deities and believed that Alexander meant to deify himself by requiring it. This cost him the sympathies of many of his countrymen, and he eventually abandoned it. During the long rule of the Achaemenids, the elite positions in many segments of the empire, including the central government, the army, and the many satrapies, were specifically reserved for Iranians and to a major degree Persian noblemen. The latter were in many cases additionally connected through marriage alliances with the royal Achaemenid family.
This posed a problem for Alexander: whether to make use of the various segments and peoples that had given the empire its solidity and unity for a lengthy period of time. Pierre Briant explains that Alexander realized that it was insufficient merely to exploit the internal contradictions within the imperial system, as in Asia Minor, Babylonia, or Egypt; he also had to (re)create a central government with or without the support of the Iranians. As early as 334 BC he demonstrated awareness of this, when he challenged incumbent King Darius III "by appropriating the main elements of the Achaemenid monarchy's ideology, particularly the theme of the king who protects the lands and the peasants". Alexander wrote a letter in 332 BC to Darius III, wherein he argued that he was worthier than Darius "to succeed to the Achaemenid throne". However, Alexander's eventual decision to burn the Achaemenid palace at Persepolis, in conjunction with the major rejection and opposition of the "entire Persian people", made it impracticable for him to present himself as Darius' legitimate successor. Against Bessus (Artaxerxes V), however, Briant adds, Alexander reasserted "his claim to legitimacy as the avenger of Darius III". A plot against his life was revealed, and one of his officers, Philotas, was executed for failing to alert Alexander. The death of the son necessitated the death of the father, and thus Parmenion, who had been charged with guarding the treasury at Ecbatana, was assassinated at Alexander's command, to prevent attempts at vengeance. Most infamously, Alexander personally killed the man who had saved his life at Granicus, Cleitus the Black, during a violent drunken altercation at Maracanda (modern-day Samarkand in Uzbekistan), in which Cleitus accused Alexander of several errors of judgement and, most especially, of having forgotten the Macedonian ways in favour of a corrupt oriental lifestyle.
Later, in the Central Asian campaign, a second plot against his life was revealed, this one instigated by his own royal pages. His official historian, Callisthenes of Olynthus, was implicated in the plot, and in the Anabasis of Alexander, Arrian states that Callisthenes and the pages were then tortured on the rack as punishment, and likely died soon after. It remains unclear if Callisthenes was actually involved in the plot, for prior to his accusation he had fallen out of favour by leading the opposition to the attempt to introduce proskynesis. Macedon in Alexander's absence When Alexander set out for Asia, he left his general Antipater, an experienced military and political leader and part of Philip II's "Old Guard", in charge of Macedon. Alexander's sacking of Thebes ensured that Greece remained quiet during his absence. The one exception was a call to arms by the Spartan king Agis III in 331 BC, whom Antipater defeated and killed in the Battle of Megalopolis. Antipater referred the Spartans' punishment to the League of Corinth, which then deferred to Alexander, who chose to pardon them. There was also considerable friction between Antipater and Olympias, and each complained to Alexander about the other. In general, Greece enjoyed a period of peace and prosperity during Alexander's campaign in Asia. Alexander sent back vast sums from his conquest, which stimulated the economy and increased trade across his empire. However, Alexander's constant demands for troops and the migration of Macedonians throughout his empire depleted Macedon's strength, greatly weakening it in the years after Alexander, and ultimately led to its subjugation by Rome after the Third Macedonian War (171–168 BC). Indian campaign Forays into the Indian subcontinent After the death of Spitamenes and his own marriage to Roxana (Raoxshna in Old Iranian) to cement relations with his new satrapies, Alexander turned to the Indian subcontinent.
He invited the chieftains of the former satrapy of Gandhara (a region presently straddling eastern Afghanistan and northern Pakistan), to come to him and submit to his authority. Omphis (Indian name Ambhi), the ruler of Taxila, whose kingdom extended from the Indus to the Hydaspes (Jhelum), complied, but the chieftains of some hill clans, including the Aspasioi and Assakenoi sections of the Kambojas (known in Indian texts also as Ashvayanas and Ashvakayanas), refused to submit. Ambhi hastened to relieve Alexander of his apprehension and met him with valuable presents, placing himself and all his forces at his disposal. Alexander not only returned Ambhi his title and the gifts but he also presented him with a wardrobe of "Persian robes, gold and silver ornaments, 30 horses and 1,000 talents in gold". Alexander was emboldened to divide his forces, and Ambhi assisted Hephaestion and Perdiccas in constructing a bridge over the Indus where it bends at Hund, supplied their troops with provisions, and received Alexander himself, and his whole army, in his capital city of Taxila, with every demonstration of friendship and the most liberal hospitality. On the subsequent advance of the Macedonian king, Taxiles accompanied him with a force of 5,000 men and took part in the battle of the Hydaspes River. After that victory he was sent by Alexander in pursuit of Porus, to whom he was charged to offer favourable terms, but narrowly escaped losing his life at the hands of his old enemy. Subsequently, however, the two rivals were reconciled by the personal mediation of Alexander; and Taxiles, after having contributed zealously to the equipment of the fleet on the Hydaspes, was entrusted by the king with the government of the whole territory between that river and the Indus. 
A considerable accession of power was granted him after the death of Philip, son of Machatas; and he was allowed to retain his authority at the death of Alexander himself (323 BC), as well as in the subsequent partition of the provinces at Triparadisus, 321 BC. In the winter of 327/326 BC, Alexander personally led a campaign against the Aspasioi of Kunar valleys, the Guraeans of the Guraeus valley, and the Assakenoi of the Swat and Buner valleys. A fierce contest ensued with the Aspasioi in which Alexander was wounded in the shoulder by a dart, but eventually the Aspasioi lost. Alexander then faced the Assakenoi, who fought against him from the strongholds of Massaga, Ora and Aornos. The fort of Massaga was reduced only after days of bloody fighting, in which Alexander was wounded seriously in the ankle. According to Curtius, "Not only did Alexander slaughter the entire population of Massaga, but also did he reduce its buildings to rubble." A similar slaughter followed at Ora. In the aftermath of Massaga and Ora, numerous Assakenians fled to the fortress of Aornos. Alexander followed close behind and captured the strategic hill-fort after four bloody days. After Aornos, Alexander crossed the Indus and fought and won an epic battle against King Porus, who ruled a region lying between the Hydaspes and the Acesines (Chenab), in what is now the Punjab, in the Battle of the Hydaspes in 326 BC. Alexander was impressed by Porus's bravery, and made him an ally. He appointed Porus as satrap, and added to Porus's territory land that he did not previously own, towards the south-east, up to the Hyphasis (Beas). Choosing a local helped him control these lands so distant from Greece. Alexander founded two cities on opposite sides of the Hydaspes river, naming one Bucephala, in honour of his horse, who died around this time. The other was Nicaea (Victory), thought to be located at the site of modern-day Mong, Punjab. 
Philostratus the Elder, in the Life of Apollonius of Tyana, writes that in the army of Porus there was an elephant who fought bravely against Alexander's army, and that Alexander dedicated it to Helios (the Sun) and named it Ajax, because he thought that so great an animal deserved a great name. The elephant had gold rings around its tusks, on which was an inscription written in Greek: "Alexander the son of Zeus dedicates Ajax to Helios" (ΑΛΕΞΑΝΔΡΟΣ Ο ΔΙΟΣ ΤΟΝ ΑΙΑΝΤΑ ΤΩΙ ΗΛΙΩΙ).
When Alexander was ten years old, a trader from Thessaly brought Philip a horse, which he offered to sell for thirteen talents. The horse refused to be mounted, and Philip ordered it away. Alexander, however, detecting the horse's fear of its own shadow, asked to tame the horse, which he eventually managed. Plutarch stated that Philip, overjoyed at this display of courage and ambition, kissed his son tearfully, declaring: "My boy, you must find a kingdom big enough for your ambitions. Macedon is too small for you", and bought the horse for him. Alexander named it Bucephalas, meaning "ox-head". Bucephalas carried Alexander as far as India. When the animal died (of old age, according to Plutarch, at age thirty), Alexander named a city after him, Bucephala. Education When Alexander was 13, Philip began to search for a tutor, and considered such academics as Isocrates and Speusippus, the latter offering to resign from his stewardship of the Academy to take up the post. In the end, Philip chose Aristotle and provided the Temple of the Nymphs at Mieza as a classroom. In return for teaching Alexander, Philip agreed to rebuild Aristotle's hometown of Stageira, which Philip had razed, and to repopulate it by buying and freeing the ex-citizens who were slaves, or pardoning those who were in exile. Mieza was like a boarding school for Alexander and the children of Macedonian nobles, such as Ptolemy, Hephaistion, and Cassander. Many of these students would become his friends and future generals, and are often known as the "Companions". Aristotle taught Alexander and his companions about medicine, philosophy, morals, religion, logic, and art. Under Aristotle's tutelage, Alexander developed a passion for the works of Homer, and in particular the Iliad; Aristotle gave him an annotated copy, which Alexander later carried on his campaigns. Alexander was able to quote Euripides from memory.
During his youth, Alexander was also acquainted with Persian exiles at the Macedonian court, who received the protection of Philip II for several years as they opposed Artaxerxes III. Among them were Artabazos II and his daughter Barsine, possible future mistress of Alexander, who resided at the Macedonian court from 352 to 342 BC, as well as Amminapes, future satrap of Alexander, and a Persian nobleman named Sisines. This gave the Macedonian court a good knowledge of Persian issues, and may even have influenced some of the innovations in the management of the Macedonian state. The Suda writes that Anaximenes of Lampsacus was one of Alexander's teachers, and that Anaximenes also accompanied Alexander on his campaigns. Heir of Philip II Regency and ascent of Macedon When Alexander was 16, his education under Aristotle ended. Philip II had waged war against the Thracians to the north, which left Alexander in charge as regent and heir apparent. During Philip's absence, the Thracian tribe of Maedi revolted against Macedonia. Alexander responded quickly and drove them from their territory. The territory was colonized, and a city, named Alexandropolis, was founded. Upon Philip's return, Alexander was dispatched with a small force to subdue the revolts in southern Thrace. Campaigning against the Greek city of Perinthus, Alexander reportedly saved his father's life. Meanwhile, the city of Amphissa began to work lands that were sacred to Apollo near Delphi, a sacrilege that gave Philip the opportunity to further intervene in Greek affairs. While Philip was occupied in Thrace, Alexander was ordered to muster an army for a campaign in southern Greece. Concerned that other Greek states might intervene, Alexander made it look as though he was preparing to attack Illyria instead. During this turmoil, the Illyrians invaded Macedonia, only to be repelled by Alexander.
Philip and his army joined his son in 338 BC, and they marched south through Thermopylae, taking it after stubborn resistance from its Theban garrison. They went on to occupy the city of Elatea, only a few days' march from both Athens and Thebes. The Athenians, led by Demosthenes, voted to seek alliance with Thebes against Macedonia. Both Athens and Philip sent embassies to win Thebes's favour, but Athens won the contest. Philip marched on Amphissa (ostensibly acting on the request of the Amphictyonic League), capturing the mercenaries sent there by Demosthenes and accepting the city's surrender. Philip then returned to Elatea, sending a final offer of peace to Athens and Thebes, who both rejected it. As Philip marched south, his opponents blocked him near Chaeronea, Boeotia. During the ensuing Battle of Chaeronea, Philip commanded the right wing and Alexander the left, accompanied by a group of Philip's trusted generals. According to the ancient sources, the two sides fought bitterly for some time. Philip deliberately commanded his troops to retreat, counting on the untested Athenian hoplites to follow, thus breaking their line. Alexander was the first to break the Theban lines, followed by Philip's generals. Having damaged the enemy's cohesion, Philip ordered his troops to press forward and quickly routed them. With the Athenians lost, the Thebans were surrounded. Left to fight alone, they were defeated. After the victory at Chaeronea, Philip and Alexander marched unopposed into the Peloponnese, welcomed by all cities; however, when they reached Sparta, they were refused, but did not resort to war. At Corinth, Philip established a "Hellenic Alliance" (modelled on the old anti-Persian alliance of the Greco-Persian Wars), which included most Greek city-states except Sparta. Philip was then named Hegemon (often translated as "Supreme Commander") of this league (known by modern scholars as the League of Corinth), and announced his plans to attack the Persian Empire. 
Exile and return When Philip returned to Pella, he fell in love with Cleopatra Eurydice, the niece of his general Attalus, and married her in 338 BC. The marriage made Alexander's position as heir less secure, since any son of Cleopatra Eurydice would be a fully Macedonian heir, while Alexander was only half-Macedonian. During the wedding banquet, a drunken Attalus publicly prayed to the gods that the union would produce a legitimate heir. In 337 BC, Alexander fled Macedon with his mother, leaving her with her brother, King Alexander I of Epirus, in Dodona, capital of the Molossians. He continued to Illyria, where he sought refuge with one or more Illyrian kings, perhaps with Glaukias, and was treated as a guest, despite having defeated them in battle a few years before. However, it appears Philip never intended to disown his politically and militarily trained son. Accordingly, Alexander returned to Macedon after six months due to the efforts of a family friend, Demaratus, who mediated between the two parties. In the following year, the Persian satrap (governor) of Caria, Pixodarus, offered his eldest daughter to Alexander's half-brother, Philip Arrhidaeus. Olympias and several of Alexander's friends suggested this showed Philip intended to make Arrhidaeus his heir. Alexander reacted by sending an actor, Thessalus of Corinth, to tell Pixodarus that he should not offer his daughter's hand to an illegitimate son, but instead to Alexander. When Philip heard of this, he stopped the negotiations and scolded Alexander for wishing to marry the daughter of a Carian, explaining that he wanted a better bride for him. Philip exiled four of Alexander's friends, Harpalus, Nearchus, Ptolemy and Erigyius, and had the Corinthians bring Thessalus to him in chains. King of Macedon Accession In summer 336 BC, while at Aegae attending the wedding of his daughter Cleopatra to Olympias's brother, Alexander I of Epirus, Philip was assassinated by the captain of his bodyguards, Pausanias.
As Pausanias tried to escape, he tripped over a vine and was killed by his pursuers, including two of Alexander's companions, Perdiccas and Leonnatus. Alexander was proclaimed king on the spot by the nobles and army at the age of 20. Consolidation of power Alexander began his reign by eliminating potential rivals to the throne. He had his cousin, the former Amyntas IV, executed. He also had two Macedonian princes from the region of Lyncestis killed, but spared a third, Alexander Lyncestes. Olympias had Cleopatra Eurydice and Europa, her daughter by Philip, burned alive. When Alexander learned about this, he was furious. Alexander also ordered the murder of Attalus, who was in command of the advance guard of the army in Asia Minor and Cleopatra's uncle. Attalus was at that time corresponding with Demosthenes, regarding the possibility of defecting to Athens. Attalus also had severely insulted Alexander, and following Cleopatra's murder, Alexander may have considered him too dangerous to leave alive. Alexander spared Arrhidaeus, who was by all accounts mentally disabled, possibly as a result of poisoning by Olympias. News of Philip's death roused many states into revolt, including Thebes, Athens, Thessaly, and the Thracian tribes north of Macedon. When news of the revolts reached Alexander, he responded quickly. Though advised to use diplomacy, Alexander mustered 3,000 Macedonian cavalry and rode south towards Thessaly. He found the Thessalian army occupying the pass between Mount Olympus and Mount Ossa, and ordered his men to ride over Mount Ossa. When the Thessalians awoke the next day, they found Alexander in their rear and promptly surrendered, adding their cavalry to Alexander's force. He then continued south towards the Peloponnese. Alexander stopped at Thermopylae, where he was recognized as the leader of the Amphictyonic League before heading south to Corinth. Athens sued for peace and Alexander pardoned the rebels. 
The famous encounter between Alexander and Diogenes the Cynic occurred during Alexander's stay in Corinth. When Alexander asked Diogenes what he could do for him, the philosopher disdainfully asked Alexander to stand a little to the side, as he was blocking the sunlight. This reply apparently delighted Alexander, who is reported to have said "But verily, if I were not Alexander, I would like to be Diogenes." At Corinth, Alexander took the title of Hegemon ("leader") and, like Philip, was appointed commander for the coming war against Persia. He also received news of a Thracian uprising. Balkan campaign Before crossing to Asia, Alexander wanted to safeguard his northern borders. In the spring of 335 BC, he advanced to suppress several revolts. Starting from Amphipolis, he travelled east into the country of the "Independent Thracians"; and at Mount Haemus, the Macedonian army attacked and defeated the Thracian forces manning the heights. The Macedonians marched into the country of the Triballi, and defeated their army near the Lyginus river (a tributary of the Danube). Alexander then marched for three days to the Danube, encountering the Getae tribe on the opposite shore. Crossing the river at night, he surprised them and forced their army to retreat after the first cavalry skirmish. News then reached Alexander that Cleitus, King of Illyria, and King Glaukias of the Taulantii were in open revolt against his authority. Marching west into Illyria, Alexander defeated each in turn, forcing the two rulers to flee with their troops. With these victories, he secured his northern frontier. While Alexander campaigned north, the Thebans and Athenians rebelled once again. Alexander immediately headed south. While the other cities again hesitated, Thebes decided to fight. The Theban resistance was ineffective, and Alexander razed the city and divided its territory between the other Boeotian cities. The end of Thebes cowed Athens, leaving all of Greece temporarily at peace. 
Alexander then set out on his Asian campaign, leaving Antipater as regent. Conquest of the Persian Empire Asia Minor After his victory at the Battle of Chaeronea (338 BC), Philip II began the work of establishing himself as hēgemṓn () of a league which according to Diodorus was to wage a campaign against the Persians for the sundry grievances Greece suffered in 480 and free the Greek cities of the western coast and islands from Achaemenid rule. In 336 he sent Parmenion, with Amyntas, Andromenes and Attalus, and an army of 10,000 men into Anatolia to make preparations for an invasion. At first, all went well. The Greek cities on the western coast of Anatolia revolted until the news arrived that Philip had been murdered and had been succeeded by his young son Alexander. The Macedonians were demoralized by Philip's death and were subsequently defeated near Magnesia by the Achaemenids under the command of the mercenary Memnon of Rhodes. Taking over the invasion project of Philip II, Alexander's army crossed the Hellespont in 334 BC with approximately 48,100 soldiers, 6,100 cavalry and a fleet of 120 ships with crews numbering 38,000, drawn from Macedon and various Greek city-states, mercenaries, and feudally raised soldiers from Thrace, Paionia, and Illyria. He showed his intent to conquer the entirety of the Persian Empire by throwing a spear into Asian soil and saying he accepted Asia as a gift from the gods. This also showed Alexander's eagerness to fight, in contrast to his father's preference for diplomacy. After an initial victory against Persian forces at the Battle of the Granicus, Alexander accepted the surrender of the Persian provincial capital and treasury of Sardis; he then proceeded along the Ionian coast, granting autonomy and democracy to the cities. Miletus, held by Achaemenid forces, required a delicate siege operation, with Persian naval forces nearby. 
Further south, at Halicarnassus, in Caria, Alexander successfully waged his first large-scale siege, eventually forcing his opponents, the mercenary captain Memnon of Rhodes and the Persian satrap of Caria, Orontobates, to withdraw by sea. Alexander left the government of Caria to a member of the Hecatomnid dynasty, Ada, who adopted Alexander. From Halicarnassus, Alexander proceeded into mountainous Lycia and the Pamphylian plain, asserting control over all coastal cities to deny the Persians naval bases. From Pamphylia onwards the coast held no major ports and Alexander moved inland. At Termessos, Alexander humbled but did not storm the Pisidian city. At the ancient Phrygian capital of Gordium, Alexander "undid" the hitherto unsolvable Gordian Knot, a feat said to await the future "king of Asia". According to the story, Alexander proclaimed that it did not matter how the knot was undone and hacked it apart with his sword. The Levant and Syria In spring 333 BC, Alexander crossed the Taurus into Cilicia. After a long pause due to an illness, he marched on towards Syria. Though outmanoeuvered by Darius's significantly larger army, he marched back to Cilicia, where he defeated Darius at Issus. Darius fled the battle, causing his army to collapse, and left behind his wife, his two daughters, his mother Sisygambis, and a fabulous treasure. He offered a peace treaty that included the lands he had already lost, and a ransom of 10,000 talents for his family. Alexander replied that since he was now king of Asia, it was he alone who decided territorial divisions. Alexander proceeded to take possession of Syria, and most of the coast of the Levant. In the following year, 332 BC, he was forced to attack Tyre, which he captured after a long and difficult siege. The men of military age were massacred and the women and children sold into slavery. Egypt When Alexander destroyed Tyre, most of the towns on the route to Egypt quickly capitulated. 
However, Alexander was met with resistance at Gaza. The stronghold was heavily fortified and built on a hill, requiring a siege. When "his engineers pointed out to him that because of the height of the mound it would be impossible... this encouraged Alexander all the more to make the attempt". After three unsuccessful assaults, the stronghold fell, but not before Alexander had received a serious shoulder wound. As in Tyre, men of military age were put to the sword and the women and children were sold into slavery. Egypt was only one of a large number of territories taken by Alexander from the Persians. After his trip to Siwa, Alexander was crowned in the temple of Ptah at Memphis. It appears that the Egyptian people did not find it disturbing that he was a foreigner - nor that he was absent for virtually his entire reign. Alexander restored the temples neglected by the Persians and dedicated new monuments to the Egyptian gods. In the temple of Luxor, near Karnak, he built a chapel for the sacred barge. During his brief months in Egypt, he reformed the taxation system on the Greek models and organized the military occupation of the country, but, early in 331 BCE, he left for Asia in pursuit of the Persians. Alexander advanced on Egypt in later 332 BC, where he was regarded as a liberator. To legitimize taking power and be recognized as the descendant of the long line of pharaohs, Alexander made sacrifices to the gods at Memphis and went to consult the famous oracle of Amun-Ra at the Siwa Oasis. He was pronounced son of the deity Amun at the Oracle of Siwa Oasis in the Libyan desert. Henceforth, Alexander often referred to Zeus-Ammon as his true father, and after his death, currency depicted him adorned with the Horns of Ammon as a symbol of his divinity. The Greeks interpreted this message - one that the gods addressed to all pharaohs - as a prophecy. 
During his stay in Egypt, he founded Alexandria, which would become the prosperous capital of the Ptolemaic Kingdom after his death. Control of Egypt passed to Ptolemy I (son of Lagos), the founder of the Ptolemaic Dynasty (305-30 BC), after the death of Alexander. Assyria and Babylonia Leaving Egypt in 331 BC, Alexander marched eastward into Achaemenid Assyria in Upper Mesopotamia (now northern Iraq) and defeated Darius again at the Battle of Gaugamela. Darius once more fled the field, and Alexander chased him as far as Arbela. Gaugamela would be the final and decisive encounter between the two. Darius fled over the mountains to Ecbatana (modern Hamadan) while Alexander captured Babylon. The Babylonian astronomical diaries say that "the king of the world, Alexander" sent his scouts with a message to the people of Babylon before entering the city: "I shall not enter your houses". Persia From Babylon, Alexander went to Susa, one of the Achaemenid capitals, and captured its treasury. He sent the bulk of his army to the Persian ceremonial capital of Persepolis via the Persian Royal Road. Alexander himself took selected troops on the direct route to the city. He then stormed the pass of the Persian Gates (in the modern Zagros Mountains), which had been blocked by a Persian army under Ariobarzanes, and then hurried to Persepolis before its garrison could loot the treasury. On entering Persepolis, Alexander allowed his troops to loot the city for several days. Alexander stayed in Persepolis for five months. During his stay a fire broke out in the eastern palace of Xerxes I and spread to the rest of the city. Possible causes include a drunken accident or deliberate revenge for the burning of the Acropolis of Athens by Xerxes during the Second Persian War; Plutarch and Diodorus allege that Alexander's companion, the hetaera Thaïs, instigated and started the fire. Even as he watched the city burn, Alexander began to regret his decision.
Plutarch claims that he ordered his men to put out the fires, but that the flames had already spread to most of the city. Curtius claims that Alexander did not regret his decision until the next morning. Plutarch recounts an anecdote in which Alexander pauses and talks to a fallen statue of Xerxes as if it were a live person. Fall of the Empire and the East Alexander then chased Darius, first into Media, and then Parthia. The Persian king no longer controlled his own destiny, and was taken prisoner by Bessus, his Bactrian satrap and kinsman. As Alexander approached, Bessus had his men fatally stab the Great King and then declared himself Darius's successor as Artaxerxes V, before retreating into Central Asia to launch a guerrilla campaign against Alexander. Alexander buried Darius's remains next to his Achaemenid predecessors in a regal funeral. He claimed that, while dying, Darius had named him as his successor to the Achaemenid throne.
Korzybski maintained that humans are limited in what they know by (1) the structure of their nervous systems, and (2) the structure of their languages. Humans cannot experience the world directly, but only through their "abstractions" (nonverbal impressions or "gleanings" derived from the nervous system, and verbal indicators expressed and derived from language). These sometimes mislead us about what is the truth. Our understanding sometimes lacks similarity of structure with what is actually happening. He sought to train our awareness of abstracting, using techniques he had derived from his study of mathematics and science. He called this awareness, this goal of his system, "consciousness of abstracting". His system included the promotion of attitudes such as "I don't know; let's see," in order that we may better discover or reflect on reality as revealed by modern science. Another technique involved becoming inwardly and outwardly quiet, an experience he termed, "silence on the objective levels". "To be" Many devotees and critics of Korzybski reduced his rather complex system to a simple matter of what he said about the verb form "is" of the general verb "to be." His system, however, is based primarily on such terminology as the different "orders of abstraction," and formulations such as "consciousness of abstracting." The contention that Korzybski opposed the use of the verb "to be" would be a profound exaggeration. He thought that certain uses of the verb "to be", called the "is of identity" and the "is of predication", were faulty in structure, e.g., a statement such as, "Elizabeth is a fool" (said of a person named "Elizabeth" who has done something that we regard as foolish). In Korzybski's system, one's assessment of Elizabeth belongs to a higher order of abstraction than Elizabeth herself. Korzybski's remedy was to deny identity; in this example, to be aware continually that "Elizabeth" is not what we call her.
We find Elizabeth not in the verbal domain, the world of words, but the nonverbal domain (the two, he said, amount to different orders of abstraction). This was expressed by Korzybski's most famous premise, "the map is not the territory". Note that this premise uses the phrase "is not", a form of "to be"; this and many other examples show that he did not intend to abandon "to be" as such. In fact, he said explicitly that there were no structural problems with the verb "to be" when used as an auxiliary verb or when used to state existence or location. It was even acceptable at times to use the faulty forms of the verb "to be," as long as one was aware of their structural limitations.
Anecdotes One day, Korzybski was giving a lecture to a group of students, and he interrupted the lesson suddenly in order to retrieve a packet of biscuits, wrapped in white paper, from his briefcase. He muttered that he just had to eat something, and he asked the students on the seats in the front row if they would also like a biscuit. A few students took a biscuit. "Nice biscuit, don't you think," said Korzybski, while he took a second one. The students were chewing vigorously. Then he tore the white paper from the biscuits, in order to reveal the original packaging. On it was a big picture of a dog's head and the words "Dog Cookies." The students looked at the package, and were shocked. Two of them wanted to vomit, put their hands in front of their mouths, and ran out of the lecture hall to the toilet. "You see," Korzybski remarked, "I have just demonstrated that people don't just eat food, but also words, and that the taste of the former is often outdone by the taste of the latter." William Burroughs went to a Korzybski workshop in the autumn of 1939. He was 25 years old, and paid $40. His fellow students—there were 38 in all—included young Samuel I. Hayakawa (later to become a Republican member of the U.S. Senate) and Wendell Johnson (founder of the Monster Study). Influence Korzybski was well received in numerous disciplines, as evidenced by the positive reactions from leading figures in the sciences and humanities in the 1940s and 1950s. These include author Robert A. Heinlein naming a character after him in his 1940 short story "Blowups Happen", and science fiction writer A. E. van Vogt drawing on his ideas in the novel "The World of Null-A", published in 1948. Korzybski's ideas influenced philosopher Alan Watts, who used his phrase "the map is not the territory" in lectures. Writer Robert Anton Wilson was also deeply influenced by Korzybski's ideas.
As reported in the third edition of Science and Sanity, in World War II the US Army used Korzybski's system to treat battle fatigue in Europe, under the supervision of Dr. Douglas M. Kelley, who went on to become the psychiatrist in charge of the
During a meeting in April 1979, Rains discussed Planet Grab, a multiplayer arcade game later renamed to Cosmos. Logg did not know the name of the game, thinking of Computer Space as "the inspiration for the two-dimensional approach". Rains conceived of Asteroids as a mixture of Computer Space and Space Invaders, combining the two-dimensional approach of Computer Space with Space Invaders' addictive gameplay of "completion" and "eliminate all threats". The unfinished game featured a giant, indestructible asteroid, so Rains asked Logg: "Well, why don't we have a game where you shoot the rocks and blow them up?" In response, Logg described a similar concept where the player selectively shoots at rocks that break into smaller pieces. Both agreed on the concept. Hardware Asteroids was implemented on hardware developed by Delman and is a vector game, in which the graphics are composed of lines drawn on a vector monitor. Rains initially wanted the game done in raster graphics, but Logg, experienced in vector graphics, suggested an XY monitor because the high image quality would permit precise aiming. The hardware is chiefly a MOS 6502 executing the game program, and QuadraScan, a high-resolution vector graphics processor developed by Atari and referred to as an "XY display system" and the "Digital Vector Generator (DVG)". The original design concepts for QuadraScan came out of Cyan Engineering, Atari's off-campus research lab in Grass Valley, California, in 1978. Cyan gave it to Delman, who finished the design and first used it for Lunar Lander. Logg received Delman's modified board with five buttons, 13 sound effects, and additional RAM, and he used it to develop Asteroids. The size of the board was 4 by 4 inches, and it was "linked up" to a monitor. Implementation Logg modeled the player's ship, the five-button control scheme, and the game physics after Spacewar!, which he had played as a student at the University of California, Berkeley, but made several changes to improve playability.
The ship was programmed into the hardware and rendered by the monitor, and it was configured to move with thrust and inertia. The hyperspace button was not placed near Logg's right thumb, which dissatisfied him, as he had a problem "tak[ing] his hand off the thrust button". Drawings of asteroids in various shapes were incorporated into the game. Logg copied the idea of a high score table with initials from Exidy's Star Fire. The two saucers were formulated to be different from each other. A steadily decreasing timer shortens intervals between saucer attacks to discourage the player from idling instead of shooting asteroids and saucers. A "heartbeat" soundtrack quickens as the game progresses. The game does not have a sound chip. Delman created a hardware circuit for 13 sound effects by hand which was wired onto the board. A prototype of Asteroids was well received by several Atari staff and engineers, who "wander[ed] between labs, passing comment and stopping to play as they went". Logg was often asked by employees eager to play the prototype when he would be leaving, so he created a second prototype for staff to play. Atari tested the game in arcades in Sacramento, California, and also observed players during focus group sessions at Atari. Players used to Spacewar! struggled to maintain grip on the thrust button and requested a joystick; players accustomed to Space Invaders noted they got no break in the game. Logg and other engineers observed proceedings and documented comments in four pages. Asteroids slows down as the player gains 50–100 lives, because there is no limit to the number of lives displayed. The player can "lose" the game after more than 250 lives are collected. Ports Asteroids was released for the Atari VCS (later renamed the Atari 2600) and Atari 8-bit family in 1981, then the Atari 7800 in 1986. A port for the Atari 5200, identical to the Atari 8-bit computer version, was in development in 1982, but was not published.
The Atari 7800 version was a launch title and includes cooperative play; the asteroids have colorful textures and the "heartbeat" sound effect remains intact. Programmers Brad Stewart and Bob Smith were unable to fit the Atari VCS port into a 4 KB cartridge. It became the first game for the console to use bank switching, a technique that increases ROM size from 4 KB to 8 KB. Reception Asteroids was immediately successful upon release. It displaced Space Invaders in popularity in the United States and became Atari's best-selling arcade game of all time, with over 70,000 units sold. Atari earned an estimated $150 million in sales from the game, and arcade operators earned a further $500 million from coin drops. Atari had been in the process of manufacturing another vector game, Lunar Lander, but demand for Asteroids was so high "that several hundred Asteroids games were shipped in Lunar Lander cabinets". Asteroids was so popular that some video arcade operators had to install large boxes to hold the number of coins spent by players. It replaced Space Invaders at the top of the US RePlay amusement arcade charts in April 1980, though Space Invaders remained the top game at street locations. Asteroids went on to become the highest-grossing arcade video game of 1980 in the United States, dethroning Space Invaders. It shipped 70,000 arcade units worldwide in 1980, including over 60,000 sold in the United States that year. The game remained at the top of the US RePlay charts through March 1981. However, the game did not perform as well overseas in Europe and Asia. It sold 30,000 arcade units overseas, for a total of 100,000 arcade units sold worldwide. Atari manufactured 76,312 units from its US and Ireland plants, including 21,394 Asteroids Deluxe units.
It was a commercial failure in Japan when it was released there in 1980, partly due to its complex controls and partly due to the Japanese market beginning to lose interest in space shoot 'em ups at the time. Asteroids received positive reviews from video game critics and has been regarded as Logg's magnum opus. Richard A. Edwards reviewed the 1981 Asteroids home cartridge in The Space Gamer No. 46. Edwards commented that "this home cartridge is a virtual duplicate of the ever-popular Atari arcade game. [...] If blasting asteroids is the thing you want to do then this is the game, but at this price I can't wholeheartedly recommend it". Video Games Player magazine reviewed the Atari VCS version, rating the graphics and sound a B, while giving the game an overall B+ rating. Electronic Fun with Computers & Games magazine gave the Atari VCS version an A rating. William Cassidy, writing for GameSpy's "Classic Gaming", noticed its innovations, including being one of the first video games to let players enter their initials to appear in the top 10 high scores, and commented, "the vector graphics fit the futuristic outer space theme very well". In 1996, Next Generation listed it as number 39 on their "Top 100 Games of All Time", particularly lauding the control dynamics which require "the constant juggling of speed, positioning, and direction". In 1999, Next Generation listed Asteroids as number 29 on their "Top 50 Games of All Time", commenting that "Asteroid was a classic the day it was released, and it has never lost any of its appeal". Asteroids was ranked fourth on Retro Gamer's list of "Top 25 Arcade Games"; the Retro Gamer staff cited its simplicity and the lack of a proper ending as qualities that invite revisiting the game. In 2012, Asteroids was listed on Time's All-Time 100 greatest video games list. Entertainment Weekly named Asteroids one of the top ten games for the Atari 2600 in 2013.
It was added to the Museum of Modern Art's collection of video games. In 2021, The Guardian listed Asteroids as the second greatest video game of the 1970s, just below Galaxian (1979). By contrast, in March 1983 the Atari 8-bit port of Asteroids won sixth place in Softline's Dog of the Year awards "for badness in computer games", Atari division, based on reader submissions. Usage of the names of Saturday Night Live characters "Mr. Bill" and "Sluggo" to refer to the saucers in an Esquire article about the game led to Logg receiving a cease and desist letter from a lawyer with the "Mr. Bill Trademark". Legacy Arcade sequels Released in 1981, Asteroids Deluxe was the first sequel to Asteroids. Dave Shepperd edited the code and made enhancements to the game without Logg's involvement. The onscreen objects are tinted blue, and hyperspace is replaced by a shield that depletes when used. The asteroids rotate, and new "killer satellite" enemies break into smaller ships that home in on the player's position. The arcade machine's monitor displays vector graphics overlaying a holographic backdrop. The game is more difficult than the original and enables saucers to shoot across the screen boundary, eliminating the lurking strategy for high scores in the original.
The player can also send the ship into hyperspace, causing it to disappear and reappear in a random location on the screen, at the risk of self-destructing or appearing on top of an asteroid. Each level starts with a few large asteroids drifting in various directions on the screen. Objects wrap around screen edges – for instance, an asteroid that drifts off the top edge of the screen reappears at the bottom and continues moving in the same direction. As the player shoots asteroids, they break into smaller asteroids that move faster and are more difficult to hit. Smaller asteroids are also worth more points. Two flying saucers appear periodically on the screen; the "big saucer" shoots randomly and poorly, while the "small saucer" fires frequently at the ship. After reaching a score of 40,000, only the small saucer appears. As the player's score increases, the angle range of the shots from the small saucer diminishes until the saucer fires extremely accurately. Once the screen has been cleared of all asteroids and flying saucers, a new set of large asteroids appears, thus starting the next level. The game gets harder as the number of asteroids increases until the score reaches a range between 40,000 and 60,000. The player starts with 3–5 lives and gains an extra life per 10,000 points. Play continues until the last ship is lost, which ends the game. The machine "turns over" at 99,990 points, which is the maximum high score that can be achieved. Lurking exploit In the original game design, saucers were supposed to begin shooting as soon as they appeared, but this was changed. Additionally, saucers can only aim at the player's ship on-screen; they are not capable of aiming across a screen boundary. These behaviors allow a "lurking" strategy, in which the player stays near the edge of the screen opposite the saucer.
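The wrap-around, rock-splitting, and score-turnover mechanics described above can be sketched in a few lines of Python. This is an illustrative toy under stated assumptions, not the original 6502 implementation: the playfield dimensions and function names here are invented for the sketch, while the 20/50/100 point values for large/medium/small rocks and the 99,990 turnover match the arcade game.

```python
# Illustrative sketch only -- not Atari's 6502 code. SCREEN_W/SCREEN_H
# are hypothetical; the point values and turnover match the arcade game.
SCREEN_W, SCREEN_H = 1024, 768  # hypothetical playfield size

def wrap(x: float, y: float) -> tuple:
    """Toroidal playfield: an object (ship, rock, or shot) that drifts
    off one edge reappears at the opposite edge, heading the same way."""
    return x % SCREEN_W, y % SCREEN_H

# A shot rock splits into two smaller rocks; small rocks simply vanish.
# Smaller rocks are faster, harder to hit, and worth more points.
SPLITS = {"large": "medium", "medium": "small", "small": None}
POINTS = {"large": 20, "medium": 50, "small": 100}

def shoot(size: str, score: int) -> tuple:
    """Return (fragments, new score) for destroying a rock of `size`."""
    child = SPLITS[size]
    fragments = [child, child] if child else []
    return fragments, score + POINTS[size]

def displayed_score(score: int) -> int:
    """The score display holds five digits, so it 'turns over' past
    99,990 -- a score of 100,000 is shown as 0."""
    return score % 100_000

print(wrap(-5, 770))             # (1019, 2): wrapped across both edges
print(shoot("large", 99_980))    # (['medium', 'medium'], 100000)
print(displayed_score(100_000))  # 0
```

The same wrapping applies to the player's shots, which is exactly what the lurking strategy exploits: saucers cannot aim across a screen boundary, but a shot fired off one edge re-enters from the other.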
By keeping just one or two rocks in play, a player can shoot across the boundary and destroy saucers to accumulate points indefinitely with little risk of being destroyed. Arcade operators began to complain about losing revenue due to this exploit. In response, Atari issued a patched EPROM and, due to the impact of this exploit, Atari (and other companies) changed their development and testing policies to try to prevent future games from having such exploits. Development Concept Asteroids was conceived by Lyle Rains and programmed by Ed Logg with collaborations from other Atari staff. Logg was impressed with the Atari Video Computer System (later called the Atari 2600), and he joined Atari's coin-op division to work on Dirt Bike, which was never released due to an unsuccessful field test. Paul Mancuso joined the development team as Asteroids technician and engineer Howard Delman contributed to the hardware.
and thus has the authority attribution for Amaryllidaceae. In 1810, Brown proposed that a subgroup of Liliaceae be distinguished on the basis of the position of the ovaries and be referred to as Amaryllideae and in 1813 de Candolle described Liliacées Juss. and Amaryllidées Brown as two quite separate families. The literature on the organisation of genera into families and higher ranks became available in the English language with Samuel Frederick Gray's A natural arrangement of British plants (1821). Gray used a combination of Linnaeus' sexual classification and Jussieu's natural classification to group together a number of families having in common six equal stamens, a single style and a perianth that was simple and petaloid, but did not use formal names for these higher ranks. Within the grouping he separated families by the characteristics of their fruit and seed. He treated groups of genera with these characteristics as separate families, such as Amaryllideae, Liliaceae, Asphodeleae and Asparageae. The circumscription of Asparagales has been a source of difficulty for many botanists from the time of John Lindley (1846), the other important British taxonomist of the early nineteenth century. In his first taxonomic work, An Introduction to the Natural System of Botany (1830) he partly followed Jussieu by describing a subclass he called Endogenae, or Monocotyledonous Plants (preserving de Candolle's Endogenæ phanerogamæ) divided into two tribes, the Petaloidea and Glumaceae. He divided the former, often referred to as petaloid monocots, into 32 orders, including the Liliaceae (defined narrowly), but also most of the families considered to make up the Asparagales today, including the Amaryllideae. By 1846, in his final scheme Lindley had greatly expanded and refined the treatment of the monocots, introducing both an intermediate ranking (Alliances) and tribes within orders (i.e. families). 
Lindley placed the Liliaceae within the Liliales, but saw it as a paraphyletic ("catch-all") family, being all Liliales not included in the other orders, but hoped that the future would reveal some characteristic that would group them better. The order Liliales was very large and had come to be used to include almost all monocotyledons with colourful tepals and without starch in their endosperm (the lilioid monocots). The Liliales was difficult to divide into families because morphological characters were not present in patterns that clearly demarcated groups. This kept the Liliaceae separate from the Amaryllidaceae (Narcissales). Of these, the Liliaceae was divided into eleven tribes (with 133 genera) and the Amaryllidaceae into four tribes (with 68 genera), yet both contained many genera that would eventually be segregated into each other's contemporary orders (Liliales and Asparagales respectively). The Liliaceae would be reduced to a small 'core' represented by the tribe Tulipae, while large groups such as Scilleae and Asparagae would become part of Asparagales either as part of the Amaryllidaceae or as separate families. Of the Amaryllidaceae, the Agaveae would become part of Asparagaceae while the Alstroemerieae would become a family within the Liliales. The number of known genera (and species) continued to grow and by the time of the next major British classification, that of the Bentham & Hooker system in 1883 (published in Latin) several of Lindley's other families had been absorbed into the Liliaceae. They used the term 'series' to indicate suprafamilial rank, with seven series of monocotyledons (including Glumaceae), but did not use Lindley's terms for these. However they did place the Liliaceous and Amaryllidaceous genera into separate series. The Liliaceae were placed in series Coronariae, while the Amaryllideae were placed in series Epigynae.
The Liliaceae now consisted of twenty tribes (including Tulipeae, Scilleae and Asparageae), and the Amaryllideae of five (including Agaveae and Alstroemerieae). An important addition to the treatment of the Liliaceae was the recognition of the Allieae as a distinct tribe that would eventually find its way to the Asparagales as the subfamily Allioideae of the Amaryllidaceae. Post-Darwinian The appearance of Charles Darwin's Origin of Species in 1859 changed the way that taxonomists considered plant classification, incorporating evolutionary information into their schemata. The Darwinian approach led to the concept of phylogeny (tree-like structure) in assembling classification systems, starting with Eichler. Eichler, having established a hierarchical system in which the flowering plants (angiosperms) were divided into monocotyledons and dicotyledons, further divided the former into seven orders. Within the Liliiflorae were seven families, including Liliaceae and Amaryllidaceae. Liliaceae included Allium and Ornithogalum (modern Allioideae) and Asparagus. Engler, in his system, developed Eichler's ideas into a much more elaborate scheme which he treated in a number of works including Die Natürlichen Pflanzenfamilien (Engler and Prantl 1888) and Syllabus der Pflanzenfamilien (1892–1924). In his treatment of Liliiflorae the Liliineae were a suborder which included both families Liliaceae and Amaryllidaceae. The Liliaceae had eight subfamilies and the Amaryllidaceae four. In this rearrangement of Liliaceae, with fewer subdivisions, the core Liliales were represented as subfamily Lilioideae (with Tulipae and Scilleae as tribes), the Asparagae were represented as Asparagoideae and the Allioideae was preserved, representing the alliaceous genera. Allieae, Agapantheae and Gilliesieae were the three tribes within this subfamily. In the Amaryllidaceae, there was little change from Bentham & Hooker. A similar approach was adopted by Wettstein.
Twentieth century In the twentieth century the Wettstein system (1901–1935) placed many of the taxa in an order called 'Liliiflorae'. Next Johannes Paulus Lotsy (1911) proposed dividing the Liliiflorae into a number of smaller families including Asparagaceae. Then Herbert Huber (1969, 1977), following Lotsy's example, proposed that the Liliiflorae be split into four groups including the 'Asparagoid' Liliiflorae. The widely used Cronquist system (1968–1988) used the very broadly defined order Liliales. These various proposals to separate small groups of genera into more homogeneous families made little impact until that of Dahlgren (1985), which incorporated new information including synapomorphies. Dahlgren developed Huber's ideas further and popularised them, with a major deconstruction of existing families into smaller units. He created a new order, calling it Asparagales. This was one of five orders within the superorder Liliiflorae. Where Cronquist saw one family, Dahlgren saw forty distributed over three orders (predominantly Liliales and Asparagales). Over the 1980s, in the context of a more general review of the classification of angiosperms, the Liliaceae were subjected to more intense scrutiny. By the end of that decade, the Royal Botanic Gardens at Kew, the British Museum of Natural History and the Edinburgh Botanical Gardens formed a committee to examine the possibility of separating the family at least for the organization of their herbaria. That committee finally recommended that 24 new families be created in the place of the original broad Liliaceae, largely by elevating subfamilies to the rank of separate families. The order Asparagales as currently circumscribed has only recently been recognized in classification systems, through the advent of phylogenetics. The 1990s saw considerable progress in plant phylogeny and phylogenetic theory, enabling a phylogenetic tree to be constructed for all of the flowering plants.
The establishment of major new clades necessitated a departure from the older but widely used classifications such as Cronquist and Thorne, based largely on morphology rather than genetic data. This complicated discussion about plant evolution and necessitated a major restructuring. rbcL gene sequencing and cladistic analysis of monocots had redefined the Liliales in 1995, assembling the order from four morphological orders sensu Dahlgren. The largest clade represented the Liliaceae, all previously included in Liliales, including both the Calochortaceae and Liliaceae sensu Tamura. This redefined family, which became referred to as core Liliales, corresponded to the emerging circumscription of the Angiosperm Phylogeny Group (1998). Phylogeny and APG system The 2009 revision of the Angiosperm Phylogeny Group system, APG III, places the order in the clade monocots. From the Dahlgren system of 1985 onwards, studies based mainly on morphology had identified the Asparagales as a distinct group, but had also included groups now located in Liliales, Pandanales and Zingiberales. Research in the 21st century has supported the monophyly of Asparagales, based on morphology, 18S rDNA, and other DNA sequences, although some phylogenetic reconstructions based on molecular data have suggested that Asparagales may be paraphyletic, with Orchidaceae separated from the rest. Within the monocots, Asparagales is the sister group of the commelinid clade. This cladogram shows the placement of Asparagales within the orders of Lilianae sensu Chase & Reveal (monocots) based on molecular phylogenetic evidence. The lilioid monocot orders are bracketed, namely Petrosaviales, Dioscoreales, Pandanales, Liliales and Asparagales. These constitute a paraphyletic assemblage, that is, groups with a common ancestor that do not include all direct descendants (in this case commelinids as the sister group to Asparagales); to form a clade, all the groups joined by thick lines would need to be included.
While Acorales and Alismatales have been collectively referred to as "alismatid monocots" (basal or early-branching monocots), the remaining clades (lilioid and commelinid monocots) have been referred to as the "core monocots". The relationship between the orders (with the exception of the two sister orders) is pectinate, that is, diverging in succession from the line that leads to the commelinids. Numbers indicate crown group (most recent common ancestor of the sampled species of the clade of interest) divergence times in mya (million years ago). Subdivision A phylogenetic tree for the Asparagales, generally resolved to family level, but including groups which were until recently widely treated as families and are now reduced to subfamily rank, is shown below. The tree can be divided into a basal paraphyletic group, the 'lower Asparagales (asparagoids)', from Orchidaceae to Asphodelaceae, and a well-supported monophyletic group of 'core Asparagales' (higher asparagoids), comprising the two largest families, Amaryllidaceae sensu lato and Asparagaceae sensu lato. Two differences between these two groups (although with exceptions) are the mode of microsporogenesis and the position of the ovary. The 'lower Asparagales' typically have simultaneous microsporogenesis (i.e. cell walls develop only after both meiotic divisions), which appears to be an apomorphy within the monocots, whereas the 'core Asparagales' have reverted to successive microsporogenesis (i.e. cell walls develop after each division). The 'lower Asparagales' typically have an inferior ovary, whereas the 'core Asparagales' have reverted to a superior ovary. A 2002 morphological study by Rudall treated possessing an inferior ovary as a synapomorphy of the Asparagales, stating that reversions to a superior ovary in the 'core Asparagales' could be associated with the presence of nectaries below the ovaries.
However, Stevens notes that superior ovaries are distributed among the 'lower Asparagales' in such a way that it is not clear where to place the evolution of the different ovary morphologies. The position of the ovary seems a much more flexible character (here and in other angiosperms) than previously thought. Changes to family structure in APG III The APG III system, when it was published in 2009, greatly expanded the families Xanthorrhoeaceae, Amaryllidaceae, and Asparagaceae. Thirteen of the families of the earlier APG II system were thereby reduced to subfamilies within these three families. The expanded Xanthorrhoeaceae is now called "Asphodelaceae". The APG II families (left) and their equivalent APG III subfamilies (right) are as follows: Structure of Asparagales Orchid clade Orchidaceae is possibly the largest family of all angiosperms (only Asteraceae may be more speciose) and hence by far the largest in the order. The Dahlgren system recognized three families of orchids, but DNA sequence analysis later showed that these families are polyphyletic and so should be combined. Several studies suggest (with high bootstrap support) that Orchidaceae is the sister of the rest of the Asparagales. Other studies have placed the orchids differently in the phylogenetic tree, generally among the Boryaceae-Hypoxidaceae clade. The position of Orchidaceae shown above seems the best current hypothesis, but cannot be taken as confirmed. Orchids have simultaneous microsporogenesis and inferior ovaries, two characters that are typical of the 'lower Asparagales'. However, their nectaries are rarely in the septa of the ovaries, and most orchids have dust-like seeds, atypical of the rest of the order. (Some members of Vanilloideae and Cypripedioideae have crustose seeds, probably associated with dispersal by birds and mammals that are attracted by fermenting fleshy fruit releasing fragrant compounds, e.g. vanilla.)
In terms of the number of species, Orchidaceae diversification is remarkable. However, although the other Asparagales may be less rich in species, they are more variable morphologically, including tree-like forms. Boryaceae to Hypoxidaceae The four families excluding Boryaceae form a well-supported clade in studies based on DNA sequence analysis. All four contain relatively few species, and it has been suggested that they be combined into one family under the name Hypoxidaceae sensu lato. The relationship between Boryaceae (which includes only two genera, Borya and Alania) and other Asparagales has remained unclear for a long time. The Boryaceae are mycorrhizal, but not in the same way as orchids. Morphological studies have suggested a close relationship between Boryaceae and Blandfordiaceae. There is relatively low support for the position of Boryaceae in the tree shown above. Ixioliriaceae to Xeronemataceae
hairy seeds (e.g. Eriospermum, family Asparagaceae s.l.), berries (e.g. Maianthemum, family Asparagaceae s.l.), or highly reduced seeds (e.g. orchids) lack this dark pigment in their seed coats. Phytomelan is not unique to Asparagales (i.e. it is not a synapomorphy) but it is common within the order and rare outside it. The inner portion of the seed coat is usually completely collapsed. In contrast, the morphologically similar seeds of Liliales have no phytomelan, and usually retain a cellular structure in the inner portion of the seed coat. Most monocots are unable to thicken their stems once they have formed, since they lack the cylindrical meristem present in other angiosperm groups. Asparagales have a method of secondary thickening which is otherwise only found in Dioscorea (in the monocot order Dioscoreales). In a process called 'anomalous secondary growth', they are able to create new vascular bundles around which thickening growth occurs. Agave, Yucca, Aloe, Dracaena, Nolina and Cordyline can become massive trees, albeit not of the height of the tallest dicots, and with less branching. Other genera in the order, such as Lomandra and Aphyllanthes, have the same type of secondary growth but confined to their underground stems. Microsporogenesis (part of pollen formation) distinguishes some members of Asparagales from Liliales. Microsporogenesis involves a cell dividing twice (meiotically) to form four daughter cells. There are two kinds of microsporogenesis: successive and simultaneous (although intermediates exist). In successive microsporogenesis, walls are laid down separating the daughter cells after each division. In simultaneous microsporogenesis, there is no wall formation until all four cell nuclei are present. Liliales all have successive microsporogenesis, which is thought to be the primitive condition in monocots.
It seems that when the Asparagales first diverged they developed simultaneous microsporogenesis, which the 'lower' Asparagales families retain. However, the 'core' Asparagales (see Phylogenetics) have reverted to successive microsporogenesis. The Asparagales appear to be unified by a mutation affecting their telomeres (a region of repetitive DNA at the end of a chromosome). The typical 'Arabidopsis-type' sequence of bases has been fully or partially replaced by other sequences, with the 'human-type' predominating. Other apomorphic characters of the order, according to Stevens, are: the presence of chelidonic acid, anthers longer than wide, tapetal cells bi- to tetra-nuclear, tegmen not persistent, endosperm helobial, and loss of the mitochondrial gene sdh3. Taxonomy As circumscribed within the Angiosperm Phylogeny Group system, Asparagales is the largest order within the monocotyledons, with 14 families, 1,122 genera and about 25,000–42,000 species, thus accounting for about 50% of all monocots and 10–15% of the flowering plants (angiosperms). The attribution of botanical authority for the name Asparagales belongs to Johann Heinrich Friedrich Link (1767–1851), who coined the word 'Asparaginae' in 1829 for a higher-rank taxon that included Asparagus, although Adanson and Jussieu had also done so earlier (see History). Earlier circumscriptions of Asparagales attributed the name to Bromhead (1838), who had been the first to use the term 'Asparagales'. History Pre-Darwinian The type genus, Asparagus, from which the name of the order is derived, was described by Carl Linnaeus in 1753, with ten species. He placed Asparagus within the Hexandria Monogynia (six stamens, one carpel) in his sexual classification in the Species Plantarum. The majority of taxa now considered to constitute Asparagales have historically been placed within the very large and diverse family Liliaceae.
The family Liliaceae was first described by Michel Adanson in 1763, and in his taxonomic scheme he created eight sections within it, including the Asparagi with Asparagus and three other genera. The system of organising genera into families is generally credited to Antoine Laurent de Jussieu, who formally described both the Liliaceae and the type family of Asparagales, the Asparagaceae, as Lilia and Asparagi, respectively, in 1789. Jussieu established a hierarchical system of taxonomy, placing Asparagus and related genera within a division of Monocotyledons, a class (III) of Stamina Perigynia and an 'order' Asparagi, divided into three subfamilies. The use of the term Ordo (order) at that time was closer to what we now understand as a family, rather than an order. In creating his scheme he used a modified form of Linnaeus' sexual classification, but using the relative position of stamens to carpels rather than just their numbers. While De Jussieu's Stamina Perigynia also included a number of 'orders' that would eventually form families within the Asparagales, such as the Asphodeli (Asphodelaceae), Narcissi (Amaryllidaceae) and Irides (Iridaceae), the remainder are now allocated to other orders. Jussieu's Asparagi soon came to be referred to as Asparagacées in the French literature (Latin: Asparagaceae). Meanwhile, the 'Narcissi' had been renamed as the 'Amaryllidées' (Amaryllideae) in 1805, by Jean Henri Jaume Saint-Hilaire, using Amaryllis as the type genus rather than Narcissus, and he thus has the authority attribution for Amaryllidaceae. In 1810, Brown proposed that a subgroup of Liliaceae be distinguished on the basis of the position of the ovaries and be referred to as Amaryllideae, and in 1813 de Candolle described Liliacées Juss. and Amaryllidées Brown as two quite separate families.
The literature on the organisation of genera into families and higher ranks became available in the English language with Samuel Frederick Gray's A natural arrangement of British plants (1821). Gray used a combination of Linnaeus' sexual classification and Jussieu's natural classification to group together a number of families having in common six equal stamens, a single style and a perianth that was simple and petaloid, but did not use formal names for these higher ranks. Within the grouping he separated families by the characteristics of their fruit and seed. He treated groups of genera with these characteristics as separate families, such as Amaryllideae, Liliaceae, Asphodeleae and Asparageae. The circumscription of Asparagales has been a source of difficulty for many botanists from the time of John Lindley (1846), the other important British taxonomist of the early nineteenth century. In his first taxonomic work, An Introduction to the Natural System of Botany (1830), he partly followed Jussieu by describing a subclass he called Endogenae, or Monocotyledonous Plants (preserving de Candolle's Endogenæ phanerogamæ), divided into two tribes, the Petaloidea and Glumaceae. He divided the former, often referred to as petaloid monocots, into 32 orders, including the Liliaceae (defined narrowly), but also most of the families considered to make up the Asparagales today, including the Amaryllideae. By 1846, in his final scheme, Lindley had greatly expanded and refined the treatment of the monocots, introducing both an intermediate ranking (Alliances) and tribes within orders (i.e. families). Lindley placed the Liliaceae within the Liliales, but saw it as a paraphyletic ("catch-all") family, being all Liliales not included in the other orders, but hoped that the future would reveal some characteristic that would group them better.
The order Liliales was very large and had become a catch-all, used to include almost all monocotyledons with colourful tepals and without starch in their endosperm (the lilioid monocots). The Liliales was difficult to divide into families because morphological characters were not present in patterns that clearly demarcated groups. Lindley's scheme kept the Liliaceae separate from the Amaryllidaceae (Narcissales). Of these, Liliaceae was divided into eleven tribes (with 133 genera) and Amaryllidaceae into four tribes (with 68 genera), yet both contained many genera that would eventually segregate to each other's contemporary orders (Liliales and Asparagales respectively). The Liliaceae would be reduced to a small 'core' represented by the tribe Tulipeae, while large groups such as the Scilleae and Asparageae would become part of Asparagales, either as part of the Amaryllidaceae or as separate families. Of the Amaryllidaceae, the Agaveae would become part of Asparagaceae, while the Alstroemerieae would become a family within the Liliales. The number of known genera (and species) continued to grow, and by the time of the next major British classification, that of the Bentham & Hooker system in 1883 (published in Latin), several of Lindley's other families had been absorbed into the Liliaceae. They used the term 'series' to indicate suprafamilial rank, with seven series of monocotyledons (including Glumaceae), but did not use Lindley's terms for these. However, they did place the liliaceous and amaryllidaceous genera into separate series. The Liliaceae were placed in the series Coronariae, while the Amaryllideae were placed in the series Epigynae. The Liliaceae now consisted of twenty tribes (including Tulipeae, Scilleae and Asparageae), and the Amaryllideae of five (including Agaveae and Alstroemerieae).
An important addition to the treatment of the Liliaceae was the recognition of the Allieae as a distinct tribe that would eventually find its way to the Asparagales as the subfamily Allioideae of the Amaryllidaceae. Post-Darwinian The appearance of Charles Darwin's Origin of Species in 1859 changed the way that taxonomists considered plant classification, incorporating evolutionary information into their schemata. The Darwinian approach led to the concept of phylogeny (tree-like structure) in assembling classification systems, starting with Eichler. Eichler, having established a hierarchical system in which the flowering plants (angiosperms) were divided into monocotyledons and dicotyledons, further divided the former into seven orders. Within the Liliiflorae were seven families, including Liliaceae and Amaryllidaceae. Liliaceae included Allium and Ornithogalum (modern Allioideae) and Asparagus. Engler, in his system, developed Eichler's ideas into a much more elaborate scheme, which he treated in a number of works, including Die Natürlichen Pflanzenfamilien (Engler and Prantl 1888) and Syllabus der Pflanzenfamilien (1892–1924). In his treatment of Liliiflorae the Liliineae were a suborder
Early systems The Cronquist system (1981) places the Alismatales in subclass Alismatidae, class Liliopsida [= monocotyledons] and includes only three families as shown: Alismataceae Butomaceae Limnocharitaceae Cronquist's subclass Alismatidae conformed fairly closely to the order Alismatales as defined by APG, minus the Araceae. The Dahlgren system places the Alismatales in the superorder Alismatanae in the subclass Liliidae [= monocotyledons] in the class Magnoliopsida [= angiosperms] with the following families included: Alismataceae Aponogetonaceae Butomaceae Hydrocharitaceae Limnocharitaceae In Takhtajan's classification (1997), the order Alismatales contains only the Alismataceae and Limnocharitaceae, making it equivalent to the Alismataceae as revised in APG III. Other families included in the Alismatales as currently defined are here distributed among 10 additional orders, all of which are assigned, with the following exceptions, to the subclass Alismatidae. Araceae in Takhtajan 1997 is assigned to the Arales and placed in the subclass Aridae; Tofieldiaceae to the Melanthiales and placed in the Liliidae. Angiosperm Phylogeny Group The Angiosperm Phylogeny Group system (APG) of 1998 and APG II (2003) assigned the Alismatales to the monocots, which may be thought of as an unranked clade containing the families listed below. The biggest departure from earlier systems (see below) is the inclusion of the family Araceae. By its inclusion, the order has grown enormously in number of species. The family Araceae alone accounts for about a hundred genera, totaling over two thousand species. The rest of the families together contain only about five hundred species, many of which are in very small families.
The APG III system (2009) differs only in that the Limnocharitaceae are combined with the Alismataceae; it was also suggested that the genus Maundia (of the Juncaginaceae) could be separated into a monogeneric family, the Maundiaceae, but the authors noted that more study was necessary before the Maundiaceae could be recognized. order Alismatales sensu APG III family Alismataceae (including Limnocharitaceae) family Aponogetonaceae family Araceae family Butomaceae family Cymodoceaceae family Hydrocharitaceae family Juncaginaceae family Posidoniaceae family Potamogetonaceae family Ruppiaceae family Scheuchzeriaceae family Tofieldiaceae family Zosteraceae In APG IV (2016), it was decided that evidence
include those with staminate flowers that detach from the parent plant and float to the surface. There they can pollinate carpellate flowers floating on the surface via long pedicels. In others, pollination occurs underwater, where pollen may form elongated strands, increasing chance of success. Most aquatic species have a totally submerged juvenile phase, and flowers are either floating or emergent. Vegetation may be totally submersed, have floating leaves, or protrude from the water. Collectively, they are commonly known as "water plantain". Taxonomy The Alismatales contain about 165 genera in 13 families, with a cosmopolitan distribution. Phylogenetically, they are basal monocots, diverging early in evolution relative to the lilioid and commelinid monocot lineages. Together with the Acorales, the Alismatales are referred to informally as the alismatid monocots.
the newer classifications, though there is some slight variation; in particular, the Torricelliaceae may be divided. Under this definition, well-known members include carrots, celery, parsley, and Hedera helix (English ivy). The order Apiales is placed within the asterid group of eudicots as circumscribed by the APG III system. Within the asterids, Apiales belongs to an unranked group called the campanulids, and within the campanulids, it belongs to a clade known in phylogenetic nomenclature as Apiidae. In 2010, a subclade of Apiidae named Dipsapiidae was defined to consist of the three orders: Apiales, Paracryphiales, and Dipsacales. Taxonomy Under the Cronquist system, only the Apiaceae and Araliaceae were included here, and the restricted order was placed among the rosids rather than the asterids. The Pittosporaceae were placed within the Rosales, and many of the other forms within the family Cornaceae. Pennantia was in the family Icacinaceae. In the classification system of Dahlgren the families Apiaceae and Araliaceae were placed in the order Ariales, in the superorder Araliiflorae (also called Aralianae). The present understanding of
the Apiales is fairly recent and is based upon comparison of DNA sequences by phylogenetic methods. The circumscriptions of some of the families have changed. In 2009, one of the subfamilies of Araliaceae was shown to be polyphyletic. Gynoecia The largest and obviously closely related families of Apiales are Araliaceae, Myodocarpaceae and Apiaceae, which resemble each other in the structure of their gynoecia. In this respect, however, the Pittosporaceae is notably distinct from them. Typical syncarpous gynoecia exhibit four vertical zones, determined by the extent of fusion of the carpels. In most plants the synascidiate (i.e. "united bottle-shaped") and symplicate zones are fertile and bear the ovules. Each of the first three families possesses mainly bi- or multilocular ovaries in a gynoecium with a long synascidiate, but very short symplicate, zone, where the ovules are inserted at their transition, the so-called cross-zone (or "Querzone"). In gynoecia of the Pittosporaceae, the
composite flowers made of florets, and ten families related to the Asteraceae. While asterids in general are characterized by fused petals, composite flowers consisting of many florets create the false appearance of separate petals (as found in the rosids). The order is cosmopolitan (plants found throughout most of the world, including desert and frigid zones), and includes mostly herbaceous species, although a small number of trees (such as Lobelia deckenii, the giant lobelia, and Dendrosenecio, giant groundsels) and shrubs are also present. The Asterales appear to have evolved from one common ancestor, sharing characteristics on both morphological and biochemical levels. Synapomorphies (characters shared by two or more groups through evolutionary development) include the presence in the plants of the oligosaccharide inulin, a nutrient-storage molecule used instead of starch, and a unique stamen morphology. The stamens are usually found around the style, either aggregated densely or fused into a tube, probably an adaptation associated with the plunger (brush, or secondary) pollination that is common among the families of the order, wherein pollen is collected and stored on the length of the pistil. Taxonomy The name and order Asterales is botanically venerable, dating back to at least 1926 in the Hutchinson system of plant taxonomy, when it contained only five families, of which only two are retained in the APG III classification. Under the Cronquist system of taxonomic classification of flowering plants, Asteraceae was
families, the largest of which are the Asteraceae, with about 25,000 species, and the Campanulaceae ("bellflowers"), with about 2,000 species. The remaining families together account for fewer than 1,500 species. The two large families are cosmopolitan, with many of their species found in the Northern Hemisphere, while the smaller families are usually confined to Australia and the adjacent areas, or sometimes South America. Only the Asteraceae have composite flower heads; the other families do not, but share other characteristics, such as storage of inulin, that define the 11 families as more closely related to each other than to other plant families or orders such as the rosids. The phylogenetic tree according to APG III for the campanulid clade is as below. Biogeography The core Asterales are Stylidiaceae (six genera), the APA clade (Alseuosmiaceae, Phellinaceae and Argophyllaceae, together seven genera), the MGCA clade (Menyanthaceae, Goodeniaceae and Calyceraceae, in total twenty genera), and Asteraceae (about sixteen hundred genera). The other Asterales are Rousseaceae (four genera), Campanulaceae (eighty-four genera) and Pentaphragmataceae (one genus). All Asterales families are represented in the Southern Hemisphere; however, Asteraceae and Campanulaceae are cosmopolitan and Menyanthaceae nearly so. Evolution Although most extant species of Asteraceae are herbaceous, examination of the basal members of the family suggests that the common ancestor of the family was an arborescent plant, a tree or shrub, perhaps adapted to dry conditions, radiating from South America. Less can be said about the Asterales themselves with certainty, although since several families in Asterales contain trees, the ancestral member is most likely to have been a tree or shrub. Because all clades are represented in the Southern Hemisphere but many not in the Northern Hemisphere, it is natural to conjecture that there is a common southern origin to them.
Asterales are angiosperms, flowering plants that appeared about 140 million years ago. The Asterales order probably originated in the Cretaceous (145–66 Mya) on the supercontinent Gondwana, which broke up between 184 and 80 Mya, forming the areas that are now Australia, South America, Africa, India and Antarctica. Asterales contain about 14% of eudicot diversity. From an analysis of relationships and diversities within the Asterales and with their superorders, estimates of the age of the beginning of the Asterales have been made, which range from 116 Mya to 82 Mya. However, few fossils have been found, of the
named. Manual methods of the 1900s and modern reporting Until 1998, asteroids were discovered by a four-step process. First, a region of the sky was photographed by a wide-field telescope, or astrograph. Pairs of photographs were taken, typically one hour apart. Multiple pairs could be taken over a series of days. Second, the two films or plates of the same region were viewed under a stereoscope. Any body in orbit around the Sun would move slightly between the pair of films. Under the stereoscope, the image of the body would seem to float slightly above the background of stars. Third, once a moving body was identified, its location would be measured precisely using a digitizing microscope. The location would be measured relative to known star locations. These first three steps do not constitute asteroid discovery: the observer has only found an apparition, which gets a provisional designation, made up of the year of discovery, a letter representing the half-month of discovery, and finally a letter and a number indicating the discovery's sequential number (example: ). The last step of discovery is to send the locations and time of observations to the Minor Planet Center, where computer programs determine whether an apparition ties together earlier apparitions into a single orbit. If so, the object receives a catalogue number and the observer of the first apparition with a calculated orbit is declared the discoverer, and granted the honor of naming the object subject to the approval of the International Astronomical Union. Computerized methods There is increasing interest in identifying asteroids whose orbits cross Earth's, and that could, given enough time, collide with Earth (see Earth-crosser asteroids). The three most important groups of near-Earth asteroids are the Apollos, Amors, and Atens. Various asteroid deflection strategies have been proposed, as early as the 1960s. 
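The provisional-designation format described above (year of discovery, a half-month letter, then a letter plus a number counting repeats of the letter sequence) can be sketched as follows. This is an illustrative reconstruction under the stated assumptions, not official Minor Planet Center software, and the function name is invented:

```python
# Illustrative sketch of the provisional-designation scheme described above.
# Assumptions: 24 half-month letters (skipping 'I' and 'Z') and a 25-letter
# within-half-month sequence (skipping 'I') that repeats, with the number of
# completed repeats appended after the second letter.

HALF_MONTH = "ABCDEFGHJKLMNOPQRSTUVWXY"   # Jan 1-15 = A, Jan 16-31 = B, ...
ORDER      = "ABCDEFGHJKLMNOPQRSTUVWXYZ"  # discovery order within a half-month

def provisional_designation(year: int, month: int, day: int, n: int) -> str:
    """n is the 1-based sequential number of the discovery in its half-month."""
    half = (month - 1) * 2 + (0 if day <= 15 else 1)  # index of the half-month
    cycle, idx = divmod(n - 1, 25)                    # letters repeat every 25
    suffix = str(cycle) if cycle else ""
    return f"{year} {HALF_MONTH[half]}{ORDER[idx]}{suffix}"

print(provisional_designation(1998, 3, 20, 1))     # → 1998 FA
print(provisional_designation(1998, 3, 20, 1859))  # → 1998 FJ74
```

The second example shows how large discovery counts are encoded: 1,859th discovery in the second half of March means 74 complete 25-letter cycles plus the letter J.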
The near-Earth asteroid 433 Eros had been discovered as long ago as 1898, and the 1930s brought a flurry of similar objects. In order of discovery, these were: 1221 Amor, 1862 Apollo, 2101 Adonis, and finally 69230 Hermes, which approached within 0.005 AU of Earth in 1937. Astronomers began to realize the possibilities of Earth impact. Two events in later decades increased the alarm: the increasing acceptance of the Alvarez hypothesis that an impact event resulted in the Cretaceous–Paleogene extinction, and the 1994 observation of Comet Shoemaker-Levy 9 crashing into Jupiter. The U.S. military also declassified the information that its military satellites, built to detect nuclear explosions, had detected hundreds of upper-atmosphere impacts by objects ranging from one to ten meters across. All these considerations helped spur the launch of highly efficient surveys that consist of charge-coupled device (CCD) cameras and computers directly connected to telescopes. , it was estimated that 89% to 96% of near-Earth asteroids one kilometer or larger in diameter had been discovered. A list of teams using such systems includes: Lincoln Near-Earth Asteroid Research (LINEAR) Near-Earth Asteroid Tracking (NEAT) Spacewatch Lowell Observatory Near-Earth-Object Search (LONEOS) Catalina Sky Survey (CSS) Pan-STARRS NEOWISE Asteroid Terrestrial-impact Last Alert System (ATLAS) Campo Imperatore Near-Earth Object Survey (CINEOS) Japanese Spaceguard Association Asiago-DLR Asteroid Survey (ADAS) , the LINEAR system alone has discovered 147,132 asteroids. Among all the surveys, 19,266 near-Earth asteroids have been discovered including almost 900 more than in diameter. Terminology Traditionally, small bodies orbiting the Sun were classified as comets, asteroids, or meteoroids, with anything smaller than one meter across being called a meteoroid. Beech and Steel's 1995 paper proposed a meteoroid definition including size limits. 
The term "asteroid", from the Greek word for "star-like", never had a formal definition, with the broader term minor planet being preferred by the International Astronomical Union. However, following the discovery of asteroids below ten meters in size, Rubin and Grossman's 2010 paper revised the previous definition of meteoroid to objects between 10 µm and 1 meter in size in order to maintain the distinction between asteroids and meteoroids. The smallest asteroids discovered (based on absolute magnitude H) have an estimated size of about 1 meter. In 2006, the term "small Solar System body" was introduced to cover most minor planets and comets together. Other languages prefer "planetoid" (Greek for "planet-like"), and this term is occasionally used in English, especially for larger minor planets such as the dwarf planets, as well as an alternative for asteroids since they are not star-like. The word "planetesimal" has a similar meaning, but refers specifically to the small building blocks of the planets that existed when the Solar System was forming. The term "planetule" was coined by the geologist William Daniel Conybeare to describe minor planets, but is not in common use. The three largest objects in the asteroid belt, Ceres, Pallas, and Vesta, grew to the stage of protoplanets. Ceres is a dwarf planet, the only one in the inner Solar System. When found, asteroids were seen as a class of objects distinct from comets, and there was no unified term for the two until "small Solar System body" was coined in 2006. The main difference between an asteroid and a comet is that a comet shows a coma due to sublimation of near-surface ices by solar radiation. A few objects have ended up being dual-listed because they were first classified as minor planets but later showed evidence of cometary activity. Conversely, some (perhaps all) comets are eventually depleted of their surface volatile ices and become asteroid-like.
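The metre-scale size estimates from absolute magnitude H mentioned above can be illustrated with the commonly used magnitude-diameter relation, D [km] = (1329 / sqrt(albedo)) * 10^(-H/5). The albedo used below is an assumed typical value and the helper name is my own; this is a sketch, not a statement about any particular object:

```python
import math

def diameter_km(h_mag: float, albedo: float = 0.14) -> float:
    """Approximate diameter in km from absolute magnitude H.

    Uses the standard relation D = (1329 / sqrt(albedo)) * 10**(-H / 5);
    the default albedo of 0.14 is an assumed typical value.
    """
    return (1329.0 / math.sqrt(albedo)) * 10 ** (-h_mag / 5.0)

# An absolute magnitude around H = 32.5 corresponds to roughly 1 meter:
print(f"{diameter_km(32.5) * 1000:.1f} m")  # → 1.1 m
```

Since the diameter scales as 10^(-H/5), each 5-magnitude increase in H corresponds to a tenfold decrease in estimated size.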
A further distinction is that comets typically have more eccentric orbits than most asteroids; most "asteroids" with notably eccentric orbits are probably dormant or extinct comets. For almost two centuries, from the discovery of Ceres in 1801 until the discovery of the first centaur, Chiron in 1977, all known asteroids spent most of their time at or within the orbit of Jupiter, though a few such as Hidalgo ventured far beyond Jupiter for part of their orbit. Those located between the orbits of Mars and Jupiter were known for many years simply as The Asteroids. When astronomers started finding more small bodies that permanently resided further out than Jupiter, now called centaurs, they numbered them among the traditional asteroids, though there was debate over whether they should be considered asteroids or as a new type of object. Then, when the first trans-Neptunian object (other than Pluto), Albion, was discovered in 1992, and especially when large numbers of similar objects started turning up, new terms were invented to sidestep the issue: Kuiper-belt object, trans-Neptunian object, scattered-disc object, and so on. These inhabit the cold outer reaches of the Solar System where ices remain solid and comet-like bodies are not expected to exhibit much cometary activity; if centaurs or trans-Neptunian objects were to venture close to the Sun, their volatile ices would sublimate, and traditional approaches would classify them as comets and not asteroids. The innermost of these are the Kuiper-belt objects, called "objects" partly to avoid the need to classify them as asteroids or comets. They are thought to be predominantly comet-like in composition, though some may be more akin to asteroids. Furthermore, most do not have the highly eccentric orbits associated with comets, and the ones so far discovered are larger than traditional comet nuclei. (The much more distant Oort cloud is hypothesized to be the main reservoir of dormant comets.) 
Other recent observations, such as the analysis of the cometary dust collected by the Stardust probe, are increasingly blurring the distinction between comets and asteroids, suggesting "a continuum between asteroids and comets" rather than a sharp dividing line. The minor planets beyond Jupiter's orbit are sometimes also called "asteroids", especially in popular presentations. However, it is becoming increasingly common for the term "asteroid" to be restricted to minor planets of the inner Solar System. Therefore, this article will restrict itself for the most part to the classical asteroids: objects of the asteroid belt, Jupiter trojans, and near-Earth objects. When the IAU introduced the class small Solar System bodies in 2006 to include most objects previously classified as minor planets and comets, they created the class of dwarf planets for the largest minor planets – those that have enough mass to have become ellipsoidal under their own gravity. According to the IAU, "the term 'minor planet' may still be used, but generally, the term 'Small Solar System Body' will be preferred." Currently only the largest object in the asteroid belt, Ceres, at about across, has been placed in the dwarf planet category. Formation It is thought that planetesimals in the asteroid belt evolved much like the rest of the solar nebula until Jupiter neared its current mass, at which point excitation from orbital resonances with Jupiter ejected over 99% of planetesimals in the belt. Simulations and a discontinuity in spin rate and spectral properties suggest that asteroids larger than approximately in diameter accreted during that early era, whereas smaller bodies are fragments from collisions between asteroids during or after the Jovian disruption. Ceres and Vesta grew large enough to melt and differentiate, with heavy metallic elements sinking to the core, leaving rocky minerals in the crust. 
In the Nice model, many Kuiper-belt objects are captured in the outer asteroid belt, at distances greater than 2.6 AU. Most were later ejected by Jupiter, but those that remained may be the D-type asteroids, and possibly include Ceres. Distribution within the Solar System Various dynamical groups of asteroids have been discovered orbiting in the inner Solar System. Their orbits are perturbed by the gravity of other bodies in the Solar System and by the Yarkovsky effect. Significant populations include: Asteroid belt The majority of known asteroids orbit within the asteroid belt between the orbits of Mars and Jupiter, generally in relatively low-eccentricity (i.e. not very elongated) orbits. This belt is now estimated to contain between 1.1 and 1.9 million asteroids larger than in diameter, and millions of smaller ones. These asteroids may be remnants of the protoplanetary disk, and in this region the accretion of planetesimals into planets during the formative period of the Solar System was prevented by large gravitational perturbations by Jupiter. Trojans Trojans are populations that share an orbit with a larger planet or moon, but do not collide with it because they orbit in one of the two Lagrangian points of stability, L4 and L5, which lie 60° ahead of and behind the larger body. The most significant population of trojans are the Jupiter trojans. Although fewer Jupiter trojans have been discovered (), it is thought that they are as numerous as the asteroids in the asteroid belt. Trojans have been found in the orbits of other planets, including Venus, Earth, Mars, Uranus, and Neptune. Near-Earth asteroids Near-Earth asteroids, or NEAs, are asteroids that have orbits that pass close to that of Earth. Asteroids that actually cross Earth's orbital path are known as Earth-crossers. , 14,464 near-Earth asteroids are known and approximately 900–1,000 have a diameter of over one kilometer. Characteristics Size distribution
was identified, its location would be measured precisely using a digitizing microscope. The location would be measured relative to known star locations. These first three steps do not constitute asteroid discovery: the observer has only found an apparition, which gets a provisional designation, made up of the year of discovery, a letter representing the half-month of discovery, and finally a letter and a number indicating the discovery's sequential number (example: ). The last step of discovery is to send the locations and times of observation to the Minor Planet Center, where computer programs determine whether an apparition ties together earlier apparitions into a single orbit. If so, the object receives a catalogue number and the observer of the first apparition with a calculated orbit is declared the discoverer, and granted the honor of naming the object, subject to the approval of the International Astronomical Union. Computerized methods There is increasing interest in identifying asteroids whose orbits cross Earth's, and that could, given enough time, collide with Earth (see Earth-crosser asteroids). The three most important groups of near-Earth asteroids are the Apollos, Amors, and Atens. Various asteroid deflection strategies have been proposed, some as early as the 1960s. The near-Earth asteroid 433 Eros had been discovered as long ago as 1898, and the 1930s brought a flurry of similar objects. In order of discovery, these were: 1221 Amor, 1862 Apollo, 2101 Adonis, and finally 69230 Hermes, which approached within 0.005 AU of Earth in 1937. Astronomers began to realize the possibilities of Earth impact. Two events in later decades increased the alarm: the increasing acceptance of the Alvarez hypothesis that an impact event resulted in the Cretaceous–Paleogene extinction, and the 1994 observation of Comet Shoemaker-Levy 9 crashing into Jupiter. The U.S. 
military also declassified the information that its military satellites, built to detect nuclear explosions, had detected hundreds of upper-atmosphere impacts by objects ranging from one to ten meters across. All these considerations helped spur the launch of highly efficient surveys consisting of charge-coupled device (CCD) cameras and computers directly connected to telescopes. It was estimated that 89% to 96% of near-Earth asteroids one kilometer or larger in diameter had been discovered. Teams using such systems include:
Lincoln Near-Earth Asteroid Research (LINEAR)
Near-Earth Asteroid Tracking (NEAT)
Spacewatch
Lowell Observatory Near-Earth-Object Search (LONEOS)
Catalina Sky Survey (CSS)
Pan-STARRS
NEOWISE
Asteroid Terrestrial-impact Last Alert System (ATLAS)
Campo Imperatore Near-Earth Object Survey (CINEOS)
Japanese Spaceguard Association
Asiago-DLR Asteroid Survey (ADAS)
The LINEAR system alone has discovered 147,132 asteroids. Among all the surveys, 19,266 near-Earth asteroids have been discovered, including almost 900 more than in diameter. Terminology Traditionally, small bodies orbiting the Sun were classified as comets, asteroids, or meteoroids, with anything smaller than one meter across being called a meteoroid. Beech and Steel's 1995 paper proposed a meteoroid definition including size limits. The term "asteroid", from the Greek word for "star-like", never had a formal definition, with the broader term minor planet being preferred by the International Astronomical Union. However, following the discovery of asteroids below ten meters in size, Rubin and Grossman's 2010 paper revised the previous definition of meteoroid to objects between 10 µm and 1 meter in size in order to maintain the distinction between asteroids and meteoroids. The smallest asteroids discovered (based on absolute magnitude H) have estimated sizes of about 1 meter. 
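The roughly 1-meter size estimates above come from absolute magnitude H rather than from direct measurement. A standard relation converts H and a geometric albedo into a diameter; the albedo used below is an assumed typical value, not a figure from the text, so this is only a sketch:

```python
import math

def diameter_km(abs_mag_h: float, albedo: float = 0.14) -> float:
    """Estimate an asteroid's diameter in kilometers from absolute
    magnitude H via D = (1329 km / sqrt(p)) * 10**(-H / 5), where p
    is the geometric albedo (assumed here; real values range from
    roughly 0.05 for dark C-types to about 0.25 for bright S-types).
    """
    return (1329.0 / math.sqrt(albedo)) * 10 ** (-abs_mag_h / 5.0)

# An object with H around 33 comes out near 1 meter across:
print(round(diameter_km(33.0) * 1000, 2), "m")  # prints "0.89 m"
```

Because the albedo is usually unknown, H alone brackets an asteroid's size only to within a factor of a few; dimmer (higher-H) objects are smaller.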
In 2006, the term "small Solar System body" was also introduced to cover most minor planets and comets alike. Other languages prefer "planetoid" (Greek for "planet-like"), and this term is occasionally used in English, especially for larger minor planets such as the dwarf planets, and as an alternative to "asteroid", since such bodies are not star-like. The word "planetesimal" has a similar meaning, but refers specifically to the small building blocks of the planets that existed when the Solar System was forming. The term "planetule" was coined by the geologist William Daniel Conybeare to describe minor planets, but is not in common use. The three largest objects in the asteroid belt, Ceres, Pallas, and Vesta, grew to the stage of protoplanets. Ceres is a dwarf planet, the only one in the inner Solar System. When found, asteroids were seen as a class of objects distinct from comets, and there was no unified term for the two until "small Solar System body" was coined in 2006. The main difference between an asteroid and a comet is that a comet shows a coma due to sublimation of near-surface ices by solar radiation. A few objects have ended up being dual-listed because they were first classified as minor planets but later showed evidence of cometary activity. Conversely, some (perhaps all) comets are eventually depleted of their surface volatile ices and become asteroid-like. A further distinction is that comets typically have more eccentric orbits than most asteroids; most "asteroids" with notably eccentric orbits are probably dormant or extinct comets. For almost two centuries, from the discovery of Ceres in 1801 until the discovery of the first centaur, Chiron in 1977, all known asteroids spent most of their time at or within the orbit of Jupiter, though a few such as Hidalgo ventured far beyond Jupiter for part of their orbit. Those located between the orbits of Mars and Jupiter were known for many years simply as The Asteroids. 
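The provisional-designation scheme described in the discovery procedure above (year of discovery, a letter for the half-month, then a second letter plus a number counting how many times the letter sequence has cycled) can be sketched as follows. The function name is invented for illustration; the lettering rules, which skip "I", follow the Minor Planet Center's convention:

```python
# Letters used for half-months and sequence letters; "I" is skipped.
HALF_MONTH = "ABCDEFGHJKLMNOPQRSTUVWXY"   # 24 half-months, Jan 1-15 = "A"
SEQUENCE = "ABCDEFGHJKLMNOPQRSTUVWXYZ"    # 25 letters, cycled for the count

def provisional_designation(year: int, half_month: int, n: int) -> str:
    """Build a provisional designation for the n-th object (1-based)
    reported in the given half-month (1 = first half of January)."""
    second = SEQUENCE[(n - 1) % 25]
    cycle = (n - 1) // 25                  # how many times the alphabet wrapped
    suffix = str(cycle) if cycle else ""
    return f"{year} {HALF_MONTH[half_month - 1]}{second}{suffix}"

# The 906th object of the second half of September 1998 -> "1998 SF36",
# the provisional designation later given to the asteroid Itokawa.
print(provisional_designation(1998, 18, 906))  # prints "1998 SF36"
```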
When astronomers started finding more small bodies that permanently resided further out than Jupiter, now called centaurs, they numbered them among the traditional asteroids, though there was debate over whether they should be considered asteroids or a new type of object. Then, when the first trans-Neptunian object (other than Pluto), Albion, was discovered in 1992, and especially when large numbers of similar objects started turning up, new terms were invented to sidestep the issue: Kuiper-belt object, trans-Neptunian object, scattered-disc object, and so on. These inhabit the cold outer reaches of the Solar System where ices remain solid and comet-like bodies are not expected to exhibit much cometary activity; if centaurs or trans-Neptunian objects were to venture close to the Sun, their volatile ices would sublimate, and traditional approaches would classify them as comets and not asteroids. The innermost of these are the Kuiper-belt objects, called "objects" partly to avoid the need to classify them as asteroids or comets. They are thought to be predominantly comet-like in composition, though some may be more akin to asteroids. Furthermore, most do not have the highly eccentric orbits associated with comets, and the ones so far discovered are larger than traditional comet nuclei. (The much more distant Oort cloud is hypothesized to be the main reservoir of dormant comets.) Other recent observations, such as the analysis of the cometary dust collected by the Stardust probe, are increasingly blurring the distinction between comets and asteroids, suggesting "a continuum between asteroids and comets" rather than a sharp dividing line. The minor planets beyond Jupiter's orbit are sometimes also called "asteroids", especially in popular presentations. However, it is becoming increasingly common for the term "asteroid" to be restricted to minor planets of the inner Solar System. 
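Orbital eccentricity, used above to separate comet-like from asteroid-like orbits, can be computed from an orbit's perihelion and aphelion distances. A minimal sketch (the helper name is mine):

```python
def eccentricity(perihelion_au: float, aphelion_au: float) -> float:
    """Eccentricity of an elliptical orbit from its closest (q) and
    farthest (Q) distances from the Sun: e = (Q - q) / (Q + q)."""
    return (aphelion_au - perihelion_au) / (aphelion_au + perihelion_au)

# A typical main-belt orbit is nearly circular:
print(round(eccentricity(2.55, 2.98), 3))   # prints 0.078
# Halley's Comet (q ~ 0.59 AU, Q ~ 35.1 AU) is highly eccentric:
print(round(eccentricity(0.59, 35.1), 3))   # prints 0.967
```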
Therefore, this article will restrict itself for the most part to the classical asteroids: objects of the asteroid belt, Jupiter trojans, and near-Earth objects. When the IAU introduced the class small Solar System bodies in 2006 to include most objects previously classified as minor planets and comets, they created the class of dwarf planets for the largest minor planets – those that have enough mass to have become ellipsoidal under their own gravity. According to the IAU, "the term 'minor planet' may still be used, but generally, the term 'Small Solar System Body' will be preferred." Currently only the largest object in the asteroid belt, Ceres, at about across, has been placed in the dwarf planet category. Formation It is thought that planetesimals in the asteroid belt evolved much like the rest of the solar nebula until Jupiter neared its current mass, at which point excitation from orbital resonances with Jupiter ejected over 99% of planetesimals in the belt. Simulations and a discontinuity in spin rate and spectral properties suggest that asteroids larger than approximately in diameter accreted during that early era, whereas smaller bodies are fragments from collisions between asteroids during or after the Jovian disruption. Ceres and Vesta grew large enough to melt and differentiate, with heavy metallic elements sinking to the core, leaving rocky minerals in the crust. In the Nice model, many Kuiper-belt objects are captured in the outer asteroid belt, at distances greater than 2.6 AU. Most were later ejected by Jupiter, but those that remained may be the D-type asteroids, and possibly include Ceres. Distribution within the Solar System Various dynamical groups of asteroids have been discovered orbiting in the inner Solar System. Their orbits are perturbed by the gravity of other bodies in the Solar System and by the Yarkovsky effect. 
Significant populations include: Asteroid belt The majority of known asteroids orbit within the asteroid belt between the orbits of Mars and Jupiter, generally in relatively low-eccentricity (i.e. not very elongated) orbits. This belt is now estimated to contain between 1.1 and 1.9 million asteroids larger than in diameter, and millions of smaller ones. These asteroids may be remnants of the protoplanetary disk, and in this region the accretion of planetesimals into planets during the formative period of the Solar System was prevented by large gravitational perturbations by Jupiter. Trojans Trojans are populations that share an orbit with a larger planet or moon, but do not collide with it because they orbit in one of the two Lagrangian points of stability, L4 and L5, which lie 60° ahead of and behind the larger body. The most significant population of trojans is the Jupiter trojans. Although fewer Jupiter trojans have been discovered (), it is thought that they are as numerous as the asteroids in the asteroid belt. Trojans have been found in the orbits of other planets, including Venus, Earth, Mars, Uranus, and Neptune. Near-Earth asteroids Near-Earth asteroids, or NEAs, are asteroids that have orbits that pass close to that of Earth. Asteroids that actually cross Earth's orbital path are known as Earth-crossers. Some 14,464 near-Earth asteroids are known and approximately 900–1,000 have a diameter of over one kilometer. Characteristics Size distribution Asteroids vary greatly in size, from almost for the largest down to rocks just 1 meter across. The three largest are very much like miniature planets: they are roughly spherical, have at least partly differentiated interiors, and
in some jurisdictions using common law. Concept An allocution allows the defendant to explain why the sentence should be lenient. In plea bargains, an allocution may be required of the defendant. The defendant explicitly admits specifically and in detail the actions and their reasons in exchange for a reduced sentence. In principle, that removes any doubt as to the exact nature of the defendant's guilt in the matter. The term "allocution" is used generally only in jurisdictions in the United States, but there are vaguely similar processes in other common law countries. In many other jurisdictions, it is for the defense lawyer to mitigate on their client's behalf, and the defendant rarely has the opportunity to speak. The right of victims to speak at sentencing is also sometimes referred to as allocution. Australia In Australia, the term allocutus is used by the Clerk of Arraigns or another formal associate of the Court. It is generally phrased as, "Prisoner at the Bar, you have been found Guilty by a jury of your peers
plea in mitigation is absolute. If a judge or magistrate refuses to hear such a plea or does not properly consider it, the sentence can be overturned on appeal. United States In most of the United States, defendants are allowed the opportunity to allocute before a sentence is passed. Some jurisdictions hold that as an absolute right. In its absence, a sentence but not the conviction may be overturned, resulting in the need for a new sentencing hearing. In the federal system, Federal Rules of Criminal Procedure 32(i)(4) provides that the court must "address the defendant personally in order to permit the defendant to speak or present any information to mitigate the sentence." The Federal Public Defender recommends that defendants speak in terms of how a lenient sentence will be sufficient but not greater than necessary to comply with the statutory directives set forth in . See also Confession (law) References Criminal procedure Evidence law
the right of the opposite party to have the deponent produced for cross-examination. Therefore, an affidavit cannot ordinarily be used as evidence in the absence of a specific order of the court. Sri Lanka In Sri Lanka, under the Oaths Ordinance, with the exception of a court-martial, a person may submit an affidavit signed in the presence of a commissioner for oaths or a justice of the peace. Ireland Affidavits are made in a similar way to those in England and Wales, although "make oath" is sometimes omitted. An affirmed affidavit may be substituted for a sworn affidavit in most cases for those opposed to swearing oaths. The person making the affidavit is known as the deponent and signs the affidavit. The affidavit concludes in the standard format "sworn/affirmed (declared) before me, [name of commissioner for oaths/solicitor], a commissioner for oaths (solicitor), on the [date] at [location] in the county/city of [county/city], and I know the deponent", and it is signed and stamped by the commissioner for oaths. It is important that the commissioner states his or her name clearly; documents are sometimes rejected when the name cannot be ascertained. In August 2020, a new method of filing affidavits came into force. Under Section 21 of the Civil Law and Criminal Law (Miscellaneous Provisions) Act 2020, witnesses are no longer required to swear before God or make an affirmation when filing an affidavit. Instead, witnesses will make a non-religious “statement of truth” and, if it is breached, will be liable for up to one year in prison if convicted summarily or, upon conviction on indictment, to a maximum fine of €250,000 or imprisonment for a term not exceeding 5 years, or both. This is designed to replace affidavits and statutory declarations in situations where the electronic means of lodgement or filing of documents with the Court provided for in Section 20 is utilised. 
As of January 2022, it has yet to be adopted widely, and it is expected it will not be used for some time by lay litigants who will still lodge papers in person. United States In American jurisprudence, under the rules for hearsay, admission of an unsupported affidavit as evidence is unusual (especially if the affiant is not available for cross-examination) with regard to material facts which may be dispositive of the matter at bar. Affidavits from persons who are dead or otherwise incapacitated, or who cannot be located or made to appear, may be accepted by the court, but usually only in the presence of corroborating evidence. An affidavit which reflected a better grasp of the facts close in time to the actual events may be used to refresh a witness's recollection. Materials used to refresh recollection are admissible as evidence. If the affiant is a party in the case, the affiant's opponent may be successful in having the affidavit admitted as evidence, as statements by a party-opponent are admissible through an exception to the hearsay rule. Affidavits are typically included in the response to interrogatories.
Requests for admissions under Federal Rule of Civil Procedure 36, however, are not required to be sworn. When a person signs an affidavit, that person is eligible to take the stand at a trial or evidentiary hearing. One
equatorial coordinate system. In non-Western astronomy In traditional Chinese astronomy, stars from Aries were used in several constellations. The brightest stars—Alpha, Beta, and Gamma Arietis—formed a constellation called Lou (婁), variously translated as "bond", "lasso", and "sickle", which was associated with the ritual sacrifice of cattle. This name was shared by the 16th lunar mansion, the location of the full moon closest to the autumnal equinox. This constellation has also been associated with harvest-time, as it could represent a woman carrying a basket of food on her head. 35, 39, and 41 Arietis were part of a constellation called Wei (胃), which represented a fat abdomen and was the namesake of the 17th lunar mansion, which represented granaries. Delta and Zeta Arietis were a part of the constellation Tianyin (天陰), thought to represent the Emperor's hunting partner. Zuogeng (左更), a constellation depicting a marsh and pond inspector, was composed of Mu, Nu, Omicron, Pi, and Sigma Arietis. He was accompanied by Yeou-kang, a constellation depicting an official in charge of pasture distribution. In a similar system to the Chinese, the first lunar mansion in Hindu astronomy was called "Aswini", after the traditional names for Beta and Gamma Arietis, the Aswins. Because the Hindu new year began with the vernal equinox, the Rig Veda contains more than 50 hymns to the twins relating to the new year, making them some of the most prominent characters in the work. Aries itself was known as "Aja" and "Mesha". In Hebrew astronomy Aries was named "Taleh"; it signified either Simeon or Gad, and generally symbolizes the "Lamb of the World". The neighboring Syrians named the constellation "Amru", and the bordering Turks named it "Kuzi". Half a world away, in the Marshall Islands, several stars from Aries were incorporated into a constellation depicting a porpoise, along with stars from Cassiopeia, Andromeda, and Triangulum. 
Alpha, Beta, and Gamma Arietis formed the head of the porpoise, while stars from Andromeda formed the body and the bright stars of Cassiopeia formed the tail. Other Polynesian peoples recognized Aries as a constellation. The Marquesas islanders called it Na-pai-ka; the Māori constellation Pipiri may correspond to modern Aries as well. In indigenous Peruvian astronomy, a constellation with most of the same stars as Aries existed. It was called the "Market Moon" and the "Kneeling Terrace", as a reminder for when to hold the annual harvest festival, Ayri Huay. Features Stars Aries has three prominent stars forming an asterism, designated Alpha, Beta, and Gamma Arietis by Johann Bayer. Alpha (Hamal) and Beta (Sheratan) are commonly used for navigation. There is also one other star above the fourth magnitude, 41 Arietis (Bharani). α Arietis, called Hamal, is the brightest star in Aries. Its traditional name is derived from the Arabic word for "lamb" or "head of the ram" (ras al-hamal), which references Aries's mythological background. With a spectral class of K2 and a luminosity class of III, it is an orange giant with an apparent visual magnitude of 2.00, which lies 66 light-years from Earth. Hamal has a luminosity of and its absolute magnitude is −0.1. β Arietis, also known as Sheratan, is a blue-white star with an apparent visual magnitude of 2.64. Its traditional name is derived from "sharatayn", the Arabic word for "the two signs", referring to both Beta and Gamma Arietis in their position as heralds of the vernal equinox. The two stars were known to the Bedouin as "qarna al-hamal", "horns of the ram". It is 59 light-years from Earth. It has a luminosity of and its absolute magnitude is 2.1. It is a spectroscopic binary star, one in which the companion star is only known through analysis of the spectra. The spectral class of the primary is A5. 
Hermann Carl Vogel determined that Sheratan was a spectroscopic binary in 1903; its orbit was determined by Hans Ludendorff in 1907. It has since been studied for its eccentric orbit. γ Arietis, with a common name of Mesarthim, is a binary star with two white-hued components, located in a rich field of magnitude 8–12 stars. Its traditional name has conflicting derivations. It may be derived from a corruption of "al-sharatan", the Arabic word meaning "pair" or a word for "fat ram". However, it may also come from the Sanskrit for "first star of Aries" or the Hebrew for "ministerial servants", both of which are unusual languages of origin for star names. Along with Beta Arietis, it was known to the Bedouin as "qarna al-hamal". The primary is of magnitude 4.59 and the secondary is of magnitude 4.68. The system is 164 light-years from Earth. The two components are separated by 7.8 arcseconds, and the system as a whole has an apparent magnitude of 3.9. The primary has a luminosity of and the secondary has a luminosity of ; the primary is an A-type star with an absolute magnitude of 0.2 and the secondary is a B9-type star with an absolute magnitude of 0.4. The angle between the two components is 1°. Mesarthim was discovered to be a double star by Robert Hooke in 1664, one of the earliest such telescopic discoveries. The primary, γ1 Arietis, is an Alpha² Canum Venaticorum variable star that has a range of 0.02 magnitudes and a period of 2.607 days. It is unusual because of its strong silicon emission lines. The constellation is home to several double stars, including Epsilon, Lambda, and Pi Arietis. ε Arietis is a binary star with two white components. The primary is of magnitude 5.2 and the secondary is of magnitude 5.5. The system is 290 light-years from Earth. Its overall magnitude is 4.63, and the primary has an absolute magnitude of 1.4. Its spectral class is A2. The two components are separated by 1.5 arcseconds. 
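The apparent magnitudes, distances, and absolute magnitudes quoted for these stars are linked by the standard distance modulus. The sketch below neglects interstellar extinction, so its results can differ slightly from the catalogue values cited in the text:

```python
import math

LY_PER_PARSEC = 3.2616  # light-years per parsec

def absolute_magnitude(apparent_mag: float, distance_ly: float) -> float:
    """Absolute magnitude M from apparent magnitude m and distance,
    via M = m - 5 * log10(d_pc / 10); extinction is neglected."""
    d_pc = distance_ly / LY_PER_PARSEC
    return apparent_mag - 5.0 * math.log10(d_pc / 10.0)

# A star at exactly 10 parsecs has M equal to its apparent magnitude:
print(round(absolute_magnitude(5.0, 32.616), 2))  # prints 5.0
```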
λ Arietis is a wide double star with a white-hued primary and a yellow-hued secondary. The primary is of magnitude 4.8 and the secondary is of magnitude 7.3. The primary is 129 light-years from Earth. It has an absolute magnitude of 1.7 and a spectral class of F0. The two components are separated by 36 arcseconds at an angle of 50°; the two stars are located 0.5° east of 7 Arietis. π Arietis is a close binary star with a blue-white primary and a white secondary. The primary is of magnitude 5.3 and the secondary is of magnitude 8.5. The primary is 776 light-years from Earth. The primary itself is a wide double star with a separation of 25.2 arcseconds; the tertiary has a magnitude of 10.8. The primary and secondary are separated by 3.2 arcseconds. Most of the other stars in Aries visible to the naked eye have magnitudes between 3 and 5. δ Ari, called Boteïn, is a star of magnitude 4.35, 170 light-years away. It has an absolute
as a ram, modeled on the precedent of Ptolemy. However, some Islamic celestial globes depicted Aries as a nondescript four-legged animal with what may be antlers instead of horns. Some early Bedouin observers saw a ram elsewhere in the sky; this constellation featured the Pleiades as the ram's tail. The generally accepted Arabic formation of Aries consisted of thirteen stars in a figure along with five "unformed" stars, four of which were over the animal's hindquarters and one of which was the disputed star over Aries's head. Al-Sufi's depiction differed from both other Arab astronomers' and Flamsteed's, in that his Aries was running and looking behind itself. The obsolete constellations of Aries (Apes/Vespa/Lilium/Musca (Borealis)) all centred on the same northern stars. In 1612, Petrus Plancius introduced Apes, a constellation representing a bee. In 1624, the same stars were used by Jakob Bartsch for Vespa, representing a wasp. In 1679, Augustin Royer used these stars for his constellation Lilium, representing the fleur-de-lis. None of these constellations became widely accepted. Johann Hevelius renamed the constellation "Musca" in 1690 in his Firmamentum Sobiescianum. To differentiate it from Musca, the southern fly, it was later renamed Musca Borealis, but it did not gain acceptance and its stars were ultimately officially reabsorbed into Aries. The asterism involved was 33, 35, 39, and 41 Arietis. In 1922, the International Astronomical Union defined its recommended three-letter abbreviation, "Ari". The official boundaries of Aries were defined in 1930 by Eugène Delporte as a polygon of 12 segments. Its right ascension is between 1h 46.4m and 3h 29.4m and its declination is between 10.36° and 31.22° in the equatorial coordinate system. 
The brightest stars—Alpha, Beta, and Gamma Arietis—formed a constellation called Lou (婁), variously translated as "bond", "lasso", and "sickle", which was associated with the ritual sacrifice of cattle. This name was shared by the 16th lunar mansion, the location of the full moon closest to the autumnal equinox. This constellation has also been associated with harvest-time as it could represent a woman carrying a basket of food on her head. 35, 39, and 41 Arietis were part of a constellation called Wei (胃), which represented a fat abdomen and was the namesake of the 17th lunar mansion, which represented granaries. Delta and Zeta Arietis were a part of the constellation Tianyin (天陰), thought to represent the Emperor's hunting partner. Zuogeng (左更), a constellation depicting a marsh and pond inspector, was composed of Mu, Nu, Omicron, Pi, and Sigma Arietis. He was accompanied by Yeou-kang, a constellation depicting an official in charge of pasture distribution. In a similar system to the Chinese, the first lunar mansion in Hindu astronomy was called "Aswini", after the traditional names for Beta and Gamma Arietis, the Aswins. Because the Hindu new year began with the vernal equinox, the Rig Veda contains over 50 new-year's related hymns to the twins, making them some of the most prominent characters in the work. Aries itself was known as "Aja" and "Mesha". In Hebrew astronomy Aries was named "Taleh"; it signified either Simeon or Gad, and generally symbolizes the "Lamb of the World". The neighboring Syrians named the constellation "Amru", and the bordering Turks named it "Kuzi". Half a world away, in the Marshall Islands, several stars from Aries were incorporated into a constellation depicting a porpoise, along with stars from Cassiopeia, Andromeda, and Triangulum. Alpha, Beta, and Gamma Arietis formed the head of the porpoise, while stars from Andromeda formed the body and the bright stars of Cassiopeia formed the tail. 
Other Polynesian peoples recognized Aries as a constellation. The Marquesas islanders called it Na-pai-ka; the Māori constellation Pipiri may correspond to modern Aries as well. In indigenous Peruvian astronomy, a constellation with most of the same stars as Aries existed. It was called the "Market Moon" and the "Kneeling Terrace", as a reminder for when to hold the annual harvest festival, Ayri Huay.

Features

Stars

Aries has three prominent stars forming an asterism, designated Alpha, Beta, and Gamma Arietis by Johann Bayer. Alpha (Hamal) and Beta (Sheratan) are commonly used for navigation. There is also one other star brighter than fourth magnitude, 41 Arietis (Bharani). α Arietis, called Hamal, is the brightest star in Aries. Its traditional name is derived from the Arabic word for "lamb" or "head of the ram" (ras al-hamal), which references Aries's mythological background. With a spectral class of K2 and a luminosity class of III, it is an orange giant with an apparent visual magnitude of 2.00, lying 66 light-years from Earth. Hamal has a luminosity of and its absolute magnitude is −0.1. β Arietis, also known as Sheratan, is a blue-white star with an apparent visual magnitude of 2.64. Its traditional name is derived from "sharatayn", the Arabic word for "the two signs", referring to both Beta and Gamma Arietis in their position as heralds of the vernal equinox. The two stars were known to the Bedouin as "qarna al-hamal", "horns of the ram". It is 59 light-years from Earth. It has a luminosity of and its absolute magnitude is 2.1. It is a spectroscopic binary star, one in which the companion star is only known through analysis of the spectra. The spectral class of the primary is A5. Hermann Carl Vogel determined that Sheratan was a spectroscopic binary in 1903; its orbit was determined by Hans Ludendorff in 1907. It has since been studied for its eccentric orbit.
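The apparent magnitudes, distances, and absolute magnitudes quoted for the stars in this section are linked by the standard distance modulus, M = m − 5 log₁₀(d / 10 pc). A minimal sketch, assuming only that definition (the function name is illustrative; published absolute magnitudes can differ from this naive calculation because they fold in extinction corrections and revised parallaxes):

```python
import math

LY_PER_PARSEC = 3.2616  # one parsec is about 3.26 light-years

def absolute_magnitude(apparent_mag, distance_ly):
    """Distance modulus: the magnitude a star would have at 10 parsecs."""
    d_pc = distance_ly / LY_PER_PARSEC
    return apparent_mag - 5 * math.log10(d_pc / 10.0)

# Sanity check: at exactly 10 pc, apparent and absolute magnitude coincide.
print(round(absolute_magnitude(2.00, 10 * LY_PER_PARSEC), 6))  # → 2.0

# A star at 100 pc appears 5 magnitudes fainter than it would at 10 pc.
print(round(absolute_magnitude(7.00, 100 * LY_PER_PARSEC), 6))  # → 2.0
```

The same relation explains why magnitude comparisons between stars at very different distances (such as Hamal at 66 light-years and the exoplanet host stars hundreds of parsecs away) require the absolute rather than the apparent figure.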
γ Arietis, with a common name of Mesarthim, is a binary star with two white-hued components, located in a rich field of magnitude 8–12 stars. Its traditional name has conflicting derivations. It may be derived from a corruption of "al-sharatan", the Arabic word meaning "pair" or a word for "fat ram". However, it may also come from the Sanskrit for "first star of Aries" or the Hebrew for "ministerial servants", both of which are unusual languages of origin for star names. Along with Beta Arietis, it was known to the Bedouin as "qarna al-hamal". The primary is of magnitude 4.59 and the secondary is of magnitude 4.68. The system is 164 light-years from Earth. The two components are separated by 7.8 arcseconds, and the system as a whole has an apparent magnitude of 3.9. The primary has a luminosity of and the secondary has a luminosity of ; the primary is an A-type star with an absolute magnitude of 0.2 and the secondary is a B9-type star with an absolute magnitude of 0.4. The angle between the two components is 1°. Mesarthim was discovered to be a double star by Robert Hooke in 1664, one of the earliest such telescopic discoveries. The primary, γ1 Arietis, is an Alpha² Canum Venaticorum variable star that has a range of 0.02 magnitudes and a period of 2.607 days. It is unusual because of its strong silicon emission lines. The constellation is home to several double stars, including Epsilon, Lambda, and Pi Arietis. ε Arietis is a binary star with two white components. The primary is of magnitude 5.2 and the secondary is of magnitude 5.5. The system is 290 light-years from Earth. Its overall magnitude is 4.63, and the primary has an absolute magnitude of 1.4. Its spectral class is A2. The two components are separated by 1.5 arcseconds. λ Arietis is a wide double star with a white-hued primary and a yellow-hued secondary. The primary is of magnitude 4.8 and the secondary is of magnitude 7.3. The primary is 129 light-years from Earth. 
It has an absolute magnitude of 1.7 and a spectral class of F0. The two components are separated by 36 arcseconds at an angle of 50°; the two stars are located 0.5° east of 7 Arietis. π Arietis is a close binary star with a blue-white primary and a white secondary. The primary is of magnitude 5.3 and the secondary is of magnitude 8.5. The primary is 776 light-years from Earth. The primary itself is a wide double star with a separation of 25.2 arcseconds; the tertiary has a magnitude of 10.8. The primary and secondary are separated by 3.2 arcseconds. Most of the other stars in Aries visible to the naked eye have magnitudes between 3 and 5. δ Ari, called Boteïn, is a star of magnitude 4.35, 170 light-years away. It has an absolute magnitude
was represented by three stars; its position is disputed and may have instead been located in Sculptor. Tienliecheng also has a disputed position; the 13-star castle replete with ramparts may have possessed Nu and Xi Aquarii but may instead have been located south in Piscis Austrinus. The Water Jar asterism was seen by the ancient Chinese as the tomb, Fenmu. Nearby stood the emperors' mausoleum Xiuliang, demarcated by Kappa Aquarii and three other collinear stars. Ku ("crying") and Qi ("weeping"), each composed of two stars, were located in the same region. Three of the Chinese lunar mansions shared their names with constellations. Nu, also the name for the 10th lunar mansion, was a handmaiden represented by Epsilon, Mu, 3, and 4 Aquarii. The 11th lunar mansion shared its name with the constellation Xu ("emptiness"), formed by Beta Aquarii and Alpha Equulei; it represented a bleak place associated with death and funerals. Wei, the rooftop and 12th lunar mansion, was a V-shaped constellation formed by Alpha Aquarii, Theta Pegasi, and Epsilon Pegasi; it shared its name with two other Chinese constellations, in modern-day Scorpius and Aries.

Features

Stars

Despite both its prominent position on the zodiac and its large size, Aquarius has no particularly bright stars: even its four brightest stars are fainter than magnitude 2. However, recent research has shown that several stars lying within its borders possess planetary systems. The two brightest stars, Alpha and Beta Aquarii, are luminous yellow supergiants, of spectral types G0Ib and G2Ib respectively, that were once hot blue-white B-class main sequence stars 5 to 9 times as massive as the Sun. The two are also moving through space perpendicular to the plane of the Milky Way. Slightly outshining Alpha, Beta Aquarii is the brightest star in Aquarius, with an apparent magnitude of 2.91. It also has the proper name Sadalsuud.
Having cooled and swollen to around 50 times the Sun's diameter, it is around 2200 times as luminous as the Sun. It is around 6.4 times as massive as the Sun and around 56 million years old. Sadalsuud is 540 ± 20 light-years from Earth. Alpha Aquarii, also known as Sadalmelik, has an apparent magnitude of 2.94. It is 520 ± 20 light-years distant from Earth, and is around 6.5 times as massive as the Sun and 3000 times as luminous. It is 53 million years old. γ Aquarii, also called Sadachbia, is a white main sequence star of spectral type A0V that is between 158 and 315 million years old, around two and a half times the Sun's mass, and double its radius. Of magnitude 3.85, it is 164 ± 9 light-years away. It has a luminosity of . The name Sadachbia comes from the Arabic for "lucky stars of the tents", sa'd al-akhbiya. δ Aquarii, also known as Skat or Scheat, is a blue-white A2 spectral type star of apparent magnitude 3.27 and luminosity of . ε Aquarii, also known as Albali, is a blue-white A1 spectral type star with an apparent magnitude of 3.77, an absolute magnitude of 1.2, and a luminosity of . ζ Aquarii is an F2 spectral type double star; both stars are white. Overall, it appears to be of magnitude 3.6 and luminosity of . The primary has a magnitude of 4.53 and the secondary a magnitude of 4.31, but both have an absolute magnitude of 0.6. Its orbital period is 760 years; the two components are currently moving farther apart. θ Aquarii, sometimes called Ancha, is a G8 spectral type star with an apparent magnitude of 4.16 and an absolute magnitude of 1.4. κ Aquarii, also called Situla, has an apparent magnitude of 5.03. λ Aquarii, also called Hudoor or Ekchusis, is an M2 spectral type star of magnitude 3.74 and luminosity of . ξ Aquarii, also called Bunda, is an A7 spectral type star with an apparent magnitude of 4.69 and an absolute magnitude of 2.4.
π Aquarii, also called Seat, is a B0 spectral type star with an apparent magnitude of 4.66 and an absolute magnitude of −4.1.

Planetary systems

Twelve exoplanet systems have been found in Aquarius as of 2013. Gliese 876, one of the nearest stars to Earth at a distance of 15 light-years, was the first red dwarf star to be found to possess a planetary system. It is orbited by four planets, including one terrestrial planet 6.6 times the mass of Earth. The planets vary in orbital period from 2 days to 124 days. 91 Aquarii is an orange giant star orbited by one planet, 91 Aquarii b. The planet's mass is 2.9 times the mass of Jupiter, and its orbital period is 182 days. Gliese 849 is a red dwarf star orbited by the first known long-period Jupiter-like planet, Gliese 849 b. The planet's mass is 0.99 times that of Jupiter and its orbital period is 1,852 days. There are also less-prominent systems in Aquarius. WASP-6, a type G8 star of magnitude 12.4, is host to one exoplanet, WASP-6 b. The star is 307 parsecs from Earth and has a mass of 0.888 solar masses and a radius of 0.87 solar radii. WASP-6 b was discovered in 2008 by the transit method. It orbits its parent star every 3.36 days at a distance of 0.042 astronomical units (AU). It is 0.503 Jupiter masses but has a proportionally larger radius of 1.224 Jupiter radii. HD 206610, a K0 star located 194 parsecs from Earth, is host to one planet, HD 206610 b. The host star is larger than the Sun; more massive at 1.56 solar masses and larger at 6.1 solar radii. The planet was discovered by the radial velocity method in 2010 and has a mass of 2.2 Jupiter masses. It orbits every 610 days at a distance of 1.68 AU. Much closer to its sun is WASP-47 b, which orbits every 4.15 days only 0.052 AU from its sun, yellow dwarf (G9V) WASP-47. WASP-47 is close in size to the Sun, having a radius of 1.15 solar radii and a mass even closer at 1.08 solar masses. WASP-47 b was discovered in 2011 by the transit method, like WASP-6 b.
It is slightly larger than Jupiter, with a mass of 1.14 Jupiter masses and a radius of 1.15 Jupiter radii. There are several
more single-planet systems in Aquarius. HD 210277, a magnitude 6.63 yellow star located 21.29 parsecs from Earth, is host to one known planet: HD 210277 b. The 1.23 Jupiter mass planet orbits at nearly the same distance as Earth orbits the Sun (1.1 AU), though its orbital period is significantly longer at around 442 days. HD 210277 b was discovered earlier than most of the other planets in Aquarius, detected by the radial velocity method in 1998.
The star it orbits resembles the Sun beyond their similar spectral class; it has a radius of 1.1 solar radii and a mass of 1.09 solar masses. HD 212771 b, a larger planet at 2.3 Jupiter masses, orbits host star HD 212771 at a distance of 1.22 AU. The star itself, barely below the threshold of naked-eye visibility at magnitude 7.6, is a G8IV (yellow subgiant) star located 131 parsecs from Earth. Though it has a similar mass to the Sun (1.15 solar masses), it is significantly less dense, with a radius of 5 solar radii. Its lone planet was discovered in 2010 by the radial velocity method, like several other exoplanets in the constellation. As of 2013, there were only two known multiple-planet systems within the bounds of Aquarius: the Gliese 876 and HD 215152 systems. The former is quite prominent; the latter has only two planets and has a host star farther away at 21.5 parsecs. The HD 215152 system consists of the planets HD 215152 b and HD 215152 c orbiting their K0-type, magnitude 8.13 sun. Both discovered in 2011 by the radial velocity method, the two tiny planets orbit very close to their host star. HD 215152 c is the larger at 0.0097 Jupiter masses (still significantly larger than the Earth, which weighs in at 0.00315 Jupiter masses); its sibling is barely smaller at 0.0087 Jupiter masses. The error in the mass measurements (0.0032 and respectively) is large enough to make this discrepancy statistically insignificant. HD 215152 c also orbits farther from the star than HD 215152 b, 0.0852 AU compared to 0.0652 AU. On 23 February 2017, NASA announced that the ultracool dwarf star TRAPPIST-1 in Aquarius has seven Earth-like rocky planets. Of these, three are in the system's habitable zone and may contain water. The discovery of the TRAPPIST-1 system is seen by astronomers as a significant step toward finding life beyond Earth.
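The orbital figures quoted for these systems can be cross-checked with Kepler's third law (in solar masses, AU, and years, M ≈ a³/P², neglecting the planet's own mass), and the small Jupiter-mass figures are easier to read in Earth masses. A quick sketch using numbers from the text (function names are illustrative, not from any library):

```python
def star_mass_from_orbit(period_days, semimajor_axis_au):
    """Kepler's third law in solar units: M ~= a**3 / P**2,
    with a in AU and P in years, neglecting the planet's mass."""
    period_years = period_days / 365.25
    return semimajor_axis_au ** 3 / period_years ** 2

def jupiter_to_earth_masses(m_jup, earth_in_jup=0.00315):
    """Convert Jupiter masses to Earth masses, using the Earth figure quoted above."""
    return m_jup / earth_in_jup

# WASP-6 b: a 3.36-day orbit at 0.042 AU implies a host-star mass of
# about 0.88 solar masses, consistent with the quoted 0.888.
print(round(star_mass_from_orbit(3.36, 0.042), 2))  # → 0.88

# HD 215152 c and b, converted from Jupiter masses to Earth masses.
print(round(jupiter_to_earth_masses(0.0097), 1))  # → 3.1
print(round(jupiter_to_earth_masses(0.0087), 1))  # → 2.8
```

The close agreement for WASP-6 shows why transit surveys that measure only the period and semi-major axis can still constrain the host star's mass.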
Deep sky objects

Because of its position away from the galactic plane, the majority of deep-sky objects in Aquarius are galaxies, globular clusters, and planetary nebulae. Aquarius contains three deep sky objects that are in the Messier catalog: the globular clusters Messier 2 and Messier 72, and the asterism Messier 73. While M73 was originally catalogued as a sparsely populated open cluster, modern analysis indicates the 6 main stars are not close enough together to fit this definition, reclassifying M73 as an asterism. Two well-known planetary nebulae are also located in Aquarius: the Saturn Nebula (NGC 7009), to the southeast of μ Aquarii; and the famous Helix Nebula (NGC 7293), southwest of δ Aquarii. M2, also catalogued as NGC 7089, is a rich globular cluster located approximately 37,000 light-years from Earth. At magnitude 6.5, it is viewable in small-aperture instruments, but a 100 mm aperture telescope is needed to resolve any stars. M72, also catalogued as NGC 6981, is a small 9th
film broadcast on television; the first anime television series was Instant History (1961–64). An early and influential success was Astro Boy (1963–66), a television series directed by Tezuka based on his manga of the same name. Many animators at Tezuka's Mushi Production later established major anime studios (including Madhouse, Sunrise, and Pierrot). The 1970s saw growth in the popularity of manga, many of which were later animated. Tezuka's work—and that of other pioneers in the field—inspired characteristics and genres that remain fundamental elements of anime today. The giant robot genre (also known as "mecha"), for instance, took shape under Tezuka, developed into the super robot genre under Go Nagai and others, and was revolutionized at the end of the decade by Yoshiyuki Tomino, who developed the real robot genre. Robot anime series such as Gundam and Super Dimension Fortress Macross became instant classics in the 1980s, and the genre remained one of the most popular in the following decades. The bubble economy of the 1980s spurred a new era of high-budget and experimental anime films, including Nausicaä of the Valley of the Wind (1984), Royal Space Force: The Wings of Honnêamise (1987), and Akira (1988). Neon Genesis Evangelion (1995), a television series produced by Gainax and directed by Hideaki Anno, began another era of experimental anime titles, such as Ghost in the Shell (1995) and Cowboy Bebop (1998). In the 1990s, anime also began attracting greater interest in Western countries; major international successes include Sailor Moon and Dragon Ball Z, both of which were dubbed into more than a dozen languages worldwide. In 2003, Spirited Away, a Studio Ghibli feature film directed by Hayao Miyazaki, won the Academy Award for Best Animated Feature at the 75th Academy Awards. It later became the highest-grossing anime film, earning more than $355 million. 
Since the 2000s, an increased number of anime works have been adaptations of light novels and visual novels; successful examples include The Melancholy of Haruhi Suzumiya and Fate/stay night (both 2006). Demon Slayer: Kimetsu no Yaiba the Movie: Mugen Train became the highest-grossing Japanese film and one of the world's highest-grossing films of 2020. It also became the fastest-grossing film in Japanese cinema, earning 10 billion yen ($95.3m; £72m) in just 10 days and beating the previous record of Spirited Away, which took 25 days.

Attributes

Anime differs greatly from other forms of animation in its diverse art styles, methods of animation, production, and process. Visually, anime works exhibit a wide variety of art styles, differing between creators, artists, and studios. While no single art style predominates in anime as a whole, works do share some similar attributes in terms of animation technique and character design.

Technique

Modern anime follows a typical animation production process, involving storyboarding, voice acting, character design, and cel production. Since the 1990s, animators have increasingly used computer animation to improve the efficiency of the production process. Early anime works were experimental, and consisted of images drawn on blackboards, stop motion animation of paper cutouts, and silhouette animation. Cel animation grew in popularity until it came to dominate the medium. In the 21st century, the use of other animation techniques is mostly limited to independent short films, including the stop motion puppet animation work produced by Tadahito Mochinaga, Kihachirō Kawamoto and Tomoyasu Murata. Computers were integrated into the animation process in the 1990s, with works such as Ghost in the Shell and Princess Mononoke mixing cel animation with computer-generated images.
Fuji Film, a major cel production company, announced it would stop cel production, prompting an industry panic to procure cel imports and hastening the switch to digital processes. Prior to the digital era, anime was produced with traditional animation methods using a pose to pose approach. The majority of mainstream anime uses fewer expressive key frames and more in-between animation. Japanese animation studios were pioneers of many limited animation techniques, and have given anime a distinct set of conventions. Unlike Disney animation, where the emphasis is on the movement, anime emphasizes the art quality and lets limited animation techniques make up for the lack of time spent on movement. Such techniques are often used not only to meet deadlines but also as artistic devices. Anime scenes place emphasis on achieving three-dimensional views, and backgrounds are instrumental in creating the atmosphere of the work. The backgrounds are not always invented and are occasionally based on real locations, as exemplified in Howl's Moving Castle and The Melancholy of Haruhi Suzumiya. Oppliger stated that anime is one of the rare mediums where putting together an all-star cast usually comes out looking "tremendously impressive". The cinematic effects of anime differentiate it from the stage plays found in American animation. Anime is cinematically shot as if by camera, with techniques ranging from panning, zooming, and distance and angle shots to more complex dynamic shots that would be difficult to produce in reality. In anime, the animation is produced before the voice acting, contrary to American animation, in which the voice acting is recorded first.

Characters

The body proportions of human anime characters tend to accurately reflect the proportions of the human body in reality. The height of the head is considered by the artist as the base unit of proportion. Head heights can vary, but most anime characters are about seven to eight heads tall.
Anime artists occasionally make deliberate modifications to body proportions to produce super deformed characters that feature a disproportionately small body compared to the head; many super deformed characters are two to four heads tall. Some anime works like Crayon Shin-chan completely disregard these proportions, in such a way that they resemble caricatured Western cartoons. A common anime character design convention is exaggerated eye size. The animation of characters with large eyes in anime can be traced back to Osamu Tezuka, who was deeply influenced by such early animation characters as Betty Boop, who was drawn with disproportionately large eyes. Tezuka is a central figure in anime and manga history, whose iconic art style and character designs allowed for the entire range of human emotions to be depicted solely through the eyes. The artist adds variable color shading to the eyes and particularly to the cornea to give them greater depth. Generally, a mixture of a light shade, the tone color, and a dark shade is used. Cultural anthropologist Matt Thorn argues that Japanese animators and audiences do not perceive such stylized eyes as inherently more or less foreign. However, not all anime characters have large eyes. For example, the works of Hayao Miyazaki are known for having realistically proportioned eyes, as well as realistic hair colors on their characters. Hair in anime is often unnaturally lively and colorful or uniquely styled. The movement of hair in anime is exaggerated and "hair action" is used to emphasize the action and emotions of characters for added visual effect. Poitras traces hairstyle color to cover illustrations on manga, where eye-catching artwork and colorful tones are attractive for children's manga. Despite being produced for a domestic market, anime features characters whose race or nationality is not always defined, and this is often a deliberate decision, such as in the Pokémon animated series. 
Anime and manga artists often draw from a common canon of iconic facial expression illustrations to denote particular moods and thoughts. These techniques are often different in form than their counterparts in Western animation, and they include a fixed iconography that is used as shorthand for certain emotions and moods. For example, a male character may develop a nosebleed when aroused. A variety of visual symbols are employed, including sweat drops to depict nervousness, visible blushing for embarrassment, or glowing eyes for an intense glare. Another recurring sight gag is the use of chibi (deformed, simplified character designs) figures to comedically punctuate emotions like confusion or embarrassment.

Music

The opening and credits sequences of most anime television series are accompanied by J-pop or J-rock songs, often by reputed bands. They may be written with the series in mind, but are also aimed at the general music market, and therefore often allude only vaguely, or not at all, to the thematic settings or plot of the series. Such songs are also often used as incidental music ("insert songs") in an episode, in order to highlight particularly important scenes.

Genres

Anime are often classified by target demographic, including shoujo, shounen, and a diverse range of genres targeting an adult audience. Shoujo and shounen anime sometimes contain elements popular with children of both sexes in an attempt to gain crossover appeal. Adult anime may feature a slower pace or greater plot complexity that younger audiences may typically find unappealing, as well as adult themes and situations. A subset of adult anime works featuring pornographic elements are labeled "R18" in Japan, and are internationally known as hentai.
By contrast, some anime subgenres incorporate ecchi, sexual themes or undertones without depictions of sexual intercourse, as typified in the comedic or harem genres; due to its popularity among adolescent and adult anime enthusiasts, the inclusion of such elements is considered a form of fan service. Some genres explore homosexual romances, such as yaoi (male homosexuality) and yuri (female homosexuality). While often used in a pornographic context, the terms yaoi and yuri can also be used broadly in a wider context to describe or focus on the themes or the development of the relationships themselves. Anime's genre classification differs from that of other types of animation and does not lend itself to simple categorization. Gilles Poitras compared labeling Gundam 0080, with its complex depiction of war, a "giant robot" anime to simply labeling War and Peace a "war novel". Science fiction is a major anime genre and includes important historical works like Tezuka's Astro Boy and Yokoyama's Tetsujin 28-go. A major subgenre of science fiction is mecha, with the Gundam metaseries being iconic. The diverse fantasy genre includes works based on Asian and Western traditions and folklore; examples include the Japanese feudal fairytale InuYasha, and the depiction of Scandinavian goddesses who move to Japan to maintain a computer called Yggdrasil in Ah! My Goddess. Genre crossing in anime is also prevalent, such as the blend of fantasy and comedy in Dragon Half, and the incorporation of slapstick humor in the crime anime film Castle of Cagliostro. Other subgenres found in anime include magical girl, harem, sports, martial arts, literary adaptations, medievalism, and war.

Formats

Early anime works were made for theatrical viewing, and required musical components to be played live before sound and vocal components were added to the production. In 1958, Nippon Television aired Mogura no Abanchūru ("Mole's Adventure"), both the first televised and the first color anime to debut.
It was not until the 1960s that the first televised series were broadcast, and television has remained a popular medium for anime since. Works released in a direct-to-video format are called "original video animation" (OVA) or "original animation video" (OAV), and are typically
not released theatrically or televised prior to home media release. The emergence of the Internet has led some animators to distribute works online in a format called "original net anime" (ONA). The home distribution of anime releases was popularized in the 1980s with the VHS and LaserDisc formats. The VHS NTSC video format, used in both Japan and the United States, is credited with aiding the rising popularity of anime in the 1990s. The LaserDisc and VHS formats were superseded by the DVD format, which offered unique advantages, including multiple subtitle and dub tracks on the same disc. The DVD format also has drawbacks in its use of region coding, adopted by the industry to address licensing, piracy, and export problems by restricting playback to the region indicated on the DVD player.
The Video CD (VCD) format was popular in Hong Kong and Taiwan, but became only a minor format in the United States, where it was closely associated with bootleg copies. A key characteristic of many anime television shows is serialization, where a continuous story arc stretches over multiple episodes or seasons. Traditional American television had an episodic format, with each episode typically consisting of a self-contained story. In contrast, anime shows such as Dragon Ball Z followed a serialized format, which distinguished them from traditional American television shows; serialization has since also become a common characteristic of American streaming television shows during the "Peak TV" era.

Industry

The animation industry consists of more than 430 production companies, with some of the major studios including Toei Animation, Gainax, Madhouse, Gonzo, Sunrise, Bones, TMS Entertainment, Nippon Animation, P.A.Works, Studio Pierrot and Studio Ghibli. Many of the studios are organized into a trade association, The Association of Japanese Animations. There is also a labor union for workers in the industry, the Japanese Animation Creators Association. Studios will often work together to produce more complex and costly projects, as was done with Studio Ghibli's Spirited Away. An anime episode can cost between US$100,000 and US$300,000 to produce. In 2001, animation accounted for 7% of the Japanese film market, above the 4.6% market share for live-action works. The popularity and success of anime is seen through the profitability of the DVD market, which contributed nearly 70% of total sales. According to a 2016 article in Nikkei Asian Review, Japanese television stations have bought over worth of anime from production companies "over the past few years", compared with under from overseas.
There has been a rise in sales of shows to television stations in Japan, caused by late-night anime with adults as the target demographic. This type of anime is less popular outside Japan, being considered "more of a niche product". Spirited Away (2001) is the all-time highest-grossing film in Japan. It was also the highest-grossing anime film worldwide until it was overtaken by Makoto Shinkai's 2016 film Your Name. Anime films represent a large part of the highest-grossing Japanese films each year in Japan, accounting for 6 of the top 10 in 2014, 2015, and 2016. Anime has to be licensed by companies in other countries in order to be legally released. While anime has been licensed by its Japanese owners for use outside Japan since at least the 1960s, the practice became well-established in the United States in the late 1970s to early 1980s, when such TV series as Gatchaman and Captain Harlock were licensed from their Japanese parent companies for distribution in the US market. The trend towards American distribution of anime continued into the 1980s with the licensing of titles such as Voltron and the 'creation' of new series such as Robotech through use of source material from several original series. In the early 1990s, several companies began to experiment with the licensing of less children-oriented material. Some, such as A.D. Vision, and Central Park Media and its imprints, achieved fairly substantial commercial success and went on to become major players in the now very lucrative American anime market. Others, such as AnimEigo, achieved limited success. Many companies created directly by Japanese parent companies did not do as well, most releasing only one or two titles before ending their American operations. Licenses are expensive, often hundreds of thousands of dollars for one series and tens of thousands for one movie. The prices vary widely; for example, Jinki: Extend cost only $91,000 to license while Kurau Phantom Memory cost $960,000.
Simulcast Internet streaming rights can be cheaper, with prices around $1,000–$2,000 an episode, but can also be more expensive, with some series costing more than per episode. The anime market in the United States was worth approximately $2.74 billion in 2009; by 2022 it was worth approximately $25 billion. Dubbed animation began airing in the United States in 2000 on networks such as The WB and Cartoon Network's Adult Swim. By 2005, five of the top ten anime titles had previously aired on Cartoon Network. As a part of localization, some editing of cultural references may occur to better suit the frame of reference of the non-Japanese audience. The cost of English localization averages US$10,000 per episode. The industry has been subject to both praise and condemnation for fansubs, the addition of unlicensed and unauthorized subtitled translations of anime series or films. Fansubs, which were originally distributed on bootlegged VHS cassettes in the 1980s, have been freely available and disseminated online since the 1990s. Since this practice raises copyright and piracy concerns, fansubbers tend to adhere to an unwritten moral code of destroying or no longer distributing an anime once an official translated or subtitled version becomes licensed. They also try to encourage viewers to buy an official copy of the release once it comes out in English, although fansubs typically continue to circulate through file-sharing networks. Even so, the Japanese animation industry's lax enforcement has tended to overlook these issues, allowing the practice to grow underground and build popularity until there is demand for official high-quality releases from animation companies. This has contributed to the global popularity of Japanese animation, which reached $40 million in sales in 2004.
Since the 2010s anime has become a global multibillion-dollar industry, setting a sales record in 2017 of ¥2.15 trillion ($19.8 billion), driven largely by demand from overseas audiences. In 2019, Japan's anime industry was valued at $24 billion a year, with 48% of that revenue coming from overseas, now its largest market segment. By 2025 the anime industry is expected to reach a value of $30 billion, with over 60% of that revenue coming from overseas. Markets The Japan External Trade Organization (JETRO) valued the domestic anime market in Japan at (), including from licensed products, in 2005. JETRO reported sales of overseas anime exports in 2004 to be (). JETRO valued the anime market in the United States at (), including in home video sales and over from licensed products, in 2005. JETRO projected in 2005 that the worldwide anime market, including sales of licensed products, would grow to (). The anime market in China was valued at in 2017, and is projected to reach by 2020. By 2030 the global anime market is expected to reach a value of $48.3 billion, with the largest contributors to this growth being North America, Europe, China and the Middle East. Awards The anime industry has several annual awards that honor the year's best works. Major annual awards in Japan include the Ōfuji Noburō Award, the Mainichi Film Award for Best Animation Film, the Animation Kobe Awards, the Japan Media Arts Festival animation awards, the Tokyo Anime Award and the Japan Academy Prize for Animation of the Year. In the United States, anime films compete in the Crunchyroll Anime Awards. There were also the American Anime Awards, which were designed to recognize excellence in anime titles nominated by the industry, and were held only once, in 2006. Anime productions have also been nominated for and won awards not exclusive to anime, such as the Academy Award for Best Animated Feature or the Golden Bear.
Working conditions In recent years the anime industry has been accused by both Japanese and foreign media of underpaying and overworking its animators. In response, Japanese Prime Minister Fumio Kishida promised to improve the working conditions and salaries of all animators and creators working in the industry. A few anime studios, such as MAPPA, have taken steps to improve the working conditions of their employees. Globalization and Cultural Impact Anime has become commercially profitable in Western countries, as demonstrated by early commercially successful Western adaptations of anime such as Astro Boy and Speed Racer. Early American adaptations in the 1960s prompted Japan to expand into the continental European market, first with productions aimed at European and Japanese children, such as Heidi, Vicky the Viking and Barbapapa, which aired in various countries. Italy, Spain and France developed a particular interest in Japan's output, owing to its low price and prolific production; in fact, Italy imported the most anime outside of Japan. These mass imports in turn influenced anime's popularity in South American, Arabic and German markets. The early 1980s saw the introduction of Japanese anime series into American culture. In the 1990s, Japanese animation slowly gained popularity in America. Media companies such as Viz and Mixx began publishing and releasing animation into the American market. The 1988 film Akira is largely credited with popularizing anime in the Western world during the early 1990s, before anime was further popularized by television shows such as Pokémon and Dragon Ball Z in the late 1990s. By 1997, Japanese anime was the fastest-growing genre in the American video industry. The growth of the Internet later provided international audiences with an easy way to access Japanese content. Early on, online piracy played a major role in this, though over time many legal alternatives appeared.
Since the 2010s various streaming services have become increasingly involved in the production and licensing of anime for international markets. This is especially the case for services such as Netflix and Crunchyroll, which have large catalogs in Western countries, although as of 2020 anime fans in many developing non-Western countries, such as India and the Philippines, had fewer options for accessing legal content and therefore still turned to online piracy. Since the early 2020s, however, anime has experienced yet another boom in global popularity and demand, driven by the COVID-19 pandemic and by streaming services such as Netflix, Prime Video and Hulu, along with anime-only services like Crunchyroll and Funimation, which have increased both the number of newly licensed anime shows and the size of their catalogs. Netflix reported that, between October 2019 and September 2020, more than member households worldwide had watched at least one anime title on the platform. Anime titles appeared on the platform's top-10 lists in almost 100 countries within that one-year period. As of 2021, Japanese anime was the most in-demand foreign-language programming in the United States, accounting for 30.5% of the market share (by comparison, Spanish and Korean shows accounted for 21% and 11%, respectively). In 2022 the anime series Attack on Titan won the award for "Most in-demand TV series in the world 2021" at the Global TV Demand Awards. Attack on Titan became the first non-English-language series to earn the title of world's most in-demand TV show, previously held only by The Walking Dead and Game of Thrones. Rising interest in anime and Japanese video games has led to an increase in university students in the United Kingdom seeking degrees in the Japanese language. Various anime and manga series have influenced Hollywood in the making of numerous famous movies and characters.
Hollywood itself has produced live-action adaptations of various anime series, such as Ghost in the Shell, Death Note, Dragon Ball Evolution and Cowboy Bebop. However, most of these adaptations have been reviewed negatively by both critics and audiences and have become box-office flops. The main reasons cited for the failure of Hollywood's anime adaptations are the frequent changes to plot and characters from the original source material and the limits of what a live-action movie or series can do compared with an animated counterpart. One notable exception, however, is Alita: Battle Angel, which became a moderate commercial success, receiving generally positive reviews from both critics and audiences for its visual effects and its fidelity to the source material. The movie grossed $404 million worldwide, making it director Robert Rodriguez's highest-grossing film. Anime, alongside many other parts of Japanese pop culture, has helped Japan gain a positive worldwide image and improve its relations with other countries. In 2015, during remarks welcoming Japanese Prime Minister Shinzo Abe to the White House, President Barack Obama thanked Japan for its cultural contributions to the United States by saying: This visit is a celebration of the ties of friendship and family that bind our peoples. I first felt it when I was 6 years old when my mother took me to Japan. I felt it growing up in Hawaii, like communities across our country, home to so many proud Japanese Americans. Today is also a chance for Americans, especially our young people, to say thank you for all the things we love from Japan. Like karate and karaoke. Manga and anime. And, of course, emojis.
In July 2020, after the approval of a Chilean government project allowing citizens of Chile to withdraw up to 10% of their privately held retirement savings, journalist Pamela Jiles celebrated by running through congress with her arms spread out behind her, imitating the signature run of many characters of the anime and manga series Naruto. In April 2021, Peruvian politicians Jorge Hugo Romero of the PPC and Milagros Juárez of the UPP cosplayed as anime characters to court the otaku vote. A 2018 survey by Dentsu, conducted in 20 countries and territories with a sample of 6,600 respondents, found that 34% of those surveyed regarded anime and manga as an area of Japanese excellence, making them the third most-liked "Japanese thing", behind Japanese cuisine (34.6%) and Japanese robotics (35.1%). The advertisement company views
the US (see Angora). History The region's history can be traced back to the Bronze Age Hattic civilization, which was succeeded in the 2nd millennium BC by the Hittites, in the 10th century BC by the Phrygians, and later by the Lydians, Persians, Greeks, Galatians, Romans, Byzantines, and Turks (the Seljuk Sultanate of Rûm, the Ottoman Empire and finally republican Turkey). Ancient history The oldest settlements in and around the city center of Ankara belonged to the Hattic civilization, which existed during the Bronze Age and was gradually absorbed c. 2000–1700 BC by the Indo-European Hittites. The city grew significantly in size and importance under the Phrygians starting around 1000 BC, and experienced a large expansion following the mass migration from Gordion (the capital of Phrygia) after an earthquake severely damaged that city around that time. In Phrygian tradition, King Midas was venerated as the founder of Ancyra, but Pausanias mentions that the city was actually far older, which accords with present archeological knowledge. Phrygian rule was succeeded first by Lydian and later by Persian rule, though the strongly Phrygian character of the peasantry remained, as evidenced by the gravestones of the much later Roman period. Persian sovereignty lasted until the Persians' defeat at the hands of Alexander the Great, who conquered the city in 333 BC. Alexander came from Gordion to Ankara and stayed in the city for a short period. After his death at Babylon in 323 BC and the subsequent division of his empire among his generals, Ankara and its environs fell to Antigonus. Another important expansion took place under the Greeks of Pontos, who came there around 300 BC and developed the city as a trading center for the commerce of goods between the Black Sea ports and Crimea to the north; Assyria, Cyprus, and Lebanon to the south; and Georgia, Armenia and Persia to the east.
By that time the city had also taken its name Ἄγκυρα (Ánkyra, meaning "anchor" in Greek), which, in slightly modified form, provides the modern name of Ankara. Celtic history In 278 BC, the city, along with the rest of central Anatolia, was occupied by a Celtic group, the Galatians, who were the first to make Ankara one of their main tribal centers, the headquarters of the Tectosages tribe. Other centers were Pessinus, today's Ballıhisar, for the Trocmi tribe, and Tavium, to the east of Ankara, for the Tolistobogii tribe. The city was then known as Ancyra. The Celtic element was probably relatively small in numbers: a warrior aristocracy which ruled over Phrygian-speaking peasants. However, the Celtic language continued to be spoken in Galatia for many centuries. At the end of the 4th century, St. Jerome, a native of Dalmatia, observed that the language spoken around Ankara was very similar to that spoken in the northwest of the Roman world near Trier. Roman history The city subsequently passed under the control of the Roman Empire. In 25 BC, Emperor Augustus raised it to the status of a polis and made it the capital city of the Roman province of Galatia. Ankara is famous for the Monumentum Ancyranum (Temple of Augustus and Rome), which contains the official record of the Acts of Augustus, known as the Res Gestae Divi Augusti, an inscription cut in marble on the walls of this temple. The ruins of Ancyra still furnish valuable bas-reliefs, inscriptions and other architectural fragments today. Two other Galatian tribal centers, Tavium near Yozgat, and Pessinus (Balhisar) to the west, near Sivrihisar, continued to be reasonably important settlements in the Roman period, but it was Ancyra that grew into a grand metropolis. An estimated 200,000 people lived in Ancyra in good times during the Roman Empire, a far greater number than would be seen again between the fall of the Roman Empire and the early 20th century.
The small Ankara River ran through the center of the Roman town. It has since been covered and diverted, but it formed the northern boundary of the old town during the Roman, Byzantine and Ottoman periods. Çankaya, the rim of the majestic hill to the south of the present city center, stood well outside the Roman city, but may have been a summer resort. In the 19th century, the remains of at least one Roman villa or large house were still standing not far from where the Çankaya Presidential Residence stands today. To the west, the Roman city extended as far as the area of the Gençlik Park and the railway station, while on the southern side of the hill, it may have extended downwards as far as the site presently occupied by Hacettepe University. It was thus a sizeable city by any standards and much larger than the Roman towns of Gaul or Britannia. Ancyra's importance rested on the fact that it was the junction point where the roads in northern Anatolia running north–south and east–west intersected, giving it major strategic importance for Rome's eastern frontier. The great imperial road running east passed through Ankara, and a succession of emperors and their armies came this way. They were not the only ones to use the Roman highway network, which was equally convenient for invaders. In the second half of the 3rd century, Ancyra was invaded in rapid succession by the Goths coming from the west (who rode far into the heart of Cappadocia, taking slaves and pillaging) and later by the Arabs. For about a decade, the town was one of the western outposts of the Palmyrene empress Zenobia, who took advantage of a period of weakness and disorder in the Roman Empire to set up a short-lived state of her own centered in the Syrian Desert. The town was reincorporated into the Roman Empire under Emperor Aurelian in 272.
The tetrarchy, a system of multiple (up to four) emperors introduced by Diocletian (284–305), seems to have engaged in a substantial program of rebuilding and of road construction from Ancyra westwards to Germe and Dorylaeum (now Eskişehir). In its heyday, Roman Ancyra was a large market and trading center, but it also functioned as a major administrative capital, where a high official ruled from the city's Praetorium, a large administrative palace or office. During the 3rd century, life in Ancyra, as in other Anatolian towns, seems to have become somewhat militarized in response to the invasions and instability of the period. Byzantine history The city is well known during the 4th century as a center of Christian activity (see also below), through frequent imperial visits and through the letters of the pagan scholar Libanius. Bishop Marcellus of Ancyra and Basil of Ancyra were active in the theological controversies of their day, and the city was the site of no fewer than three church synods, in 314, 358 and 375, the latter two in favor of Arianism. The city was visited by Emperor Constans I (r. 337–350) in 347 and 350, by Julian (r. 361–363) during his Persian campaign in 362, and by Julian's successor Jovian (r. 363–364) in the winter of 363/364 (he entered his consulship while in the city). After Jovian's death soon afterwards, Valentinian I (r. 364–375) was acclaimed emperor at Ancyra, and in the next year his brother Valens (r. 364–378) used Ancyra as his base against the usurper Procopius. When the province of Galatia was divided sometime in 396/99, Ancyra remained the civil capital of Galatia I, as well as its ecclesiastical center (metropolitan see). Emperor Arcadius (r. 383–408) frequently used the city as his summer residence, and some information about the ecclesiastical affairs of the city during the early 5th century is found in the works of Palladius of Galatia and Nilus of Galatia. In 479, the rebel Marcian attacked the city, without being able to capture it.
In 610/11, Comentiolus, brother of Emperor Phocas (r. 602–610), launched his own unsuccessful rebellion in the city against Heraclius (r. 610–641). Ten years later, in 620 or more likely 622, it was captured by the Sassanid Persians during the Byzantine–Sassanid War of 602–628. Although the city returned to Byzantine hands after the end of the war, the Persian presence left traces in the city's archeology, and likely began the process of its transformation from a late antique city into a medieval fortified settlement. In 654, the city was captured for the first time by the Arabs of the Rashidun Caliphate, under Muawiyah, the future founder of the Umayyad Caliphate. At about the same time, the themes were established in Anatolia, and Ancyra became capital of the Opsician Theme, which was the largest and most important theme until it was split up under Emperor Constantine V (r. 741–775); Ancyra then became the capital of the new Bucellarian Theme. The city was captured at least temporarily by the Umayyad prince Maslama ibn Hisham in 739/40, the last of the Umayyads' territorial gains from the Byzantine Empire. Ancyra was attacked without success by Abbasid forces in 776 and in 798/99. In 805, Emperor Nikephoros I (r. 802–811) strengthened its fortifications, a fact which probably saved it from sack during the large-scale invasion of Anatolia by Caliph Harun al-Rashid in the next year. Arab sources report that Harun and his successor al-Ma'mun (r. 813–833) took the city, but this information is a later invention. In 838, however, during the Amorium campaign, the armies of Caliph al-Mu'tasim (r. 833–842) converged and met at the city; abandoned by its inhabitants, Ancyra was razed to the ground before the Arab armies went on to besiege and destroy Amorium. In 859, Emperor Michael III (r. 842–867) came to the city during a campaign against the Arabs, and ordered its fortifications restored. In 872, the city was menaced, but not taken, by the Paulicians under Chrysocheir.
The last Arab raid to reach the city was undertaken in 931 by the Abbasid governor of Tarsus, Thamal al-Dulafi, but again the city was not captured. Ecclesiastical history Early Christian martyrs of Ancyra, about whom little is known, included Proklos and Hilarios, natives of the otherwise unknown nearby village of Kallippi, who suffered repression under the emperor Trajan (98–117). In the 280s we hear of Philumenos, a Christian corn merchant from southern Anatolia who was captured and martyred in Ankara, and of Eustathius. As in other Roman towns, the reign of Diocletian marked the culmination of the persecution of the Christians. In 303, Ancyra was one of the towns where the co-emperors Diocletian and his deputy Galerius launched their anti-Christian persecution. In Ancyra, their first target was the 38-year-old bishop of the town, whose name was Clement. Clement's Life describes how he was taken to Rome, then sent back, and forced to undergo many interrogations and hardships before he, his brother, and various companions were put to death. The remains of the church of St. Clement can be found today in a building just off Işıklar Caddesi in the Ulus district; quite possibly this marks the site where Clement was originally buried. Four years later, a doctor of the town named Plato and his brother Antiochus also became celebrated martyrs under Galerius. Theodotus of Ancyra is also venerated as a saint. However, the persecution proved unsuccessful, and in 314 Ancyra was the center of an important council of the early church; its 25 disciplinary canons constitute one of the most important documents in the early history of the administration of the Sacrament of Penance. The synod also considered ecclesiastical policy for the reconstruction of the Christian Church after the persecutions, and in particular the treatment of lapsi—Christians who had given in to forced paganism (sacrifices) to avoid martyrdom during these persecutions.
Though paganism was probably tottering in Ancyra in Clement's day, it may still have been the majority religion. Twenty years later, Christianity and monotheism had taken its place. Ancyra quickly turned into a Christian city, with a life dominated by monks and priests and theological disputes. The town council or senate gave way to the bishop as the main local figurehead. During the middle of the 4th century, Ancyra was involved in the complex theological disputes over the nature of Christ, and a form of Arianism seems to have originated there. In 362–363, Emperor Julian passed through Ancyra on his way to an ill-fated campaign against the Persians, and according to Christian sources, engaged in a persecution of various holy men. The stone base for a statue, with an inscription describing Julian as "Lord of the whole world from the British Ocean to the barbarian nations", can still be seen, built into the eastern side of the inner circuit of the walls of Ankara Castle. The Column of Julian which was erected in honor of the emperor's visit to the city in 362 still stands today. In 375, Arian bishops met at Ancyra and deposed several bishops, among them St. Gregory of Nyssa. In the late 4th century, Ancyra became something of an imperial holiday resort. After Constantinople became the East Roman capital, emperors in the 4th and 5th centuries would retire from the humid summer weather on the Bosporus to the drier mountain atmosphere of Ancyra. Theodosius II (408–450) kept his court in Ancyra in the summers. Laws issued in Ancyra testify to the time they spent there. The Metropolis of Ancyra continued to be a residential see of the Eastern Orthodox Church until the 20th century, with about 40,000 faithful, mostly Turkish-speaking, but that situation ended as a result of the 1923 Convention Concerning the Exchange of Greek and Turkish Populations. 
The earlier Armenian genocide put an end to the residential eparchy of Ancyra of the Armenian Catholic Church, which had been established in 1850. It is also a titular metropolis of the Ecumenical Patriarchate of Constantinople. Both the Ancient Byzantine Metropolitan archbishopric and the 'modern' Armenian eparchy are now listed by the Catholic Church as titular sees, with separate apostolic successions. Seljuk and Ottoman history After the Battle of Manzikert in 1071, the Seljuk Turks overran much of Anatolia. By 1073, the Turkish settlers had reached the vicinity of Ancyra, and the city was captured shortly after, at the latest by the time of the rebellion of Nikephoros Melissenos in 1081. In 1101, when the Crusade under
By 1073, the Turkish settlers had reached the vicinity of Ancyra, and the city was captured shortly after, at the latest by the time of the rebellion of Nikephoros Melissenos in 1081. In 1101, when the Crusade under Raymond IV of Toulouse arrived, the city had been under Danishmend control for some time. The Crusaders captured the city, and handed it over to the Byzantine emperor Alexios I Komnenos (r. 1081–1118). Byzantine rule did not last long, and the city was captured by the Seljuk Sultanate of Rum at some unknown point; in 1127, it returned to Danishmend control until 1143, when the Seljuks of Rum retook it. After the Battle of Köse Dağ in 1243, in which the Mongols defeated the Seljuks, most of Anatolia became part of the dominion of the Mongols. Taking advantage of Seljuk decline, a semi-religious caste of craftsmen and tradespeople named the Ahiler chose Angora as their independent city-state in 1290. Orhan I, the second Bey of the Ottoman Empire, captured the city in 1356. Timur defeated Bayezid I at the Battle of Ankara in 1402 and took the city, but in 1403 Angora was again under Ottoman control. The Levant Company maintained a factory in the town from 1639 to 1768. In the 19th century, its population was estimated at 20,000 to 60,000. It was sacked by Egyptians under Ibrahim Pasha in 1832. From 1867 to 1922, the city served as the capital of the Angora Vilayet, which included most of ancient Galatia. Prior to World War I, the town had a British consulate and a population of around 28,000, a portion of whom were Christian. Turkish republican capital Following the Ottoman defeat in World War I, the Ottoman capital Constantinople (modern Istanbul) and much of Anatolia were occupied by the Allies, who planned to share these lands between Armenia, France, Greece, Italy and the United Kingdom, leaving for the Turks the core piece of land in central Anatolia.
In response, the leader of the Turkish nationalist movement, Mustafa Kemal Atatürk, established the headquarters of his resistance movement in Angora in 1920. After the Turkish War of Independence was won and the Treaty of Sèvres was superseded by the Treaty of Lausanne (1923), the Turkish nationalists replaced the Ottoman Empire with the Republic of Turkey on 29 October 1923. A few days earlier, on 13 October 1923, Angora had officially replaced Constantinople as the new Turkish capital city, and Republican officials declared the city's name to be Ankara. After Ankara became the capital of the newly founded Republic of Turkey, new development divided the city into an old section, called Ulus, and a new section, called Yenişehir. Ancient buildings reflecting Roman, Byzantine, and Ottoman history and narrow winding streets mark the old section. The new section, now centered on Kızılay Square, has the trappings of a more modern city: wide streets, hotels, theaters, shopping malls, and high-rises. Government offices and foreign embassies are also located in the new section. Ankara has experienced phenomenal growth since it was made Turkey's capital in 1923, when it was "a small town of no importance". In 1924, the year after the government had moved there, Ankara had about 35,000 residents. By 1927 there were 44,553 residents and by 1950 the population had grown to 286,781. Ankara continued to grow rapidly during the latter half of the 20th century and eventually outranked Izmir as Turkey's second-largest city, after Istanbul. Ankara's urban population reached 4,587,558 in 2014, while the population of Ankara Province reached 5,150,072 in 2015. After 1930, the city became known officially in Western languages as Ankara, and after the late 1930s the public stopped using the name "Angora". The Presidential Palace of Turkey is situated in Ankara and serves as the main residence of the president.
Economy and infrastructure The city has exported mohair (from the Angora goat) and Angora wool (from the Angora rabbit) internationally for centuries. In the 19th century, the city also exported substantial amounts of goat and cat skins, gum, wax, honey, berries, and madder root. It was connected to Istanbul by railway before the First World War, continuing to export mohair, wool, berries, and grain. The Central Anatolia Region is one of the primary locations of grape and wine production in Turkey, and Ankara is particularly famous for its Kalecik Karası and Muscat grapes; and its Kavaklıdere wine, which is produced in the Kavaklıdere neighborhood within the Çankaya district of the city. Ankara is also famous for its pears. Another renowned natural product of Ankara is its indigenous type of honey (Ankara Balı) which is known for its light color and is mostly produced by the Atatürk Forest Farm and Zoo in the Gazi district, and by other facilities in the Elmadağ, Çubuk and Beypazarı districts. Çubuk-1 and Çubuk-2 dams on the Çubuk Brook in Ankara were among the first dams constructed in the Turkish Republic. Ankara is the center of the state-owned and private Turkish defence and aerospace companies, where the industrial plants and headquarters of the Turkish Aerospace Industries, MKE, ASELSAN, HAVELSAN, ROKETSAN, FNSS, Nurol Makina, and numerous other firms are located. Exports to foreign countries from these defense and aerospace firms have steadily increased in the past decades. The IDEF in Ankara is one of the largest international expositions of the global arms industry. A number of the global automotive companies also have production facilities in Ankara, such as the German bus and truck manufacturer MAN SE. Ankara hosts the OSTIM Industrial Zone, Turkey's largest industrial park. 
A large share of employment in Ankara is provided by state institutions, such as the ministries, sub-ministries, and other administrative bodies of the Turkish government. There are also many foreign citizens working as diplomats or clerks in the embassies of their respective countries. Geography Ankara and its province are located in the Central Anatolia Region of Turkey. The Çubuk Brook flows through the city center of Ankara. It is connected in the western suburbs of the city to the Ankara River, which is a tributary of the Sakarya River. Climate Ankara has a cold semi-arid climate (Köppen climate classification: BSk). Under the Trewartha climate classification, Ankara has a temperate continental climate (Dc). Due to its elevation and inland location, Ankara has cold and snowy winters, and hot and dry summers. Rainfall occurs mostly during the spring and autumn. The city lies in USDA Hardiness zone 7b, and its annual average precipitation is fairly low; nevertheless, precipitation can be observed throughout the year. Monthly mean temperatures reach their minimum in January and their maximum in July. Demographics Ankara had a population of 75,000 in 1927. As of 2019, Ankara Province has a population of 5,639,076. When Ankara became the capital of the Republic of Turkey in 1923, it was designated as a planned city for 500,000 future inhabitants. During the 1920s, 1930s and 1940s, the city grew at a planned and orderly pace. However, from the 1950s onward, the city grew much faster than envisioned, because unemployment and poverty forced people to migrate from the countryside into the city in order to seek a better standard of living. As a result, many illegal houses called gecekondu were built around
and that they should be considered Old Arabic. Linguists generally believe that "Old Arabic" (a collection of related dialects that constitute the precursor of Arabic) first emerged around the 1st century CE. Previously, the earliest attestation of Old Arabic was thought to be a single 1st century CE inscription in Sabaic script at Qaryat Al-Faw, in southern present-day Saudi Arabia. However, this inscription does not participate in several of the key innovations of the Arabic language group, such as the conversion of Semitic mimation to nunation in the singular. It is best reassessed as a separate language on the Central Semitic dialect continuum. It was also thought that Old Arabic coexisted alongside—and then gradually displaced—epigraphic Ancient North Arabian (ANA), which was theorized to have been the regional tongue for many centuries. ANA, despite its name, was considered a very distinct language from "Arabic", and mutually unintelligible with it. Scholars named its variant dialects after the towns where the inscriptions were discovered (Dadanitic, Taymanitic, Hismaic, Safaitic). However, most arguments for a single ANA language or language family were based on the shape of the definite article, a prefixed h-. It has been argued that the h- is an archaism and not a shared innovation, and thus unsuitable for language classification, rendering the hypothesis of an ANA language family untenable. Safaitic and Hismaic, previously considered ANA, should be considered Old Arabic because they participate in the innovations common to all forms of Arabic. The earliest attestation of continuous Arabic text in an ancestor of the modern Arabic script is three lines of poetry by a man named Garm(')allāhe, found in En Avdat, Israel, and dated to around 125 CE. This is followed by the Namara inscription, an epitaph of the Lakhmid king Imru' al-Qays bar 'Amro, dating to 328 CE, found at Namara, Syria.
From the 4th to the 6th centuries, the Nabataean script evolved into the Arabic script recognizable from the early Islamic era. There are inscriptions in an undotted, 17-letter Arabic script dating to the 6th century CE, found at four locations in Syria (Zabad, Jabal 'Usays, Harran, Umm al-Jimaal). The oldest surviving papyrus in Arabic dates to 643 CE, and it uses dots to produce the modern 28-letter Arabic alphabet. The language of that papyrus and of the Qur'an is referred to by linguists as "Quranic Arabic", as distinct from its codification soon thereafter into "Classical Arabic". Old Hejazi and Classical Arabic In late pre-Islamic times, a transdialectal and transcommunal variety of Arabic emerged in the Hejaz which continued to lead a parallel life after literary Arabic had been institutionally standardized in the 2nd and 3rd centuries of the Hijra, most strongly in Judeo-Christian texts, keeping alive ancient features eliminated from the "learned" tradition (Classical Arabic). This variety and both its classicizing and "lay" iterations have been termed Middle Arabic in the past, but they are thought to continue an Old Hejazi register. It is clear that the orthography of the Qur'an was not developed for the standardized form of Classical Arabic; rather, it shows the attempt on the part of writers to record an archaic form of Old Hejazi. In the late 6th century AD, a relatively uniform intertribal "poetic koine" distinct from the spoken vernaculars developed based on the Bedouin dialects of Najd, probably in connection with the court of al-Ḥīra. During the first Islamic century, the majority of Arabic poets and Arabic-writing persons spoke Arabic as their mother tongue. Their texts, although mainly preserved in far later manuscripts, contain traces of non-standardized Classical Arabic elements in morphology and syntax. Standardization Abu al-Aswad al-Du'ali (c.
603–689) is credited with standardizing Arabic grammar, or an-naḥw ("the way"), and pioneering a system of diacritics to differentiate consonants (nuqat l-i'jām, "pointing for non-Arabs") and indicate vocalization (at-tashkil). Al-Khalil ibn Ahmad al-Farahidi (718–786) compiled the first Arabic dictionary, Kitāb al-'Ayn ("The Book of the Letter ع"), and is credited with establishing the rules of Arabic prosody. Al-Jahiz (776–868) proposed to Al-Akhfash al-Akbar an overhaul of the grammar of Arabic, but it would not come to pass for two centuries. The standardization of Arabic reached completion around the end of the 8th century. The first comprehensive description of the ʿarabiyya "Arabic", Sībawayhi's al-Kitāb, is based first of all upon a corpus of poetic texts, in addition to Qur'an usage and Bedouin informants whom he considered to be reliable speakers of the ʿarabiyya. Spread Arabic spread with the spread of Islam. Following the early Muslim conquests, Arabic gained vocabulary from Middle Persian and Turkish. In the early Abbasid period, many Classical Greek terms entered Arabic through translations carried out at Baghdad's House of Wisdom. By the 8th century, knowledge of Classical Arabic had become an essential prerequisite for rising into the higher classes throughout the Islamic world, both for Muslims and non-Muslims. For example, Maimonides, the Andalusi Jewish philosopher, authored works in Judeo-Arabic—Arabic written in Hebrew script—including his famous The Guide for the Perplexed (Dalālat al-ḥāʾirīn). Development Ibn Jinni of Mosul, a pioneer in phonology, wrote prolifically in the 10th century on Arabic morphology and phonology in works such as Kitāb Al-Munṣif and Kitāb Al-Muḥtasab. Ibn Mada' of Cordoba (1116–1196) realized the overhaul of Arabic grammar first proposed by Al-Jahiz 200 years prior. The Maghrebi lexicographer Ibn Manzur compiled Lisān al-ʿArab (لسان العرب, "Tongue of the Arabs"), a major reference dictionary of Arabic, in 1290.
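The two layers of diacritics described above survive in modern text encoding: the tashkil vocalization signs (fatha, damma, kasra, shadda, and so on) are Unicode combining marks attached to base letters, while the consonant-distinguishing i'jām dots are intrinsic to the letter codepoints themselves. A minimal Python sketch (the example word is purely illustrative) shows how the vocalization layer can be separated from the consonantal skeleton:

```python
import unicodedata

def strip_tashkil(text: str) -> str:
    """Remove Arabic vocalization marks (tashkil) from a string.

    Harakat such as fatha, damma, kasra, and shadda are encoded as
    Unicode combining characters, so filtering on unicodedata.combining()
    drops them while keeping the base letters. The i'jam dots are part
    of the letter codepoints and are therefore preserved.
    """
    return "".join(ch for ch in text if not unicodedata.combining(ch))

# "kataba" (he wrote), written with fatha marks, reduces to its
# bare consonantal skeleton:
print(strip_tashkil("كَتَبَ"))  # كتب
```

This mirrors ordinary modern Arabic print, which usually omits the tashkil and leaves only the consonantal skeleton, except in the Qur'an, poetry, and pedagogical texts.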
Neo-Arabic Charles Ferguson's koine theory (Ferguson 1959) claims that the modern Arabic dialects collectively descend from a single military koine that sprang up during the Islamic conquests; this view has been challenged in recent times. Ahmad al-Jallad proposes that there were at least two considerably distinct types of Arabic on the eve of the conquests: Northern and Central (Al-Jallad 2009). The modern dialects emerged from a new contact situation produced following the conquests. Instead of the emergence of a single or multiple koines, the dialects contain several sedimentary layers of borrowed and areal features, which they absorbed at different points in their linguistic histories. According to Versteegh and Bickerton, colloquial Arabic dialects arose from pidginized Arabic formed from contact between Arabs and conquered peoples. Pidginization and subsequent creolization among Arabs and arabized peoples could explain the relative morphological and phonological simplicity of vernacular Arabic compared to Classical Arabic and MSA. In around the 11th and 12th centuries in al-Andalus, the zajal and muwashah poetry forms developed in the dialectal Arabic of Cordoba and the Maghreb. Nahda The Nahda was a cultural and especially literary renaissance of the 19th century in which writers sought "to fuse Arabic and European forms of expression." According to James L. Gelvin, "Nahda writers attempted to simplify the Arabic language and script so that it might be accessible to a wider audience." In the wake of the industrial revolution and European hegemony and colonialism, pioneering Arabic presses, such as the Amiri Press established by Muhammad Ali (1819), dramatically changed the diffusion and consumption of Arabic literature and publications. Rifa'a al-Tahtawi proposed the establishment of a school of languages in 1836 and led a translation campaign that highlighted the need for a lexical injection in Arabic, to suit concepts of the industrial and post-industrial age.
In response, a number of Arabic academies modeled after the Académie française were established with the aim of developing standardized additions to the Arabic lexicon to suit these transformations, first in Damascus (1919), then in Cairo (1932), Baghdad (1948), Rabat (1960), Amman (1977), (1993), and Tunis (1993). In 1997, a bureau of Arabization standardization was added to the Educational, Cultural, and Scientific Organization of the Arab League. These academies and organizations have worked toward the Arabization of the sciences, creating terms in Arabic to describe new concepts, toward the standardization of these new terms throughout the Arabic-speaking world, and toward the development of Arabic as a world language. This gave rise to what Western scholars call Modern Standard Arabic. From the 1950s, Arabization became a postcolonial nationalist policy in countries such as Tunisia, Algeria, Morocco, and Sudan. Classical, Modern Standard and spoken Arabic Arabic usually refers to Standard Arabic, which Western linguists divide into Classical Arabic and Modern Standard Arabic. It could also refer to any of a variety of regional vernacular Arabic dialects, which are not necessarily mutually intelligible. Classical Arabic is the language found in the Quran, used from the period of Pre-Islamic Arabia to that of the Abbasid Caliphate. Classical Arabic is prescriptive, according to the syntactic and grammatical norms laid down by classical grammarians (such as Sibawayh) and the vocabulary defined in classical dictionaries (such as the Lisān al-ʻArab). Modern Standard Arabic (MSA) largely follows the grammatical standards of Classical Arabic and uses much of the same vocabulary. However, it has discarded some grammatical constructions and vocabulary that no longer have any counterpart in the spoken varieties and has adopted certain new constructions and vocabulary from the spoken varieties. 
Much of the new vocabulary is used to denote concepts that have arisen in the industrial and post-industrial era, especially in modern times. Due to its grounding in Classical Arabic, Modern Standard Arabic is removed over a millennium from everyday speech, which is construed as a multitude of dialects of this language. These dialects and Modern Standard Arabic are described by some scholars as not mutually comprehensible. The former are usually acquired in families, while the latter is taught in formal education settings. However, there have been studies reporting some degree of comprehension of stories told in the standard variety among preschool-aged children. The relation between Modern Standard Arabic and these dialects is sometimes compared to that of Classical Latin and Vulgar Latin vernaculars (which became Romance languages) in medieval and early modern Europe. This view though does not take into account the widespread use of Modern Standard Arabic as a medium of audiovisual communication in today's mass media—a function Latin has never performed. MSA is the variety used in most current, printed Arabic publications, spoken by some of the Arabic media across North Africa and the Middle East, and understood by most educated Arabic speakers. "Literary Arabic" and "Standard Arabic" ( ) are less strictly defined terms that may refer to Modern Standard Arabic or Classical Arabic. Some of the differences between Classical Arabic (CA) and Modern Standard Arabic (MSA) are as follows: Certain grammatical constructions of CA that have no counterpart in any modern vernacular dialect (e.g., the energetic mood) are almost never used in Modern Standard Arabic. Case distinctions are very rare in Arabic vernaculars. As a result, MSA is generally composed without case distinctions in mind, and the proper cases are added after the fact, when necessary. 
Because most case endings are noted using final short vowels, which are normally left unwritten in the Arabic script, it is unnecessary to determine the proper case of most words. The practical result of this is that MSA, like English and Standard Chinese, is written in a strongly determined word order and alternative orders that were used in CA for emphasis are rare. In addition, because of the lack of case marking in the spoken varieties, most speakers cannot consistently use the correct endings in extemporaneous speech. As a result, spoken MSA tends to drop or regularize the endings except when reading from a prepared text. The numeral system in CA is complex and heavily tied in with the case system. This system is never used in MSA, even in the most formal of circumstances; instead, a significantly simplified system is used, approximating the system of the conservative spoken varieties. MSA uses much Classical vocabulary (e.g., 'to go') that is not present in the spoken varieties, but deletes Classical words that sound obsolete in MSA. In addition, MSA has borrowed or coined many terms for concepts that did not exist in Quranic times, and MSA continues to evolve. Some words have been borrowed from other languages—notice that transliteration mainly indicates spelling and not real pronunciation (e.g., 'film' or 'democracy'). However, the current preference is to avoid direct borrowings, preferring to either use loan translations (e.g., 'branch', also used for the branch of a company or organization; 'wing', is also used for the wing of an airplane, building, air force, etc.), or to coin new words using forms within existing roots ( 'apoptosis', using the root m/w/t 'death' put into the Xth form, or 'university', based on 'to gather, unite'; 'republic', based on 'multitude'). An earlier tendency was to redefine an older word although this has fallen into disuse (e.g., 'telephone' < 'invisible caller (in Sufism)'; 'newspaper' < 'palm-leaf stalk'). 
Colloquial or dialectal Arabic refers to the many national or regional varieties which constitute the everyday spoken language and evolved from Classical Arabic. Colloquial Arabic has many regional variants; geographically distant varieties usually differ enough to be mutually unintelligible, and some linguists consider them distinct languages. However, research indicates a high degree of mutual intelligibility between closely related Arabic variants for native speakers listening to words, sentences, and texts; and between more distantly related dialects in interactional situations. The varieties are typically unwritten. They are often used in informal spoken media, such as soap operas and talk shows, as well as occasionally in certain forms of written media such as poetry and printed advertising. The only variety of modern Arabic to have acquired official language status is Maltese, which is spoken in (predominantly Catholic) Malta and written with the Latin script. It is descended from Classical Arabic through Siculo-Arabic, but is not mutually intelligible with any other variety of Arabic. Most linguists list it as a separate language rather than as a dialect of Arabic. Even during Muhammad's lifetime, there were dialects of spoken Arabic. Muhammad spoke in the dialect of Mecca, in the western Arabian peninsula, and it was in this dialect that the Quran was written down. However, the dialects of the eastern Arabian peninsula were considered the most prestigious at the time, so the language of the Quran was ultimately converted to follow the eastern phonology. It is this phonology that underlies the modern pronunciation of Classical Arabic. 
The phonological differences between these two dialects account for some of the complexities of Arabic writing, most notably the writing of the glottal stop or hamzah (which was preserved in the eastern dialects but lost in western speech) and the use of (representing a sound preserved in the western dialects but merged with in eastern speech). Language and dialect The sociolinguistic situation of Arabic in modern times provides a prime example of the linguistic phenomenon of diglossia, the normal use of two separate varieties of the same language, usually in different social situations. In the case of Arabic, educated Arabs of any nationality can be assumed to speak both their school-taught Standard Arabic and their native dialects, which, depending on the region, may be mutually unintelligible. Some of these dialects can be considered to constitute separate languages, which may have “sub-dialects” of their own. When educated Arabs of different dialects engage in conversation (for example, a Moroccan speaking with a Lebanese), many speakers code-switch back and forth between the dialectal and standard varieties of the language, sometimes even within the same sentence. Arabic speakers often improve their familiarity with other dialects via music or film. A related phenomenon is tawleed, the process of giving a new shade of meaning to an old classical word. For example, al-hatif lexicographically means one whose sound is heard but whose person remains unseen; the term is now used for a telephone. Tawleed can thus express the needs of modern civilization in a manner that appears to be originally Arabic. The issue of whether Arabic is one language or many languages is politically charged, in the same way it is for the varieties of Chinese, Hindi and Urdu, Serbian and Croatian, Scots and English, etc.
In contrast to speakers of Hindi and Urdu who claim they cannot understand each other even when they can, speakers of the varieties of Arabic will claim they can all understand each other even when they cannot. While there is a minimum level of comprehension between all Arabic dialects, this level can increase or decrease based on geographic proximity: for example, Levantine and Gulf speakers understand each other much better than they do speakers from the Maghreb. The issue of diglossia between spoken and written language is a significant complicating factor: A single written form, significantly different from any of the spoken varieties learned natively, unites a number of sometimes divergent spoken forms. For political reasons, Arabs mostly assert that they all speak a single language, despite significant issues of mutual incomprehensibility among differing spoken versions. From a linguistic standpoint, it is often said that the various spoken varieties of Arabic differ among each other collectively about as much as the Romance languages. This is an apt comparison in a number of ways. The period of divergence from a single spoken form is similar—perhaps 1500 years for Arabic, 2000 years for the Romance languages. Also, while it is comprehensible to people from the Maghreb, a linguistically innovative variety such as Moroccan Arabic is essentially incomprehensible to Arabs from the Mashriq, much as French is incomprehensible to Spanish or Italian speakers but relatively easily learned by them. This suggests that the spoken varieties may linguistically be considered separate languages. Influence of Arabic on other languages The influence of Arabic has been most important in Islamic countries, because it is the language of the Islamic sacred book, the Quran. 
Arabic is also an important source of vocabulary for languages such as Amharic, Azerbaijani, Baluchi, Bengali, Berber, Bosnian, Chaldean, Chechen, Chittagonian, Croatian, Dagestani, Dhivehi, English, German, Gujarati, Hausa, Hindi, Kazakh, Kurdish, Kutchi, Kyrgyz, Malay (Malaysian and Indonesian), Pashto, Persian, Punjabi, Rohingya, Romance languages (French, Catalan, Italian, Portuguese, Sicilian, Spanish, etc.), Saraiki, Sindhi, Somali, Sylheti, Swahili, Tagalog, Tigrinya, Turkish, Turkmen, Urdu, Uyghur, Uzbek, Visayan and Wolof, as well as other languages in countries where these languages are spoken. Modern Hebrew has also been influenced by Arabic, especially during the process of revival, as MSA was used as a source for modern Hebrew vocabulary and roots, as well as much of Modern Hebrew's slang. The Education Minister of France, Jean-Michel Blanquer, has emphasized the learning and usage of Arabic in French schools. In addition, English has many Arabic loanwords, some directly, but most via other Mediterranean languages. Examples of such words include admiral, adobe, alchemy, alcohol, algebra, algorithm, alkaline, almanac, amber, arsenal, assassin, candy, carat, cipher, coffee, cotton, ghoul, hazard, jar, kismet, lemon, loofah, magazine, mattress, sherbet, sofa, sumac, tariff, and zenith. Other languages such as Maltese and Kinubi derive ultimately from Arabic, rather than merely borrowing vocabulary or grammatical rules. Terms borrowed range from religious terminology (like Berber taẓallit, "prayer", from salat), academic terms (like Uyghur mentiq, "logic"), and economic items (like English coffee) to placeholders (like Spanish fulano, "so-and-so"), everyday terms (like Hindustani lekin, "but", or Spanish taza and French tasse, meaning "cup"), and expressions (like Catalan a betzef, "galore, in quantity"). Most Berber varieties (such as Kabyle), along with Swahili, borrow some numbers from Arabic.
Most Islamic religious terms are direct borrowings from Arabic, such as (salat), "prayer", and (imam), "prayer leader." In languages not directly in contact with the Arab world, Arabic loanwords are often transferred indirectly via other languages rather than being transferred directly from Arabic. For example, most Arabic loanwords in Hindustani and Turkish entered through Persian. Older Arabic loanwords in Hausa were borrowed from Kanuri. Most Arabic loanwords in Yoruba entered through Hausa. Arabic words also made their way into several West African languages as Islam spread across the Sahara. Variants of Arabic words such as kitāb ("book") have spread to the languages of African groups who had no direct contact with Arab traders. Since, throughout the Islamic world, Arabic occupied a position similar to that of Latin in Europe, many of the Arabic concepts in the fields of science, philosophy, commerce, etc. were coined from Arabic roots by non-native Arabic speakers, notably by Aramaic and Persian translators, and then found their way into other languages. This process of using Arabic roots, especially in Kurdish and Persian, to translate foreign concepts continued through to the 18th and 19th centuries, when swaths of Arab-inhabited lands were under Ottoman rule. Influence of other languages on Arabic The most important sources of borrowings into (pre-Islamic) Arabic are from the related (Semitic) languages Aramaic, which used to be the principal, international language of communication throughout the ancient Near and Middle East, and Ethiopic. In addition, many cultural, religious and political terms have entered Arabic from Iranian languages, notably Middle Persian, Parthian, and (Classical) Persian, and Hellenistic Greek (kīmiyāʼ has as origin the Greek khymia, meaning in that language the melting of metals; see Roger Dachez, Histoire de la Médecine de l'Antiquité au XXe siècle, Tallandier, 2008, p. 
251), alembic (distiller) from ambix (cup), and almanac (climate) from almenichiakon (calendar). (For the origin of the last three borrowed words, see Alfred-Louis de Prémare, Foundations of Islam, Seuil, L'Univers Historique, 2002.) Some Arabic borrowings from Semitic or Persian languages, as presented in De Prémare's above-cited book, are: madīnah/medina (مدينة, city or city square), a word of Aramaic origin, madenta (in which language it means "a state"); jazīrah (جزيرة), as in the well-known form الجزيرة "Al-Jazeera", meaning "island", which has its origin in the Syriac ܓܙܝܪܗ gazarta; and lāzaward (لازورد), taken from Persian لاژورد lājvard, the name of a blue stone, lapis lazuli. This word was borrowed into several European languages to mean (light) blue – azure in English, azur in French and azul in Portuguese and Spanish. A comprehensive overview of the influence of other languages on Arabic is found in Lucas & Manfredi (2020).

Arabic alphabet and nationalism

There have been many instances of national movements to convert Arabic script into Latin script or to Romanize the language. Currently, the only language derived from Classical Arabic to use Latin script is Maltese.

Lebanon

The Beirut newspaper La Syrie pushed for the change from Arabic script to Latin letters in 1922. The leading figure of this movement was Louis Massignon, a French Orientalist, who brought his concern before the Arabic Language Academy in Damascus in 1928. Massignon's attempt at Romanization failed, as the Academy and the population viewed the proposal as an attempt from the Western world to take over their country. Sa'id Afghani, a member of the Academy, asserted that the movement to Romanize the script was a Zionist plan to dominate Lebanon. Said Akl created a Latin-based alphabet for Lebanese and used it in a newspaper he founded, Lebnaan, as well as in some books he wrote.

Egypt

After the period of colonialism in Egypt, Egyptians were looking for a way to reclaim and re-emphasize Egyptian culture.
As a result, some Egyptians pushed for an Egyptianization of the Arabic language, in which formal Arabic and colloquial Arabic would be combined into one language and the Latin alphabet would be used. There was also the idea of finding a way to use hieroglyphics instead of the Latin alphabet, but this was seen as too complicated to use. The scholar Salama Musa agreed with the idea of applying a Latin alphabet to Arabic, as he believed that would allow Egypt to have a closer relationship with the West. He also believed that Latin script was key to the success of Egypt, as it would allow for more advances in science and technology. This change in alphabet, he believed, would solve the problems inherent in Arabic, such as a lack of written vowels and difficulties writing foreign words, which made the language difficult for non-native speakers to learn. Ahmad Lutfi As Sayid and Muhammad Azmi, two Egyptian intellectuals, agreed with Musa and supported the push for Romanization. The idea that Romanization was necessary for modernization and growth in Egypt continued with Abd Al-Aziz Fahmi in 1944. He was the chairman of the Writing and Grammar Committee for the Arabic Language Academy of Cairo. However, this effort failed, as the Egyptian people felt a strong cultural tie to the Arabic alphabet. In particular, the older Egyptian generations believed that the Arabic alphabet had strong connections to Arab values and history, due to the long history of the Arabic alphabet in Muslim societies (Shrivtiel, 189).

The language of the Quran and its influence on poetry

The Quran introduced a new way of writing to the world. People began studying and applying the unique styles they learned from the Quran not only to their own writing, but also to their culture. Writers studied the unique structure and format of the Quran in order to identify and apply its figurative devices and their impact on the reader.
Quran's figurative devices

The Quran inspired musicality in poetry through the internal rhythm of its verses. The arrangement of words, the way certain sounds create harmony, and the agreement of rhymes create the sense of rhythm within each verse. At times, the chapters of the Quran have only this rhythm in common. The repetition in the Quran introduced the true power and impact repetition can have in poetry. The repetition of certain words and phrases made them appear more firm and explicit in the Quran. The Quran uses constant metaphors of blindness and deafness to imply unbelief. Metaphors were not a new concept to poetry; however, the strength of extended metaphors was. The explicit imagery in the Quran inspired many poets to include and focus on the feature in their own work. The poet ibn al-Mu'tazz wrote a book regarding the figures of speech inspired by his study of the Quran. The poet Badr Shakir al-Sayyab expresses his political opinions in his work through imagery inspired by the harsher forms of imagery used in the Quran. The Quran uses figurative devices in order to express meaning in the most beautiful form possible. The study of the pauses in the Quran, as well as of its other rhetoric, allows it to be approached in multiple ways.

Structure

Although the Quran is known for its fluency and harmony, its structure is not always inherently chronological; it can also flow thematically (the chapters of the Quran have segments that flow in chronological order, but segments can transition into other segments related not in chronology but in topic). The suras, also known as chapters of the Quran, are not placed in chronological order. The only constant in their structure is that the longest are placed first and shorter ones follow. The topics discussed in the chapters can also have no direct relation to each other (as seen in many suras) and may share only their sense of rhyme.
The Quran introduces to poetry the idea of abandoning order and scattering narratives throughout the text. Harmony is also present in the sound of the Quran. The elongations and accents present in the Quran create a harmonious flow within the writing. The unique sound of the Quran when recited, due to these accents, creates a deeper level of understanding through a deeper emotional connection. The Quran is written in a language that is simple and understandable. The simplicity of the writing inspired later poets to write in a clearer and more direct style. The words of the Quran, although unchanged, are to this day understandable and frequently used in both formal and informal Arabic. The simplicity of the language makes memorizing and reciting the Quran a slightly easier task.

Culture and the Quran

The writer al-Khattabi explains how culture is a required element to create a sense of art in a work, as well as to understand it. He believes that the fluency and harmony which the Quran possesses are not the only elements that make it beautiful and create a bond between the reader and the text. While a great deal of poetry was deemed comparable to the Quran, equal to or even better than its composition, a debate arose holding that such claims are not possible because humans are incapable of composing work comparable to the Quran. Because the structure of the Quran made it difficult for a clear timeline to be seen, the Hadith were the main source of chronological order. The Hadith were passed down from generation to generation, and this tradition became a large resource for understanding the context. Poetry after the Quran began to possess this element of tradition, with ambiguity and background information required to understand the meaning. After the Quran came down to the people, the tradition of memorizing the verses became present. It is believed that the greater the amount of the Quran memorized, the greater the faith.
As technology improved over time, hearing recitations of the Quran became more available, as did tools to help memorize the verses. The tradition of love poetry served as a symbolic representation of a Muslim's desire for closer contact with their Lord. While the influence of the Quran on Arabic poetry is explained and defended by numerous writers, some writers, such as Al-Baqillani, believe that poetry and the Quran are in no conceivable way related, due to the uniqueness of the Quran. In his view, poetry's imperfections prove that it cannot be compared with the fluency the Quran holds.

Arabic and Islam

Classical Arabic is the language of poetry and literature (including news); it is also mainly the language of the Quran. Classical Arabic is closely associated with the religion of Islam because the Quran was written in it. Most of the world's Muslims do not speak Classical Arabic as their native language, but many can read the Quranic script and recite the Quran. Among non-Arab Muslims, translations of the Quran are most often accompanied by the original text. At present, Modern Standard Arabic (MSA) is also used in modernized versions of literary forms of the Quran. Some Muslims present a monogenesis of languages and claim that the Arabic language was the language revealed by God for the benefit of mankind and the original language, a prototype system of symbolic communication based upon its system of triconsonantal roots, spoken by man, from which all other languages were derived, having first been corrupted. Judaism has a similar account with the Tower of Babel.

Dialects and descendants

Colloquial Arabic is a collective term for the spoken dialects of Arabic used throughout the Arab world, which differ radically from the literary language. The main dialectal division is between the varieties within and outside of the Arabian peninsula, followed by that between sedentary varieties and the much more conservative Bedouin varieties.
All the varieties outside of the Arabian peninsula (which include the large majority of speakers) have many features in common with each other that are not found in Classical Arabic. This has led researchers to postulate the existence of a prestige koine dialect in the one or two centuries immediately following the Arab conquest, whose features eventually spread to all newly conquered areas. These features are present to varying degrees inside the Arabian peninsula. Generally, the Arabian peninsula varieties have much more diversity than the non-peninsula varieties, but these have been understudied. Within the non-peninsula varieties, the largest difference is between the non-Egyptian North African dialects (especially Moroccan Arabic) and the others. Moroccan Arabic in particular is hardly comprehensible to Arabic speakers east of Libya (although the converse is not true, in part due to the popularity of Egyptian films and other media). One factor in the differentiation of the dialects is influence from the languages previously spoken in the areas, which have typically provided a significant number of new words and have sometimes also influenced pronunciation or word order; however, a much more significant factor for most dialects is, as among Romance languages, retention (or change of meaning) of different classical forms. Thus Iraqi aku, Levantine fīh and North African kayən all mean 'there is', and all come from Classical Arabic forms (yakūn, fīhi, kā'in respectively), but now sound very different.

Examples

Transcription is a broad IPA transcription, so minor differences were ignored for easier comparison. Also, the pronunciation of Modern Standard Arabic differs significantly from region to region.

Koiné

According to Charles A. Ferguson, the following are some of the characteristic features of the koiné that underlies all the modern dialects outside the Arabian peninsula.
Although many other features are common to most or all of these varieties, Ferguson believes that these features in particular are unlikely to have evolved independently more than once or twice and together suggest the existence of the koine:
Loss of the dual number except on nouns, with consistent plural agreement (cf. feminine singular agreement in plural inanimates).
Change of a to i in many affixes (e.g., non-past-tense prefixes ti- yi- ni-; wi- 'and'; il- 'the'; feminine -it in the construct state).
Loss of third-weak verbs ending in w (which merge with verbs ending in y).
Reformation of geminate verbs, e.g., 'I untied' → .
Conversion of separate words lī 'to me', laka 'to you', etc. into indirect-object clitic suffixes.
Certain changes in the cardinal number system, e.g., 'five days' → , where certain words have a special plural with prefixed t.
Loss of the feminine elative (comparative).
Adjective plurals of the form 'big' → .
Change of nisba suffix > .
Certain lexical items, e.g., 'bring' < 'come with'; 'see'; 'what' (or similar) < 'which thing'; (relative pronoun).
Merger of and .

Dialect groups

Egyptian Arabic is spoken by around 53 million people in Egypt (55 million worldwide). It is one of the most widely understood varieties of Arabic, due in large part to the widespread distribution of Egyptian films and television shows throughout the Arabic-speaking world.
Levantine Arabic includes North Levantine Arabic, South Levantine Arabic and Cypriot Arabic. It is spoken by about 21 million people in Lebanon, Syria, Jordan, Palestine, Israel, Cyprus and Turkey.
Lebanese Arabic is a variety of Levantine Arabic spoken primarily in Lebanon.
Jordanian Arabic is a continuum of mutually intelligible varieties of Levantine Arabic spoken by the population of the Kingdom of Jordan.
Palestinian Arabic is the name of several dialects of the subgroup of Levantine Arabic spoken by Palestinians in Palestine, by Arab citizens of Israel and in most Palestinian populations around the world.
Samaritan Arabic, spoken by only several hundred people in the Nablus region.
Cypriot Maronite Arabic, spoken in Cyprus.
Maghrebi Arabic, also called "Darija", spoken by about 70 million people in Morocco, Algeria, Tunisia and Libya. It also forms the basis of Maltese via the extinct Sicilian Arabic dialect. Maghrebi Arabic is very hard for Arabic speakers from the Mashriq or Mesopotamia to understand, the most comprehensible being Libyan Arabic and the most difficult Moroccan Arabic. Others, such as Algerian Arabic, can be considered between the two in terms of difficulty.
Libyan Arabic, spoken in Libya and neighboring countries.
Tunisian Arabic, spoken in Tunisia and north-eastern Algeria.
Algerian Arabic, spoken in Algeria.
Judeo-Algerian Arabic, spoken by Jews in Algeria until 1962.
Moroccan Arabic, spoken in Morocco.
Hassaniya Arabic (3 million speakers), spoken in Mauritania, Western Sahara, some parts of the Azawad in northern Mali, southern Morocco and south-western Algeria.
Andalusian Arabic, spoken in Spain until the 16th century.
Siculo-Arabic (Sicilian Arabic), spoken in Sicily and Malta between the end of the 9th century and the end of the 12th century, which eventually evolved into the Maltese language.
Maltese, spoken on the island of Malta, is the only fully separate standardized language to have originated from an Arabic dialect (the extinct Siculo-Arabic dialect), with independent literary norms. Maltese has evolved independently of Modern Standard Arabic and its varieties into a standardized language over the past 800 years in a gradual process of Latinisation. Maltese is therefore considered an exceptional descendant of Arabic that has no diglossic relationship with Standard Arabic or Classical Arabic.
Maltese is also different from Arabic and other Semitic languages in that its morphology has been deeply influenced by Romance languages, Italian and Sicilian. It is also the only Semitic language written in the Latin script. In terms of basic everyday language, speakers of Maltese are reported to be able to understand less than a third of what is said to them in Tunisian Arabic, which is related to Siculo-Arabic, whereas speakers of Tunisian are able to understand about 40% of what is said to them in Maltese. This asymmetric intelligibility is considerably lower than the mutual intelligibility found between Maghrebi Arabic dialects. Maltese has its own dialects, with urban varieties of Maltese being closer to Standard Maltese than rural varieties.
Mesopotamian Arabic, spoken by about 41.2 million people in Iraq (where it is called "Aamiyah"), eastern Syria, southwestern Iran (Khuzestan) and southeastern Turkey (in the eastern Mediterranean, Southeastern Anatolia Region).
North Mesopotamian Arabic, spoken north of the Hamrin Mountains in Iraq, in western Iran, northern Syria, and in southeastern Turkey (in the eastern Mediterranean Region, Southeastern Anatolia Region, and southern Eastern Anatolia Region).
Judeo-Mesopotamian Arabic, also known as Iraqi Judeo-Arabic and Yahudic, a variety of Arabic spoken by Iraqi Jews of Mosul.
Baghdad Arabic, the Arabic dialect spoken in Baghdad and the surrounding cities, a subvariety of Mesopotamian Arabic.
Baghdad Jewish Arabic, the dialect spoken by the Iraqi Jews of Baghdad.
South Mesopotamian Arabic (Basrawi dialect), the dialect spoken in southern Iraq, in places such as Basra, Dhi Qar and Najaf.
Khuzestani Arabic, the dialect spoken in the Iranian province of Khuzestan; this dialect is a mix of Southern Mesopotamian Arabic and Gulf Arabic.
Khorasani Arabic, spoken in the Iranian province of Khorasan.
Kuwaiti Arabic, a Gulf Arabic dialect spoken in Kuwait.
Sudanese Arabic is spoken by 17 million people in Sudan and some parts of southern Egypt. Sudanese Arabic is quite distinct from the dialect of its neighbor to the north; rather, the Sudanese have a dialect similar to the Hejazi dialect.
Juba Arabic, spoken in South Sudan and southern Sudan.
Gulf Arabic, spoken by around four million people, predominantly in Kuwait, Bahrain, some parts of Oman, eastern Saudi Arabian coastal areas and some parts of the UAE and Qatar. Also spoken in Iran's Bushehr and Hormozgan provinces. Although Gulf Arabic is spoken in Qatar, most Qatari citizens speak Najdi Arabic (Bedawi).
Omani Arabic, distinct from the Gulf Arabic of Eastern Arabia and Bahrain, spoken in central Oman. With recent oil wealth and mobility, it has spread to other parts of the Sultanate.
Hadhrami Arabic, spoken by around 8 million people, predominantly in Hadhramaut, and in parts of the Arabian Peninsula, South and Southeast Asia, and East Africa by Hadhrami descendants.
Yemeni Arabic, spoken in Yemen and southern Saudi Arabia by 15 million people. Similar to Gulf Arabic.
Najdi Arabic, spoken by around 10 million people, mainly in Najd, central and northern Saudi Arabia. Most Qatari citizens speak Najdi Arabic (Bedawi).
Hejazi Arabic (6 million speakers), spoken in Hejaz, western Saudi Arabia.
Saharan Arabic, spoken in some parts of Algeria, Niger and Mali.
Baharna Arabic (600,000 speakers), spoken by Bahrani Shiʻah in Bahrain and Qatif; the dialect exhibits many significant differences from Gulf Arabic. It is also spoken to a lesser extent in Oman.
Judeo-Arabic dialects – the dialects spoken by the Jews who lived or continue to live in the Arab world. As Jewish migration to Israel took hold, the language did not thrive and is now considered endangered.
So-called Qəltu Arabic.
Chadian Arabic, spoken in Chad, Sudan, some parts of South Sudan, the Central African Republic, Niger, Nigeria and Cameroon.
Central Asian Arabic, spoken in Uzbekistan, Tajikistan and Afghanistan, is highly endangered.
Shirvani Arabic, spoken in Azerbaijan and Dagestan until the 1930s, now extinct.

Phonology

History

Of the 29 Proto-Semitic consonants, only one has been lost: , which merged with , while became (see Semitic languages). Various other consonants have changed their sound too, but have remained distinct. An original lenited to , and – consistently attested in pre-Islamic Greek transcription of Arabic languages – became palatalized to or by the time of the Quran and , , or after the early Muslim conquests and in MSA (see Arabic phonology#Local variations for more detail). An original voiceless alveolar lateral fricative became . Its emphatic counterpart was considered by Arabs to be the most unusual sound in Arabic (hence Classical Arabic's appellation or "language of the "); for most modern dialects, it has become an emphatic stop with loss of the laterality or with complete loss of any pharyngealization or velarization, . (The classical pronunciation of pharyngealization still occurs in the Mehri language, and the similar sound without velarization, , exists in other Modern South Arabian languages.) Other changes may also have happened. Classical Arabic pronunciation is not thoroughly recorded, and different reconstructions of the sound system of Proto-Semitic propose different phonetic values. One example is the emphatic consonants, which are pharyngealized in modern pronunciations but may have been velarized in the eighth century and glottalized in Proto-Semitic. Reduction of and between vowels occurs in a number of circumstances and is responsible for much of the complexity of third-weak ("defective") verbs. Early Akkadian transcriptions of Arabic names show that this reduction had not yet occurred as of the early part of the 1st millennium BC.
The Classical Arabic language as recorded was a poetic koine that reflected a consciously archaizing dialect, chosen based on the speech of the tribes of the western part of the Arabian Peninsula, who spoke the most conservative variants of Arabic. Even at the time of Muhammad and before, other dialects existed with many more changes, including the loss of most glottal stops, the loss of case endings, the reduction of the diphthongs and into monophthongs , etc. Most of these changes are present in most or all modern varieties of Arabic. An interesting feature of the writing system of the Quran (and hence of Classical Arabic) is that it contains certain features of Muhammad's native dialect of Mecca, corrected through diacritics into the forms of standard Classical Arabic. Among these features visible under the corrections are the loss of the glottal stop and a differing development of the reduction of certain final sequences containing : evidently, final became as in the Classical language, but final became a different sound, possibly (rather than again in the Classical language). This is the apparent source of the alif maqṣūrah 'restricted alif', where a final is reconstructed: a letter that would normally indicate or some similar high-vowel sound, but is taken in this context to be a logical variant of alif and to represent the sound . Although Classical Arabic was a unitary language and is now used in the Quran, its pronunciation varies somewhat from country to country and from region to region within a country. It is influenced by colloquial dialects.

Literary Arabic

The "colloquial" spoken dialects of Arabic are learned at home and constitute the native languages of Arabic speakers. "Formal" Modern Standard Arabic is learned at school; although many speakers have a native-like command of the language, it is technically not the native language of any speakers.
Both varieties can be both written and spoken, although the colloquial varieties are rarely written down, and the formal variety is spoken mostly in formal circumstances, e.g., in radio and TV broadcasts, formal lectures, parliamentary discussions and, to some extent, between speakers of different colloquial dialects. Even when the literary language is spoken, however, it is normally only spoken in its pure form when reading a prepared text out loud and in communication between speakers of different colloquial dialects. When speaking extemporaneously (i.e. making up the language on the spot, as in a normal discussion among people), speakers tend to deviate somewhat from the strict literary language in the direction of the colloquial varieties. In fact, there is a continuous range of "in-between" spoken varieties: from nearly pure Modern Standard Arabic (MSA), to a form that still uses MSA grammar and vocabulary but with significant colloquial influence, to a form of the colloquial language that imports a number of words and grammatical constructions from MSA, to a form that is close to pure colloquial but with the "rough edges" (the most noticeably "vulgar" or non-Classical aspects) smoothed out, to pure colloquial. The particular variant (or register) used depends on the social class and education level of the speakers involved and the level of formality of the speech situation. Often it will vary within a single encounter, e.g., moving from nearly pure MSA to a more mixed language in the process of a radio interview, as the interviewee becomes more comfortable with the interviewer. This type of variation is characteristic of the diglossia that exists throughout the Arabic-speaking world. Although Modern Standard Arabic (MSA) is a unitary language, its pronunciation varies somewhat from country to country and from region to region within a country.
The variation in individual "accents" of MSA speakers tends to mirror corresponding variations in the colloquial speech of the speakers in question, but with the distinguishing characteristics moderated somewhat. It is important in descriptions of "Arabic" phonology to distinguish between the pronunciation of a given colloquial (spoken) dialect and the pronunciation of MSA by these same speakers. Although they are related, they are not the same. For example, the phoneme that derives from Classical Arabic has many different pronunciations in the modern spoken varieties, e.g., including the proposed original . Speakers whose native variety has either or will use that pronunciation when speaking MSA. While speakers of some other languages claim they cannot understand each other even when they can, speakers of the varieties of Arabic will claim they can all understand each other even when they cannot. While there is a minimum level of comprehension between all Arabic dialects, this level can increase or decrease based on geographic proximity: for example, Levantine and Gulf speakers understand each other much better than they do speakers from the Maghreb. The issue of diglossia between spoken and written language is a significant complicating factor: a single written form, significantly different from any of the spoken varieties learned natively, unites a number of sometimes divergent spoken forms. For political reasons, Arabs mostly assert that they all speak a single language, despite significant issues of mutual incomprehensibility among differing spoken versions. From a linguistic standpoint, it is often said that the various spoken varieties of Arabic differ among each other collectively about as much as the Romance languages. This is an apt comparison in a number of ways. The period of divergence from a single spoken form is similar – perhaps 1500 years for Arabic, 2000 years for the Romance languages. Also, while it is comprehensible to people from the Maghreb, a linguistically innovative variety such as Moroccan Arabic is essentially incomprehensible to Arabs from the Mashriq, much as French is incomprehensible to Spanish or Italian speakers but relatively easily learned by them. This suggests that the spoken varieties may linguistically be considered separate languages.

Influence of Arabic on other languages

The influence of Arabic has been most important in Islamic countries, because it is the language of the Islamic sacred book, the Quran.
Arabic is also an important source of vocabulary for languages such as Amharic, Azerbaijani, Baluchi, Bengali, Berber, Bosnian, Chaldean, Chechen, Chittagonian, Croatian, Dagestani, Dhivehi, English, German, Gujarati, Hausa, Hindi, Kazakh, Kurdish, Kutchi, Kyrgyz, Malay (Malaysian and Indonesian), Pashto, Persian, Punjabi, Rohingya, Romance languages (French, Catalan, Italian, Portuguese, Sicilian, Spanish, etc.) Saraiki, Sindhi, Somali, Sylheti, Swahili, Tagalog, Tigrinya, Turkish, Turkmen, Urdu, Uyghur, Uzbek, Visayan and Wolof, as well as other languages in countries where these languages are spoken.Modern Hebrew has been also influenced by Arabic especially during the process of revival, as MSA was used as a source for modern Hebrew vocabulary and roots, as well as much of Modern Hebrew's slang. The Education Minister of France Jean-Michel Blanquer has emphasized the learning and usage of Arabic in French schools. In addition, English has many Arabic loanwords, some directly, but most via other Mediterranean languages. Examples of such words include admiral, adobe, alchemy, alcohol, algebra, algorithm, alkaline, almanac, amber, arsenal, assassin, candy, carat, cipher, coffee, cotton, ghoul, hazard, jar, kismet, lemon, loofah, magazine, mattress, sherbet, sofa, sumac, tariff, and zenith. Other languages such as Maltese and Kinubi derive ultimately from Arabic, rather than merely borrowing vocabulary or grammatical rules. Terms borrowed range from religious terminology (like Berber taẓallit, "prayer", from salat ( )), academic terms (like Uyghur mentiq, "logic"), and economic items (like English coffee) to placeholders (like Spanish fulano, "so-and-so"), everyday terms (like Hindustani lekin, "but", or Spanish taza and French tasse, meaning "cup"), and expressions (like Catalan a betzef, "galore, in quantity"). Most Berber varieties (such as Kabyle), along with Swahili, borrow some numbers from Arabic. 
Most Islamic religious terms are direct borrowings from Arabic, such as (salat), "prayer", and (imam), "prayer leader." In languages not directly in contact with the Arab world, Arabic loanwords are often transferred indirectly via other languages rather than being transferred directly from Arabic. For example, most Arabic loanwords in Hindustani and Turkish entered through Persian. Older Arabic loanwords in Hausa were borrowed from Kanuri. Most Arabic loanwords in Yoruba entered through Hausa. Arabic words also made their way into several West African languages as Islam spread across the Sahara. Variants of Arabic words such as kitāb ("book") have spread to the languages of African groups who had no direct contact with Arab traders. Since, throughout the Islamic world, Arabic occupied a position similar to that of Latin in Europe, many of the Arabic concepts in the fields of science, philosophy, commerce, etc. were coined from Arabic roots by non-native Arabic speakers, notably by Aramaic and Persian translators, and then found their way into other languages. This process of using Arabic roots, especially in Kurdish and Persian, to translate foreign concepts continued through to the 18th and 19th centuries, when swaths of Arab-inhabited lands were under Ottoman rule. Influence of other languages on Arabic The most important sources of borrowings into (pre-Islamic) Arabic are from the related (Semitic) languages Aramaic, which used to be the principal, international language of communication throughout the ancient Near and Middle East, and Ethiopic. In addition, many cultural, religious and political terms have entered Arabic from Iranian languages, notably Middle Persian, Parthian, and (Classical) Persian, and Hellenistic Greek (kīmiyāʼ has as origin the Greek khymia, meaning in that language the melting of metals; see Roger Dachez, Histoire de la Médecine de l'Antiquité au XXe siècle, Tallandier, 2008, p. 
251), alembic (distiller) from ambix (cup), almanac (climate) from almenichiakon (calendar). (For the origin of the last three borrowed words, see Alfred-Louis de Prémare, Foundations of Islam, Seuil, L'Univers Historique, 2002.) Some Arabic borrowings from Semitic or Persian languages are, as presented in De Prémare's above-cited book: madīnah/medina (مدينة, city or city square), a word of Aramaic origin “madenta” (in which it means "a state"). jazīrah (جزيرة), as in the well-known form الجزيرة "Al-Jazeera," means "island" and has its origin in the Syriac ܓܙܝܪܗ gazarta. lāzaward (لازورد) is taken from Persian لاژورد lājvard, the name of a blue stone, lapis lazuli. This word was borrowed in several European languages to mean (light) blue – azure in English, azur in French and azul in Portuguese and Spanish. A comprehensive overview of the influence of other languages on Arabic is found in Lucas & Manfredi (2020). Arabic alphabet and nationalism There have been many instances of national movements to convert Arabic script into Latin script or to Romanize the language. Currently, the only language derived from Classical Arabic to use Latin script is Maltese. Lebanon The Beirut newspaper La Syrie pushed for the change from Arabic script to Latin letters in 1922. The major head of this movement was Louis Massignon, a French Orientalist, who brought his concern before the Arabic Language Academy in Damascus in 1928. Massignon's attempt at Romanization failed as the Academy and population viewed the proposal as an attempt from the Western world to take over their country. Sa'id Afghani, a member of the Academy, mentioned that the movement to Romanize the script was a Zionist plan to dominate Lebanon. Said Akl created a Latin-based alphabet for Lebanese and used it in a newspaper he founded, Lebnaan, as well as in some books he wrote. Egypt After the period of colonialism in Egypt, Egyptians were looking for a way to reclaim and re-emphasize Egyptian culture. 
As a result, some Egyptians pushed for an Egyptianization of the Arabic language, in which formal Arabic and colloquial Arabic would be combined into one language and the Latin alphabet would be used. There was also the idea of finding a way to use hieroglyphics instead of the Latin alphabet, but this was seen as too complicated to use. The scholar Salama Musa agreed with the idea of applying a Latin alphabet to Arabic, as he believed that would allow Egypt to have a closer relationship with the West. He also believed that Latin script was key to the success of Egypt, as it would allow for more advances in science and technology. This change in alphabet, he believed, would solve the problems inherent in Arabic, such as a lack of written vowels and difficulties writing foreign words, that made it difficult for non-native speakers to learn. Ahmad Lutfi As Sayid and Muhammad Azmi, two Egyptian intellectuals, agreed with Musa and supported the push for Romanization. The idea that Romanization was necessary for modernization and growth in Egypt continued with Abd Al-Aziz Fahmi in 1944. He was the chairman of the Writing and Grammar Committee for the Arabic Language Academy of Cairo. However, this effort failed as the Egyptian people felt a strong cultural tie to the Arabic alphabet. In particular, the older Egyptian generations believed that the Arabic alphabet had strong connections to Arab values and history, due to the long history of the Arabic alphabet in Muslim societies (Shrivtiel, 189). The language of the Quran and its influence on poetry The Quran introduced a new way of writing to the world. People began studying and applying the unique styles they learned from the Quran not only to their own writing, but also to their culture. Writers studied the unique structure and format of the Quran in order to identify and apply its figurative devices and their impact on the reader.
Quran's figurative devices The Quran inspired musicality in poetry through the internal rhythm of its verses. The arrangement of words, the way certain sounds create harmony, and the agreement of rhymes create a sense of rhythm within each verse. At times, the chapters of the Quran have only this rhythm in common. The repetition in the Quran introduced the true power and impact that repetition can have in poetry. The repetition of certain words and phrases made them appear more firm and explicit in the Quran. The Quran uses constant metaphors of blindness and deafness to imply unbelief. Metaphors were not a new concept in poetry; the strength of extended metaphors, however, was. The explicit imagery in the Quran inspired many poets to include and focus on the feature in their own work. The poet ibn al-Mu'tazz wrote a book regarding the figures of speech inspired by his study of the Quran. The poet Badr Shakir al-Sayyab expresses his political opinions in his work through imagery inspired by the harsher imagery used in the Quran. The Quran uses figurative devices in order to express meaning in the most beautiful form possible. The study of the pauses in the Quran, as well as of its other rhetoric, allows it to be approached in multiple ways. Structure Although the Quran is known for its fluency and harmony, its structure is not always inherently chronological; it can also flow thematically instead (the chapters of the Quran have segments that flow in chronological order, but a segment can transition into another segment related not in chronology but in topic). The suras, also known as chapters of the Quran, are not placed in chronological order. The only constant in their arrangement is that the longest are placed first and shorter ones follow. The topics discussed in the chapters can also have no direct relation to each other (as seen in many suras) and can share only their sense of rhyme.
The Quran introduces to poetry the idea of abandoning order and scattering narratives throughout the text. Harmony is also present in the sound of the Quran. The elongations and accents present in the Quran create a harmonious flow within the writing. The unique sound of the Quran when recited, due to these accents, creates a deeper level of understanding through a deeper emotional connection. The Quran is written in a language that is simple and understandable by people. The simplicity of the writing inspired later poets to write in a clearer and more direct style. The words of the Quran, although unchanged, are to this day understandable and frequently used in both formal and informal Arabic. The simplicity of the language makes memorizing and reciting the Quran a slightly easier task. Culture and the Quran The writer al-Khattabi explains how culture is a required element both to create a sense of art in a work and to understand it. He believes that the fluency and harmony which the Quran possesses are not the only elements that make it beautiful and create a bond between the reader and the text. While some poetry was deemed comparable to the Quran, in being equal to or better than its composition, a debate arose over whether such claims are even possible, since humans are incapable of composing work comparable to the Quran. Because the structure of the Quran made it difficult for a clear timeline to be seen, the Hadith were the main source of chronological order. The Hadith were passed down from generation to generation, and this tradition became a large resource for understanding the context. Poetry after the Quran began to possess this element of tradition, with ambiguity and background information required to understand the meaning. After the Quran came down to the people, the tradition of memorizing its verses became present. It is believed that the greater the amount of the Quran memorized, the greater the faith.
As technology improved over time, hearing recitations of the Quran became more available, as did tools to help memorize the verses. The tradition of love poetry served as a symbolic representation of a Muslim's desire for closer contact with their Lord. While the influence of the Quran on Arabic poetry is explained and defended by numerous writers, some writers, such as Al-Baqillani, believe that poetry and the Quran are in no conceivable way related, due to the uniqueness of the Quran. To him, poetry's imperfections prove that it cannot be compared with the fluency the Quran holds. Arabic and Islam Classical Arabic is the language of poetry and literature (including news); it is also mainly the language of the Quran. Classical Arabic is closely associated with the religion of Islam because the Quran was written in it. Most of the world's Muslims do not speak Classical Arabic as their native language, but many can read the Quranic script and recite the Quran. Among non-Arab Muslims, translations of the Quran are most often accompanied by the original text. At present, Modern Standard Arabic (MSA) is also used in modernized versions of literary forms of the Quran. Some Muslims present a monogenesis of languages and claim that the Arabic language was the language revealed by God for the benefit of mankind and the original language, a prototype system of symbolic communication based upon its system of triconsonantal roots, spoken by man, from which all other languages were derived, having first been corrupted. Judaism has a similar account in the Tower of Babel. Dialects and descendants Colloquial Arabic is a collective term for the spoken dialects of Arabic used throughout the Arab world, which differ radically from the literary language. The main dialectal division is between the varieties within and outside of the Arabian peninsula, followed by that between sedentary varieties and the much more conservative Bedouin varieties.
All the varieties outside of the Arabian peninsula (which include the large majority of speakers) have many features in common with each other that are not found in Classical Arabic. This has led researchers to postulate the existence of a prestige koine dialect in the one or two centuries immediately following the Arab conquest, whose features eventually spread to all newly conquered areas. These features are present to varying degrees inside the Arabian peninsula. Generally, the Arabian peninsula varieties have much more diversity than the non-peninsula varieties, but these have been understudied. Within the non-peninsula varieties, the largest difference is between the non-Egyptian North African dialects (especially Moroccan Arabic) and the others. Moroccan Arabic in particular is hardly comprehensible to Arabic speakers east of Libya (although the converse is not true, in part due to the popularity of Egyptian films and other media). One factor in the differentiation of the dialects is influence from the languages previously spoken in the areas, which have typically provided a significant number of new words and have sometimes also influenced pronunciation or word order; however, a much more significant factor for most dialects is, as among Romance languages, retention (or change of meaning) of different classical forms. Thus Iraqi aku, Levantine fīh and North African kayən all mean 'there is', and all come from Classical Arabic forms (yakūn, fīhi, kā'in respectively), but now sound very different. Examples Transcription is a broad IPA transcription, so minor differences were ignored for easier comparison. Also, the pronunciation of Modern Standard Arabic differs significantly from region to region. Koiné According to Charles A. Ferguson, the following are some of the characteristic features of the koiné that underlies all the modern dialects outside the Arabian peninsula. 
Although many other features are common to most or all of these varieties, Ferguson believes that these features in particular are unlikely to have evolved independently more than once or twice and together suggest the existence of the koiné:
- Loss of the dual number except on nouns, with consistent plural agreement (cf. feminine singular agreement in plural inanimates).
- Change of a to i in many affixes (e.g., non-past-tense prefixes ti- yi- ni-; wi- 'and'; il- 'the'; feminine -it in the construct state).
- Loss of third-weak verbs ending in w (which merge with verbs ending in y).
- Reformation of geminate verbs, e.g., 'I untied' → .
- Conversion of the separate words lī 'to me', laka 'to you', etc. into indirect-object clitic suffixes.
- Certain changes in the cardinal number system, e.g., 'five days' → , where certain words have a special plural with prefixed t.
- Loss of the feminine elative (comparative).
- Adjective plurals of the form 'big' → .
- Change of nisba suffix > .
- Certain lexical items, e.g., 'bring' < 'come with'; 'see'; 'what' (or similar) < 'which thing'; (relative pronoun).
- Merger of and .
Dialect groups
- Egyptian Arabic is spoken by around 53 million people in Egypt (55 million worldwide). It is one of the most widely understood varieties of Arabic, due in large part to the widespread distribution of Egyptian films and television shows throughout the Arabic-speaking world.
- Levantine Arabic includes North Levantine Arabic, South Levantine Arabic and Cypriot Arabic. It is spoken by about 21 million people in Lebanon, Syria, Jordan, Palestine, Israel, Cyprus and Turkey.
- Lebanese Arabic is a variety of Levantine Arabic spoken primarily in Lebanon.
- Jordanian Arabic is a continuum of mutually intelligible varieties of Levantine Arabic spoken by the population of the Kingdom of Jordan.
- Palestinian Arabic is the name of several dialects of the subgroup of Levantine Arabic spoken by Palestinians in Palestine, by Arab citizens of Israel and in most Palestinian populations around the world.
- Samaritan Arabic, spoken by only several hundred people in the Nablus region.
- Cypriot Maronite Arabic, spoken in Cyprus.
- Maghrebi Arabic, also called "Darija", spoken by about 70 million people in Morocco, Algeria, Tunisia and Libya. It also forms the basis of Maltese via the extinct Siculo-Arabic dialect. Maghrebi Arabic is very hard for Arabic speakers from the Mashriq or Mesopotamia to understand, the most comprehensible variety being Libyan Arabic and the most difficult Moroccan Arabic; others, such as Algerian Arabic, can be considered in between the two in terms of difficulty.
- Libyan Arabic, spoken in Libya and neighboring countries.
- Tunisian Arabic, spoken in Tunisia and north-eastern Algeria.
- Algerian Arabic, spoken in Algeria.
- Judeo-Algerian Arabic, spoken by Jews in Algeria until 1962.
- Moroccan Arabic, spoken in Morocco.
- Hassaniya Arabic (3 million speakers), spoken in Mauritania, Western Sahara, some parts of the Azawad in northern Mali, southern Morocco and south-western Algeria.
- Andalusian Arabic, spoken in Spain until the 16th century.
- Siculo-Arabic (Sicilian Arabic), spoken in Sicily and Malta between the end of the 9th century and the end of the 12th century, which eventually evolved into the Maltese language.
- Maltese, spoken on the island of Malta, is the only fully separate standardized language to have originated from an Arabic dialect (the extinct Siculo-Arabic dialect), with independent literary norms. Maltese has evolved independently of Modern Standard Arabic and its varieties into a standardized language over the past 800 years in a gradual process of Latinisation. Maltese is therefore considered an exceptional descendant of Arabic that has no diglossic relationship with Standard Arabic or Classical Arabic.
Maltese is also different from Arabic and other Semitic languages in that its morphology has been deeply influenced by Romance languages, namely Italian and Sicilian. It is also the only Semitic language written in the Latin script. In terms of basic everyday language, speakers of Maltese are reported to be able to understand less than a third of what is said to them in Tunisian Arabic, which is related to Siculo-Arabic, whereas speakers of Tunisian are able to understand about 40% of what is said to them in Maltese. This asymmetric intelligibility is considerably lower than the mutual intelligibility found between Maghrebi Arabic dialects. Maltese has its own dialects, with urban varieties of Maltese being closer to Standard Maltese than rural varieties.
- Mesopotamian Arabic, spoken by about 41.2 million people in Iraq (where it is called "Aamiyah"), eastern Syria, southwestern Iran (Khuzestan) and southeastern Turkey (in the eastern Mediterranean and Southeastern Anatolia regions).
- North Mesopotamian Arabic is spoken north of the Hamrin Mountains in Iraq, in western Iran, northern Syria, and in southeastern Turkey (in the eastern Mediterranean Region, Southeastern Anatolia Region, and southern Eastern Anatolia Region).
- Judeo-Mesopotamian Arabic, also known as Iraqi Judeo-Arabic and Yahudic, is a variety of Arabic spoken by the Iraqi Jews of Mosul.
- Baghdad Arabic is the Arabic dialect spoken in Baghdad and the surrounding cities; it is a subvariety of Mesopotamian Arabic.
- Baghdad Jewish Arabic is the dialect spoken by the Iraqi Jews of Baghdad.
- South Mesopotamian Arabic (Basrawi dialect) is the dialect spoken in southern Iraq, in cities such as Basra, Dhi Qar and Najaf.
- Khuzestani Arabic is the dialect spoken in the Iranian province of Khuzestan, a mix of Southern Mesopotamian Arabic and Gulf Arabic.
- Khorasani Arabic, spoken in the Iranian province of Khorasan.
- Kuwaiti Arabic is a Gulf Arabic dialect spoken in Kuwait.
- Sudanese Arabic is spoken by 17 million people in Sudan and some parts of southern Egypt. Sudanese Arabic is quite distinct from the dialect of its neighbor to the north; rather, the Sudanese have a dialect similar to the Hejazi dialect.
- Juba Arabic, spoken in South Sudan and southern Sudan.
- Gulf Arabic, spoken by around four million people, predominantly in Kuwait, Bahrain, some parts of Oman, eastern Saudi Arabian coastal areas and some parts of the UAE and Qatar. Also spoken in Iran's Bushehr and Hormozgan provinces. Although Gulf Arabic is spoken in Qatar, most Qatari citizens speak Najdi Arabic (Bedawi).
- Omani Arabic, distinct from the Gulf Arabic of Eastern Arabia and Bahrain, spoken in Central Oman. With recent oil wealth and mobility, it has spread to other parts of the Sultanate.
- Hadhrami Arabic, spoken by around 8 million people, predominantly in Hadhramaut and, by Hadhrami descendants, in parts of the Arabian Peninsula, South and Southeast Asia, and East Africa.
- Yemeni Arabic, spoken in Yemen and southern Saudi Arabia by 15 million people. Similar to Gulf Arabic.
- Najdi Arabic, spoken by around 10 million people, mainly in Najd, central and northern Saudi Arabia. Most Qatari citizens speak Najdi Arabic (Bedawi).
- Hejazi Arabic (6 million speakers), spoken in Hejaz, western Saudi Arabia.
- Saharan Arabic, spoken in some parts of Algeria, Niger and Mali.
- Baharna Arabic (600,000 speakers), spoken by Bahrani Shiʻah in Bahrain and Qatif; the dialect exhibits many significant differences from Gulf Arabic. It is also spoken to a lesser extent in Oman.
- Judeo-Arabic dialects – the dialects spoken by Jews who had lived or continue to live in the Arab world. As Jewish migration to Israel took hold, the language did not thrive and is now considered endangered. So-called Qəltu Arabic.
- Chadian Arabic, spoken in Chad,
153 Cromwell Road, Kensington. Reville, who was born just hours after Hitchcock, converted from Protestantism to Catholicism, apparently at the insistence of Hitchcock's mother; she was baptised on 31 May 1927 and confirmed at Westminster Cathedral by Cardinal Francis Bourne on 5 June. In 1928, when they learned that Reville was pregnant, the Hitchcocks purchased "Winter's Grace", a Tudor farmhouse set in 11 acres on Stroud Lane, Shamley Green, Surrey, for £2,500. Their daughter and only child, Patricia Alma Hitchcock, was born on 7 July that year. Patricia died on 9 August 2021, aged 93. Reville became her husband's closest collaborator; Charles Champlin wrote in 1982: "The Hitchcock touch had four hands, and two were Alma's." When Hitchcock accepted the AFI Life Achievement Award in 1979, he said that he wanted to mention "four people who have given me the most affection, appreciation and encouragement, and constant collaboration. The first of the four is a film editor, the second is a scriptwriter, the third is the mother of my daughter, Pat, and the fourth is as fine a cook as ever performed miracles in a domestic kitchen. And their names are Alma Reville." Reville wrote or co-wrote many of Hitchcock's films, including Shadow of a Doubt, Suspicion and The 39 Steps. Early sound films Hitchcock began work on his tenth film, Blackmail (1929), when its production company, British International Pictures (BIP), converted its Elstree studios to sound. The film was the first British "talkie"; this followed the rapid development of sound films in the United States, from the use of brief sound segments in The Jazz Singer (1927) to the first full sound feature, Lights of New York (1928). Blackmail began the Hitchcock tradition of using famous landmarks as a backdrop for suspense sequences, with the climax taking place on the dome of the British Museum.
It also features one of his longest cameo appearances, which shows him being bothered by a small boy as he reads a book on the London Underground. In the PBS series The Men Who Made The Movies, Hitchcock explained how he used early sound recording as a special element of the film, stressing the word "knife" in a conversation with the woman suspected of murder. During this period, Hitchcock directed segments for a BIP revue, Elstree Calling (1930), and directed a short film, An Elastic Affair (1930), featuring two Film Weekly scholarship winners. An Elastic Affair is one of the lost films. In 1933, Hitchcock signed a multi-film contract with Gaumont-British, once again working for Michael Balcon. His first film for the company, The Man Who Knew Too Much (1934), was a success; his second, The 39 Steps (1935), was acclaimed in the UK and gained him recognition in the United States. It also established the quintessential English "Hitchcock blonde" (Madeleine Carroll) as the template for his succession of ice-cold, elegant leading ladies. Screenwriter Robert Towne remarked, "It's not much of an exaggeration to say that all contemporary escapist entertainment begins with The 39 Steps". This film was one of the first to introduce the "MacGuffin" plot device, a term coined by the English screenwriter Angus MacPhail. The MacGuffin is an item or goal the protagonist is pursuing, one that otherwise has no narrative value; in The 39 Steps, the MacGuffin is a stolen set of design plans. Hitchcock released two spy thrillers in 1936. Sabotage was loosely based on Joseph Conrad's novel, The Secret Agent (1907), about a woman who discovers that her husband is a terrorist, and Secret Agent, based on two stories in Ashenden: Or the British Agent (1928) by W. Somerset Maugham. At this time, Hitchcock also became notorious for pranks against the cast and crew. These jokes ranged from simple and innocent to crazy and maniacal. 
For instance, he hosted a dinner party where he dyed all the food blue because he claimed there weren't enough blue foods. He also had a horse delivered to the dressing room of his friend, the actor Gerald du Maurier. Hitchcock followed up with Young and Innocent in 1937, a crime thriller based on the 1936 novel A Shilling for Candles by Josephine Tey. Starring Nova Pilbeam and Derrick De Marney, the film was relatively enjoyable for the cast and crew to make. To meet distribution requirements in America, the film's runtime was cut, which included the removal of one of Hitchcock's favourite scenes: a children's tea party which becomes menacing to the protagonists. Hitchcock's next major success was The Lady Vanishes (1938), "one of the greatest train movies from the genre's golden era", according to Philip French, in which Miss Froy (May Whitty), a British spy posing as a governess, disappears on a train journey through the fictional European country of Bandrika. The film saw Hitchcock receive the 1938 New York Film Critics Circle Award for Best Director. Benjamin Crisler of the New York Times wrote in June 1938: "Three unique and valuable institutions the British have that we in America have not: Magna Carta, the Tower Bridge and Alfred Hitchcock, the greatest director of screen melodramas in the world." The film was based on the novel The Wheel Spins (1936) by Ethel Lina White. By 1938, Hitchcock was aware that he had reached his peak in Britain. He had received numerous offers from producers in the United States, but he turned them all down because he disliked the contractual obligations or thought the projects repellent. However, producer David O. Selznick offered him a concrete proposal to make a film based on the sinking of , which was eventually shelved, but Selznick persuaded Hitchcock to come to Hollywood.
In July 1938, Hitchcock flew to New York and found that he was already a celebrity; he was featured in magazines and gave interviews to radio stations. In Hollywood, Hitchcock met Selznick for the first time. Selznick offered him a four-film contract, at approximately $40,000 for each picture. Early Hollywood years: 1939–1945 Selznick contract Selznick signed Hitchcock to a seven-year contract beginning in April 1939, and the Hitchcocks moved to Hollywood. The Hitchcocks lived in a spacious flat on Wilshire Boulevard, and slowly acclimatised themselves to the Los Angeles area. He and his wife Alma kept a low profile and were not interested in attending parties or being celebrities. Hitchcock discovered his taste for fine food in West Hollywood, but still carried on his way of life from England. He was impressed with Hollywood's filmmaking culture, expansive budgets and efficiency, compared to the limits that he had often faced in Britain. In June that year, Life magazine called him the "greatest master of melodrama in screen history". Although Hitchcock and Selznick respected each other, their working arrangements were sometimes difficult. Selznick suffered from constant financial problems, and Hitchcock was often unhappy about Selznick's creative control of and interference with his films. Selznick was also displeased with Hitchcock's method of shooting just what was in the script, and nothing more, which meant that the film could not be cut and remade differently at a later time. Selznick also complained about Hitchcock's "goddamn jigsaw cutting", and their personalities were mismatched: Hitchcock was reserved whereas Selznick was flamboyant. Eventually, Selznick generously lent Hitchcock to the larger film studios. Selznick made only a few films each year, as did fellow independent producer Samuel Goldwyn, so he did not always have projects for Hitchcock to direct. Goldwyn had also negotiated with Hitchcock on a possible contract, only to be outbid by Selznick.
In a later interview, Hitchcock said: "[Selznick] was the Big Producer. ... Producer was king. The most flattering thing Mr. Selznick ever said about me—and it shows you the amount of control—he said I was the 'only director' he'd 'trust with a film'." Hitchcock approached American cinema cautiously; his first American film was set in England, and the "Americanness" of the characters was incidental: Rebecca (1940) was set in a Hollywood version of England's Cornwall and based on a novel by the English novelist Daphne du Maurier. Selznick insisted on a faithful adaptation of the book, and disagreed with Hitchcock over the use of humour. The film, starring Laurence Olivier and Joan Fontaine, concerns an unnamed naïve young woman who marries a widowed aristocrat. She lives in his large English country house, and struggles with the lingering reputation of his elegant and worldly first wife Rebecca, who died under mysterious circumstances. The film won Best Picture at the 13th Academy Awards; the statuette was given to the producer, Selznick. Hitchcock received the first of his five nominations for Best Director. Hitchcock's second American film was the thriller Foreign Correspondent (1940), set in Europe, based on Vincent Sheean's book Personal History (1935) and produced by Walter Wanger. It was nominated for Best Picture that year. Hitchcock felt uneasy living and working in Hollywood while Britain was at war; his concern resulted in a film that overtly supported the British war effort. Filmed in 1939, it was inspired by the rapidly changing events in Europe, as covered by an American newspaper reporter played by Joel McCrea. By mixing footage of European scenes with scenes filmed on a Hollywood backlot, the film avoided direct references to Nazism, Nazi Germany, and Germans, to comply with the Motion Picture Production Code at the time.
Early war years In September 1940 the Hitchcocks bought the Cornwall Ranch near Scotts Valley, California, in the Santa Cruz Mountains. Their primary residence was an English-style home in Bel Air, purchased in 1942. Hitchcock's films were diverse during this period, ranging from the romantic comedy Mr. & Mrs. Smith (1941) to the bleak film noir Shadow of a Doubt (1943). Suspicion (1941) marked Hitchcock's first film as both producer and director. It is set in England; Hitchcock used the north coast of Santa Cruz for the English coastline sequence. The film is the first of four in which Cary Grant was cast by Hitchcock, and it is one of the rare occasions on which Grant plays a sinister character. Grant plays Johnnie Aysgarth, an English conman whose actions raise suspicion and anxiety in his shy young English wife, Lina McLaidlaw (Joan Fontaine). In one scene, Hitchcock placed a light inside a glass of milk, perhaps poisoned, that Grant is bringing to his wife; the light ensures that the audience's attention is on the glass. Grant's character is a killer in the source novel, Before the Fact by Francis Iles, but the studio felt that Grant's image would be tarnished by that. Hitchcock therefore settled for an ambiguous finale, although he would have preferred to end with the wife's murder. Fontaine won Best Actress for her performance. Saboteur (1942) is the first of two films that Hitchcock made for Universal Studios during the decade. Hitchcock was forced by Universal to use Universal contract player Robert Cummings and Priscilla Lane, a freelancer who signed a one-picture deal with the studio, both known for their work in comedies and light dramas. The story depicts a confrontation between a suspected saboteur (Cummings) and a real saboteur (Norman Lloyd) atop the Statue of Liberty. Hitchcock took a three-day tour of New York City to scout filming locations for Saboteur. He also directed Have You Heard?
(1942), a photographic dramatisation for Life magazine of the dangers of rumours during wartime. In 1943, he wrote a mystery story for Look magazine, "The Murder of Monty Woolley", a sequence of captioned photographs inviting the reader to find clues to the murderer's identity; Hitchcock cast the performers as themselves, including Woolley, Doris Merrick, and make-up man Guy Pearce. Back in England, Hitchcock's mother Emma was severely ill; she died on 26 September 1942 at age 79. Hitchcock never spoke publicly about his mother, but his assistant said that he admired her. Four months later, on 4 January 1943, his brother William died of an overdose at age 52. Hitchcock was not very close to William, but his death made Hitchcock conscious of his own eating and drinking habits. He was overweight and suffering from back aches. His New Year's resolution in 1943 was to take his diet seriously, with the help of a physician. In January that year, Shadow of a Doubt was released, which Hitchcock had fond memories of making. In the film, Charlotte "Charlie" Newton (Teresa Wright) suspects her beloved uncle Charlie Oakley (Joseph Cotten) of being a serial killer. Hitchcock filmed extensively on location, this time in the Northern California city of Santa Rosa. At 20th Century Fox, Hitchcock approached John Steinbeck with an idea for a film recording the experiences of the survivors of a German U-boat attack. Steinbeck began work on the script for what would become Lifeboat (1944). However, Steinbeck was unhappy with the film and asked that his name be removed from the credits, to no avail. The idea was rewritten as a short story by Harry Sylvester and published in Collier's in 1943. The action sequences were shot in a small boat in the studio water tank.
The locale posed problems for Hitchcock's traditional cameo appearance; it was solved by having Hitchcock's image appear in a newspaper that William Bendix is reading in the boat, showing the director in a before-and-after advertisement for "Reduco-Obesity Slayer". He told Truffaut in 1962: Hitchcock's typical dinner before his weight loss had been a roast chicken, boiled ham, potatoes, bread, vegetables, relishes, salad, dessert, a bottle of wine and some brandy. To lose weight, his diet consisted of black coffee for breakfast and lunch, and steak and salad for dinner, but it was hard to maintain; Donald Spoto wrote that his weight fluctuated considerably over the next 40 years. At the end of 1943, despite the weight loss, the Occidental Insurance Company of Los Angeles refused his application for life insurance. Wartime non-fiction films Hitchcock returned to the UK for an extended visit in late 1943 and early 1944. While there he made two short propaganda films, Bon Voyage (1944) and Aventure Malgache (1944), for the Ministry of Information. In June and July 1945, Hitchcock served as "treatment advisor" on a Holocaust documentary that used Allied Forces footage of the liberation of Nazi concentration camps. The film was assembled in London and produced by Sidney Bernstein of the Ministry of Information, who brought Hitchcock (a friend of his) on board. It was originally intended to be broadcast to the Germans, but the British government deemed it too traumatic to be shown to a shocked post-war population. Instead, it was transferred in 1952 from the British War Office film vaults to London's Imperial War Museum and remained unreleased until 1985, when an edited version was broadcast as an episode of PBS Frontline, under the title the Imperial War Museum had given it: Memory of the Camps. The full-length version of the film, German Concentration Camps Factual Survey, was restored in 2014 by scholars at the Imperial War Museum. 
Post-war Hollywood years: 1945–1953 Later Selznick films Hitchcock worked for David Selznick again when he directed Spellbound (1945), which explores psychoanalysis and features a dream sequence designed by Salvador Dalí. The dream sequence as it appears in the film is ten minutes shorter than was originally envisioned; Selznick edited it to make it "play" more effectively. Gregory Peck plays amnesiac Dr. Anthony Edwardes under the treatment of analyst Dr. Peterson (Ingrid Bergman), who falls in love with him while trying to unlock his repressed past. Two point-of-view shots were achieved by building a large wooden hand (which would appear to belong to the character whose point of view the camera took) and out-sized props for it to hold: a bucket-sized glass of milk and a large wooden gun. For added novelty and impact, the climactic gunshot was hand-coloured red on some copies of the black-and-white film. The original musical score by Miklós Rózsa makes use of the theremin, and some of it was later adapted by the composer into his Piano Concerto Op. 31 (1967) for piano and orchestra. The spy film Notorious followed next in 1946. Hitchcock told François Truffaut that Selznick sold him, Ingrid Bergman, Cary Grant, and Ben Hecht's screenplay, to RKO Radio Pictures as a "package" for $500,000 because of cost overruns on Selznick's Duel in the Sun (1946). Notorious stars Bergman and Grant, both Hitchcock collaborators, and features a plot about Nazis, uranium and South America. His prescient use of uranium as a plot device led to him being briefly placed under surveillance by the Federal Bureau of Investigation. According to Patrick McGilligan, in or around March 1945, Hitchcock and Hecht consulted Robert Millikan of the California Institute of Technology about the development of a uranium bomb. 
Selznick complained that the notion was "science fiction", only to be confronted by the news of the detonation of two atomic bombs on Hiroshima and Nagasaki in Japan in August 1945. Transatlantic Pictures Hitchcock formed an independent production company, Transatlantic Pictures, with his friend Sidney Bernstein. He made two films with Transatlantic, one of which was his first colour film. With Rope (1948), Hitchcock experimented with marshalling suspense in a confined environment, as he had done earlier with Lifeboat. The film appears as a very limited number of continuous shots, but it was actually shot in ten takes ranging from four to ten minutes each; a 10-minute length of film was the most that a camera's film magazine could hold at the time. Some transitions between reels were hidden by having a dark object fill the entire screen for a moment. Hitchcock used those points to hide the cut, and began the next take with the camera in the same place. The film features James Stewart in the leading role, and was the first of four films that Stewart made with Hitchcock. It was inspired by the Leopold and Loeb case of the 1920s. Critical response at the time was mixed. Under Capricorn (1949), set in 19th-century Australia, also uses the short-lived technique of long takes, but to a more limited extent. He again used Technicolor in this production, then returned to black-and-white for several years. Transatlantic Pictures became inactive after the last two films. Hitchcock filmed Stage Fright (1950) at Elstree Studios in England, where he had worked during his British International Pictures contract many years before. He paired one of Warner Bros.' most popular stars, Jane Wyman, with the expatriate German actor Marlene Dietrich and used several prominent British actors, including Michael Wilding, Richard Todd and Alastair Sim. 
This was Hitchcock's first proper production for Warner Bros., which had distributed Rope and Under Capricorn, because Transatlantic Pictures was experiencing financial difficulties. His thriller Strangers on a Train (1951) was based on the novel of the same name by Patricia Highsmith. Hitchcock combined many elements from his preceding films. He approached Dashiell Hammett to write the dialogue, but Raymond Chandler took over, then left over disagreements with the director. In the film, two men casually meet, one of whom speculates on a foolproof method to murder; he suggests that two people, each wishing to do away with someone, should each perform the other's murder. Farley Granger played the innocent victim of the scheme, while Robert Walker, previously known for "boy-next-door" roles, played the villain. I Confess (1953) was set in Quebec with Montgomery Clift as a Catholic priest. Peak years: 1954–1964 Dial M for Murder and Rear Window I Confess was followed by three colour films starring Grace Kelly: Dial M for Murder (1954), Rear Window (1954), and To Catch a Thief (1955). In Dial M for Murder, Ray Milland plays the villain who tries to murder his unfaithful wife (Kelly) for her money. She kills the hired assassin in self-defence, so Milland manipulates the evidence to make it look like murder. Her lover, Mark Halliday (Robert Cummings), and Police Inspector Hubbard (John Williams) save her from execution. Hitchcock experimented with 3D cinematography for Dial M for Murder. Hitchcock moved to Paramount Pictures and filmed Rear Window (1954), starring James Stewart and Grace Kelly, as well as Thelma Ritter and Raymond Burr. Stewart's character is a photographer called Jeff (based on Robert Capa) who must temporarily use a wheelchair. Out of boredom, he begins observing his neighbours across the courtyard, then becomes convinced that one of them (Raymond Burr) has murdered his wife. 
Jeff eventually manages to convince his policeman buddy (Wendell Corey) and his girlfriend (Kelly). As with Lifeboat and Rope, the principal characters are depicted in confined or cramped quarters, in this case Stewart's studio apartment. Hitchcock uses close-ups of Stewart's face to show his character's reactions, "from the comic voyeurism directed at his neighbours to his helpless terror watching Kelly and Burr in the villain's apartment". Alfred Hitchcock Presents From 1955 to 1965, Hitchcock was the host of the television series Alfred Hitchcock Presents. With his droll delivery, gallows humour and iconic image, the series made Hitchcock a celebrity. The title-sequence of the show pictured a minimalist caricature of his profile (he drew it himself; it is composed of only nine strokes), which his real silhouette then filled. The series theme tune was Funeral March of a Marionette by the French composer Charles Gounod (1818–1893). His introductions always included some sort of wry humour, such as the description of a recent multi-person execution hampered by having only one electric chair, while two are shown with a sign "Two chairs—no waiting!" He directed 18 episodes of the series, which
aired from 1955 to 1965. It became The Alfred Hitchcock Hour in 1962, and NBC broadcast the final episode on 10 May 1965. In the 1980s, a new version of Alfred Hitchcock Presents was produced for television, making use of Hitchcock's original introductions in a colourised form. Hitchcock's success in television spawned a set of short-story collections in his name; these included Alfred Hitchcock's Anthology, Stories They Wouldn't Let Me Do on TV, and Tales My Mother Never Told Me. In 1956, HSD Publications also licensed the director's name to create Alfred Hitchcock's Mystery Magazine, a monthly digest specialising in crime and detective fiction. Hitchcock's television series were very profitable, and foreign-language versions of his books were bringing revenues of up to $100,000 a year. From To Catch a Thief to Vertigo In 1955, Hitchcock became a United States citizen. In the same year, his third Grace Kelly film, To Catch a Thief, was released; it is set in the French Riviera, and stars Kelly and Cary Grant. Grant plays retired thief John Robie, who becomes the prime suspect for a spate of robberies in the Riviera. 
A thrill-seeking American heiress played by Kelly surmises his true identity and tries to seduce him. "Despite the obvious age disparity between Grant and Kelly and a lightweight plot, the witty script (loaded with double entendres) and the good-natured acting proved a commercial success." It was Hitchcock's last film with Kelly; she married Prince Rainier of Monaco in 1956, and ended her film career afterward. Hitchcock then remade his own 1934 film The Man Who Knew Too Much in 1956. This time, the film starred James Stewart and Doris Day, who sang the theme song "Que Sera, Sera", which won the Academy Award for Best Original Song and became a big hit. They play a couple whose son is kidnapped to prevent them from interfering with an assassination. As in the 1934 film, the climax takes place at the Royal Albert Hall. The Wrong Man (1956), Hitchcock's final film for Warner Bros., is a low-key black-and-white production based on a real-life case of mistaken identity reported in Life magazine in 1953. This was Hitchcock's only film starring Henry Fonda, who plays a Stork Club musician mistaken for a liquor store thief, arrested and tried for robbery while his wife (Vera Miles) emotionally collapses under the strain. Hitchcock told Truffaut that his lifelong fear of the police attracted him to the subject and was embedded in many scenes. While directing episodes for Alfred Hitchcock Presents during the summer of 1957, Hitchcock was admitted to hospital with a hernia and gallstones, and had to have his gallbladder removed. Following successful surgery, he immediately returned to work to prepare for his next project. Vertigo (1958) again starred James Stewart, with Kim Novak and Barbara Bel Geddes. He had wanted Vera Miles to play the lead, but she was pregnant. He told Oriana Fallaci: "I was offering her a big part, the chance to become a beautiful sophisticated blonde, a real actress. 
We'd have spent a heap of dollars on it, and she has the bad taste to get pregnant. I hate pregnant women, because then they have children." In Vertigo, Stewart plays Scottie, a former police investigator suffering from acrophobia, who becomes obsessed with a woman he has been hired to shadow (Novak). Scottie's obsession leads to tragedy, and this time Hitchcock did not opt for a happy ending. Some critics, including Donald Spoto and Roger Ebert, agree that Vertigo is the director's most personal and revealing film, dealing with the Pygmalion-like obsessions of a man who moulds a woman into the person he desires. Vertigo explores his interest in the relation between sex and death more frankly, and at greater length, than any other work in his filmography. Vertigo contains a camera technique developed by Irmin Roberts, commonly referred to as a dolly zoom, which has been copied by many filmmakers. The film premiered at the San Sebastián International Film Festival, and Hitchcock won the Silver Seashell prize. Vertigo is considered a classic, but it attracted mixed reviews and poor box-office receipts at the time; the critic from Variety magazine opined that the film was "too slow and too long". Bosley Crowther of the New York Times thought it was "devilishly far-fetched", but praised the cast performances and Hitchcock's direction. The picture was also the last collaboration between Stewart and Hitchcock. In the 2002 Sight & Sound polls, it ranked just behind Citizen Kane (1941); ten years later, in the same magazine, critics chose it as the best film ever made. North by Northwest and Psycho After Vertigo, the rest of 1958 was a difficult year for Hitchcock. During pre-production of North by Northwest (1959), which was a "slow" and "agonising" process, his wife Alma was diagnosed with cancer. While she was in hospital, Hitchcock kept himself occupied with his television work and would visit her every day. 
Alma underwent surgery and made a full recovery, but it caused Hitchcock to imagine, for the first time, life without her. Hitchcock followed up with three more successful films, which are also recognised as among his best: North by Northwest, Psycho (1960) and The Birds (1963). In North by Northwest, Cary Grant portrays Roger Thornhill, a Madison Avenue advertising executive who is mistaken for a government secret agent. He is pursued across the United States by enemy agents, including Eve Kendall (Eva Marie Saint). At first, Thornhill believes Kendall is helping him, but then realises that she is an enemy agent; he later learns that she is working undercover for the CIA. During its opening two-week run at Radio City Music Hall, the film grossed $404,056, setting a non-holiday gross record for that theatre. Time magazine called the film "smoothly troweled and thoroughly entertaining". Psycho (1960) is arguably Hitchcock's best-known film. Based on Robert Bloch's 1959 novel Psycho, which was inspired by the case of Ed Gein, the film was produced on a tight budget of $800,000 and shot in black-and-white on a spare set using crew members from Alfred Hitchcock Presents. The unprecedented violence of the shower scene, the early death of the heroine, and the innocent lives extinguished by a disturbed murderer became the hallmarks of a new horror-film genre. The film proved popular with audiences, with lines stretching outside theatres as viewers waited for the next showing. It broke box-office records in the United Kingdom, France, South America, the United States and Canada, and was a moderate success in Australia for a brief period. Psycho was the most profitable film of Hitchcock's career, and he personally earned in excess of $15 million. 
He subsequently swapped his rights to Psycho and his TV anthology for 150,000 shares of MCA, making him the third largest shareholder and his own boss at Universal, in theory at least, although that did not stop studio interference. Following the first film, Psycho became an American horror franchise: Psycho II, Psycho III, Bates Motel, Psycho IV: The Beginning, and a colour 1998 remake of the original. Truffaut interview On 13 August 1962, Hitchcock's 63rd birthday, the French director François Truffaut began a 50-hour interview of Hitchcock, filmed over eight days at Universal Studios, during which Hitchcock agreed to answer 500 questions. It took four years to transcribe the tapes and organise the images; it was published as a book in 1967, which Truffaut nicknamed the "Hitchbook". The audio tapes were used as the basis of a documentary in 2015. Truffaut sought the interview because it was clear to him that Hitchcock was not simply the mass-market entertainer the American media made him out to be. It was obvious from his films, Truffaut wrote, that Hitchcock had "given more thought to the potential of his art than any of his colleagues". He compared the interview to "Oedipus' consultation of the oracle". The Birds The film scholar Peter William Evans wrote that The Birds (1963) and Marnie (1964) are regarded as "undisputed masterpieces". Hitchcock had intended to film Marnie first, and in March 1962 it was announced that Grace Kelly, Princess Grace of Monaco since 1956, would come out of retirement to star in it. When Kelly asked Hitchcock to postpone Marnie until 1963 or 1964, he recruited Evan Hunter, author of The Blackboard Jungle (1954), to develop a screenplay based on a Daphne du Maurier short story, "The Birds" (1952), which Hitchcock had republished in his My Favorites in Suspense (1959). He hired Tippi Hedren to play the lead role. 
It was her first role; she had been a model in New York when Hitchcock saw her, in October 1961, in an NBC television advert for Sego, a diet drink: "I signed her because she is a classic beauty. Movies don't have them any more. Grace Kelly was the last." He insisted, without explanation, that her first name be written in single quotation marks: 'Tippi'. In The Birds, Melanie Daniels, a young socialite, meets lawyer Mitch Brenner (Rod Taylor) in a bird shop; Jessica Tandy plays his possessive mother. Melanie visits him in Bodega Bay (where The Birds was filmed) carrying a pair of lovebirds as a gift. Suddenly waves of birds start gathering, watching, and attacking. The question: "What do the birds want?" is left unanswered. Hitchcock made the film with equipment from the Revue Studio, which made Alfred Hitchcock Presents. He said it was his most technically challenging film, using a combination of trained and mechanical birds against a backdrop of wild ones. Every shot was sketched in advance. An HBO/BBC television film, The Girl (2012), depicted Hedren's experiences on set; she said that Hitchcock became obsessed with her and sexually harassed her. He reportedly isolated her from the rest of the crew, had her followed, whispered obscenities to her, had her handwriting analysed, and had a ramp built from his private office directly into her trailer. Diane Baker, her co-star in Marnie, said: "[N]othing could have been more horrible for me than to arrive on that movie set and to see her being treated the way she was." While filming the attack scene in the attic—which took a week to film—she was placed in a caged room while two men wearing elbow-length protective gloves threw live birds at her. Toward the end of the week, to stop the birds from flying away too soon, one leg of each bird was attached by nylon thread to elastic bands sewn inside her clothes. She broke down after a bird cut her lower eyelid, and filming was halted on doctor's orders. 
Marnie In June 1962, Grace Kelly announced that she had decided against appearing in Marnie (1964). Hedren had signed an exclusive seven-year, $500-a-week contract with Hitchcock in October 1961, and he decided to cast her in the lead role opposite Sean Connery. In 2016, describing Hedren's performance as "one of the greatest in the history of cinema", Richard Brody called the film a "story of sexual violence" inflicted on the character played by Hedren: "The film is, to put it simply, sick, and it's so because Hitchcock was sick. He suffered all his life from furious sexual desire, suffered from the lack of its gratification, suffered from the inability to transform fantasy into reality, and then went ahead and did so virtually, by way of his art." A 1964 New York Times film review called it Hitchcock's "most disappointing film in years", citing Hedren's and Connery's lack of experience, an amateurish script and "glaringly fake cardboard backdrops". In the film, Marnie Edgar (Hedren) steals $10,000 from her employer and goes on the run. She applies for a job at Mark Rutland's (Connery) company in Philadelphia and steals from there too. Earlier she is shown having a panic attack during a thunderstorm and fearing the colour red. Mark tracks her down and blackmails her into marrying him. She explains that she does not want to be touched, but during the "honeymoon", Mark rapes her. Marnie and Mark discover that Marnie's mother had been a prostitute when Marnie was a child, and that, while the mother was fighting with a client during a thunderstorm—the mother believed the client had tried to molest Marnie—Marnie had killed the client to save her mother. Cured of her fears when she remembers what happened, she decides to stay with Mark. Hitchcock told cinematographer Robert Burks that the camera had to be placed as close as possible to Hedren when he filmed her face. 
Evan Hunter, the screenwriter of The Birds who was also writing Marnie, explained to Hitchcock that, if Mark loved Marnie, he would comfort her, not rape her. Hitchcock reportedly replied: "Evan, when he sticks it in her, I want that camera right on her face!" When Hunter submitted two versions of the script, one without the rape scene, Hitchcock replaced him with Jay Presson Allen. Later years: 1966–1980 Final films Failing health reduced Hitchcock's output during the last two decades of his life. Biographer Stephen Rebello claimed Universal imposed two films on him, Torn Curtain (1966) and Topaz (1969), the latter of which is based on a Leon Uris novel, partly set in Cuba. Both were spy thrillers with Cold War-related themes. Torn Curtain, with Paul Newman and Julie Andrews, precipitated the bitter end of the 12-year collaboration between Hitchcock and composer Bernard Herrmann. Hitchcock was unhappy with Herrmann's score and replaced him with John Addison, Jay Livingston and Ray Evans. Upon release, Torn Curtain was a box office disappointment, and Topaz was disliked by critics and the studio. Hitchcock returned to Britain to make his penultimate film, Frenzy (1972), based on the novel Goodbye Piccadilly, Farewell Leicester Square (1966). After two espionage films, the plot marked a return to the murder-thriller genre. Richard Blaney (Jon Finch), a volatile barman with a history of explosive anger, becomes the prime suspect in the investigation into the "Necktie Murders", which are actually committed by his friend Bob Rusk (Barry Foster). This time, Hitchcock makes the victim and villain kindred spirits, rather than opposites as in Strangers on a Train. In Frenzy, Hitchcock allowed nudity for the first time. Two scenes show naked women, one of whom is being raped and strangled; Donald Spoto called the latter "one of the most repellent examples of a detailed murder in the history of film". 
Both actors, Barbara Leigh-Hunt and Anna Massey, refused to do the scenes, so models were used instead. Biographers have noted that Hitchcock had always pushed the limits of film censorship, often managing to fool Joseph Breen, the head of the Motion Picture Production Code. Hitchcock would add subtle hints of improprieties forbidden by censorship until the mid-1960s. Yet Patrick McGilligan wrote that Breen and others often realised that Hitchcock was inserting such material and were actually amused, as well as alarmed, by Hitchcock's "inescapable inferences". Family Plot (1976) was Hitchcock's last film. It relates the escapades of "Madam" Blanche Tyler (Barbara Harris), a fraudulent spiritualist, and her taxi-driver lover (Bruce Dern), who make a living from her phony powers. While Family Plot was based on the Victor Canning novel The Rainbird Pattern (1972), the novel's tone is more sinister. Screenwriter Ernest Lehman originally wrote the film with a dark tone under the working title Deception, but Hitchcock pushed him towards a lighter, more comical treatment; the title became Deceit, then finally Family Plot. Knighthood and death Toward the end of his life, Hitchcock was working on the script for a spy thriller, The Short Night, collaborating with James Costigan, Ernest Lehman and David Freeman. Despite preliminary work, it was never filmed. Hitchcock's health was declining and he was worried about his wife, who had suffered a stroke. The screenplay was eventually published in Freeman's book The Last Days of Alfred Hitchcock (1999). Having refused a CBE in 1962, Hitchcock was appointed a Knight Commander of the Most Excellent Order of the British Empire (KBE) in the 1980 New Year Honours. He was too ill to travel to London—he had a pacemaker and was being given cortisone injections for his arthritis—so on 3 January 1980 the British consul general presented him with the papers at Universal Studios. 
Asked by a reporter after the ceremony why it had taken the Queen so long, Hitchcock quipped, "I suppose it was a matter of carelessness." Cary Grant, Janet Leigh, and others attended a luncheon afterwards. His last public appearance was on 16 March 1980, when he introduced the next year's winner of the American Film Institute award. He died of kidney failure the following month, on 29 April, in his Bel Air home. Donald Spoto, one of Hitchcock's biographers, wrote that Hitchcock had declined to see a priest, but according to Jesuit priest Mark Henninger, he and another priest, Tom Sullivan, celebrated Mass at the filmmaker's home, and Sullivan heard his confession. Hitchcock was survived by his wife and daughter. His funeral was held at Good Shepherd Catholic Church in Beverly Hills on 30 April, after which his body was cremated. His remains were scattered over the Pacific Ocean on 10 May 1980. Filmmaking Style and themes Hitchcock's film production career evolved from small-scale silent films to financially significant sound films. Hitchcock remarked that he was influenced by early filmmakers George Méliès, D.W. Griffith and Alice Guy-Blaché. His silent films between 1925 and 1929 were in the crime and suspense genres, but also included melodramas and comedies. Whilst visual storytelling was pertinent during the silent era, even after the arrival of sound, Hitchcock still relied on visuals in cinema; Hitchcock referred to this emphasis on visual storytelling as "pure cinema". In Britain, he honed his craft so that by the time he moved to Hollywood, the director had perfected his style and camera techniques. Hitchcock later said that his British work was the "sensation of cinema", whereas the American phase was when his "ideas were fertilised". Scholar Robin Wood writes that the director's first two films, The Pleasure Garden and The Mountain Eagle, were influenced by German Expressionism. 
Afterward, he discovered Soviet cinema, and Sergei Eisenstein's and Vsevolod Pudovkin's theories of montage. 1926's The Lodger was inspired by both German and Soviet aesthetics, styles that informed the rest of his career. Although Hitchcock's work in the 1920s found some success, several British reviewers criticised Hitchcock's films for being unoriginal and conceited. Raymond Durgnat opined that Hitchcock's films were carefully and intelligently constructed, but thought they could be shallow and rarely presented a "coherent worldview". Earning the title "Master of Suspense", the director experimented with ways to generate tension in his work. He said, "My suspense work comes out of creating nightmares for the audience. And I play with an audience. I make them gasp and surprise them and shock them. When you have a nightmare, it's awfully vivid if you're dreaming that you're being led to the electric chair. Then you're as happy as can be when you wake up because you're relieved." During filming of North by Northwest, Hitchcock explained his reasons for recreating the set of Mount Rushmore: "The audience responds in proportion to how realistic you make it. One of the dramatic reasons for this type of photography is to get it looking so natural that the audience gets involved and believes, for the time being, what's going on up there on the screen." Hitchcock's films, from the silent to the sound era, contained a number of recurring themes that he is famous for. His films explored audience as a voyeur, notably in Rear Window, Marnie and Psycho. He understood that human beings enjoy voyeuristic activities and made the audience participate in it through the character's actions. Of his fifty-three films, eleven revolved around stories of mistaken identity, where an innocent protagonist is accused of a crime and is pursued by police. In most cases, it is an ordinary, everyday person who finds themselves in a dangerous situation. 
Hitchcock told Truffaut: "That's because the theme of the innocent man being accused, I feel, provides the audience with a greater sense of danger. It's easier for them to identify with him than with a guilty man on the run." One of his constant themes was the struggle of a personality torn between "order and chaos", explored through the notion of the "double": a comparison or contrast between two characters or objects, with the double representing a dark or evil side. According to Robin Wood, Hitchcock had mixed feelings towards homosexuality despite working with gay actors in his career. Donald Spoto suggests that Hitchcock's sexually repressive childhood may have contributed to his exploration of deviancy. During the 1950s, the Motion Picture Production Code prohibited direct references to homosexuality, but the director was known for his subtle references and for pushing the boundaries of the censors. Moreover, Shadow of a Doubt has a double incest theme through the storyline, expressed implicitly through images. Author Jane Sloan argues that Hitchcock was drawn to both conventional and unconventional sexual expression in his work, and the theme of marriage was usually presented in a "bleak and skeptical" manner. It was not until after his mother's death in 1942 that Hitchcock portrayed motherly figures as "notorious monster-mothers". The espionage backdrop, and murders committed by characters with psychopathic tendencies, were common themes too. Hitchcock's villains and murderers were usually charming and friendly, forcing viewers to identify with them. The director's strict childhood and Jesuit education may have led to his distrust of authoritarian figures such as policemen and politicians, a theme he explored repeatedly. He also used the "MacGuffin": an object, person or event that keeps the plot moving along even if it is non-essential to the story. 
Some examples include the microfilm in North by Northwest and the stolen $40,000 in Psycho. Hitchcock appears briefly in most of his own films. For example, he is seen struggling to get a double bass onto a train (Strangers on a Train), walking dogs out of a pet shop (The Birds), fixing a neighbour's clock (Rear Window), as a shadow (Family Plot), sitting at a table in a photograph (Dial M for Murder), and riding a bus (North by Northwest, To Catch a Thief). Representation of women Hitchcock's portrayal of women has been the subject of much scholarly debate. Bidisha wrote in The Guardian in 2010: "There's the vamp, the tramp, the snitch, the witch, the slink, the double-crosser and, best of all, the demon mommy. Don't worry, they all get punished in the end." In a widely cited essay in 1975, Laura Mulvey introduced the idea of the male gaze; the view of the spectator in Hitchcock's films, she argued, is that of the heterosexual male protagonist. "The female characters in his films reflected the same qualities over and over again", Roger Ebert wrote in 1996: "They were blonde. They were icy and remote. They were imprisoned in costumes that subtly combined fashion with fetishism. They mesmerised the men, who often had physical or psychological handicaps. Sooner or later, every Hitchcock woman was humiliated." The victims in The Lodger are all blondes. In The 39 Steps, Madeleine Carroll is put in handcuffs. Ingrid Bergman, whom Hitchcock directed three times (Spellbound, Notorious, and Under Capricorn), is dark blonde. In Rear Window, Lisa (Grace Kelly) risks her life by breaking into Lars Thorwald's apartment. In To Catch a Thief, Francie (also Kelly) offers to help a man she believes is a burglar. In Vertigo and North by Northwest respectively, Kim Novak and Eva Marie Saint play the blonde heroines. In Psycho, Janet Leigh's character steals $40,000 and is murdered by Norman Bates, a reclusive psychopath. 
Tippi Hedren, a blonde, appears to be the focus of the attacks in The Birds. In Marnie, the title character, again played by Hedren, is a thief. In Topaz, the French actresses Dany Robin, as Stafford's wife, and Claude Jade, as Stafford's daughter, are blonde heroines, while the mistress was played by the brunette Karin Dor. Hitchcock's last blonde heroine was Barbara Harris as a phony psychic turned amateur sleuth in Family Plot (1976), his final film. In the same film, the diamond smuggler played by Karen Black wears a long blonde wig in several scenes. His films often feature characters struggling in their relationships with their mothers, such as Norman Bates in Psycho. In North by Northwest, Roger Thornhill (Cary Grant) is an innocent man ridiculed by his mother for insisting that shadowy, murderous men are after him. In The Birds, the Rod Taylor character, an innocent man, finds his world under attack by vicious birds, and struggles to free himself from a clinging mother (Jessica Tandy). The killer in Frenzy has a loathing of women but idolises his mother. The villain Bruno in Strangers on a Train hates his father, but has an incredibly close relationship with his mother (played by Marion Lorne). Sebastian (Claude Rains) in Notorious has a clearly conflicting relationship with his mother, who is (rightly) suspicious of his new bride, Alicia Huberman (Ingrid Bergman).

Relationship with actors

Hitchcock became known for having remarked that "actors should be treated like cattle". During the filming of Mr. & Mrs. Smith (1941), Carole Lombard brought three cows onto the set wearing the name tags of Lombard, Robert Montgomery, and Gene Raymond, the stars of the film, to surprise him. In an episode of The Dick Cavett Show, originally broadcast on 8 June 1972, Dick Cavett stated as fact that Hitchcock had once called actors cattle. Hitchcock responded by saying that, at one time, he had been accused of calling actors cattle.
"I said that I would never say such an unfeeling, rude thing about actors at all. What I probably said, was that all actors should be treated like cattle...In a nice way of course." He then described Carole Lombard's joke, with a smile. Hitchcock believed that actors should concentrate on their performances and leave work on script and character to the directors and screenwriters. He told Bryan Forbes in 1967: "I remember discussing with a method actor how he was taught and so forth. He said, 'We're taught using improvisation. We are given an idea and then we are turned loose to develop in any way we want to.' I said, 'That's not acting. That's writing.' " Recalling their experiences on Lifeboat for Charles Chandler, author of It's Only a Movie: Alfred Hitchcock A Personal Biography, Walter Slezak said that Hitchcock "knew more about how to help an actor than any director I ever worked with", and Hume Cronyn dismissed the idea that Hitchcock was not concerned with his actors as "utterly fallacious", describing at length the process of rehearsing and filming Lifeboat. Critics observed that, despite his reputation as a man who disliked actors, actors who worked with him often gave brilliant performances. He used the same actors in many of his films; Cary Grant and James Stewart both worked with Hitchcock four times, and Ingrid Bergman and Grace Kelly three. James Mason said that Hitchcock regarded actors as "animated props". For Hitchcock, the actors were part of the film's setting. He told François Truffaut: "The chief requisite for an actor is the ability to do nothing well, which is by no means as easy as it sounds. He should be willing to be used and wholly integrated into the picture by the director and the camera. He must allow the camera to determine the proper emphasis and the most effective dramatic highlights." Writing, storyboards and production Hitchcock planned his scripts in detail with his writers. 
In Writing with Hitchcock (2001), Steven DeRosa noted that Hitchcock supervised them through every draft, asking that they tell the story visually. Hitchcock told Roger Ebert in 1969: Hitchcock's films were extensively storyboarded to the finest detail. He was reported to have never even bothered looking through the viewfinder, since he did not need to, although in publicity photos he was shown doing so. He also used this as an excuse to never have to change his films from his initial vision. If a studio asked him to change a film, he would claim that it was already shot in a single way, and that there were no alternative takes to consider. This view of Hitchcock as a director who relied more on pre-production than on the actual production itself has been challenged by Bill Krohn, the American correspondent of French film magazine Cahiers du cinéma, in his book Hitchcock at Work. After investigating script revisions, notes to other production personnel written by or to Hitchcock, and other production material, Krohn observed that Hitchcock's work often deviated from how the screenplay was written or how the film was originally envisioned. He noted that the myth of storyboards in relation to Hitchcock, often regurgitated by generations of commentators on his films, was to a great degree perpetuated by Hitchcock himself or the publicity arm of the studios. For example, the celebrated crop-spraying sequence of North by Northwest was not storyboarded at all. After the scene was filmed, the publicity department asked Hitchcock to
small species, found in eastern Bolivia, southern Brazil, Paraguay, and northeastern Argentina
Eunectes deschauenseei, the dark-spotted anaconda – a rare species, found in northeastern Brazil and coastal French Guiana
Eunectes beniensis, the Bolivian anaconda – the most recently defined species, found in the Departments of Beni and Pando in Bolivia

The term was previously applied imprecisely, indicating any large snake that constricts its prey, though this usage is now archaic. "Anaconda" is also used as a metaphor for an action aimed at constricting and suffocating an opponent – for example, the Anaconda Plan proposed at the beginning of the American Civil War, in which the Union Army was to effectively "suffocate" the Confederacy. Another example is the anaconda choke in the martial art of Brazilian jiu-jitsu, performed by wrapping the arms under the opponent's neck and through the armpit and grasping the biceps of the opposing arm; an opponent caught in this hold loses consciousness unless they tap out.

See also: South American jaguar, a competitor or predator
green anaconda (Eunectes murinus), which is the largest snake in the world by weight, and the second longest.

Etymology

The South American names anacauchoa and anacaona were suggested in an account by Peter Martyr d'Anghiera, but the idea of a South American origin was questioned by Henry Walter Bates, who, in his travels in South America, failed to find any similar name in use. The word anaconda is derived from the name of a snake from Ceylon (Sri Lanka) that John Ray described in Latin in his Synopsis Methodica Animalium (1693) as serpens indicus bubalinus anacandaia zeylonibus, ides bubalorum aliorumque jumentorum membra conterens. Ray used a catalogue of snakes from the Leyden museum supplied by Dr. Tancred Robinson, but the description of its habits was based on Andreas Cleyer, who in 1684 described a gigantic snake that crushed large animals by coiling around their bodies and crushing their bones. Henry Yule, in his Hobson-Jobson, notes that the word became more popular due to a piece of fiction published in 1768 in the Scots Magazine by a certain R. Edwin. Edwin described a 'tiger' being crushed to death by an anaconda, even though tigers have never occurred in Sri Lanka. Yule and Frank Wall noted that the snake was in fact a python and suggested a Tamil origin
the Turkic, Mongolic, and Tungusic languages was published in 1730 by Philip Johan von Strahlenberg, a Swedish officer who traveled in the eastern Russian Empire while a prisoner of war after the Great Northern War. However, he may not have intended to imply a closer relationship among those languages.

Uralo-Altaic hypothesis

In 1844, the Finnish philologist Matthias Castrén proposed a broader grouping, which later came to be called the Ural–Altaic family, and which included Turkic, Mongolian, and Manchu-Tungus (=Tungusic) as an "Altaic" branch, along with the Finno-Ugric and Samoyedic languages as the "Uralic" branch (though Castrén himself used the terms "Tataric" and "Chudic"). The name "Altaic" referred to the Altai Mountains in East-Central Asia, which are approximately the center of the geographic range of the three main families. The name "Uralic" referred to the Ural Mountains. While the Ural–Altaic family hypothesis can still be found in some encyclopedias, atlases, and similar general references, it has been heavily criticized since the 1960s. Even linguists who accept the basic Altaic family, like Sergei Starostin, completely discard the inclusion of the "Uralic" branch.

Korean and Japanese languages

In 1857, the Austrian scholar Anton Boller suggested adding Japanese to the Ural–Altaic family. In the 1920s, G. J. Ramstedt and E. D. Polivanov advocated the inclusion of Korean. Decades later, in his 1952 book, Ramstedt rejected the Ural–Altaic hypothesis but again included Korean in Altaic, an inclusion followed by most leading Altaicists (supporters of the theory) to date. His book contained the first comprehensive attempt to identify regular correspondences among the sound systems within the Altaic language families. In 1960, Nicholas Poppe published what was in effect a heavily revised version of Ramstedt's volume on phonology that has since set the standard in Altaic studies.
Poppe considered the issue of the relationship of Korean to Turkic-Mongolic-Tungusic not settled. In his view, there were three possibilities: (1) Korean did not belong with the other three genealogically, but had been influenced by an Altaic substratum; (2) Korean was related to the other three at the same level at which they were related to each other; (3) Korean had split off from the other three before they underwent a series of characteristic changes. Roy Andrew Miller's 1971 book Japanese and the Other Altaic Languages convinced most Altaicists that Japanese also belonged to Altaic. Since then, "Macro-Altaic" has generally been assumed to include Turkic, Mongolic, Tungusic, Korean, and Japanese. In 1990, Unger advocated a family consisting of the Tungusic, Korean, and Japonic languages, but not Turkic or Mongolic. However, many linguists dispute the alleged affinities of Korean and Japanese to the other three groups. Some authors instead tried to connect Japanese to the Austronesian languages. In 2017, Martine Robbeets proposed that Japanese (and possibly Korean) originated as a hybrid language. She proposed that the ancestral home of the Turkic, Mongolic, and Tungusic languages was somewhere in northwestern Manchuria. A group of those proto-Altaic ("Transeurasian") speakers would have migrated south into the modern Liaoning province, where they would have been mostly assimilated by an agricultural community with an Austronesian-like language. The fusion of the two languages would have resulted in proto-Japanese and proto-Korean. In a typological study that does not directly evaluate the validity of the Altaic hypothesis, Yurayong and Szeto (2020) discuss, for Koreanic and Japonic, the stages of convergence to the Altaic typological model and the subsequent divergence from that model, which resulted in the present typological similarity between Koreanic and Japonic.
They state that both are "still so different from the Core Altaic languages that we can even speak of an independent Japanese-Korean type of grammar. Given also that there is neither a strong proof of common Proto-Altaic lexical items nor solid regular sound correspondences but, rather, only lexical and structural borrowings between languages of the Altaic typology, our results indirectly speak in favour of a "Paleo-Asiatic" origin of the Japonic and Koreanic languages."

The Ainu language

In 1962, John C. Street proposed an alternative classification, with Turkic-Mongolic-Tungusic in one grouping and Korean-Japanese-Ainu in another, joined in what he designated as the "North Asiatic" family. The inclusion of Ainu was also adopted by James Patrie in 1982. The Turkic-Mongolic-Tungusic and Korean-Japanese-Ainu groupings were also posited in 2000–2002 by Joseph Greenberg. However, he treated them as independent members of a larger family, which he termed Eurasiatic. The inclusion of Ainu is not widely accepted by Altaicists. In fact, no convincing genealogical relationship between Ainu and any other language family has been demonstrated, and it is generally regarded as a language isolate.

Early criticism and rejection

Starting in the late 1950s, some linguists became increasingly critical of even the minimal Altaic family hypothesis, disputing the alleged evidence of a genetic connection between the Turkic, Mongolic and Tungusic languages. Among the earlier critics were Gerard Clauson (1956), Gerhard Doerfer (1963), and Alexander Shcherbak. They claimed that the words and features shared by Turkic, Mongolic, and Tungusic languages were for the most part borrowings and that the rest could be attributed to chance resemblances. In 1988, Doerfer again rejected all the genetic claims over these major groups.
Modern controversy

A major continuing supporter of the Altaic hypothesis has been Sergei Starostin, who published a comparative lexical analysis of the Altaic languages in 1991. He concluded that the analysis supported the Altaic grouping, although it was "older than most other language families in Eurasia, such as Indo-European or Finno-Ugric, and this is the reason why the modern Altaic languages preserve few common elements". In 1991 and again in 1996, Roy Miller defended the Altaic hypothesis and claimed that the criticisms of Clauson and Doerfer apply exclusively to the lexical correspondences, whereas the most pressing evidence for the theory is the similarities in verbal morphology. In 2003, Claus Schönig published a critical overview of the history of the Altaic hypothesis up to that time, siding with the earlier criticisms of Clauson, Doerfer, and Shcherbak. In 2003, Starostin, Anna Dybo and Oleg Mudrak published the Etymological Dictionary of the Altaic Languages, which expanded the 1991 lexical lists and added other phonological and grammatical arguments. Starostin's book was criticized by Stefan Georg in 2004 and 2005, and by Alexander Vovin in 2005. Other defenses of the theory, in response to the criticisms of Georg and Vovin, were published by Starostin in 2005, Blažek in 2006, Robbeets in 2007, and Dybo and G. Starostin in 2008. In 2010, Lars Johanson echoed Miller's 1996 rebuttal to the critics, and called for a muting of the polemic.

List of supporters and critics of the Altaic hypothesis

The list below comprises linguists who have worked specifically on the Altaic problem since the publication of the first volume of Ramstedt's Einführung in 1952. The dates given are those of works concerning Altaic. For supporters of the theory, the version of Altaic they favor is given at the end of the entry, if other than the prevailing one of Turkic–Mongolic–Tungusic–Korean–Japanese.

Major supporters

Pentti Aalto (1955). Turkic–Mongolic–Tungusic–Korean.
Anna V. Dybo (S. Starostin et al. 2003, A. Dybo and G. Starostin 2008).
Frederik Kortlandt (2010).
Karl H. Menges (1975). Common ancestor of Korean, Japanese and traditional Altaic dated back to the 7th or 8th millennium BC (1975: 125).
Roy Andrew Miller (1971, 1980, 1986, 1996). Supported the inclusion of Korean and Japanese.
Oleg A. Mudrak (S. Starostin et al. 2003).
Nicholas Poppe (1965). Turkic–Mongolic–Tungusic and perhaps Korean.
Alexis Manaster Ramer.
Martine Robbeets (2004, 2005, 2007, 2008, 2015) (in the form of "Transeurasian").
G. J. Ramstedt (1952–1957). Turkic–Mongolic–Tungusic–Korean.
George Starostin (A. Dybo and G. Starostin 2008).
Sergei Starostin (1991, S. Starostin et al. 2003).
John C. Street (1962). Turkic–Mongolic–Tungusic and Korean–Japanese–Ainu, grouped as "North Asiatic".
Talat Tekin (1994). Turkic–Mongolic–Tungusic–Korean.

Major critics

Gerard Clauson (1956, 1959, 1962).
Gerhard Doerfer (1963, 1966, 1967, 1968, 1972, 1973, 1974, 1975, 1981, 1985, 1988, 1993).
Susumu Ōno (1970, 2000).
Juha Janhunen (1992, 1995) (tentative support of Mongolic-Tungusic).
Claus Schönig (2003).
Stefan Georg (2004, 2005).
Alexander Vovin (2005, 2010, 2017). Formerly an advocate of Altaic (1994, 1995, 1997, 1999, 2000, 2001), now a critic.
Alexander Shcherbak.
Alexander B. M. Stiven (2008, 2010).

Advocates of alternative hypotheses

James Patrie (1982) and Joseph Greenberg (2000–2002). Turkic–Mongolic–Tungusic and Korean–Japanese–Ainu, grouped in a common taxon (cf. John C. Street 1962), called Eurasiatic by Greenberg.
J. Marshall Unger (1990). Tungusic–Korean–Japanese ("Macro-Tungusic"), with Turkic and Mongolic as separate language families.
Lars Johanson (2010). Agnostic, proponent of a "Transeurasian" verbal morphology not necessarily genealogically linked.
Languages

Tungusic languages

With fewer speakers than the Mongolic or Turkic languages, the Tungusic languages are distributed across most of Eastern Siberia (including Sakhalin Island), northern Manchuria and extending into some parts of Xinjiang and Mongolia. Some Tungusic languages are extinct or endangered as a consequence of language shift to Chinese and Russian. In China, where the Tungusic population is over 10 million, just 46,000 still retain knowledge of their ethnic languages. Scholars have yet to reach agreement on how to classify the Tungusic languages, but two subfamilies have been proposed: South Tungusic (or Manchu) and North Tungusic (Tungus). Jurchen (now extinct; Da Jin 大金), Manchu (critically endangered; Da Qing 大清), Sibe (Xibo 锡伯) and other minor languages comprise the Manchu group. The Northern Tungusic languages can be further subdivided into the Siberian Tungusic languages (Evenki, Lamut, Solon and Negidal) and the Lower Amur Tungusic languages (Nanai, Ulcha and Orok, to name a few). Significant disagreements remain, not only about the linguistic sub-classifications but also about the Chinese names of some ethnic groups, such as the use of Hezhe (赫哲) for the Nanai people.

Mongolic languages

The Mongolic languages are spoken in three geographic areas: Russia (especially Siberia), China and Mongolia. Although Russia and China host significant Mongol populations, many of the Mongol people in these countries do not speak their own ethnic language. The Mongolic languages are usually sub-classified into two groups: the Western languages (Oirat, Kalmyk and related dialects) and the Eastern languages. The latter group can be further subdivided as follows:
Southern Mongol - Ordos, Chakhar and Khorchin
Central Mongol - Khalkha, Darkhat
Northern Mongol - Buriat and dialects, Khamnigan
There are also additional archaic and obscure languages within these groups: Moghol (Afghanistan), Dagur (Manchuria) and languages associated with Gansu and Qinghai.
Linguistically, two branches emerge: Common Mongolic and Khitan/Serbi (sometimes called "para-Mongolic"). Of the latter, only Dagur survives into the present day.

Arguments

For the Altaic grouping

Phonological and grammatical features

The original arguments for grouping the "micro-Altaic" languages within a Uralo-Altaic family were based on such shared features as vowel harmony and agglutination. According to Roy Miller, the most pressing evidence for the theory is the similarities in verbal morphology. The Etymological Dictionary by Starostin and others (2003) proposes a set of sound change laws that would explain the evolution from Proto-Altaic to the descendant languages. For example, although most of today's Altaic languages have vowel harmony, Proto-Altaic as reconstructed by them lacked it; instead, various vowel assimilations between the first and second syllables of words occurred in Turkic, Mongolic, Tungusic, Korean, and Japonic. They also included a number of grammatical correspondences between the languages.

Shared lexicon

Starostin claimed in 1991 that the members of the proposed Altaic group shared about 15–20% of apparent cognates within a 110-word Swadesh-Yakhontov list; in particular, Turkic–Mongolic 20%, Turkic–Tungusic 18%, Turkic–Korean 17%, Mongolic–Tungusic 22%, Mongolic–Korean 16%, and Tungusic–Korean 21%. The 2003 Etymological Dictionary includes a list of 2,800 proposed cognate sets, as well as a few important changes to the reconstruction of Proto-Altaic. The authors tried hard to distinguish loans between Turkic and Mongolic and between Mongolic and Tungusic from cognates; and suggest words that occur in Turkic
and Tungusic but not in Mongolic. All other combinations between the five branches also occur in the book.
It lists 144 items of shared basic vocabulary, including words for such items as 'eye', 'ear', 'neck', 'bone', 'blood', 'water', 'stone', 'sun', and 'two'. Robbeets and Bouckaert (2018) use Bayesian phylolinguistic methods to argue for the coherence of the "narrow" Altaic languages (Turkic, Mongolic, and Tungusic) together with Japonic and Koreanic, which they refer to as the Transeurasian languages. Their results include a phylogenetic tree supporting this grouping. Martine Robbeets (2020) argues that early Transeurasian speakers were originally agriculturalists in northeastern China, only becoming pastoralists later on. Some lexical reconstructions of agricultural terms by Robbeets (2020) are listed below.

Abbreviations

PTEA = Proto-Transeurasian
PA = Proto-Altaic
PTk = Proto-Turkic
PMo = Proto-Mongolic
PTg = Proto-Tungusic
PJK = Proto-Japano-Koreanic
PK = Proto-Koreanic
PJ = Proto-Japonic

Additional family-level reconstructions of agricultural vocabulary from Robbeets et al. (2020):
Proto-Turkic *ek- ‘to sprinkle with the hand; sow’ > *ek-e.g. ‘plow’
Proto-Turkic *tarï- ‘to cultivate (the ground)’ > *tarï-g ‘what is cultivated; crops, main crop, cultivated land’
Proto-Turkic *ko- ‘to put’ > *koːn- ‘to settle down (of animals), to take up residence (of people), to be planted (of plants)’ > *konak ‘foxtail millet (Setaria italica)’
Proto-Turkic *tög- ‘to hit, beat; to pound, crush (food in a mortar); to husk, thresh (cereals)’ > *tögi ‘husked millet; husked rice’
Proto-Turkic *ügür ‘(broomcorn) millet’
Proto-Turkic *arpa ‘barley (Hordeum vulgare)' < ?
Proto-Iranian *arbusā ‘barley’
Proto-Mongolic *amun ‘cereals; broomcorn millet (Panicum miliaceum)’ (Nugteren 2011: 268)
Proto-Mongolic *konag ‘foxtail millet’ < PTk *konak ‘foxtail millet (Setaria italica)’
Proto-Mongolic *budaga ‘cooked cereals; porridge; meal’
Proto-Mongolic *tari- ‘to sow, plant’ (Nugteren 2011: 512–13)
Proto-Macro-Mongolic *püre ‘seed; descendants’
Proto-Tungusic *pisi-ke ‘broomcorn millet (Panicum miliaceum)’
Proto-Tungusic *jiya- ‘foxtail millet (Setaria italica)’
Proto-Tungusic *murgi ‘barley (Hordeum vulgare)’
Proto-Tungusic *üse- ~ *üsi- ‘to plant’, üse ~ üsi ‘seed, seedling’, üsi-n ‘field for cultivation’
Proto-Tungusic *tari- ‘to sow, to plant’
Proto-Koreanic *pisi ‘seed’, *pihi ‘barnyard millet’ < Proto-Transeurasian (PTEA) *pisi-i (sow-NMLZ) ‘seed’ ~ *pisi-ke (sow-RES.NMLZ) ‘what is sown, major crop’
Proto-Koreanic *patʌ-k ‘dry field’ < Proto-Japano-Koreanic (PJK) *pata ‘dry field’ < PTEA *pata ‘field for cultivation’
Proto-Koreanic *mutʌ-k ‘dry land’ < PJK *muta ‘land’ < PTEA *mudu ‘uncultivated land’
Proto-Koreanic *mat-ʌk ‘garden plot’ < PJK *mat ‘plot of land for cultivation’
Proto-Koreanic *non ‘rice paddy field’ < PJK *non ‘field’
Proto-Koreanic *pap ‘any boiled preparation of cereal; boiled rice’
Proto-Koreanic *pʌsal ‘hulled (of any grain); hulled corn of grain; hulled rice’ < Proto-Japonic *wasa-ra ‘early ripening (of any grain)’
Proto-Koreanic *ipi > *pi > *pye ‘(unhusked) rice’ < Proto-Japonic *ip-i (eat-NMLZ) ‘cooked millet, steamed rice’
Proto-Japonic *nuka ‘rice bran’ < PJ *nuka- (remove.NMLZ)
Proto-Japonic *məmi ‘hulled rice’ < PJ *məm-i (move.back.and.forth.with.force-NMLZ)
Proto-Japonic *ipi ‘cooked millet, steamed rice’ < *ip-i (eat-NMLZ) < PK *me(k)i ‘rice offered to a higher rank’ < *mek-i (eat-NMLZ) ‘what you eat, food’ < Proto-Austronesian *ka-en eat-OBJ.NMLZ
Proto-Japonic *wasa- ~ *wəsə- ‘to be early ripening (of crops); an early ripening variety (of any crop); early-ripening rice plant’
Proto-Japonic *usu ‘(rice
and grain) mortar’ < Para-Austronesian *lusuŋ ‘(rice) mortar’; cf. Proto-Austronesian *lusuŋ ‘(rice) mortar’
Proto-Japonic *kəmai ‘dehusked rice’ < Para-Austronesian *hemay < Proto-Macro-Austronesian *Semay ‘cooked rice’; cf. Proto-Austronesian *Semay ‘cooked rice’

Against the grouping

Weakness of lexical and typological data

According to G. Clauson (1956), G. Doerfer (1963), and A. Shcherbak (1963), many of the typological features of the supposed Altaic languages, particularly agglutinative, strongly suffixing morphology and subject–object–verb (SOV) word order, often occur together in languages. Those critics also argued that the words and features shared by Turkic, Mongolic, and Tungusic languages were for the most part borrowings and that the rest could be attributed to chance resemblances. They noted that there was little vocabulary shared by the Turkic and Tungusic languages, though more was shared with the Mongolic languages. They reasoned that, if all three families had a common ancestor, we should expect losses to happen at random, and not only at the geographical margins of the family; the observed pattern is instead consistent with borrowing. According to C. Schönig (2003), after accounting for areal effects, the shared lexicon that could have a common genetic origin was reduced to a small number of monosyllabic lexical roots, including the personal pronouns and a few other deictic and auxiliary items, whose sharing could be explained in other ways; this is not the kind of sharing expected in cases of genetic relationship.

The Sprachbund hypothesis

Instead of a common genetic origin, Clauson, Doerfer, and Shcherbak proposed (in 1956–1966) that Turkic, Mongolic, and Tungusic languages form a Sprachbund: a set of languages with similarities due to convergence through intensive borrowing and long contact, rather than common origin.
Asya Pereltsvaig further observed in 2011 that, in general, genetically related languages and families tend to diverge over time: the earlier forms are more similar than the modern forms. However, she claims that an analysis of the earliest written records of Mongolic and Turkic languages shows the opposite, suggesting that they do not share a common traceable ancestor, but rather have become more similar through language contact and areal effects.

Hypothesis about the original homeland

The prehistory of the peoples speaking the "Altaic" languages is largely unknown. Whereas for the speakers of certain other language families, such as Indo-European, Uralic, and Austronesian, it is possible to frame substantial hypotheses, in the case of the proposed Altaic family much remains to be done. Some scholars have hypothesised a possible Uralic and Altaic homeland in the Central Asian steppes. According to Juha Janhunen, the ancestral languages of Turkic, Mongolic, Tungusic, Korean, and Japanese were spoken in a relatively small area comprising present-day North Korea, southern Manchuria, and southeastern Mongolia. However, Janhunen is sceptical about an affiliation of Japanese to Altaic, while András Róna-Tas remarked that a relationship between Altaic and Japanese, if it ever existed, must be more remote than the relationship of any two of the Indo-European languages. Ramsey stated that "the genetic relationship between Korean and Japanese, if it in fact exists, is probably more complex and distant than we can imagine on the basis of our present state of knowledge". Supporters of the Altaic hypothesis formerly set the date of the Proto-Altaic language at around 4000 BC, but today at around 5000 BC or 6000 BC. This would make Altaic a language family older than Indo-European (around 3000 to 4000 BC according to mainstream hypotheses) but considerably younger than Afroasiatic (c. 10,000 BC, or 11,000 to 16,000 BC according to different sources).
See also

Classification of the Japonic languages
Nostratic languages
Pan-Turanism
Turco-Mongol
Uralo-Siberian languages
Xiongnu
Comparison of Japanese and Korean

References

Aalto, Pentti. 1955. "On the Altaic initial *p-." Central Asiatic Journal 1, 9–16.
Anonymous. 2008. [title missing]. Bulletin of the Society for the Study of the Indigenous Languages of the Americas, 31 March 2008, 264: .
Anthony, David W. 2007. The Horse, the Wheel, and Language. Princeton: Princeton University Press.
Boller, Anton. 1857. Nachweis, daß das Japanische zum ural-altaischen Stamme gehört. Wien.
Clauson, Gerard. 1959. "The case for the Altaic theory examined." Akten des vierundzwanzigsten internationalen Orientalisten-Kongresses, edited by H. Franke. Wiesbaden: Deutsche Morgenländische Gesellschaft, in Komission bei Franz Steiner Verlag.
Clauson, Gerard. 1968. "A lexicostatistical appraisal of the Altaic theory." Central Asiatic Journal 13: 1–23.
Doerfer, Gerhard. 1973. "Lautgesetze und Zufall: Betrachtungen zum Omnicomparativismus." Innsbrucker Beiträge zur Sprachwissenschaft 10.
Doerfer, Gerhard. 1974. "Ist das Japanische mit den altaischen Sprachen verwandt?" Zeitschrift der Deutschen Morgenländischen Gesellschaft 114.1.
Doerfer, Gerhard. 1985. Mongolica-Tungusica. Wiesbaden: Otto Harrassowitz.
Georg, Stefan. 1999/2000. "Haupt und Glieder der altaischen Hypothese: die Körperteilbezeichnungen im Türkischen, Mongolischen und Tungusischen" ('Head and members of the Altaic hypothesis: The body-part designations in Turkic, Mongolic, and Tungusic'). Ural-altaische Jahrbücher, neue Folge B 16, 143–182.
Lee, Ki-Moon and S. Robert Ramsey. 2011. A History of the Korean Language. Cambridge: Cambridge University Press.
Menges, Karl H. 1975. Altajische Studien II. Japanisch und Altajisch. Wiesbaden: Franz Steiner Verlag.
Miller, Roy Andrew. 1980. Origins of the Japanese Language: Lectures in Japan during the Academic Year 1977–1978.
Seattle: University of Washington Press.
The German dialects of South Tyrol have been influenced by local Romance languages, particularly noticeable with the many loanwords from Italian and Ladin. The geographic borderlines between the different accents (isoglosses) coincide strongly with the borders of the states and also with the border with Bavaria, with Bavarians having a markedly different rhythm of speech in spite of the linguistic similarities. References Notes Citations Works cited Further reading: Die deutsche Sprache in Deutschland, Österreich
language for official government documents. This form is known as österreichische Kanzleisprache, or "Austrian chancellery language". It is a very traditional form of the language, probably derived from medieval deeds and documents, and has a very complex structure and vocabulary generally reserved for such documents. For most speakers (even native speakers), this form of the language is generally difficult to understand, as it contains many highly specialised terms for diplomatic, internal, official, and military matters. There are no regional variations, because this special written form has mainly been used by a government that has now for centuries been based in Vienna. The Kanzleisprache is now used less and less, thanks to various administrative reforms that reduced the number of traditional civil servants (Beamte). As a result, Standard Austrian German is replacing it in government and administrative texts. European Union When Austria became a member of the European Union, 23 food-related terms were listed in its accession agreement as having the same legal status as the equivalent terms used in Germany, for example, the words for "potato", "tomato", and "Brussels sprouts". (Examples in "Vocabulary") Austrian German is the only variety of a pluricentric language recognized under international law or EU primary law. Grammar Verbs In Austria, as in the German-speaking parts of Switzerland and in southern Germany, verbs that express a state tend to use sein as the auxiliary verb in the perfect, as do verbs of movement. Verbs which fall into this category include sitzen (to sit), liegen (to lie) and, in parts of Carinthia, schlafen (to sleep). Therefore, the perfect of these verbs would be ich bin gesessen, ich bin gelegen and ich bin geschlafen respectively. In Germany, the words stehen (to stand) and gestehen (to confess) are identical in the present perfect: habe gestanden. 
The Austrian variant avoids this potential ambiguity (bin gestanden from stehen, "to stand"; and habe gestanden from gestehen, "to confess", e.g. "der Verbrecher ist vor dem Richter gestanden und hat gestanden"). In addition, the preterite (simple past) is very rarely used in Austria, especially in the spoken language, with the exception of some modal verbs (e.g. ich sollte, ich wollte). Vocabulary There are many official terms that differ in Austrian German from their usage in most parts of Germany. Words used in Austria are Jänner (January) rather than Januar, Feber (seldom, February) along with Februar, heuer (this year) along with dieses Jahr, Stiege (stairs) along with Treppen, Rauchfang (chimney) instead of Schornstein, many administrative, legal and political terms, and many food terms, including the following: There are, however, some false friends between the two regional varieties: Kasten (wardrobe) along with or instead of Schrank (and, similarly, Eiskasten along with Kühlschrank, fridge), as opposed to Kiste (box) instead of Kasten. Kiste in Germany means both "box" and "chest". Sessel (chair) instead of Stuhl. Sessel means "armchair" in Germany and Stuhl means "stool (faeces)" in both varieties. Dialects Classification Dialects of the Austro-Bavarian group, which also comprises dialects from Bavaria Central Austro-Bavarian (along the main rivers Isar and Danube, spoken in the northern parts of the State of Salzburg, Upper Austria, Lower Austria, and northern Burgenland) Viennese German Southern Austro-Bavarian (in Tyrol, South Tyrol, Carinthia, Styria, and the southern parts of Salzburg and Burgenland) Vorarlbergerisch, spoken in Vorarlberg, is a High Alemannic dialect. Regional accents In addition to the standard variety, in everyday life most Austrians speak one of a number of Upper German dialects. 
While strong forms of the various dialects are not fully mutually intelligible to northern Germans, communication is much easier in Bavaria, especially rural areas, where the Bavarian dialect still predominates as the mother tongue. The Central Austro-Bavarian dialects are more intelligible to speakers of Standard German than the Southern Austro-Bavarian dialects of Tyrol. Viennese, the Austro-Bavarian dialect of Vienna, is seen by many in Germany as quintessentially Austrian. The people of Graz, the capital of Styria, speak yet another dialect which is not very Styrian and more easily understood by people from other parts of Austria than other Styrian dialects, for example from western Styria. Simple words in the various dialects are very similar, but pronunciation is distinct for each and, after listening to a few spoken words, it may be possible for an Austrian to realise which dialect is being spoken. However, in regard to the dialects of the deeper valleys of the Tyrol, other Tyroleans are often unable to understand them. Speakers from the different states of Austria can easily be distinguished from each other by their particular accents (probably more so than Bavarians), those of Carinthia, Styria, Vienna, Upper Austria, and the Tyrol being very characteristic. Speakers from those regions, even those speaking Standard German, can usually be easily identified by their accent, even by an untrained listener. Several of the dialects have been influenced by contact with non-Germanic linguistic groups, such as the dialect of Carinthia, where, in the past, many speakers were bilingual (and, in the southeastern portions of the state, many still are even today) with Slovene, and the dialect of Vienna, which has been influenced by immigration during the Austro-Hungarian period, particularly from what is today Czechia.
limitation of size. Tarski's axiom, which is used in Tarski–Grothendieck set theory and states (in the vernacular) that every set belongs to some Grothendieck universe, is stronger than the axiom of choice. Equivalents There are important statements that, assuming the axioms of ZF but neither AC nor ¬AC, are equivalent to the axiom of choice. The most important among them are Zorn's lemma and the well-ordering theorem. In fact, Zermelo initially introduced the axiom of choice in order to formalize his proof of the well-ordering theorem. Set theory Well-ordering theorem: Every set can be well-ordered. Consequently, every cardinal has an initial ordinal. Tarski's theorem about choice: For every infinite set A, there is a bijective map between the sets A and A×A. Trichotomy: If two sets are given, then either they have the same cardinality, or one has a smaller cardinality than the other. Given two non-empty sets, one has a surjection to the other. The Cartesian product of any family of nonempty sets is nonempty. König's theorem: Colloquially, the sum of a sequence of cardinals is strictly less than the product of a sequence of larger cardinals. (The reason for the term "colloquially" is that the sum or product of a "sequence" of cardinals cannot be defined without some aspect of the axiom of choice.) Every surjective function has a right inverse. Order theory Zorn's lemma: Every non-empty partially ordered set in which every chain (i.e., totally ordered subset) has an upper bound contains at least one maximal element. Hausdorff maximal principle: In any partially ordered set, every totally ordered subset is contained in a maximal totally ordered subset. The restricted principle "Every partially ordered set has a maximal totally ordered subset" is also equivalent to AC over ZF. Tukey's lemma: Every non-empty collection of finite character has a maximal element with respect to inclusion. Antichain principle: Every partially ordered set has a maximal antichain. 
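Several of the set-theoretic equivalents above have effective finite analogues. For instance, the statement "every surjective function has a right inverse" can be illustrated on a finite surjection, where no choice principle is needed because a preimage can simply be scanned for; the function name below is mine, a minimal sketch rather than anything from the source:

```python
# "Every surjective function has a right inverse": given a surjection
# g: A -> B (here a finite dict from domain to codomain), pick one
# preimage for each element of B. For finite A this requires no axiom
# of choice; for arbitrary surjections, picking a preimage for every
# b in B simultaneously is exactly where AC enters.
def right_inverse(g):
    h = {}
    for a, b in g.items():
        h.setdefault(b, a)  # keep the first preimage encountered for b
    return h

g = {1: 'x', 2: 'y', 3: 'x'}  # surjection of {1, 2, 3} onto {'x', 'y'}
h = right_inverse(g)
assert all(g[h[b]] == b for b in set(g.values()))  # g ∘ h = identity on B
```

The composition g ∘ h fixes every element of the codomain, which is precisely the right-inverse property.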
Abstract algebra Every vector space has a basis. Krull's theorem: Every unital ring other than the trivial ring contains a maximal ideal. For every non-empty set S there is a binary operation defined on S that gives it a group structure. (A cancellative binary operation is enough, see group structure and the axiom of choice.) Every free abelian group is projective. Baer's criterion: Every divisible abelian group is injective. Every set is a projective object in the category Set of sets. Functional analysis The closed unit ball of the dual of a normed vector space over the reals has an extreme point. Point-set topology Tychonoff's theorem: Every product of compact topological spaces is compact. In the product topology, the closure of a product of subsets is equal to the product of the closures. Mathematical logic If S is a set of sentences of first-order logic and B is a consistent subset of S, then B is included in a set that is maximal among consistent subsets of S. The special case where S is the set of all first-order sentences in a given signature is weaker, equivalent to the Boolean prime ideal theorem; see the section "Weaker forms" below. Graph theory Every connected graph has a spanning tree. Category theory There are several results in category theory which invoke the axiom of choice for their proof. These results might be weaker than, equivalent to, or stronger than the axiom of choice, depending on the strength of the technical foundations. For example, if one defines categories in terms of sets, that is, as sets of objects and morphisms (usually called a small category), or even locally small categories, whose hom-objects are sets, then there is no category of all sets, and so it is difficult for a category-theoretic formulation to apply to all sets. 
On the other hand, other foundational descriptions of category theory are considerably stronger, and an identical category-theoretic statement of choice may be stronger than the standard formulation, à la class theory, mentioned above. Examples of category-theoretic statements which require choice include: Every small category has a skeleton. If two small categories are weakly equivalent, then they are equivalent. Every continuous functor on a small-complete category which satisfies the appropriate solution set condition has a left adjoint (the Freyd adjoint functor theorem). Weaker forms There are several weaker statements that are not equivalent to the axiom of choice, but are closely related. One example is the axiom of dependent choice (DC). A still weaker example is the axiom of countable choice (ACω or CC), which states that a choice function exists for any countable set of nonempty sets. These axioms are sufficient for many proofs in elementary mathematical analysis, and are consistent with some principles, such as the Lebesgue measurability of all sets of reals, that are disprovable from the full axiom of choice. Other choice axioms weaker than the axiom of choice include the Boolean prime ideal theorem and the axiom of uniformization. The former is equivalent in ZF to Tarski's 1930 ultrafilter lemma: every filter is a subset of some ultrafilter. Results requiring AC (or weaker forms) but weaker than it One of the most interesting aspects of the axiom of choice is the large number of places in mathematics that it shows up. Here are some statements that require the axiom of choice in the sense that they are not provable from ZF but are provable from ZFC (ZF plus AC). Equivalently, these statements are true in all models of ZFC but false in some models of ZF. Set theory The ultrafilter lemma (with ZF) can be used to prove the axiom of choice for finite sets: given a family of non-empty finite sets, their product is not empty. 
Any union of countably many countable sets is itself countable (because it is necessary to choose a particular ordering for each of the countably many sets). If the set A is infinite, then there exists an injection from the natural numbers N to A (see Dedekind infinite). Eight definitions of a finite set are equivalent. Every infinite game whose payoff set is a Borel subset of Baire space is determined. Measure theory The Vitali theorem on the existence of non-measurable sets, which states that there is a subset of the real numbers that is not Lebesgue measurable. The Hausdorff paradox. The Banach–Tarski paradox. Algebra Every field has an algebraic closure. Every field extension has a transcendence basis. Stone's representation theorem for Boolean algebras needs the Boolean prime ideal theorem. The Nielsen–Schreier theorem, that every subgroup of a free group is free. The additive groups of R and C are isomorphic. Functional analysis The Hahn–Banach theorem in functional analysis, allowing the extension of linear functionals. The theorem that every Hilbert space has an orthonormal basis. The Banach–Alaoglu theorem about compactness of sets of functionals. The Baire category theorem about complete metric spaces, and its consequences, such as the open mapping theorem and the closed graph theorem. On every infinite-dimensional topological vector space there is a discontinuous linear map. General topology A uniform space is compact if and only if it is complete and totally bounded. Every Tychonoff space has a Stone–Čech compactification. Mathematical logic Gödel's completeness theorem for first-order logic: every consistent set of first-order sentences has a completion. That is, every consistent set of first-order sentences can be extended to a maximal consistent set. The compactness theorem: If Σ is a set of first-order (or, alternatively, zero-order) sentences such that every finite subset of Σ has a model, then Σ has a model. 
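The counting argument behind "a countable union of countable sets is countable" becomes effective only after an enumeration of each set has been chosen; making those countably many choices at once is where countable choice enters. Once the enumerations are given, a Cantor-style dovetailing lists the union. The sketch below (names and example family are mine, purely illustrative) assumes each enumeration is handed to us as a function n ↦ n-th element:

```python
from itertools import count, islice

# Given *chosen* enumerations of countably many sets, dovetail along
# the diagonals of the N x N grid so that every element of the union
# is listed after finitely many steps. enum(i) is the chosen
# enumeration of the i-th set, as a function n -> n-th element.
def dovetail(enum):
    for d in count(0):           # d-th diagonal: pairs (i, n) with i + n = d
        for i in range(d + 1):
            yield enum(i)(d - i)

# Example family: the i-th set is {i, i+1, i+2, ...}, enumerated by
# n -> i + n. The union is all of N, and every element shows up.
first_ten = list(islice(dovetail(lambda i: lambda n: i + n), 10))
assert first_ten == [0, 1, 1, 2, 2, 2, 3, 3, 3, 3]
```

The same diagonal walk underlies the standard pairing-function proof that N × N is countable; the axiom of countable choice is consumed entirely in fixing the functions enum(i), not in the walk itself.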
Possibly equivalent implications of AC There are several historically important set-theoretic statements implied by AC whose equivalence to AC is open. The partition principle (PP), which was formulated before AC itself, was cited by Zermelo as a justification for believing AC. In 1906, Russell declared PP to be equivalent, but whether the partition principle implies AC is still the oldest open problem in set theory, and the equivalences of the other statements are similarly hard old open problems. In every known model of ZF where choice fails, these statements fail too, but it is unknown if they can hold without choice. Set theory Partition principle: if there is a surjection from A to B, there is an injection from B to A. Equivalently, every partition P of a set S is less than or equal to S in size. Converse Schröder–Bernstein theorem: if two sets have surjections to each other, they are equinumerous. Weak partition principle (WPP): A partition of a set S cannot be strictly larger than S. If WPP holds, this already implies the existence of a non-measurable set. Each of the previous three statements is implied by the preceding one, but it is unknown if any of these implications can be reversed. There is no infinite decreasing sequence of cardinals. The equivalence was conjectured by Schoenflies in 1905. Abstract algebra Hahn embedding theorem: Every ordered abelian group G order-embeds as a subgroup of the additive group ℝ^Ω endowed with a lexicographical order, where Ω is the set of Archimedean equivalence classes of G. This equivalence was conjectured by Hahn in 1907. Stronger forms of the negation of AC If we abbreviate by BP the claim that every set of real numbers has the property of Baire, then BP is stronger than ¬AC, which asserts the nonexistence of any choice function on perhaps only a single set of nonempty sets. Strengthened negations may be compatible with weakened forms of AC. For example, ZF + DC + BP is consistent, if ZF is. 
It is also consistent with ZF + DC that every set of reals is Lebesgue measurable; however, this consistency result, due to Robert M. Solovay, cannot be proved in ZFC itself, but requires a mild large cardinal assumption (the existence of an inaccessible cardinal). The much stronger axiom of determinacy, or AD, implies that every set of reals is Lebesgue measurable, has the property of Baire, and has the perfect set property (all three of these results are refuted by AC itself). ZF + DC + AD is consistent provided that a sufficiently strong large cardinal axiom is consistent (the existence of infinitely many Woodin cardinals). Quine's system of axiomatic set theory, "New Foundations" (NF), takes its name from the title ("New Foundations for Mathematical Logic") of the 1937 article which introduced it. In the NF axiomatic system, the axiom of choice can be disproved. Statements consistent with the negation of AC There are models of Zermelo–Fraenkel set theory in which the axiom of choice is false. We shall abbreviate "Zermelo–Fraenkel set theory plus the negation of the axiom of choice" by ZF¬C. For certain models of ZF¬C, it is possible to prove the negation of some standard facts. Any model of ZF¬C is also a model of ZF, so for each of the following statements, there exists a model of ZF in which that statement is true. In some model, there is a set that can be partitioned into strictly more equivalence classes than the original set has elements, and a function whose domain is strictly smaller than its range. In fact, this is the case in all known models. There is a function f from the real numbers to the real numbers such that f is not continuous at a, but f is sequentially continuous at a, i.e., for any sequence (x_n) converging to a, lim_{n→∞} f(x_n) = f(a). In some model, there is an infinite set of real numbers without a countably infinite subset. In some model, the real numbers are a countable union of countable sets. 
This does not imply that the real numbers are countable: as pointed out above, to show that a countable union of countable sets is itself countable requires the axiom of countable choice. In some model, there is a field with no algebraic closure. In all models of ZF¬C there is a vector space with no basis. In some model, there is a vector space with two bases of different cardinalities. In some model, there is a free complete Boolean algebra on countably many generators. In
function that selects one sock from each pair, without invoking the axiom of choice. Although originally controversial, the axiom of choice is now used without reservation by most mathematicians, and it is included in the standard form of axiomatic set theory, Zermelo–Fraenkel set theory with the axiom of choice (ZFC). One motivation for this use is that a number of generally accepted mathematical results, such as Tychonoff's theorem, require the axiom of choice for their proofs. Contemporary set theorists also study axioms that are not compatible with the axiom of choice, such as the axiom of determinacy. The axiom of choice is avoided in some varieties of constructive mathematics, although there are varieties of constructive mathematics in which the axiom of choice is embraced. Statement A choice function (also called selector or selection) is a function f, defined on a collection X of nonempty sets, such that for every set A in X, f(A) is an element of A. With this concept, the axiom can be stated: for every collection X of nonempty sets, there exists a choice function defined on X. Formally, this may be expressed as: ∀X [∅ ∉ X ⟹ ∃f: X → ⋃X  ∀A ∈ X (f(A) ∈ A)]. Thus, the negation of the axiom of choice states that there exists a collection of nonempty sets that has no choice function: ∃X [∅ ∉ X ∧ ∀f: X → ⋃X  ∃A ∈ X (f(A) ∉ A)]. Each choice function on a collection X of nonempty sets is an element of the Cartesian product of the sets in X. This is not the most general situation of a Cartesian product of a family of sets, where a given set can occur more than once as a factor; however, one can focus on elements of such a product that select the same element every time a given set appears as factor, and such elements correspond to an element of the Cartesian product of all distinct sets in the family. The axiom of choice asserts the existence of such elements; it is therefore equivalent to: Given any family of nonempty sets, their Cartesian product is a nonempty set. 
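The definition of a choice function, and its equivalence with nonemptiness of the Cartesian product, can be made concrete in the finite case, where no axiom is needed because an explicit rule (here "take the minimum") exists. A minimal Python sketch; the collection X and the rule are illustrative, not from the source:

```python
from itertools import product

# A choice function f on a collection X of nonempty sets satisfies
# f(A) ∈ A for every A in X. For a finite collection of finite sets
# one can be written down explicitly — e.g. "take the minimum".
def choice_function(X):
    return {A: min(A) for A in X}

X = [frozenset({3, 1, 4}), frozenset({1, 5}), frozenset({9, 2, 6})]
f = choice_function(X)
assert all(f[A] in A for A in X)  # the defining property: f(A) ∈ A

# Equivalent formulation: the Cartesian product of a family of
# nonempty sets is nonempty; each tuple picks one element per factor.
assert next(iter(product(*X))) is not None
```

Each element of product(*X) is exactly one simultaneous choice of a member from every factor, which is why the product formulation and the choice-function formulation say the same thing.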
Nomenclature ZF, AC, and ZFC In this article and other discussions of the Axiom of Choice the following abbreviations are common: AC – the Axiom of Choice. ZF – Zermelo–Fraenkel set theory omitting the Axiom of Choice. ZFC – Zermelo–Fraenkel set theory, extended to include the Axiom of Choice. Variants There are many other equivalent statements of the axiom of choice. These are equivalent in the sense that, in the presence of other basic axioms of set theory, they imply the axiom of choice and are implied by it. One variation avoids the use of choice functions by, in effect, replacing each choice function with its range. Given any set X of pairwise disjoint non-empty sets, there exists at least one set C that contains exactly one element in common with each of the sets in X. This guarantees for any partition of a set X the existence of a subset C of X containing exactly one element from each part of the partition. Another equivalent axiom only considers collections X that are essentially powersets of other sets: For any set A, the power set of A (with the empty set removed) has a choice function. Authors who use this formulation often speak of the choice function on A, but this is a slightly different notion of choice function. Its domain is the power set of A (with the empty set removed), and so makes sense for any set A, whereas with the definition used elsewhere in this article, the domain of a choice function on a collection of sets is that collection, and so only makes sense for sets of sets. With this alternate notion of choice function, the axiom of choice can be compactly stated as Every set has a choice function. which is equivalent to For any set A there is a function f such that for any non-empty subset B of A, f(B) lies in B. The negation of the axiom can thus be expressed as: There is a set A such that for all functions f (on the set of non-empty subsets of A), there is a B such that f(B) does not lie in B. 
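The power-set variant above ("for any set A, the power set of A with the empty set removed has a choice function") can likewise be checked exhaustively when A is finite. In the sketch below (helper name and the set A are mine, for illustration only) the choice function's domain really is the set of all non-empty subsets of A, matching the alternate notion of choice function described in the text:

```python
from itertools import combinations

# The power-set variant: a choice function for A assigns to every
# non-empty subset B ⊆ A an element f(B) ∈ B. For finite A this can
# be exhibited and verified exhaustively (the rule here is "take min").
def subsets_nonempty(A):
    elems = sorted(A)
    for r in range(1, len(elems) + 1):
        for combo in combinations(elems, r):
            yield frozenset(combo)

A = {2, 5, 7}
f = {B: min(B) for B in subsets_nonempty(A)}
assert len(f) == 2 ** len(A) - 1   # all 7 non-empty subsets of A
assert all(f[B] in B for B in f)   # f(B) ∈ B for every B
```

For infinite A no such exhaustive check is possible, and whether a choice function exists at all is precisely what the axiom asserts.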
Restriction to finite sets The statement of the axiom of choice does not specify whether the collection of nonempty sets is finite or infinite, and thus implies that every finite collection of nonempty sets has a choice function. However, that particular case is a theorem of the Zermelo–Fraenkel set theory without the axiom of choice (ZF); it is easily proved by mathematical induction. In the even simpler case of a collection of one set, a choice function just corresponds to an element, so this instance of the axiom of choice says that every nonempty set has an element; this holds trivially. The axiom of choice can be seen as asserting the generalization of this property, already evident for finite collections, to arbitrary collections. Usage Until the late 19th century, the axiom of choice was often used implicitly, although it had not yet been formally stated. For example, after having established that the set X contains only non-empty sets, a mathematician might have said "let F(s) be one of the members of s for all s in X" to define a function F. In general, it is impossible to prove that F exists without the axiom of choice, but this seems to have gone unnoticed until Zermelo. Not every situation requires the axiom of choice. For finite sets X, the axiom of choice follows from the other axioms of set theory. In that case, it is equivalent to saying that if we have several (a finite number of) boxes, each containing at least one item, then we can choose exactly one item from each box. Clearly, we can do this: We start at the first box, choose an item; go to the second box, choose an item; and so on. The number of boxes is finite, so eventually, our choice procedure comes to an end. The result is an explicit choice function: a function that takes the first box to the first element we chose, the second box to the second element we chose, and so on. 
(A formal proof for all finite sets would use the principle of mathematical induction to prove "for every natural number k, every family of k nonempty sets has a choice function.") This method cannot, however, be used to show that every countable family of nonempty sets has a choice function, as is asserted by the axiom of countable choice. If the method is applied to an infinite sequence (Xi : i∈ω) of nonempty sets, a function is obtained at each finite stage, but there is no stage at which a choice function for the entire family is constructed, and no "limiting" choice function can be constructed, in general, in ZF without the axiom of choice. Examples The nature of the individual nonempty sets in the collection may make it possible to avoid the axiom of choice even for certain infinite collections. For example, suppose that each member of the collection X is a nonempty subset of the natural numbers. Every such subset has a smallest element, so to specify our choice function we can simply say that it maps each set to the least element of that set. This gives us a definite choice of an element from each set, and makes it unnecessary to apply the axiom of choice. The difficulty appears when there is no natural choice of elements from each set. If we cannot make explicit choices, how do we know that our set exists? For example, suppose that X is the set of all non-empty subsets of the real numbers. First we might try to proceed as if X were finite. If we try to choose an element from each set, then, because X is infinite, our choice procedure will never come to an end, and consequently, we shall never be able to produce a choice function for all of X. Next we might try specifying the least element from each set. But some subsets of the real numbers do not have least elements. For example, the open interval (0,1) does not have a least element: if x is in (0,1), then so is x/2, and x/2 is always strictly smaller than x. So this attempt also fails. 
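The contrast drawn above — "least element" works as a canonical rule for sets of natural numbers but fails for subsets of the reals — can be sketched directly. The sets and names below are illustrative; the loop merely witnesses, for a sample point, the halving argument that (0, 1) has no least element:

```python
# For nonempty sets of natural numbers, "take the least element" is an
# explicit rule: it defines a choice function with no appeal to AC.
nat_sets = [frozenset({4, 7}), frozenset({0, 3, 9}), frozenset({5})]
f = {S: min(S) for S in nat_sets}
assert all(f[S] in S for S in nat_sets)

# The same rule fails for subsets of the reals: the open interval
# (0, 1) has no least element, since x/2 lies in (0, 1) and is
# strictly smaller whenever x does.
x = 0.3
for _ in range(20):
    assert 0 < x / 2 < x < 1   # halving never leaves (0, 1)
    x /= 2
```

The well-ordering of the naturals is what makes min total on nonempty subsets; the usual order on the reals is not a well-ordering, which is exactly the gap the axiom of choice (via the well-ordering theorem) is invoked to fill.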
Additionally, consider for instance the unit circle S, and the action on S by a group G consisting of all rational rotations. Namely, these are rotations by angles which are rational multiples of π. Here G is countable while S is uncountable. Hence S breaks up into uncountably many orbits under G. Using the axiom of choice, we could pick a single point from each orbit, obtaining an uncountable subset X of S with the property that all of its translates by G are disjoint from X. The set of those translates partitions the circle into a countable collection of disjoint sets, which are all pairwise congruent. Since X is not measurable for any rotation-invariant countably additive finite measure on S, finding an algorithm to select a point in each orbit requires the axiom of choice. See non-measurable set for more details. The reason that we are able to choose least elements from subsets of the natural numbers is the fact that the natural numbers are well-ordered: every nonempty subset of the natural numbers has a unique least element under the natural ordering. One might say, "Even though the usual ordering of the real numbers does not work, it may be possible to find a different ordering of the real numbers which is a well-ordering. Then our choice function can choose the least element of every set under our unusual ordering." The problem then becomes that of constructing a well-ordering, which turns out to require the axiom of choice for its existence; every set can be well-ordered if and only if the axiom of choice holds. Criticism and acceptance A proof requiring the axiom of choice may establish the existence of an object without explicitly defining the object in the language of set theory. For example, while the axiom of choice implies that there is a well-ordering of the real numbers, there are models of set theory with the axiom of choice in which no well-ordering of the reals is definable. 
Similarly, although a subset of the real numbers that is not Lebesgue measurable can be proved to exist using the axiom of choice, it is consistent that no such set is definable. The axiom of choice proves the existence of these intangibles (objects that are proved to exist, but which cannot be explicitly constructed), which may conflict with some philosophical principles. Because there is no canonical well-ordering of all sets, a construction that relies on a well-ordering may not produce a canonical result, even if a canonical result is desired (as is often the case in category theory). This has been used as an argument against the use of the axiom of choice. Another argument against the axiom of choice is that it implies the existence of objects that may seem counterintuitive. One example is the Banach–Tarski paradox which says that it is possible to decompose the 3-dimensional solid unit ball into finitely many pieces and, using only rotations and translations, reassemble the pieces into two solid balls each with the same volume as the original. The pieces in this decomposition, constructed using the axiom of choice, are non-measurable sets. Despite these seemingly paradoxical facts, most mathematicians accept the axiom of choice as a valid principle for proving new results in mathematics. The debate is interesting enough, however, that it is considered of note when a theorem in ZFC (ZF plus AC) is logically equivalent (with just the ZF axioms) to the axiom of choice, and mathematicians look for results that require the axiom of choice to be false, though this type of deduction is less common than the type which requires the axiom of choice to be true. It is possible to prove many theorems using neither the axiom of choice nor its negation; such statements will be true in any model of ZF, regardless of the truth or falsity of the axiom of choice in that particular model. 
The restriction to ZF renders any claim that relies on either the axiom of choice or its negation unprovable. For example, the Banach–Tarski paradox is neither provable nor disprovable from ZF alone: it is impossible to construct the required decomposition of the unit ball in ZF, but also impossible to prove there is no such decomposition. Similarly, all the statements listed below which require choice or some weaker version thereof for their proof are unprovable in ZF, but since each is provable in ZF plus the axiom of choice, there are models of ZF in which each statement is true. Statements such as the Banach–Tarski paradox can be rephrased as conditional statements, for example, "If AC holds, then the decomposition in the Banach–Tarski paradox exists." Such conditional statements are provable in ZF when the original statements are provable from ZF and the axiom of choice. In constructive mathematics As discussed above, in ZFC, the axiom of choice is able to provide "nonconstructive proofs" in which the existence of an object is proved although no explicit example is constructed. ZFC, however, is still formalized in classical logic. The axiom of choice has also been thoroughly studied in the context of constructive mathematics, where non-classical logic is employed. The status of the axiom of choice varies between different varieties of constructive mathematics. In Martin-Löf type theory and higher-order Heyting arithmetic, the appropriate statement of the axiom of choice is (depending on approach) included as an axiom or provable as a theorem. Errett Bishop argued that the axiom of choice was constructively acceptable. In constructive set theory, however, Diaconescu's theorem shows that the axiom of choice implies the law of excluded middle (unlike in Martin-Löf type theory, where it does not). Thus the axiom of choice is not generally available in constructive set theory.
A cause for this difference is that the axiom of choice in type theory does not have the extensionality properties that the axiom of choice in constructive set theory does. Some results in constructive set theory use the axiom of countable choice or the axiom of dependent choice, which do not imply the law of the excluded middle in constructive set theory. Although the axiom of countable choice
a composite title-name which derived from Turkic *es (great, old), and *til (sea, ocean), and the suffix /a/. The stressed back syllabic til assimilated the front member es, so it became *as. It is a nominative, in form of attíl- (< *etsíl < *es tíl) with the meaning "the oceanic, universal ruler". J. J. Mikkola connected it with Turkic āt (name, fame). As another Turkic possibility, H. Althof (1902) considered it was related to Turkish atli (horseman, cavalier), or Turkish at (horse) and dil (tongue). Maenchen-Helfen argues that Pritsak's derivation is "ingenious but for many reasons unacceptable", while dismissing Mikkola's as "too farfetched to be taken seriously". M. Snædal similarly notes that none of these proposals has achieved wide acceptance. Criticizing the proposals of finding Turkic or other etymologies for Attila, Doerfer notes that King George VI of the United Kingdom had a name of Greek origin, and Süleyman the Magnificent had a name of Arabic origin, yet that does not make them Greeks or Arabs: it is therefore plausible that Attila would have a name not of Hunnic origin. Historian Hyun Jin Kim, however, has argued that the Turkic etymology is "more probable". M. Snædal, in a paper that rejects the Germanic derivation but notes the problems with the existing proposed Turkic etymologies, argues that Attila's name could have originated from Turkic-Mongolian at, adyy/agta (gelding, warhorse) and Turkish atli (horseman, cavalier), meaning "possessor of geldings, provider of warhorses". Historiography and source The historiography of Attila is faced with a major challenge, in that the only complete sources are written in Greek and Latin by the enemies of the Huns. Attila's contemporaries left many testimonials of his life, but only fragments of these remain. 
Priscus was a Byzantine diplomat and historian who wrote in Greek, and he was both a witness to and an actor in the story of Attila, as a member of the embassy of Theodosius II at the Hunnic court in 449. He was obviously biased by his political position, but his writing is a major source for information on the life of Attila, and he is the only person known to have recorded a physical description of him. He wrote a history of the late Roman Empire in eight books covering the period from 430 to 476. Only fragments of Priscus' work remain. It was cited extensively by 6th-century historians Procopius and Jordanes, especially in Jordanes' The Origin and Deeds of the Goths, which contains numerous references to Priscus's history, and it is also an important source of information about the Hunnic empire and its neighbors. Jordanes describes the legacy of Attila and the Hunnic people for a century after Attila's death. Marcellinus Comes, a chancellor of Justinian during the same era, also describes the relations between the Huns and the Eastern Roman Empire. Numerous ecclesiastical writings contain useful but scattered information, sometimes difficult to authenticate or distorted by years of hand-copying between the 6th and 17th centuries. The Hungarian writers of the 12th century wished to portray the Huns in a positive light as their glorious ancestors, and so repressed certain historical elements and added their own legends. The literature and knowledge of the Huns themselves was transmitted orally, by means of epics and chanted poems that were handed down from generation to generation. Indirectly, fragments of this oral history have reached us via the literature of the Scandinavians and Germans, neighbors of the Huns who wrote between the 9th and 13th centuries. Attila is a major character in many medieval epics, such as the Nibelungenlied, as well as various Eddas and sagas.
Archaeological investigation has uncovered some details about the lifestyle, art, and warfare of the Huns. There are a few traces of battles and sieges, but the tomb of Attila and the location of his capital have not yet been found. Early life and background The Huns were a group of Eurasian nomads, appearing from east of the Volga, who migrated further into Western Europe c. 370 and built up an enormous empire there. Their main military techniques were mounted archery and javelin throwing. They were in the process of developing settlements before their arrival in Western Europe, yet the Huns were a society of pastoral warriors whose primary form of nourishment was meat and milk, products of their herds. The origin and language of the Huns have been the subject of debate for centuries. According to some theories, their leaders at least may have spoken a Turkic language, perhaps closest to the modern Chuvash language. One scholar suggests a relationship to Yeniseian. According to the Encyclopedia of European Peoples, "the Huns, especially those who migrated to the west, may have been a combination of central Asian Turkic, Mongolic, and Ugric stocks". Attila's father Mundzuk was the brother of kings Octar and Ruga, who reigned jointly over the Hunnic empire in the early fifth century. This form of diarchy was recurrent with the Huns, but historians are unsure whether it was institutionalized, merely customary, or an occasional occurrence. His family was from a noble lineage, but it is uncertain whether they constituted a royal dynasty. Attila's birthdate is debated; journalist Éric Deschodt and writer Herman Schreiber have proposed a date of 395. However, historian Iaroslav Lebedynsky and archaeologist Katalin Escher prefer an estimate between the 390s and the first decade of the fifth century. Several historians have proposed 406 as the date. Attila grew up in a rapidly changing world. His people were nomads who had only recently arrived in Europe.
They crossed the Volga river during the 370s and annexed the territory of the Alans, then attacked the Gothic kingdom between the Carpathian mountains and the Danube. They were a very mobile people, whose mounted archers had acquired a reputation for invincibility, and the Germanic tribes seemed unable to withstand them. Vast populations fleeing the Huns moved from Germania into the Roman Empire in the west and south, and along the banks of the Rhine and Danube. In 376, the Goths crossed the Danube, initially submitting to the Romans but soon rebelling against Emperor Valens, whom they killed in the Battle of Adrianople in 378. Large numbers of Vandals, Alans, Suebi, and Burgundians crossed the Rhine and invaded Roman Gaul on December 31, 406 to escape the Huns. The Roman Empire had been split in half since 395 and was ruled by two distinct governments, one based in Ravenna in the West, and the other in Constantinople in the East. The Roman Emperors, both East and West, were generally from the Theodosian family in Attila's lifetime (despite several power struggles). The Huns dominated a vast territory with nebulous borders determined by the will of a constellation of ethnically varied peoples. Some were assimilated to Hunnic nationality, whereas many retained their own identities and rulers but acknowledged the suzerainty of the king of the Huns. The Huns were also the indirect source of many of the Romans' problems, driving various Germanic tribes into Roman territory, yet relations between the two empires were cordial: the Romans used the Huns as mercenaries against the Germans and even in their civil wars. Thus, the usurper Joannes was able to recruit thousands of Huns for his army against Valentinian III in 424. It was Aëtius, later Patrician of the West, who managed this operation. They exchanged ambassadors and hostages, the alliance lasting from 401 to 450 and permitting the Romans numerous military victories. 
The Huns considered the Romans to be paying them tribute, whereas the Romans preferred to view this as payment for services rendered. The Huns had become a great power by the time that Attila came of age during the reign of his uncle Ruga, to the point that Nestorius, the Patriarch of Constantinople, deplored the situation with these words: "They have become both masters and slaves of the Romans". Campaigns against the Eastern Roman Empire The death of Rugila (also known as Rua or Ruga) in 434 left the sons of his brother Mundzuk, Attila and Bleda, in control of the united Hun tribes. At the time of the two brothers' accession, the Hun tribes were bargaining with Eastern Roman Emperor Theodosius II's envoys for the return of several renegades who had taken refuge within the Eastern Roman Empire, possibly Hunnic nobles who disagreed with the brothers' assumption of leadership. The following year, Attila and Bleda met with the imperial legation at Margus (Požarevac), all seated on horseback in the Hunnic manner, and negotiated an advantageous treaty. The Romans agreed to return the fugitives, to double their previous tribute of 350 Roman pounds (c. 115 kg) of gold, to open their markets to Hunnish traders, and to pay a ransom of eight solidi for each Roman taken prisoner by the Huns. The Huns, satisfied with the treaty, decamped from the Roman Empire and returned to their home in the Great Hungarian Plain, perhaps to consolidate and strengthen their empire. Theodosius used this opportunity to strengthen the walls of Constantinople, building the city's first sea wall, and to build up his border defenses along the Danube. The Huns remained out of Roman sight for the next few years while they invaded the Sassanid Empire. They were defeated in Armenia by the Sassanids, abandoned their invasion, and turned their attentions back to Europe. 
In 440, they reappeared in force on the borders of the Roman Empire, attacking the merchants at the market on the north bank of the Danube that had been established by the treaty of 435. Crossing the Danube, they laid waste to the cities of Illyricum and forts on the river, including (according to Priscus) Viminacium, a city of Moesia. Their advance began at Margus, where they demanded that the Romans turn over a bishop who had retained property that Attila regarded as his. While the Romans discussed the bishop's fate, he slipped away secretly to the Huns and betrayed the city to them. While the Huns attacked city-states along the Danube, the Vandals (led by Geiseric) captured the Western Roman province of Africa and its capital of Carthage. Africa was the richest province of the Western Empire and a main source of food for Rome. The Sassanid Shah Yazdegerd II invaded Armenia in 441. The Romans stripped the Balkan area of forces, sending them to Sicily in order to mount an expedition against the Vandals in Africa. This left Attila and Bleda a clear path through Illyricum into the Balkans, which they invaded in 441. The Hunnish army sacked Margus and Viminacium, and then took Singidunum (Belgrade) and Sirmium. During 442, Theodosius recalled his troops from Sicily and ordered a large issue of new coins to finance operations against the Huns. He believed that he could defeat the Huns and refused the Hunnish kings' demands. Attila responded with a campaign in 443. For the first time (as far as the Romans knew) his forces were equipped with battering rams and rolling siege towers, with which they successfully assaulted the military centers of Ratiara and Naissus (Niš) and massacred the inhabitants. Priscus said "When we arrived at Naissus we found the city deserted, as though it had been sacked; only a few sick persons lay in the churches. 
We halted at a short distance from the river, in an open space, for all the ground adjacent to the bank was full of the bones of men slain in war." Advancing along the Nišava River, the Huns next took Serdica (Sofia), Philippopolis (Plovdiv), and Arcadiopolis (Lüleburgaz). They encountered and destroyed a Roman army outside Constantinople but were stopped by the double walls of the Eastern capital. They defeated a second army near Callipolis (Gelibolu). Theodosius, unable to make effective armed resistance, admitted defeat, sending the Magister militum per Orientem Anatolius to negotiate peace terms. The terms were harsher than the previous treaty: the Emperor agreed to hand over 6,000 Roman pounds (c. 2,000 kg) of gold as punishment for having disobeyed the terms of the treaty during the invasion; the yearly tribute was tripled, rising to 2,100 Roman pounds (c. 700 kg) in gold; and the ransom for each Roman prisoner rose to 12 solidi. The Huns' demands were met for a time, and the Hun kings withdrew into the interior of their empire. Bleda died following the Huns' withdrawal from Byzantium (probably around 445). Attila then took the throne for himself, becoming the sole ruler of the Huns. Solitary kingship In 447, Attila again rode south into the Eastern Roman Empire through Moesia. The Roman army, under Gothic magister militum Arnegisclus, met him in the Battle of the Utus and was defeated, though not without inflicting heavy losses. The Huns were left unopposed and rampaged through the Balkans as far as Thermopylae. Constantinople itself was saved by the Isaurian troops of magister militum per Orientem Zeno and protected by the intervention of prefect Constantinus, who organized the reconstruction of the walls that had been previously damaged by earthquakes and, in some places, the construction of a new line of fortification in front of the old.
Callinicus, in his Life of Saint Hypatius, wrote: In the west In 450, Attila proclaimed his intent to attack the Visigoth kingdom of Toulouse by making an alliance with Emperor Valentinian III. He had previously been on good terms with the Western Roman Empire and its influential general Flavius Aëtius. Aëtius had spent a brief exile among the Huns in 433, and the troops that Attila provided against the Goths and Bagaudae had helped earn him the largely honorary title of magister militum in the west. The gifts and diplomatic efforts of Geiseric, who opposed and feared the Visigoths, may also have influenced Attila's plans. However, Valentinian's sister was Honoria, who had sent the Hunnish king a plea for help—and her engagement ring—in order to escape her forced betrothal to
post-ice age sea levels continuing to rise for another 3,000 years after that. The subsequent Bronze Age civilizations of Greece and the Aegean Sea have given rise to the general term Aegean civilization. In ancient times, the sea was the birthplace of two ancient civilizations – the Minoans of Crete and the Myceneans of the Peloponnese. The Minoan civilization was a Bronze Age civilization on the island of Crete and other Aegean islands, flourishing from around 3000 to 1450 BC before a period of decline, finally ending at around 1100 BC. It represented the first advanced civilization in Europe, leaving behind massive building complexes, tools, stunning artwork, writing systems, and a massive network of trade. The Minoan period saw extensive trade between Crete, Aegean, and Mediterranean settlements, particularly the Near East. The most notable Minoan palace is that of Knossos, followed by that of Phaistos. The Mycenaean Greeks arose on the mainland, becoming the first advanced civilization in mainland Greece, which lasted from approximately 1600 to 1100 BC. It is believed that the site of Mycenae, which sits close to the Aegean coast, was the center of Mycenaean civilization. The Mycenaeans introduced several innovations in the fields of engineering, architecture and military infrastructure, while trade over vast areas of the Mediterranean, including the Aegean, was essential for the Mycenaean economy. Their syllabic script, the Linear B, offers the first written records of the Greek language and their religion already included several deities that can also be found in the Olympic Pantheon. Mycenaean Greece was dominated by a warrior elite society and consisted of a network of palace-centered states that developed rigid hierarchical, political, social and economic systems. At the head of this society was the king, known as wanax. 
The civilization of Mycenaean Greeks perished with the collapse of Bronze Age culture in the eastern Mediterranean, to be followed by the so-called Greek Dark Ages. It is undetermined what caused the collapse of the Mycenaeans. During the Greek Dark Ages, writing in the Linear B script ceased, vital trade links were lost, and towns and villages were abandoned. Ancient Greece The Archaic period followed the Greek Dark Ages in the 8th century BC. Greece became divided into small self-governing communities, and adopted the Phoenician alphabet, modifying it to create the Greek alphabet. By the 6th century BC several cities had emerged as dominant in Greek affairs: Athens, Sparta, Corinth, and Thebes, of which Athens, Sparta, and Corinth were closest to the Aegean Sea. Each of them had brought the surrounding rural areas and smaller towns under their control, and Athens and Corinth had become major maritime and mercantile powers as well. In the 8th and 7th centuries BC many Greeks emigrated to form colonies in Magna Graecia (Southern Italy and Sicily), Asia Minor and further afield. The Aegean Sea was the setting for one of the most pivotal naval engagements in history, when on September 20, 480 BC, the Athenian fleet gained a decisive victory over the Persian fleet of Xerxes I of Persia at the Battle of Salamis, ending any further attempt at western expansion by the Achaemenid Empire. The Aegean Sea would later come to be under the control, albeit briefly, of the Kingdom of Macedonia. Philip II and his son Alexander the Great led a series of conquests that led not only to the unification of the Greek mainland and the control of the Aegean Sea under their rule, but also the destruction of the Achaemenid Empire. After Alexander the Great's death, his empire was divided among his generals. Cassander became king of the Hellenistic kingdom of Macedon, which held territory along the western coast of the Aegean, roughly corresponding to modern-day Greece.
The Kingdom of Lysimachus had control over the sea's eastern coast. Greece had entered the Hellenistic period. Roman rule The Macedonian Wars were a series of conflicts fought by the Roman Republic and its Greek allies in the eastern Mediterranean against several different major Greek kingdoms. They resulted in Roman control or influence over the eastern Mediterranean basin, including the Aegean, in addition to their hegemony in the western Mediterranean after the Punic Wars. During Roman rule, the land around the Aegean Sea fell under the provinces of Achaea, Macedonia, Thracia, Asia and Creta et Cyrenaica (island of Crete). Medieval period The fall of the Western Roman Empire allowed the Eastern Roman Empire, known to historians as the Byzantine Empire, to continue Roman control over the Aegean Sea. However, its territory would later be threatened by the Early Muslim conquests initiated by Muhammad in the 7th century. Although the Rashidun Caliphate did not manage to obtain land along the coast of the Aegean Sea, its conquest of the eastern Anatolian peninsula as well as Egypt, the Levant, and North Africa left the Byzantine Empire weakened. The Umayyad Caliphate expanded the territorial gains of the Rashidun Caliphate, conquering much of North Africa, and threatened the Byzantine Empire's control of Western Anatolia, where it meets the Aegean Sea. During the 820s, Crete was conquered by a group of Andalusian exiles led by Abu Hafs Umar al-Iqritishi, and it became an independent Islamic state. The Byzantine Empire launched a campaign that took most of the island back in 842 and 843 under Theoktistos, but the reconquest was not completed and was soon reversed. Later attempts by the Byzantine Empire to recover the island were without success. For the approximately 135 years of its existence, the emirate of Crete was one of the major foes of Byzantium.
Crete commanded the sea lanes of the Eastern Mediterranean and functioned as a forward base and haven for Muslim corsair fleets that ravaged the Byzantine-controlled shores of the Aegean Sea. Crete returned to Byzantine rule under Nikephoros Phokas, who launched a huge campaign against the Emirate of Crete in 960–961. Meanwhile, the Bulgarian Empire threatened Byzantine control of Northern Greece and the Aegean coast to the south. Under Presian I and his successor Boris I, the Bulgarian Empire managed to obtain a small portion of the northern Aegean coast. Simeon I of Bulgaria led Bulgaria to its greatest territorial expansion, and managed to conquer much of the northern and western coasts of the Aegean. The Byzantines later regained control. The Second Bulgarian Empire achieved similar success, again along the northern and western coasts, under Ivan Asen II of Bulgaria. The Seljuq Turks, under the Seljuk Empire, invaded the Byzantine Empire in 1068, from which they annexed almost all the territories of Anatolia, including the east coast of the Aegean Sea, during the reign of Alp Arslan, the second Sultan of the Seljuk Empire. After the death of his successor, Malik Shah I, the empire was divided, and Malik Shah was succeeded in Anatolia by Kilij Arslan I, who founded the Sultanate of Rum. The Byzantines yet again recaptured the eastern coast of the Aegean. After Constantinople was occupied by Western European and Venetian forces during the Fourth Crusade, the area around the Aegean Sea was fragmented into multiple entities, including the Latin Empire, the Kingdom of Thessalonica, the Empire of Nicaea, the Principality of Achaea, and the Duchy of Athens. The Venetians created the maritime state of the Duchy of the Archipelago, which included all the Cyclades except Mykonos and Tinos. The Empire of Nicaea, a Byzantine rump state, managed to recapture Constantinople from the Latins in 1261 and defeat Epirus.
Byzantine successes were not to last; the Ottomans would conquer the area around the Aegean coast, but before their expansion the Byzantine Empire had already been weakened by internal conflict. By the late 14th century the Byzantine Empire had lost all control of the coast of the Aegean Sea and could exercise power only around its capital, Constantinople. The Ottoman Empire then gained control of all the Aegean coast with the exception of Crete, which was a Venetian colony until 1669. Modern period The Greek War of Independence allowed the establishment of a Greek state on the coast of the Aegean from 1829 onward. The Ottoman Empire held a presence over the sea for over 500 years until its dissolution following World War I, when it was replaced by modern Turkey. During the war, Greece gained control over the area around the northern coast of the Aegean. By the 1930s, Greece and Turkey had largely assumed their present-day borders. In the Italo-Turkish War of 1912, Italy captured the Dodecanese islands and occupied them thereafter, reneging on the 1919 Venizelos–Tittoni agreement to cede them to Greece. The Greco-Italian War took place from October 1940 to April 1941 as part of the Balkans Campaign of World War II. The Italian war aim was to establish a Greek puppet state, which would permit the Italian annexation of the Sporades and the Cyclades islands in the Aegean Sea, to be administered as a part of the Italian Aegean Islands. The German invasion resulted in the Axis occupation of Greece. The German troops evacuated Athens on 12 October 1944, and by the end of the month, they had withdrawn from mainland Greece. Greece was then liberated by Allied troops. Economy and politics Many of the islands in the Aegean have safe harbours and bays. In ancient times, navigation through the sea was easier than travelling across the rough terrain of the Greek mainland, and to some extent, the coastal areas of Anatolia.
Many of the islands are volcanic, and marble and iron are mined on other islands. The larger islands have some fertile valleys and plains. The Persian Achaemenid dynasty built one of the greatest highways of the ancient world, the Royal Road, which ran about 2,400 km from the interior of the Persian Empire to the Aegean Sea. A part of the road passed through southwestern Armenia, giving the region an excellent opportunity to participate in international trade. Of the main islands in the Aegean Sea, two belong to Turkey – Bozcaada (Tenedos) and Gökçeada (Imbros); the rest belong to Greece. Between the two countries, there are disputes over several aspects of political control over the Aegean space, including the size of territorial waters, air control and the delimitation of economic rights to the continental shelf. These issues are known as the Aegean dispute. Transport Multiple ports are located along the Greek and Turkish coasts of the Aegean Sea. The port of Piraeus in Athens is the chief port in Greece, the largest passenger port in Europe and the third largest in the world, servicing about 20 million passengers annually. With a throughput of 1.4 million
third extends across the Peloponnese and Crete to Rhodes, dividing the Aegean from the Mediterranean. The bays and gulfs of the Aegean, beginning at the south and moving clockwise, include on Crete the Mirabello, Almyros, Souda and Chania bays or gulfs; on the mainland, the Myrtoan Sea to the west with the Argolic Gulf; the Saronic Gulf northwestward; the Petalies Gulf, which connects with the South Euboic Sea; the Pagasetic Gulf, which connects with the North Euboic Sea; the Thermian Gulf northwestward; the Chalkidiki Peninsula, including the Cassandra and the Singitic Gulfs; and, northward, the Strymonian Gulf and the Gulf of Kavala. The rest are in Turkey: the Saros Gulf, Edremit Gulf, Dikili Gulf, Gulf of Çandarlı, Gulf of İzmir, Gulf of Kuşadası, Gulf of Gökova and Güllük Gulf. The Aegean Sea is connected to the Sea of Marmara by the Dardanelles, also known from Classical Antiquity as the Hellespont. The Dardanelles are located to the northeast of the sea. It ultimately connects with the Black Sea through the Bosphorus strait, upon which lies the city of Istanbul. The Dardanelles and the Bosphorus are known as the Turkish Straits. Extent According to the International Hydrographic Organization, the limits of the Aegean Sea are as follows: On the south: A line running from Cape Aspro (28°16′E) in Asia Minor, to Cum Burnù (Capo della Sabbia) the Northeast extreme of the Island of Rhodes, through the island to Cape Prasonisi, the Southwest point thereof, on to Vrontos Point (35°33′N) in Skarpanto [Karpathos], through this island to Castello Point, the South extreme thereof, across to Cape Plaka (East extremity of Crete), through Crete to Agria Grabusa, the Northwest extreme thereof, thence to Cape Apolitares in Antikithera Island, through the island to Psira Rock (off the Northwest point) and across to Cape Trakhili in Kithera Island, through Kithera to the Northwest point (Cape Karavugia) and thence to Cape Santa Maria in the Morea.
In the Dardanelles: A line joining Kum Kale (26°11′E) and Cape Helles. Hydrography Aegean surface water circulates in a counterclockwise gyre, with hypersaline Mediterranean water moving northward along the west coast of Turkey, before being displaced by less dense Black Sea outflow. The dense Mediterranean water sinks below the Black Sea inflow to a depth of , then flows through the Dardanelles Strait and into the Sea of Marmara at velocities of . The Black Sea outflow moves westward along the northern Aegean Sea, then flows southwards along the east coast of Greece. The physical oceanography of the Aegean Sea is controlled mainly by the regional climate, the fresh water discharge from major rivers draining southeastern Europe, and the seasonal variations in the Black Sea surface water outflow through the Dardanelles Strait. Analysis of the Aegean during 1991 and 1992 revealed three distinct water masses: Aegean Sea Surface Water – thick veneer, with summer temperatures of 21–26 °C and winter temperatures ranging from in the north to in the south. Aegean Sea Intermediate Water – extends from 40 to 50 m to with temperatures ranging from 11 to 18 °C. Aegean Sea Bottom Water – occurring at depths below 500–1000 m with a very uniform temperature (13–14 °C) and salinity (3.91–3.92%). Climate The climate of the Aegean Sea largely reflects the climate of Greece and Western Turkey, which is to say, predominantly Mediterranean. According to the Köppen climate classification, most of the Aegean is classified as Hot-summer Mediterranean (Csa), with hotter and drier summers along with milder and wetter winters. However, high temperatures during summers are generally not quite as high as those in arid or semiarid climates due to the presence of a large body of water. This moderating effect is most pronounced along the west and east coasts of the Aegean, and within the Aegean islands.
In the north of the Aegean Sea, the climate is instead classified as Cold semi-arid (BSk), which features cooler summers than Hot-summer Mediterranean climates. The Etesian winds are a dominant weather influence in the Aegean Basin. The table below lists climate conditions of some major Aegean cities: Population Numerous Greek and Turkish settlements are located along their mainland coasts, as well as in towns on the Aegean islands. The largest cities are Athens and Thessaloniki in Greece and İzmir in Turkey. The most populated of the Aegean islands is Crete, followed by Euboea and Rhodes. Biogeography and ecology Protected Areas Greece has established several marine protected areas along its coasts. According to the Network of Managers of Marine Protected Areas in the Mediterranean (MedPAN), four Greek MPAs are participating in the Network. These include Alonnisos Marine Park, though the Missolonghi–Aitoliko Lagoons and the island of Zakynthos are not on the Aegean. History Ancient history The current coastline dates back to about 4000 BC. Before that time, at the peak of the last ice age (about 18,000 years ago), sea levels everywhere were 130 metres lower, and there were large well-watered coastal plains instead of much of the northern Aegean. When they were first occupied, the present-day islands, including Milos with its important obsidian production, were probably still connected to the mainland. The present coastal arrangement appeared around 9,000 years ago, with post-ice age sea levels continuing to rise for another 3,000 years after that. The subsequent Bronze Age civilizations of Greece and the Aegean Sea have given rise to the general term Aegean civilization. In ancient times, the sea was the birthplace of two ancient civilizations – the Minoans of Crete and the Mycenaeans of the Peloponnese.
The Minoan civilization was a Bronze Age civilization on the island of Crete and other Aegean islands, flourishing from around 3000 to 1450 BC before a period of decline, finally ending around 1100 BC. It represented the first advanced civilization in Europe, leaving behind massive building complexes, tools, stunning artwork, writing systems, and a massive network of trade. The Minoan period saw extensive trade between Cretan, Aegean, and Mediterranean settlements, particularly in the Near East. The most notable Minoan palace is that of Knossos, followed by that of Phaistos. The Mycenaean Greeks arose on the mainland, becoming the first advanced civilization in mainland Greece, which lasted from approximately 1600 to 1100 BC. It is believed that the site of Mycenae, which sits close to the Aegean coast, was the center of Mycenaean civilization. The Mycenaeans introduced several innovations in the fields of engineering, architecture and military infrastructure, while trade over vast areas of the Mediterranean, including the Aegean, was essential for the Mycenaean economy. Their syllabic script, Linear B, offers the first written records of the Greek language, and their religion already included several deities that can also be found in the Olympian pantheon. Mycenaean Greece was dominated by a warrior elite society and consisted of a network of palace-centered states that developed rigid hierarchical, political, social and economic systems. At the head of this society was the king, known as wanax. The civilization of the Mycenaean Greeks perished with the collapse of Bronze Age culture in the eastern Mediterranean, to be followed by the so-called Greek Dark Ages. What caused the collapse of the Mycenaeans remains undetermined. During the Greek Dark Ages, writing in the Linear B script ceased, vital trade links were lost, and towns and villages were abandoned. Ancient Greece The Archaic period followed the Greek Dark Ages in the 8th century BC.
Greece became divided into small self-governing communities, and adopted the Phoenician alphabet, modifying it to create the Greek alphabet. By the 6th century BC several cities had emerged as dominant in Greek affairs: Athens, Sparta, Corinth, and Thebes, of which Athens, Sparta, and Corinth were closest to the Aegean Sea. Each of them had brought the surrounding rural areas and smaller towns under their control, and Athens and Corinth had become major maritime and mercantile powers as well. In the 8th and 7th centuries BC many Greeks emigrated to form colonies in Magna Graecia (Southern Italy and Sicily), Asia Minor and further afield. The Aegean Sea was the setting for one of the most pivotal naval engagements in history, when on September 20, 480 B.C. the Athenian fleet gained a decisive victory over the Persian fleet of Xerxes I of Persia at the Battle of Salamis, ending any further attempt at westward expansion by the Achaemenid Empire. The Aegean Sea would later come to be under the control, albeit briefly, of the Kingdom of Macedonia. Philip II and his son Alexander the Great led a series of conquests that led not only to the unification of the Greek mainland and the control of the Aegean Sea under their rule, but also to the destruction of the Achaemenid Empire. After Alexander the Great's death, his empire was divided among his generals. Cassander became king of the Hellenistic kingdom of Macedon, which held territory along the western coast of the Aegean, roughly corresponding to modern-day Greece. The Kingdom of Lysimachus had control over the sea's eastern coast. Greece had entered the Hellenistic period. Roman rule The Macedonian Wars were a series of conflicts fought by the Roman Republic and its Greek allies in the eastern Mediterranean against several different major Greek kingdoms.
They resulted in Roman control or influence over the eastern Mediterranean basin, including the Aegean, in addition to their hegemony in the western Mediterranean after the Punic Wars. During Roman rule, the land around the Aegean Sea fell under the provinces of Achaea, Macedonia, Thracia, Asia and Creta et Cyrenaica (the island of Crete). Medieval period The Fall of the Western Roman Empire allowed its successor state, the Byzantine Empire, to continue Roman control over the Aegean Sea. However, its territory would later be threatened by the Early Muslim conquests initiated by Muhammad in the 7th century. Although the Rashidun Caliphate did not manage to obtain land along the coast of the Aegean Sea, its conquest of the Eastern Anatolian peninsula as well as Egypt, the Levant, and North Africa left the Byzantine Empire weakened. The Umayyad Caliphate expanded the territorial gains of the Rashidun Caliphate, conquering much of North Africa, and threatened the Byzantine Empire's control of Western Anatolia, where it meets the Aegean Sea. During the 820s, Crete was conquered by a group of Andalusian exiles led by Abu Hafs Umar al-Iqritishi, and it became an independent Islamic state. The Byzantine Empire launched a campaign that took most of the island back in 842 and 843 under Theoktistos, but the reconquest was not completed and was soon reversed. Later attempts by the Byzantine Empire to recover the island were without success. For the approximately 135 years of its existence, the emirate of Crete was one of the major foes of Byzantium.
and Research Collections in Hamilton, Ontario, Canada since the institution purchased the documents in 1971. It is considered one of the most influential dystopian books. Plot summary Part 1: Alex's world Alex is a 15-year-old gang leader living in a near-future dystopian city. His friends ("droogs" in the novel's Anglo-Russian slang, "Nadsat") and fellow gang members are Dim, a slow-witted bruiser, who is the gang's muscle; Georgie, an ambitious second-in-command; and Pete, who mostly plays along as the droogs indulge their taste for "ultra-violence" (random, violent mayhem). Characterised as a sociopath and hardened juvenile delinquent, Alex is also intelligent, quick-witted, and enjoys classical music; he is particularly fond of Beethoven, whom he calls "Lovely Ludwig Van". The story begins with the droogs sitting in their favourite hangout, the Korova Milk Bar, and drinking "milk-plus" – a beverage consisting of milk laced with the customer's drug of choice – to prepare for a night of ultra-violence. They assault a scholar walking home from the public library; rob a store, leaving the owner and his wife bloodied and unconscious; beat up a beggar; then scuffle with a rival gang. Joyriding through the countryside in a stolen car, they break into an isolated cottage and terrorise the young couple living there, beating the husband and gang-raping his wife. In a metafictional touch, the husband is a writer working on a manuscript called "A Clockwork Orange", and Alex contemptuously reads out a paragraph that states the novel's main theme before shredding the manuscript. Back at the Korova, Alex strikes Dim for his crude response to a woman's singing of an operatic passage, and strains within the gang become apparent. At home in his parents' flat, Alex plays classical music at top volume, which he describes as giving him orgasmic bliss before falling asleep. Alex feigns illness to his parents to stay out of school the next day. Following an unexpected visit from P.R. 
Deltoid, his "post-corrective adviser", Alex visits a record store, where he meets two pre-teen girls. He invites them back to the flat, where he drugs and rapes them. That night after a nap, Alex finds his droogs in a mutinous mood, waiting downstairs in the torn-up and graffitied lobby. Georgie challenges Alex for leadership of the gang, demanding that they focus on higher-value targets in their robberies. Alex quells the rebellion by slashing Dim's hand and fighting with Georgie, then pacifies the gang by agreeing to Georgie's plan to rob the home of a wealthy elderly woman. Alex breaks in and knocks the woman unconscious; but, when he hears sirens and opens the door to flee, Dim strikes him in payback for the earlier fight. The gang abandons Alex on the front step to be arrested by the police; while in custody, he learns that the woman has died from her injuries. Part 2: The Ludovico Technique Alex is convicted of murder and sentenced to 14 years in prison. His parents visit one day to inform him that Georgie has been killed in a botched robbery. Two years into his term, he has obtained a job in one of the prison chapels, playing music on the stereo to accompany the Sunday Christian services. The chaplain mistakes Alex's Bible studies for stirrings of faith; in reality, Alex is only reading Scripture for the violent or sexual passages. After his fellow cellmates blame him for beating a troublesome cellmate to death, he is chosen to undergo an experimental behaviour modification treatment called the Ludovico Technique in exchange for having the remainder of his sentence commuted. The technique is a form of aversion therapy in which Alex is injected with nausea-inducing drugs while watching graphically violent films, eventually conditioning him to become severely ill at the mere thought of violence. As an unintended consequence, the soundtrack to one of the films, Beethoven's Ninth Symphony, renders Alex unable to enjoy his beloved classical music as before. 
The effectiveness of the technique is demonstrated to a group of VIPs, who watch as Alex collapses before a bully and abases himself before a scantily clad young woman. Although the prison chaplain accuses the state of stripping Alex of free will, the government officials on the scene are pleased with the results, and Alex is released from prison. Part 3: After prison Alex returns to his parents' flat, only to find that they are letting his room to a lodger. Now homeless, he wanders the streets and enters a public library, hoping to learn of a painless method for committing suicide. The old scholar whom Alex had assaulted in Part 1 finds him and beats him, with the help of several friends. Two policemen come to Alex's rescue, but they turn out to be Dim and Billyboy, a former rival gang leader. They take Alex outside town, brutalise him, and abandon him there. Alex collapses at the door of an isolated cottage, realising too late that it is the one he and his droogs invaded in Part 1. The writer, F. Alexander, still lives there, but his wife has since died of what he believes to be injuries she sustained in the rape. He does not recognise Alex but gives him shelter and questions him about the conditioning he has undergone. Alexander and his colleagues, all highly critical of the government, plan to use Alex as a symbol of state brutality and thus prevent the incumbent government from being re-elected. Alex inadvertently reveals that he was the ringleader of the home invasion; he is removed from the cottage and locked in an upper-storey bedroom as a relentless barrage of classical music plays over speakers. He attempts suicide by leaping from the window. Alex wakes up in a hospital, where he is courted by government officials anxious to counter the bad publicity created by his suicide attempt. He is informed that Alexander has been "put away" for Alex's protection and his own.
Alex is offered a well-paying job if he agrees to side with the government once he is discharged. A round of tests reveals that his old violent impulses have returned, indicating that the hospital doctors have undone the effects of his conditioning. As photographers snap pictures, Alex daydreams of orgiastic violence and reflects, "I was cured all right." In the final chapter, Alex — now 18 years old and working for the nation's musical recording archives — finds himself halfheartedly preparing for yet another night of crime with a new gang (Len, Rick and Bully). After a chance encounter with Pete, who has reformed and married, Alex finds himself taking less and less pleasure in acts of senseless violence. He begins contemplating giving up crime himself to become a productive member of society and start a family of his own, while reflecting on the notion that his own children could possibly end up being just as destructive as he has been, if not more so. Omission of the final chapter The book has three parts, each with seven chapters. Burgess has stated that the total of 21 chapters was an intentional nod to the age of 21 being recognised as a milestone in human maturation. The 21st chapter was omitted from the editions published in the United States prior to 1986. In the introduction to the updated American text (these newer editions include the missing 21st chapter), Burgess explains that when he first brought the book to an American publisher, he was told that U.S. audiences would never go for the final chapter, in which Alex sees the error of his ways, decides he has simply gotten bored of violence and resolves to turn his life around. At the American publisher's insistence, Burgess allowed their editors to cut the redeeming final chapter from the U.S. version, so that the tale would end on a darker note, with Alex becoming his old, ultraviolent self again – an ending which the publisher insisted would be "more realistic" and appealing to a US audience. 
The film adaptation, directed by Stanley Kubrick, is based on the American edition of the book (which Burgess considered to be "badly flawed"). Kubrick called Chapter 21 "an extra chapter" and claimed that he had not read the original version until he had virtually finished the screenplay and that he had never given serious consideration to using it. In Kubrick's opinion – as in the opinion of other readers, including the original American editor – the final chapter was unconvincing and inconsistent with the book. Characters Alex: The novel's protagonist and leader among his droogs. He often refers to himself as "Your Humble Narrator". Having coaxed two ten-year-old girls into his bedroom, Alex refers to himself as "Alexander the Large" while raping them; this was later the basis for Alex's claimed surname DeLarge in the 1971 film. George, Georgie or Georgie Boy: Effectively Alex's greedy second-in-command. Georgie attempts to undermine Alex's status as leader of the gang and take it over himself. He is later killed during a botched robbery while Alex is in prison. Pete: The only one who does not take particular sides when the droogs fight among themselves. He later meets and marries a girl named Georgina, renouncing his violent ways and even losing his former (Nadsat) speech patterns. A chance encounter with Pete in the final chapter influences Alex to realise that he has grown bored with violence and recognise that human energy is better expended on creation than destruction. Dim: An idiotic and thoroughly gormless member of the gang, persistently condescended to by Alex, but respected to some extent by his droogs for his formidable fighting abilities, his weapon of choice being a length of bike chain. He later becomes a police officer, exacting his revenge on Alex for the abuse he once suffered under his command. P. R. Deltoid: A criminal rehabilitation social worker assigned the task of keeping Alex on the straight and narrow.
He seemingly has no clue about dealing with young people, and is devoid of empathy or understanding for his troublesome charge. Indeed, when Alex is arrested for murdering an old woman and then ferociously beaten by several police officers, Deltoid simply spits on him. Prison Chaplain: The character who first questions whether it is moral to turn a violent person into a behavioural automaton who can make no choice in such matters. He is the only character who is truly concerned about Alex's welfare, though Alex does not take him seriously. He is nicknamed by Alex "prison charlie" or "chaplin", a pun on Charlie Chaplin. Billyboy: A rival of Alex's. Early on in the story, Alex and his droogs battle Billyboy and his droogs, which ends abruptly when the police arrive. Later, after Alex is released from prison, Billyboy (along with Dim, who like Billyboy has become a police officer) rescues Alex from a mob, then subsequently beats him in a location out of town. Prison Governor: The man who decides to let Alex "choose" to be the first reformed by the Ludovico Technique. The Minister of the Interior: The high-ranking government official who decides that the Ludovico Technique will be used to cut recidivism. He is referred to as the Inferior by Alex. Dr Branom: A scientist, co-developer of the Ludovico Technique. He appears friendly and almost paternal towards Alex at first, before forcing him into the theatre and what Alex calls the "chair of torture". Dr Brodsky: Branom's colleague and co-developer of the Ludovico Technique. He seems much more passive than Branom and says considerably less. F. Alexander: An author who was in the process of typing his magnum opus A Clockwork Orange when Alex and his droogs broke into his house, beat him, tore up his work and then brutally gang-raped his wife, which caused her subsequent death.
He is left deeply scarred by these events, and when he encounters Alex two years later, he uses him as a guinea pig in a sadistic experiment intended to prove the Ludovico Technique unsound. The government imprisons him afterwards. He is given the name Frank Alexander in the film. Cat Woman: An indirectly named woman who thwarts Alex's gang's scheme to gain entry to her home, and threatens to shoot Alex and set her cats on him if he does not leave. After Alex breaks into her house, she fights with him, ordering her cats to join the melee, but reprimands Alex for fighting them off. She sustains a fatal blow to the head during the scuffle. She is given the name Miss Weathers in the film. Analysis Background A Clockwork Orange was written in Hove, then a senescent seaside town. Burgess had arrived back in Britain after his stint abroad to see that much had changed. A youth culture had developed, based around coffee bars, pop music and teenage gangs. England was gripped by fears over juvenile delinquency. Burgess stated that the novel's inspiration was his first wife Lynne's beating by a gang of drunk American servicemen stationed in England during World War II. She subsequently miscarried. In its investigation of free will, the book's target is ostensibly the concept of behaviourism, pioneered by such figures as B. F. Skinner. Burgess later stated that he wrote the book in three weeks. Title Burgess has offered several clarifications about the meaning and origin of its title: He had overheard the phrase "as queer as a clockwork orange" in a London pub in 1945 and assumed it was a Cockney expression. In Clockwork Marmalade, an essay published in the Listener in 1972, he said that he had heard the phrase several times since that occasion. He also explained the title in response to a question from William Everson on the television programme Camera Three in 1972, "Well, the title has a very different meaning but only to a particular generation of London Cockneys.
It's a phrase which I heard many years ago and so fell in love with, I wanted to use it, the title of the book. But the phrase itself I did not make up. The phrase "as queer as a clockwork orange" is good old East London slang and it didn't seem to me necessary to explain it. Now, obviously, I have to give it an extra meaning. I've implied an extra dimension. I've implied the junction of the organic, the lively, the sweet – in other words, life, the orange – and the mechanical, the cold, the disciplined. I've brought them together in this kind of oxymoron, this sour-sweet word." Nonetheless, no other record of the expression being used before 1962 has ever appeared. Kingsley Amis notes in his Memoirs (1991) that no trace of it appears in Eric Partridge's Dictionary of Historical Slang. The saying "as queer as ..." followed by an improbable object: "... a clockwork orange", or "... a four-speed walking stick" or "... a left-handed corkscrew" etc. predates Burgess' novel. An early example, "as queer as Dick's hatband", appeared in 1796, and was alluded to in 1757. His second explanation was that it was a pun on the Malay word orang, meaning "man". The novella contains no other Malay words or links. In a prefatory note to A Clockwork Orange: A Play with Music, he wrote that the title was a metaphor for "an organic entity, full of juice and sweetness and agreeable odour, being turned into a mechanism". In his essay Clockwork Oranges, Burgess asserts that "this title would be appropriate for a story about the application of Pavlovian or mechanical laws to an organism which, like a fruit, was capable of colour and sweetness". While addressing the reader in a letter before some editions of the book, the author says that when a man ceases to have free will, they are no longer a man. "Just a clockwork orange", a shiny, appealing object, but "just a toy to be wound-up by either God or the Devil, or (what is increasingly replacing both) the State. 
This title alludes to the protagonist's negative emotional responses to feelings of evil which prevent the exercise of his free will subsequent to the administration of the Ludovico Technique. To induce this conditioning, Alex is forced to watch scenes of violence on a screen that are systematically paired with negative physical stimulation. The negative physical stimulation takes the form of nausea and "feelings of terror", which are caused by an emetic medicine administered just before the presentation of the films. Use of slang The book, narrated by Alex, contains many words in a slang argot which Burgess invented for the book, called Nadsat. It is a mix of modified Slavic words, rhyming slang and derived Russian (like baboochka). For instance, these terms have the following meanings in Nadsat: droog (друг) = friend; moloko (молоко) = milk; gulliver (голова) = head; malchick (мальчик) or malchickiwick = boy; soomka (сумка) = sack or bag; Bog = God; horrorshow (хорошо) = good; prestoopnick (преступник) = criminal; rooker (рука) = hand; cal (кал) = crap; veck ("человек") = man or guy; litso (лицо) = face; malenky (маленький) = little; and so on. Some words Burgess invented himself or just adapted from pre-existing languages. Compare Polari. One of Alex's doctors explains the language to a colleague as "odd bits of old rhyming slang; a bit of gypsy talk, too. But most of the roots are Slav propaganda. Subliminal penetration." Some words are not derived from anything, but merely easy to guess, e.g. "in-out, in-out" or "the old in-out" means sexual intercourse. Cutter, however, means "money", because "cutter" rhymes with "bread-and-butter"; this is rhyming slang, which is intended to be impenetrable to outsiders (especially eavesdropping policemen). Additionally, slang like appypolly loggy ("apology") seems to derive from school boy slang. This reflects Alex's age of 15. 
In the first edition of the book, no key was provided, and the reader was left to interpret the meaning from the context. In his appendix to the restored edition, Burgess explained that the slang would keep the book from seeming dated, and served to muffle "the raw response of pornography" from the acts of violence. The term "ultraviolence", referring to excessive or unjustified violence, was coined by Burgess in the book, which includes the phrase "do the ultra-violent". The term's association with aesthetic violence has led to its use in the media. Banning and censorship history in the US In 1976, A Clockwork Orange was removed from an Aurora, Colorado high school because of "objectionable language". A year later, in 1977, it was removed from high school classrooms in Westport, Massachusetts over similar concerns with "objectionable" language. In 1982, it was removed from two Anniston, Alabama libraries, later to be reinstated on a restricted basis. In 1973, a bookseller was arrested for selling the novel; the charges were later dropped. However, each of these instances came after the release of Stanley Kubrick's popular 1971 film adaptation of A Clockwork Orange, itself the subject of much controversy. Reception Initial response The Sunday Telegraph review was positive, and described the book as "entertaining ... even profound". Kingsley Amis in The Observer acclaimed the novel as "cheerful horror", writing "Mr Burgess has written a fine farrago of outrageousness, one which incidentally suggests a view of juvenile violence I can’t remember having met before". Malcolm Bradbury wrote "All of Mr Burgess’s powers as a comic writer, which are considerable, have gone into the rich language of his inverted Utopia. If you can stomach the horrors, you’ll enjoy the manner". Roald Dahl called it "a terrifying and marvellous book". Many reviewers praised the inventiveness of the language, but expressed unease at the violent subject matter.
The Spectator praised Burgess's "extraordinary technical feat" but was uncomfortable with "a certain arbitrariness about the plot which is slightly irritating". New Statesman acclaimed Burgess for addressing "acutely and savagely the tendencies of our time" but called the book "a great strain to read". The Sunday Times review was negative, and described the book as "a very ordinary, brutal and psychologically shallow story". The Times also reviewed the book negatively, describing it as "a somewhat clumsy experiment with science fiction [with] clumsy cliches about juvenile delinquency". The violence was criticised as "unconvincing in detail". Writer's appraisal Burgess dismissed A Clockwork Orange as "too didactic to be artistic". He claimed that the violent content of the novel "nauseated" him. In 1985, Burgess published Flame into Being: The Life and Work of D. H. Lawrence and while discussing Lady Chatterley's Lover in his biography, Burgess compared that novel's notoriety with A Clockwork Orange: "We all suffer from the popular desire to make the known notorious. The book I am best known for, or only known for, is a novel I am prepared to repudiate: written a quarter of a century ago, a jeu d'esprit knocked off for money in three weeks, it became known as the raw material for a film which seemed to glorify sex and violence. The film made it easy for readers of the book to misunderstand what it was about, and the misunderstanding will pursue me until I die. I should not have written the book because of this danger of misinterpretation, and the same may be said of Lawrence and Lady Chatterley's Lover." 
Awards, nominations and rankings
1983 – Prometheus Award (Preliminary Nominee)
1999 – Prometheus Award (Nomination)
2002 – Prometheus Award (Nomination)
2003 – Prometheus Award (Nomination)
2006 – Prometheus Award (Nomination)
2008 – Prometheus Award (Hall of Fame Award)
A Clockwork Orange was chosen by Time magazine as one of the 100 best English-language books from 1923 to 2005. Adaptations A 1965 film by Andy Warhol entitled Vinyl was an adaptation of Burgess's novel. The best-known adaptation of the novella to other forms is the 1971 film A Clockwork Orange by Stanley Kubrick, featuring Malcolm McDowell as Alex. In 1987, Burgess published a stage play titled A Clockwork Orange: A Play with Music. The play includes songs, written by Burgess, which are inspired by Beethoven and Nadsat slang. A manga anthology by Osamu Tezuka entitled Tokeijikake no Ringo (Clockwork Apple) was released in 1983. In 1988, a German adaptation of A Clockwork Orange at the intimate theatre of Bad Godesberg featured a musical score by the German punk rock band Die Toten Hosen which, combined with orchestral clips of Beethoven's Ninth
have increasingly relocated outside Amsterdam's city centre. Consequently, the Zuidas (English: South Axis) has become the new financial and legal hub of Amsterdam, with the country's five largest law firms and several subsidiaries of large consulting firms, such as Boston Consulting Group and Accenture, as well as the World Trade Centre (Amsterdam) located in the Zuidas district. In addition to the Zuidas, there are three smaller financial districts in Amsterdam: around Amsterdam Sloterdijk railway station, where one can find the offices of several newspapers, such as De Telegraaf, as well as those of Deloitte, the Gemeentelijk Vervoerbedrijf (municipal public transport company), and the Dutch tax offices (Belastingdienst); around the Johan Cruyff Arena in Amsterdam Zuidoost, with the headquarters of ING Group; and around the Amstel railway station in the Amsterdam-Oost district to the east of the historical city. Amsterdam's tallest building, the Rembrandt Tower, is located here, as are the headquarters of Philips, the Dutch multinational conglomerate. Amsterdam has been a leading city in reducing the use of raw materials and has created a plan to become a circular city by 2050. The adjoining municipality of Amstelveen is the location of KPMG International's global headquarters. Other non-Dutch companies have chosen to settle in communities surrounding Amsterdam since they allow freehold property ownership, whereas Amsterdam retains ground rent. The Amsterdam Stock Exchange (AEX), now part of Euronext, is the world's oldest stock exchange and, due to Brexit, has overtaken the LSE as the largest bourse in Europe. It is near Dam Square in the city centre. Port of Amsterdam The Port of Amsterdam is the fourth-largest port in Europe, the 38th-largest port in the world and the second-largest port in the Netherlands by metric tons of cargo. In 2014, the Port of Amsterdam had a cargo throughput of 97.4 million tons, mostly bulk cargo.
Amsterdam has the biggest cruise port in the Netherlands with more than 150 cruise ships every year. In 2019, the new lock in IJmuiden opened; since then, the port has been able to grow to 125 million tonnes in capacity. Tourism Amsterdam is one of the most popular tourist destinations in Europe, receiving more than 5.34 million international visitors annually, excluding the 16 million day-trippers who visit the city every year. The number of visitors has been growing steadily over the past decade, which can be attributed to an increasing number of European visitors. Two-thirds of the hotels are located in the city's centre. Hotels with 4 or 5 stars contribute 42% of the total beds available and 41% of the overnight stays in Amsterdam. The room occupation rate was 85% in 2017, up from 78% in 2006. The majority of tourists (74%) originate from Europe. The largest group of non-European visitors come from the United States, accounting for 14% of the total. In certain years, Amsterdam adopts a theme to attract extra tourists. For example, the year 2006 was designated "Rembrandt 400", to celebrate the 400th birthday of Rembrandt van Rijn. Some hotels offer special arrangements or activities during these years. The average number of guests per year staying at the four campsites around the city ranges from 12,000 to 65,000. De Wallen (red-light district) De Wallen, also known as Walletjes or Rosse Buurt, is a designated area for legalised prostitution and is Amsterdam's largest and best-known red-light district. This neighbourhood has become a famous attraction for tourists. It consists of a network of canals, streets, and alleys containing several hundred small, one-room apartments rented by sex workers who offer their services from behind a window or glass door, typically illuminated with red lights.
In recent years, the city government has been closing and repurposing the famous red-light district windows in an effort to clean up the area and reduce the amount of party and sex tourism. Retail Shops in Amsterdam range from large high-end department stores, such as De Bijenkorf, founded in 1870, to small speciality shops. Amsterdam's high-end shops are found in the streets P.C. Hooftstraat and Cornelis Schuytstraat, which are located in the vicinity of the Vondelpark. One of Amsterdam's busiest high streets is the narrow, medieval Kalverstraat in the heart of the city. Other shopping areas include the Negen Straatjes and the Haarlemmerdijk and Haarlemmerstraat. The Negen Straatjes are nine narrow streets within the Grachtengordel, the concentric canal system of Amsterdam, and differ from other shopping districts in their large diversity of privately owned shops. The Haarlemmerstraat and Haarlemmerdijk were voted best shopping street in the Netherlands in 2011. Like the Negen Straatjes, these streets feature a large diversity of privately owned shops; however, whereas the Negen Straatjes are dominated by fashion stores, the Haarlemmerstraat and Haarlemmerdijk offer a wider variety, including specialities such as candy and other food-related stores, lingerie, sneakers, wedding clothing, interior shops, books, Italian delis, racing and mountain bikes, and skatewear. The city also features a large number of open-air markets such as the Albert Cuyp Market, Westerstraat-markt, Ten Katemarkt, and Dappermarkt. Some of these markets are held daily, like the Albert Cuypmarkt and the Dappermarkt; others, like the Westerstraatmarkt, are held every week. Fashion Several fashion brands and designers are based in Amsterdam. Fashion designers include Iris van Herpen, Mart Visser, Viktor & Rolf, Marlies Dekkers and Frans Molenaar. Fashion models like Yfke Sturm, Doutzen Kroes and Kim Noorda started their careers in Amsterdam.
Amsterdam has its garment centre in the World Fashion Center. Fashion photographers Inez van Lamsweerde and Vinoodh Matadin were born in Amsterdam. Culture During the later part of the 16th century, Amsterdam's Rederijkerskamer (Chamber of rhetoric) organised contests between different Chambers in the reading of poetry and drama. In 1637, the Schouwburg, the first theatre in Amsterdam, was built, opening on 3 January 1638. The first ballet performances in the Netherlands were given in the Schouwburg in 1642 with the Ballet of the Five Senses. In the 18th century, French theatre became popular. While Amsterdam was under the influence of German music in the 19th century, there were few national opera productions; the Hollandse Opera of Amsterdam was built in 1888 for the specific purpose of promoting Dutch opera. In the 19th century, popular culture was centred on the Nes area in Amsterdam (mainly vaudeville and music-hall). An improved metronome was invented in 1812 by Dietrich Nikolaus Winkel. The Rijksmuseum (1885) and Stedelijk Museum (1895) were built and opened. In 1888, the Concertgebouworkest orchestra was established. With the 20th century came cinema, radio and television. Though most studios are located in Hilversum and Aalsmeer, Amsterdam's influence on programming is very strong. Many people who work in the television industry live in Amsterdam. Also, the headquarters of the Dutch SBS Broadcasting Group is located in Amsterdam. Museums The most important museums of Amsterdam are located on the Museumplein (Museum Square), located at the southwestern side of the Rijksmuseum. It was created in the last quarter of the 19th century on the grounds of the former World's fair. The northeastern part of the square is bordered by the large Rijksmuseum. In front of the Rijksmuseum on the square itself is a long, rectangular pond, which is transformed into an ice rink in winter.
The northwestern part of the square is bordered by the Van Gogh Museum, House of Bols Cocktail & Genever Experience and Coster Diamonds. The southwestern border of the Museum Square is the Van Baerlestraat, which is a major thoroughfare in this part of Amsterdam. The Concertgebouw is located across this street from the square. To the southeast of the square are several large houses, one of which contains the American consulate. A parking garage can be found underneath the square, as well as a supermarket. The Museumplein is covered almost entirely with a lawn, except for the northeastern part of the square which is covered with gravel. The current appearance of the square was realised in 1999, when the square was remodelled. The square itself is the most prominent site in Amsterdam for festivals and outdoor concerts, especially in the summer. Plans were made in 2008 to remodel the square again because many inhabitants of Amsterdam are not happy with its current appearance. The Rijksmuseum possesses the largest and most important collection of classical Dutch art. It opened in 1885. Its collection consists of nearly one million objects. The artist most associated with Amsterdam is Rembrandt, whose work, and the work of his pupils, is displayed in the Rijksmuseum. Rembrandt's masterpiece The Night Watch is one of the top pieces of art of the museum. It also houses paintings from artists like Bartholomeus van der Helst, Johannes Vermeer, Frans Hals, Ferdinand Bol, Albert Cuyp, Jacob van Ruisdael and Paulus Potter. Aside from paintings, the collection consists of a large variety of decorative art. This ranges from Delftware to giant doll-houses from the 17th century. The architect of the gothic revival building was P.J.H. Cuypers. The museum underwent a 10-year, 375 million euro renovation starting in 2003. 
The full collection was reopened to the public on 13 April 2013 and the Rijksmuseum has remained the most visited museum in Amsterdam, with 2.2 million visitors in 2016 and 2.16 million in 2017. Van Gogh lived in Amsterdam for a short while and there is a museum dedicated to his work. The museum is housed in one of the few modern buildings in this area of Amsterdam. The building, designed by Gerrit Rietveld, is where the permanent collection is displayed. A new building was added to the museum in 1999. This building, known as the performance wing, was designed by Japanese architect Kisho Kurokawa, and houses the museum's temporary exhibitions. Some of Van Gogh's most famous paintings, like The Potato Eaters and Sunflowers, are in the collection. The Van Gogh museum is the second most visited museum in Amsterdam, not far behind the Rijksmuseum, with approximately 2.1 million visits in 2016. Next to the Van Gogh museum stands the Stedelijk Museum, Amsterdam's most important museum of modern art. The museum is as old as the square it borders and was opened in 1895. The permanent collection consists of works of art from artists like Piet Mondrian, Karel Appel, and Kazimir Malevich. After renovations lasting several years, the museum opened in September 2012 with a new composite extension that has been called 'The Bathtub' due to its resemblance to one. Amsterdam contains many other museums throughout the city. They range from small museums such as the Verzetsmuseum (Resistance Museum), the Anne Frank House, and the Rembrandt House Museum, to the very large, like the Tropenmuseum (Museum of the Tropics), Amsterdam Museum (formerly known as Amsterdam Historical Museum), Hermitage Amsterdam (a dependency of the Hermitage Museum in Saint Petersburg) and the Joods Historisch Museum (Jewish Historical Museum). The modern-styled Nemo is dedicated to child-friendly science exhibitions.
Music Amsterdam's musical culture includes a large collection of songs that treat the city nostalgically and lovingly. The 1949 song "Aan de Amsterdamse grachten" ("On the canals of Amsterdam") was performed and recorded by many artists, including John Kraaijkamp Sr.; the best-known version is probably that by Wim Sonneveld (1962). In the 1950s Johnny Jordaan rose to fame with "Geef mij maar Amsterdam" ("I prefer Amsterdam"), which praises the city above all others (explicitly Paris); Jordaan sang especially about his own neighbourhood, the Jordaan ("Bij ons in de Jordaan"). Colleagues and contemporaries of Johnny include Tante Leen and Manke Nelis. Another notable Amsterdam song is "Amsterdam" by Jacques Brel (1964). In a 2011 poll by the Amsterdam newspaper Het Parool, Trio Bier's "Oude Wolf" was voted "Amsterdams lijflied". Notable Amsterdam bands from the modern era include the Osdorp Posse and The Ex. AFAS Live (formerly known as the Heineken Music Hall) is a concert hall located near the Johan Cruyff Arena (known as the Amsterdam Arena until 2018). Its main purpose is to serve as a podium for pop concerts for big audiences. Many famous international artists have performed there. Two other notable venues, Paradiso and the Melkweg, are located near the Leidseplein. Both focus on broad programming, ranging from indie rock to hip hop, R&B, and other popular genres. Other more subcultural music venues are OCCII, OT301, De Nieuwe Anita, Winston Kingdom, and Zaal 100. Jazz has a strong following in Amsterdam, with the Bimhuis being the premier venue. In 2012, the Ziggo Dome, a state-of-the-art indoor music arena, opened near the Amsterdam Arena. AFAS Live is also host to many electronic dance music festivals, alongside many other venues. Armin van Buuren and Tiësto, two of the world's leading trance DJs, hail from the Netherlands and frequently perform in Amsterdam.
Each year in October, the city hosts the Amsterdam Dance Event (ADE), one of the leading electronic music conferences and one of the biggest club festivals for electronic music in the world, attracting over 350,000 visitors each year. Another popular dance festival is 5daysoff, which takes place in the venues Paradiso and Melkweg. In the summertime, there are several big outdoor dance parties in or near Amsterdam, such as Awakenings, Dance Valley, Mystery Land, Loveland, A Day at the Park, Welcome to the Future, and Valtifest. Amsterdam has a world-class symphony orchestra, the Royal Concertgebouw Orchestra. Its home is the Concertgebouw, across the Van Baerlestraat from the Museum Square, which is considered by critics to be a concert hall with some of the best acoustics in the world. The building contains three halls: the Grote Zaal, Kleine Zaal, and Spiegelzaal. Some nine hundred concerts and other events per year take place in the Concertgebouw, for a public of over 700,000, making it one of the most-visited concert halls in the world. The opera house of Amsterdam is located adjacent to the city hall; the two buildings combined are therefore often called the Stopera (a word originally coined by protesters against its very construction: Stop the Opera[-house]). This huge modern complex, opened in 1986, lies in the former Jewish neighbourhood at Waterlooplein next to the river Amstel. The Stopera is the home base of Dutch National Opera, Dutch National Ballet and the Holland Symfonia. The Muziekgebouw aan 't IJ is a concert hall located in the IJ near the central station; its programming consists mostly of modern classical music. Located adjacent to it is the Bimhuis, a concert hall for improvised and jazz music. Performing arts Amsterdam has three main theatre buildings. The Stadsschouwburg at the Leidseplein is the home base of Toneelgroep Amsterdam. The current building dates from 1894. Most plays are performed in the Grote Zaal (Great Hall).
The normal program of events encompasses all sorts of theatrical forms. The Stadsschouwburg is currently being renovated and expanded. The third theatre space, to be operated jointly with the next-door Melkweg, will open in late 2009 or early 2010. The Dutch National Opera and Ballet (formerly known as Het Muziektheater), dating from 1986, is the principal opera house and home to Dutch National Opera and Dutch National Ballet. Royal Theatre Carré was built as a permanent circus theatre in 1887 and is currently mainly used for musicals, cabaret performances, and pop concerts. The recently re-opened DeLaMar Theater houses more commercial plays and musicals. In 2014, a new theatre joined the established venues of the Amsterdam scene: Theater Amsterdam, located in the west part of Amsterdam, on the Danzigerkade. It is housed in a modern building with a panoramic view over the harbour, and is the first-ever purpose-built venue to showcase a single play, entitled ANNE, based on Anne Frank's life. On the east side of town, there is a small theatre in a converted bathhouse, the Badhuistheater, which often has English programming. The Netherlands has a tradition of cabaret or kleinkunst, which combines music, storytelling, commentary, theatre and comedy. Cabaret dates back to the 1930s, and artists like Wim Kan, Wim Sonneveld and Toon Hermans were pioneers of this form of art in the Netherlands. Amsterdam is home to the Kleinkunstacademie (English: Cabaret Academy) and the Nederlied Kleinkunstkoor (English: Cabaret Choir). Contemporary popular artists are Youp van 't Hek, Freek de Jonge, Herman Finkers, Hans Teeuwen, Theo Maassen, Herman van Veen, Najib Amhali, Raoul Heertje, Jörgen Raymann, Brigitte Kaandorp and Comedytrain. The English-language comedy scene was established with the founding of Boom Chicago in 1993, which has its own theatre at Leidseplein. Nightlife Amsterdam is famous for its vibrant and diverse nightlife.
Amsterdam has many cafés (bars), ranging from large and modern to small and cosy. The typical bruine kroeg (brown café) breathes a more old-fashioned atmosphere, with dimmed lights, candles, and a somewhat older clientele. These brown cafés mostly offer a wide range of local and international artisanal beers. Most cafés have terraces in summertime. A common sight on the Leidseplein during summer is a square full of terraces packed with people drinking beer or wine. Many restaurants can be found in Amsterdam as well. Since Amsterdam is a multicultural city, a lot of different ethnic restaurants can be found, ranging from rather luxurious and expensive to ordinary and affordable. Amsterdam also possesses many discothèques. The two main nightlife areas for tourists are the Leidseplein and the Rembrandtplein. The Paradiso, Melkweg and Sugar Factory are cultural centres, which turn into discothèques on some nights. Examples of discothèques near the Rembrandtplein are the Escape, Air, John Doe and Club Abe. Also noteworthy are Panama, Hotel Arena (East), TrouwAmsterdam and Studio 80. In recent years, '24-hour' clubs have opened their doors, most notably Radion, De School, Shelter and Marktkantine. The Bimhuis, located near the Central Station, is considered one of the best jazz clubs in the world thanks to its rich programming, which hosts the best in the field. The Reguliersdwarsstraat is the main street for the LGBT community and nightlife. Festivals In 2008, there were 140 festivals and events in Amsterdam. Famous festivals and events in Amsterdam include: Koningsdag (which was named Koninginnedag until the crowning of King Willem-Alexander in 2013) (King's Day – Queen's Day); the Holland Festival for the performing arts; the yearly Prinsengrachtconcert (a classical concert on the Prinsengracht) in August; the 'Stille Omgang' (a silent Roman Catholic evening procession held every March); Amsterdam Gay Pride; The Cannabis Cup; and the Uitmarkt.
On Koningsdag, which is held each year on 27 April, hundreds of thousands of people travel to Amsterdam to celebrate with the city's residents. The entire city becomes overcrowded with people buying products from the free market or visiting one of the many music concerts. The yearly Holland Festival attracts international artists and visitors from all over Europe. Amsterdam Gay Pride is a yearly local LGBT parade of boats in Amsterdam's canals, held on the first Saturday in August. The annual Uitmarkt is a three-day cultural event at the start of the cultural season in late August. It offers previews of many different artists, such as musicians and poets, who perform on podia. Sports Amsterdam is home of the Eredivisie football club AFC Ajax. The stadium Johan Cruyff Arena is the home of Ajax. It is located in the south-east of the city next to the new Amsterdam Bijlmer ArenA railway station. Before moving to their current location in 1996, Ajax played their regular matches in the now demolished De Meer Stadion in the eastern part of the city or in the Olympic Stadium. In 1928, Amsterdam hosted the Summer Olympics. The Olympic Stadium built for the occasion has been completely restored and is now used for cultural and sporting events, such as the Amsterdam Marathon. In 1920, Amsterdam assisted in hosting the Summer Olympics held in neighbouring Antwerp, Belgium, by staging some of the sailing events at the Buiten IJ. The city holds the Dam to Dam Run, a race from Amsterdam to Zaandam, as well as the Amsterdam Marathon. The ice hockey team Amstel Tijgers play in the Jaap Eden ice rink and compete in the Dutch ice hockey premier league. Speed skating championships have been held on the 400-metre lane of this ice rink. Amsterdam holds two American football franchises: the Amsterdam Crusaders and the Amsterdam Panthers. The Amsterdam Pirates baseball team competes in the Dutch Major League.
There are three field hockey teams: Amsterdam, Pinoké and Hurley, which play their matches around the Wagener Stadium in the nearby city of Amstelveen. The basketball team MyGuide Amsterdam competes in the Dutch premier division and plays its games in the Sporthallen Zuid. There is one rugby club in Amsterdam, which also hosts sports training classes such as RTC (Rugby Talenten Centrum or Rugby Talent Centre), and the National Rugby stadium.
shortage, and heating fuel became scarce. The shortages sparked riots in which several people were killed. These riots are known as the Aardappeloproer (Potato rebellion). People started looting stores and warehouses in order to get supplies, mainly food. On 1 January 1921, after a flood in 1916, the depleted municipalities of Durgerdam, Holysloot, Zunderdorp and Schellingwoude, all lying north of Amsterdam, were, at their own request, annexed to the city. Between the wars, the city continued to expand, most notably to the west of the Jordaan district in the Frederik Hendrikbuurt and surrounding neighbourhoods. Nazi Germany invaded the Netherlands on 10 May 1940 and took control of the country. Some Amsterdam citizens sheltered Jews, thereby exposing themselves and their families to a high risk of being imprisoned or sent to concentration camps. More than 100,000 Dutch Jews were deported to Nazi concentration camps, of whom some 60,000 lived in Amsterdam. In response to the raids, the Dutch Communist Party organized the February strike, attended by 300,000 people. Perhaps the most famous deportee was the young Jewish girl Anne Frank, who died in the Bergen-Belsen concentration camp. At the end of the Second World War, communication with the rest of the country broke down, and food and fuel became scarce. Many citizens traveled to the countryside to forage. Dogs, cats, raw sugar beets, and tulip bulbs, cooked to a pulp, were consumed to stay alive. Many trees in Amsterdam were cut down for fuel, and wood was taken from the houses, apartments and other buildings of deported Jews. Many new suburbs, such as Osdorp, Slotervaart, Slotermeer and Geuzenveld, were built in the years after the Second World War. These suburbs contained many public parks and wide-open spaces, and the new buildings provided improved housing conditions with larger and brighter rooms, gardens, and balconies.
Because of the war and other events of the 20th century, almost the entire city centre had fallen into disrepair. As society was changing, politicians and other influential figures made plans to redesign large parts of it. There was an increasing demand for office buildings, and also for new roads, as the automobile became available to most people. A metro started operating in 1977 between the new suburb of Bijlmermeer in the city's Zuidoost (southeast) exclave and the centre of Amsterdam. Further plans were to build a new highway above the metro to connect Amsterdam Centraal and the city centre with other parts of the city. The required large-scale demolitions began in Amsterdam's former Jewish neighborhood. Smaller streets, such as the Jodenbreestraat and Weesperstraat, were widened and almost all houses and buildings were demolished. At the peak of the demolition, the Nieuwmarktrellen (Nieuwmarkt Riots) broke out; the rioters expressed their fury about the demolition caused by the restructuring of the city. As a result, the demolition was stopped and the highway into the city's centre was never fully built; only the metro was completed. Only a few streets remained widened. The new city hall was built on the almost completely demolished Waterlooplein. Meanwhile, large private organizations, such as Stadsherstel Amsterdam, were founded to restore the entire city centre. Although the success of this struggle is visible today, efforts for further restoration are still ongoing. The entire city centre has reattained its former splendour and, as a whole, is now a protected area. Many of its buildings have become monuments, and in July 2010 the Grachtengordel (the three concentric canals: Herengracht, Keizersgracht, and Prinsengracht) was added to the UNESCO World Heritage List. In the 21st century, the Amsterdam city centre has attracted large numbers of tourists: between 2012 and 2015, the annual number of visitors rose from 10 to 17 million. 
Real estate prices have surged, and local shops are making way for tourist-oriented ones, making the centre unaffordable for the city's inhabitants. These developments have evoked comparisons with Venice, a city thought to be overwhelmed by the tourist influx. Construction of a new metro line connecting the part of the city north of the IJ to its southern part was started in 2003. The project was controversial because its cost had exceeded its budget by a factor of three by 2008, because of fears of damage to buildings in the centre, and because construction had to be halted and restarted multiple times. The new metro line was completed in 2018. Since 2014, renewed focus has been given to urban regeneration and renewal, especially in areas directly bordering the city centre, such as Frederik Hendrikbuurt. This urban renewal and expansion of the traditional centre of the city, with the construction on artificial islands of the new eastern IJburg neighbourhood, is part of the Structural Vision Amsterdam 2040 initiative.

Geography

Amsterdam is located in the Western Netherlands, in the province of North Holland, the capital of which is not Amsterdam, but rather Haarlem. The river Amstel ends in the city centre and connects to a large number of canals that eventually terminate in the IJ. Amsterdam is about below sea level. The surrounding land is flat, as it is formed of large polders. A man-made forest, the Amsterdamse Bos, is in the southwest. Amsterdam is connected to the North Sea through the long North Sea Canal. Amsterdam is intensely urbanised, as is the Amsterdam metropolitan area surrounding the city. Comprising of land, the city proper has 4,457 inhabitants per km2 and 2,275 houses per km2. Parks and nature reserves make up 12% of Amsterdam's land area.

Water

Amsterdam has more than of canals, most of which are navigable by boat. The city's three main canals are the Prinsengracht, Herengracht and Keizersgracht.
In the Middle Ages, Amsterdam was surrounded by a moat, called the Singel, which now forms the innermost ring in the city and gives the city centre a horseshoe shape. The city is also served by a seaport. It has been compared with Venice, due to its division into about 90 islands, which are linked by more than 1,200 bridges.

Climate

Amsterdam has an oceanic climate (Köppen Cfb) strongly influenced by its proximity to the North Sea to the west, with prevailing westerly winds. Amsterdam, as well as most of the North Holland province, lies in USDA Hardiness zone 8b. Frosts mainly occur during spells of easterly or northeasterly winds from the inner European continent. Even then, because Amsterdam is surrounded on three sides by large bodies of water, as well as having a significant heat-island effect, nights rarely fall below , while it could easily be in Hilversum, southeast. Summers are moderately warm with a number of hot and humid days every month. The average daily high in August is , and or higher is only measured on average on 2.5 days, placing Amsterdam in AHS Heat Zone 2. The record extremes range from to . Days with more than of precipitation are common, on average 133 days per year. Amsterdam's average annual precipitation is . A large part of this precipitation falls as light rain or brief showers. Cloudy and damp days are common during the cooler months of October through March.

Demographics

Historical population

In 1300, Amsterdam's population was around 1,000 people. While many towns in Holland experienced population decline during the 15th and 16th centuries, Amsterdam's population grew, mainly due to the rise of the profitable Baltic maritime trade after the Burgundian victory in the Dutch–Hanseatic War. Still, the population of Amsterdam was only modest compared to the towns and cities of Flanders and Brabant, which comprised the most urbanised area of the Low Countries.
This changed when, during the Dutch Revolt, many people from the Southern Netherlands fled to the North, especially after Antwerp fell to Spanish forces in 1585. Jewish people from Spain, Portugal and Eastern Europe similarly settled in Amsterdam, as did Germans and Scandinavians. Amsterdam's population more than doubled between 1585 and 1610. By 1600, its population was around 50,000. During the 1660s, Amsterdam's population reached 200,000. The city's growth levelled off and the population stabilised around 240,000 for most of the 18th century. In 1750, Amsterdam was the fourth-largest city in Western Europe, behind London (676,000), Paris (560,000) and Naples (324,000). This was all the more remarkable as Amsterdam was neither the capital city nor the seat of government of the Dutch Republic, which itself was a much smaller state than England, France or the Ottoman Empire. In contrast to those other metropolises, Amsterdam was also surrounded by large towns such as Leiden (about 67,000), Rotterdam (45,000), Haarlem (38,000) and Utrecht (30,000). The city's population declined in the early 19th century, dipping under 200,000 in 1820. By the second half of the 19th century, industrialisation spurred renewed growth. Amsterdam's population hit an all-time high of 872,000 in 1959, before declining in the following decades due to government-sponsored suburbanisation to so-called groeikernen (growth centres) such as Purmerend and Almere. Between 1970 and 1980, Amsterdam experienced its sharpest population decline, peaking at a net loss of 25,000 people in 1973. By 1985 the city had only 675,570 residents. This was soon followed by reurbanisation and gentrification, leading to renewed population growth in the 2010s. In the 2010s, much of Amsterdam's population growth was due to immigration to the city. Amsterdam's population nonetheless fell short of the expected 873,000 in 2019.
Immigration

In the 16th and 17th centuries, non-Dutch immigrants to Amsterdam were mostly Huguenots, Flemings, Sephardi Jews and Westphalians. Huguenots came after the Edict of Fontainebleau in 1685, while the Flemish Protestants came during the Eighty Years' War. The Westphalians came to Amsterdam mostly for economic reasons; their influx continued through the 18th and 19th centuries. Before the Second World War, 10% of the city population was Jewish. Just twenty percent of them survived the Shoah. The first mass immigration in the 20th century was by people from Indonesia, who came to Amsterdam after the independence of the Dutch East Indies in the 1940s and 1950s. In the 1960s, guest workers from Turkey, Morocco, Italy and Spain migrated to Amsterdam. After the independence of Suriname in 1975, a large wave of Surinamese settled in Amsterdam, mostly in the Bijlmer area. Other immigrants, including refugees, asylum seekers and illegal immigrants, came from Europe, America, Asia and Africa. In the 1970s and 1980s, many 'old' Amsterdammers moved to 'new' cities like Almere and Purmerend, prompted by the third planological bill of the Dutch Government. This bill promoted suburbanisation and arranged for new developments in so-called "groeikernen", literally cores of growth. Young professionals and artists moved into the De Pijp and Jordaan neighbourhoods abandoned by these Amsterdammers. The non-Western immigrants settled mostly in the social housing projects in Amsterdam-West and the Bijlmer. Today, people of non-Western origin make up approximately one-fifth of the population of Amsterdam, and more than 30% of the city's children. Ethnic Dutch (as defined by the Dutch census) now make up a minority of the total population, although by far the largest one. Only one in three inhabitants under 15 is an autochthon, or a person who has two parents of Dutch origin.
Segregation along ethnic lines is clearly visible, with people of non-Western origin, considered a separate group by Statistics Netherlands, concentrating in specific neighbourhoods, especially in Nieuw-West, Zeeburg, Bijlmer and in certain areas of Amsterdam-Noord. In 2000, Christians formed the largest religious group in the city (28% of the population), followed by Islam (8%), most of whose followers were Sunni; in 2015, the shares were 28% and 7.1% respectively.

Religion

In 1578, the largely Catholic city of Amsterdam joined the revolt against Spanish rule, late in comparison to other major northern Dutch cities. Catholic priests were driven out of the city. Following the Dutch takeover, all churches were converted to Protestant worship. Calvinism was declared the main religion; although Catholicism was not forbidden and priests were allowed to serve, the Catholic hierarchy was prohibited. This led to the establishment of schuilkerken, covert religious buildings hidden in pre-existing buildings. Catholics, some Jews and dissenting Protestants worshipped in such buildings. A large influx of foreigners of many religions came to 17th-century Amsterdam, in particular Sephardic Jews from Spain and Portugal, Huguenots from France, Lutherans, Mennonites, as well as Protestants from across the Netherlands. This led to the establishment of many non-Dutch-speaking churches. In 1603, Jews received permission to practice their religion in the city. In 1639, the first synagogue was consecrated. The Jews came to call the town 'Jerusalem of the West'. As they became established in the city, other Christian denominations used converted Catholic chapels to conduct their own services. The oldest English-language church congregation in the world outside the United Kingdom is found at the Begijnhof.
Regular services there are still offered in English under the auspices of the Church of Scotland. Being Calvinists, the Huguenots soon integrated into the Dutch Reformed Church, though often retaining their own congregations. Some, commonly referred to by the moniker 'Walloon', are recognizable today as they offer occasional services in French. In the second half of the 17th century, Amsterdam experienced an influx of Ashkenazim, Jews from Central and Eastern Europe, often fleeing the pogroms in those areas. The first Ashkenazim who arrived in Amsterdam were refugees from the Khmelnytsky Uprising in Ukraine and the Thirty Years' War, which devastated much of Central Europe. They not only founded their own synagogues, but had a strong influence on the 'Amsterdam dialect', adding a large Yiddish vocabulary to it. Despite the absence of an official Jewish ghetto, most Jews preferred to live in the eastern part of the city, which used to be the centre of medieval Amsterdam. The main street of this Jewish neighbourhood was Jodenbreestraat. The neighbourhood comprised the Waterlooplein and the Nieuwmarkt. Buildings in this neighbourhood fell into disrepair after the Second World War, and a large section of the neighbourhood was demolished during the construction of the metro system. This led to riots, and as a result the original plans for large-scale reconstruction were abandoned by the government. The neighbourhood was rebuilt with smaller-scale residential buildings on the basis of its original layout. Catholic churches in Amsterdam have been constructed since the restoration of the episcopal hierarchy in 1853. One of the principal architects behind the city's Catholic churches, Cuypers, was also responsible for the Amsterdam Centraal station and the Rijksmuseum. In 1924, the Catholic Church hosted the International Eucharistic Congress in Amsterdam; numerous Catholic prelates visited the city, where festivities were held in churches and stadiums.
Catholic processions on the public streets, however, were still forbidden under law at the time. Only in the 20th century was Amsterdam's relation to Catholicism normalised, but despite its far larger population, the episcopal see of the city was placed in the provincial town of Haarlem. Historically, Amsterdam has been predominantly Christian: in 1900, Christians formed the largest religious group in the city (70% of the population), with the Dutch Reformed Church accounting for 45% of the city population and the Catholic Church for 25%. In recent times, religious demographics in Amsterdam have been changed by immigration from former colonies. Hinduism has been introduced by the Hindu diaspora from Suriname, and several distinct branches of Islam have been brought from various parts of the world. Islam is now the largest non-Christian religion in Amsterdam. The large community of Ghanaian immigrants has established African churches, often in parking garages in the Bijlmer area.

Diversity and immigration

Amsterdam experienced an influx of religions and cultures after the Second World War. With 180 different nationalities, Amsterdam is home to one of the widest varieties of nationalities of any city in the world. The proportion of the population of immigrant origin in the city proper is about 50%, and 88% of the population are Dutch citizens. Amsterdam is one of the Dutch municipalities that have provided immigrants with extensive free Dutch-language courses, which have benefited many of them.

Cityscape and architecture

Amsterdam fans out south from the Amsterdam Centraal station and Damrak, the main street off the station. The oldest area of the town is known as De Wallen (English: "The Quays"). It lies to the east of Damrak and contains the city's famous red-light district. To the south of De Wallen is the old Jewish quarter of Waterlooplein.
The medieval and colonial age canals of Amsterdam, known as grachten, embrace the heart of the city, where homes have interesting gables. Beyond the Grachtengordel are the former working-class areas of Jordaan and de Pijp. The Museumplein with the city's major museums, the Vondelpark, a 19th-century park named after the Dutch writer Joost van den Vondel, as well as the Plantage neighbourhood, with the zoo, are also located outside the Grachtengordel. Several parts of the city and the surrounding urban area are polders. This can be recognised by the suffix -meer, which means 'lake', as in Aalsmeer, Bijlmermeer, Haarlemmermeer and Watergraafsmeer.

Canals

The Amsterdam canal system is the result of conscious city planning. In the early 17th century, when immigration was at a peak, a comprehensive plan was developed that was based on four concentric half-circles of canals with their ends emerging at the IJ bay. Known as the Grachtengordel, three of the canals were mostly for residential development: the Herengracht (where "Heren" refers to Heren Regeerders van de stad Amsterdam, ruling lords of Amsterdam, whilst gracht means canal, so that the name can be roughly translated as "Canal of the Lords"), Keizersgracht (Emperor's Canal) and Prinsengracht (Prince's Canal). The fourth and outermost canal is the Singelgracht, which is often not mentioned on maps because it is a collective name for all canals in the outer ring. The Singelgracht should not be confused with the oldest and innermost canal, the Singel. The canals served for defence, water management and transport. The defences took the form of a moat and earthen dikes, with gates at transit points, but otherwise no masonry superstructures. The original plans have been lost, so historians, such as Ed Taverne, need to speculate on the original intentions: it is thought that the considerations of the layout were purely practical and defensive rather than ornamental.
Construction started in 1613 and proceeded from west to east, across the breadth of the layout – like a gigantic windshield wiper, as the historian Geert Mak calls it – and not from the centre outwards, as a popular myth has it. The canal construction in the southern sector was completed by 1656. Subsequently, the construction of residential buildings proceeded slowly. The eastern part of the concentric canal plan, covering the area between the Amstel river and the IJ bay, has never been implemented. In the following centuries, the land was used for parks, senior citizens' homes, theatres, other public facilities and waterways without much planning. Over the years, several canals have been filled in, becoming streets or squares, such as the Nieuwezijds Voorburgwal and the Spui.

Expansion

After the development of Amsterdam's canals in the 17th century, the city did not grow beyond its borders for two centuries. During the 19th century, Samuel Sarphati devised a plan based on the grandeur of Paris and London at that time. The plan envisaged the construction of new houses, public buildings and streets just outside the Grachtengordel. The main aim of the plan, however, was to improve public health. Although the plan did not expand the city, it did produce some of the largest public buildings to date, like the Paleis voor Volksvlijt. Following Sarphati, civil engineers Jacobus van Niftrik and Jan Kalff designed an entire ring of 19th-century neighbourhoods surrounding the city's centre, with the city preserving the ownership of all land outside the 17th-century limit, thus firmly controlling development. Most of these neighbourhoods became home to the working class. In response to overcrowding, two plans were designed at the beginning of the 20th century which were very different from anything Amsterdam had ever seen before: Plan Zuid, designed by the architect Berlage, and Plan West.
These plans involved the development of new neighbourhoods consisting of housing blocks for all social classes. After the Second World War, large new neighbourhoods were built in the western, southeastern and northern parts of the city. These new neighbourhoods were built to relieve the city's shortage of living space and give people affordable houses with modern conveniences. The neighbourhoods consisted mainly of large housing blocks located among green spaces, connected to wide roads, making the neighbourhoods easily accessible by motor car. The western suburbs which were built in that period are collectively called the Westelijke Tuinsteden. The area to the southeast of the city built during the same period is known as the Bijlmer.

Architecture

Amsterdam has a rich architectural history. The oldest building in Amsterdam is the Oude Kerk (English: Old Church), at the heart of the Wallen, consecrated in 1306. The oldest wooden building is Het Houten Huys at the Begijnhof. It was constructed around 1425 and is one of only two remaining wooden buildings in the city. It is also one of the few examples of Gothic architecture in Amsterdam. The oldest stone building of the Netherlands, The Moriaan, was built in 's-Hertogenbosch. In the 16th century, wooden buildings were razed and replaced with brick ones. During this period, many buildings were constructed in the architectural style of the Renaissance. Buildings of this period are easily recognisable by their stepped gable façades, typical of the Dutch Renaissance style. Amsterdam quickly developed its own Renaissance architecture. These buildings were built according to the principles of the architect Hendrick de Keyser. One of the most striking buildings designed by Hendrick de Keyser is the Westerkerk. In the 17th century, baroque architecture became very popular, as it was elsewhere in Europe. This roughly coincided with Amsterdam's Golden Age.
The leading architects of this style in Amsterdam were Jacob van Campen, Philips Vingboons and Daniel Stalpaert. Philips Vingboons designed splendid merchants' houses throughout the city. A famous building in baroque style in Amsterdam is the Royal Palace on Dam Square. Throughout the 18th century, Amsterdam was heavily influenced by French culture. This is reflected in the architecture of that period. Around 1815, architects broke with the baroque style and started building in different neo-styles. Most Gothic style buildings date from that era and are therefore said to be built in a neo-gothic style. At the end of the 19th century, the Jugendstil or Art Nouveau style became popular and many new buildings were constructed in this architectural style. Since Amsterdam expanded rapidly during this period, new buildings adjacent to the city centre were also built in this style. The houses in the vicinity of the Museum Square in Amsterdam Oud-Zuid are an example of Jugendstil. The last style that was popular in Amsterdam before the modern era was Art Deco. Amsterdam had its own version of the style, which was called the Amsterdamse School. Whole districts were built in this style, such as the Rivierenbuurt. A notable feature of the façades of buildings designed in the Amsterdamse School style is that they are highly decorated and ornate, with oddly shaped windows and doors. The old city centre is the focal point of all the architectural styles before the end of the 19th century. Jugendstil and Art Deco are mostly found outside the city's centre, in the neighbourhoods built in the early 20th century, although there are also some striking examples of these styles in the city centre. Most historic buildings in the city centre and nearby are houses, such as the famous merchants' houses lining the canals.

Parks and recreational areas

Amsterdam has many parks, open spaces and squares throughout the city.
The Vondelpark, the largest park in the city, is located in the Oud-Zuid neighbourhood and is named after the 17th-century Amsterdam author Joost van den Vondel. Yearly, the park has around 10 million visitors. In the park are an open-air theatre, a playground and several horeca (food and drink) facilities. In the Zuid borough is the Beatrixpark, named after Queen Beatrix. Between Amsterdam and Amstelveen is the Amsterdamse Bos ("Amsterdam Forest"), the largest recreational area in Amsterdam. Annually, almost 4.5 million people visit the park, which has a size of 1,000 hectares and is approximately three times the size of Central Park. The Amstelpark in the Zuid borough houses the Rieker windmill, which dates to 1636. Other parks include the Sarphatipark in the De Pijp neighbourhood, the Oosterpark in the Oost borough and the Westerpark in the Westerpark neighbourhood. The city has three beaches: Nemo Beach, Citybeach "Het stenen hoofd" (Silodam) and Blijburg, all located in the Centrum borough. The city has many open squares (plein in Dutch). Dam Square, the site of the original dam from which the city takes its name, is the main city square and holds the Royal Palace and the National Monument. Museumplein hosts various museums, including the Rijksmuseum, Van Gogh Museum and Stedelijk Museum. Other squares include Rembrandtplein, Muntplein, Nieuwmarkt, Leidseplein, Spui and Waterlooplein. Also near Amsterdam is the Nekkeveld estate conservation project.

Economy

Amsterdam is the financial and business capital of the Netherlands. According to the 2007 European Cities Monitor (ECM) – an annual location survey of Europe's leading companies carried out by global real estate consultant Cushman & Wakefield – Amsterdam is one of the top European cities in which to locate an international business, ranking fifth in the survey, behind London, Paris, Frankfurt and Barcelona.
A substantial number of large corporations and banks have their headquarters in the Amsterdam area, including AkzoNobel, Heineken International, ING Group, ABN AMRO, TomTom, Delta Lloyd Group, Booking.com and Philips. Although many small offices remain along the historic canals, centrally based companies have increasingly relocated outside Amsterdam's city centre. Consequently, the Zuidas (English: South Axis) has become the new financial and legal hub of Amsterdam, with the country's five largest law firms and several subsidiaries of large consulting firms, such as Boston Consulting Group and Accenture, as well as the World Trade Centre (Amsterdam), located in the Zuidas district. In addition to the Zuidas, there are three smaller financial districts in Amsterdam: around Amsterdam Sloterdijk railway station, where one can find the offices of several newspapers, such as De Telegraaf, as well as those of Deloitte, the Gemeentelijk Vervoerbedrijf (municipal public transport company) and the Dutch tax offices (Belastingdienst); around the Johan Cruyff Arena in Amsterdam Zuidoost, with the headquarters of ING Group; and around the Amstel railway station in the Amsterdam-Oost district to the east of the historical city. Amsterdam's tallest building, the Rembrandt Tower, is located here, as are the headquarters of Philips, the Dutch multinational conglomerate. Amsterdam has been a leading city in reducing the use of raw materials and has created a plan to become a circular city by 2050. The adjoining municipality of Amstelveen is the location of KPMG International's global headquarters. Other non-Dutch companies have chosen to settle in communities surrounding Amsterdam since they allow freehold property ownership, whereas Amsterdam retains ground rent. The Amsterdam Stock Exchange (AEX), now part of Euronext, is the world's oldest stock exchange and, due to Brexit, has overtaken the LSE as the largest bourse in Europe. It is near Dam Square in the city centre.
Port of Amsterdam

The Port of Amsterdam is the fourth-largest port in Europe, the 38th-largest port in the world and the second-largest port in the Netherlands by metric tons of cargo. In 2014, the Port of Amsterdam had a cargo throughput of 97.4 million tons, mostly bulk cargo. Amsterdam has the biggest cruise port in the Netherlands, with more than 150 cruise ships every year. In 2019, the new lock in IJmuiden opened; since then, the port has been able to grow to 125 million tonnes in capacity.

Tourism

Amsterdam is one of the most popular tourist destinations in Europe, receiving more than 5.34 million international visitors annually, excluding the 16 million day-trippers who visit the city every year. The number of visitors has been growing steadily over the past decade. This can be attributed to an increasing number of European visitors. Two-thirds of the hotels are located in the city's centre. Hotels with 4 or 5 stars contribute 42% of the total beds available and 41% of the overnight stays in Amsterdam. The room occupation rate was 85% in 2017, up from 78% in 2006. The majority of tourists (74%) originate from Europe. The largest group of non-European visitors come from the United States, accounting for 14% of the total. Certain years are given a theme in Amsterdam to attract extra tourists. For example, the year 2006 was designated "Rembrandt 400", to celebrate the 400th birthday of Rembrandt van Rijn. Some hotels offer special arrangements or activities during these years. The number of guests per year staying at the four campsites around the city ranges from 12,000 to 65,000.

De Wallen (red-light district)

De Wallen, also known as Walletjes or Rosse Buurt, is a designated area for legalised prostitution and is Amsterdam's largest and best-known red-light district. This neighbourhood has become a famous attraction for tourists.
It consists of a network of canals, streets and alleys containing several hundred small, one-room apartments rented by sex workers who offer their services from behind a window or glass door, typically illuminated with red lights. In recent years, the city government has been closing and repurposing the famous red-light district windows in an effort to clean up the area and reduce the amount of party and sex tourism.

Retail

Shops in Amsterdam range from large high-end department stores, such as De Bijenkorf, founded in 1870, to small speciality shops. Amsterdam's high-end shops are found in the streets P.C. Hooftstraat and Cornelis Schuytstraat, which are located in the vicinity of the Vondelpark. One of Amsterdam's busiest high streets is the narrow, medieval Kalverstraat in the heart of the city. Other shopping areas include the Negen Straatjes, and the Haarlemmerdijk and Haarlemmerstraat. The Negen Straatjes are nine narrow streets within the Grachtengordel, the concentric canal system of Amsterdam. The Negen Straatjes differ from other shopping districts in their large diversity of privately owned shops. The Haarlemmerstraat and Haarlemmerdijk were voted best shopping street in the Netherlands in 2011. Like the Negen Straatjes, these streets have a large diversity of privately owned shops; however, whereas the Negen Straatjes are dominated by fashion stores, the Haarlemmerstraat and Haarlemmerdijk offer a wide variety of stores, including candy and other food-related stores, lingerie, sneakers, wedding clothing, interior shops, books, Italian delis, racing and mountain bikes and skatewear. The city also features a large number of open-air markets, such as the Albert Cuyp Market, Westerstraat-markt, Ten Katemarkt and Dappermarkt. Some of these markets are held daily, like the Albert Cuypmarkt and the Dappermarkt. Others, like the Westerstraatmarkt, are held every week.

Fashion

Several fashion brands and designers are based in Amsterdam.
Fashion designers include Iris van Herpen, Mart Visser, Viktor & Rolf, Marlies Dekkers and Frans Molenaar. Fashion models like Yfke Sturm, Doutzen Kroes and Kim Noorda started their careers in Amsterdam. Amsterdam has its garment centre in the World Fashion Center. Fashion photographers Inez van Lamsweerde and Vinoodh Matadin were born in Amsterdam.

Culture

During the later part of the 16th century, Amsterdam's Rederijkerskamer (Chamber of rhetoric) organised contests between different Chambers in the reading of poetry and drama. In 1637, the Schouwburg, the first theatre in Amsterdam, was built; it opened on 3 January 1638. The first ballet performances in the Netherlands were given in the Schouwburg in 1642 with the Ballet of the Five Senses. In the 18th century, French theatre became popular. While Amsterdam was under the influence of German music in the 19th century, there were few national opera productions; the Hollandse Opera of Amsterdam was built in 1888 for the specific purpose of promoting Dutch opera. In the 19th century, popular culture was centred on the Nes area in Amsterdam (mainly vaudeville and music-hall). An improved metronome was invented in 1812 by Dietrich Nikolaus Winkel. The Rijksmuseum (1885) and Stedelijk Museum (1895) were built and opened. In 1888, the Concertgebouworkest orchestra was established. With the 20th century came cinema, radio and television. Though most studios are located in Hilversum and Aalsmeer, Amsterdam's influence on programming is very strong. Many people who work in the television industry live in Amsterdam. Also, the headquarters of the Dutch SBS Broadcasting Group is located in Amsterdam.

Museums

The most important museums of Amsterdam are located on the Museumplein (Museum Square), situated at the southwestern side of the Rijksmuseum. It was created in the last quarter of the 19th century on the grounds of the former World's Fair. The northeastern part of the square is bordered by the large Rijksmuseum.
In front of the Rijksmuseum on the square itself is a long, rectangular pond, which is transformed into an ice rink in winter. The northwestern part of the square is bordered by the Van Gogh Museum, House of Bols Cocktail & Genever Experience and Coster Diamonds. The southwestern border of the Museum Square is the Van Baerlestraat, which is a major thoroughfare in this part of Amsterdam. The Concertgebouw is located across this street from the square. To the southeast of the square are several large houses, one of which contains the American consulate. A parking garage can be found underneath the square, as well as a supermarket. The Museumplein is covered almost entirely with a lawn, except for the northeastern part of the square, which is covered with gravel. The current appearance of the square was realised in 1999, when the square was remodelled. The square itself is the most prominent site in Amsterdam for festivals and outdoor concerts, especially in the summer. Plans were made in 2008 to remodel the square again, because many inhabitants of Amsterdam were not happy with its appearance. The Rijksmuseum possesses the largest and most important collection of classical Dutch art. It opened in 1885. Its collection consists of nearly one million objects. The artist most associated with Amsterdam is Rembrandt, whose work, and the work of his pupils, is displayed in the Rijksmuseum. Rembrandt's masterpiece The Night Watch is one of the museum's most famous works. It also houses paintings from artists like Bartholomeus van der Helst, Johannes Vermeer, Frans Hals, Ferdinand Bol, Albert Cuyp, Jacob van Ruisdael and Paulus Potter. Aside from paintings, the collection consists of a large variety of decorative art, ranging from Delftware to giant doll-houses from the 17th century. The architect of the Gothic Revival building was P.J.H. Cuypers. The museum underwent a 10-year, 375 million euro renovation starting in 2003.
The full collection was reopened to the public on 13 April 2013 and the Rijksmuseum has remained the most visited museum in Amsterdam with 2.2 million visitors in 2016 and 2.16 million in 2017. Van Gogh lived in Amsterdam for a short while and there is a museum dedicated to his work. The museum is housed in one of the few modern buildings in this area of Amsterdam. The building was designed by Gerrit Rietveld. This building is where the permanent collection is displayed. A new building was added to the museum in 1999. This building, known as the performance wing, was designed by Japanese architect Kisho Kurokawa. Its purpose is to house temporary exhibitions of the museum. Some of Van Gogh's most famous paintings, like
the satirist Ewert Karlsson (1918–2004). For decades he was frequently published in the Swedish tabloid Aftonbladet. Overview The museum is a national central museum with the task of preserving and telling the story of work and everyday life. It has, among other things, exhibitions on the terms and conditions of work and the history of industrial society. The museum is also known for highlighting gender perspectives in its exhibitions. The museum documents work and everyday life by collecting personal stories, including people's professional lives from both the past and present. In the museum's archive there is a rich collection of memory material and documentation projects: over 2,600 interviews, stories and photo documentations have been collected since the museum opened. The museum also supports the country's approximately 1,500 working life museums, which are old workplaces preserved to convey their history. Exhibitions The Museum of Work shows exhibitions running over several years, as well as shorter exhibitions, including several photo exhibitions on themes linked to work and everyday life. The history of Alva The History of Alva Karlsson is the only permanent exhibition in the museum. The exhibition connects to the museum's building and its history as part of the textile industry in Norrköping. Alva worked as a roller between 1927 and 1962. Industriland One of the museum's long-term exhibitions was Industriland – when Sweden became modern, which ran from 2007 to 2013 and consisted of a continuous band of various objects that were somehow significant both for working life and everyday life during the period 1930
Audi S1, which he had retired from the WRC two years earlier. The Audi S1 employed Audi's time-tested inline-five-cylinder turbocharged engine, mated to a six-speed gearbox and running on Audi's famous four-wheel drive system. All of Audi's top drivers drove this car: Hannu Mikkola, Stig Blomqvist, Walter Röhrl and Michèle Mouton. This Audi S1 started the range of Audi 'S' cars, which now represents an increased level of sports-performance equipment within the mainstream Audi model range. In the United States As Audi moved away from rallying and into circuit racing, they chose to move first into America with the Trans-Am in 1988. In 1989, Audi moved to International Motor Sports Association (IMSA) GTO with the Audi 90; however, as they avoided the two major endurance events (Daytona and Sebring) despite winning on a regular basis, they lost out on the title. Touring cars In 1990, having completed their objective to market cars in North America, Audi returned to Europe, turning first to the Deutsche Tourenwagen Meisterschaft (DTM) series with the Audi V8, and then in 1993, being unwilling to build cars for the new formula, they turned their attention to the fast-growing Super Touring category, a collection of national championships. Audi first entered the French Supertourisme and Italian Superturismo. In the following year, Audi switched to the German Super Tourenwagen Cup (known as STW), and then to the British Touring Car Championship (BTCC) the year after that. The Fédération Internationale de l'Automobile (FIA), having difficulty regulating the quattro four-wheel drive system and the impact it had on the competitors, eventually banned all four-wheel drive cars from competing in the series in 1998, but by then Audi had switched all their works efforts to sports car racing. 
By 2000, Audi was still competing in the US with their RS4 for the SCCA Speed World GT Challenge, through dealer/team Champion Racing, competing against Corvettes, Vipers, and smaller BMWs (it is one of the few series to permit 4WD cars). In 2003, Champion Racing entered an RS6. Once again, the quattro four-wheel drive was superior, and Champion Audi won the championship. They returned in 2004 to defend their title, but a newcomer, Cadillac, with the new Omega Chassis CTS-V, gave them a run for their money. After four victories in a row, the Audis were handicapped with several changes that deeply affected the car's performance: added ballast weight and reduced turbocharger boost pressure, while Champion Audi also decided to switch to different tyres. In 2004, after years of competing with the TT-R in the revitalised DTM series, with privateer team Abt Racing/Christian Abt taking the 2002 title with Laurent Aïello, Audi returned to touring car racing as a full factory effort by entering two factory-supported Joest Racing A4 DTM cars. 24 Hours of Le Mans Audi began racing prototype sportscars in 1999, debuting at the Le Mans 24 Hours. Two car concepts were developed and raced in their first season: the Audi R8R (open-cockpit 'roadster' prototype) and the Audi R8C (closed-cockpit 'coupé' GT-prototype). The R8R scored a creditable podium on its racing debut at Le Mans and was the concept which Audi continued to develop into the 2000 season due to favourable rules for open-cockpit prototypes. However, most of the competitors (such as BMW, Toyota, Mercedes and Nissan) withdrew at the end of 1999. The factory-supported Joest Racing team won at Le Mans three times in a row with the Audi R8 (2000–2002), as well as winning every race in the American Le Mans Series in its first year. Audi also sold the car to customer teams such as Champion Racing. 
In 2003, two Bentley Speed 8s, with engines designed by Audi and driven by Joest drivers loaned to the fellow Volkswagen Group company, competed in the GTP class and finished the race in the top two positions, while the Champion Racing R8 finished third overall and first in the LMP900 class. Audi returned to the winner's podium at the 2004 race, with the top three finishers all driving R8s: Audi Sport Japan Team Goh finished first, Audi Sport UK Veloqx second, and Champion Racing third. At the 2005 24 Hours of Le Mans, Champion Racing entered two R8s, along with an R8 from the Audi PlayStation Team Oreca. The R8s (which were built to the old LMP900 regulations) received a narrower air inlet restrictor, reducing power, and additional weight compared to the newer LMP1 chassis. On average, the R8s were about 2–3 seconds off the pace of the Pescarolo–Judd. But with a team of excellent drivers and experience, both Champion R8s were able to take first and third, while the Oreca team took fourth. The Champion team was also the first American team to win Le Mans since the Gulf Ford GTs in 1967. This also ended the long era of the R8; its replacement for 2006, the Audi R10 TDI, was unveiled on 13 December 2005. The R10 TDI employed many new and innovative features, the most notable being the twin-turbocharged direct injection diesel engine. It was first raced in the 2006 12 Hours of Sebring as a race-test in preparation for the 2006 24 Hours of Le Mans, which it later went on to win. Audi thus achieved the first win for a diesel sports car, at the 12 Hours of Sebring (the car was developed with a diesel engine due to ACO regulations that favour diesel engines). As well as winning the 24 Hours of Le Mans in 2006, the R10 TDI beat the Peugeot 908 HDi FAP in the following two editions (however, Peugeot won the 24-hour race in 2009), and Audi won again with the R15 TDI Plus, taking a podium clean-sweep (all four 908 entries retired) while breaking the distance record set by the Porsche 917K of Martini Racing. 
Audi's sports car racing success would continue with the Audi R18's victory at the 2011 24 Hours of Le Mans. Audi Sport Team Joest's Benoît Tréluyer earned Audi their first pole position in five years, while the team's sister car locked out the front row. Early accidents eliminated two of Audi's three entries, but the sole remaining Audi R18 TDI of Tréluyer, Marcel Fässler, and André Lotterer held off the trio of Peugeot 908s to claim victory by a margin of 13.8 seconds. Results American Le Mans Series Audi entered a factory racing team run by Joest Racing into the American Le Mans Series under the Audi Sport North America name in 2000. This was a successful operation, with the team winning on its debut in the series at the 2000 12 Hours of Sebring. Factory-backed Audi R8s were the dominant car in the ALMS, taking 25 victories between 2000 and the end of the 2002 season. In 2003, Audi sold customer cars to Champion Racing while continuing to race the factory Audi Sport North America team. Champion Racing won many races as a private team running Audi R8s and eventually replaced Team Joest as Audi Sport North America between 2006 and 2008. Since 2009 Audi has not taken part in full American Le Mans Series championships, but has competed in the series' opening races at Sebring, using the 12-hour race as a test for Le Mans; the race was also part of the 2012 FIA World Endurance Championship season calendar. Results European Le Mans Series Audi participated in the 2003 1000km of Le Mans, a one-off sports car race in preparation for the 2004 European Le Mans Series. The factory team Audi Sport UK won races and the championship in the 2004 season, but Audi was unable to match the sweeping success of Audi Sport North America in the American Le Mans Series, partly due to the arrival of a factory competitor in LMP1, Peugeot. The French manufacturer's 908 HDi FAP became the car to beat in the series from 2008 onwards, with 20 LMP wins. 
However, Audi were able to secure the championship in 2008 even though Peugeot scored more race victories in the season. Results World Endurance Championship 2012 In 2012, the FIA sanctioned a World Endurance Championship which would be organised by the ACO as a continuation of the ILMC. Audi won the first WEC race at Sebring and followed this up with a further three successive wins, including the 2012 24 Hours of Le Mans. Audi scored a final fifth victory of the 2012 WEC in Bahrain and were able to win the inaugural WEC Manufacturers' Championship. 2013 As defending champions, Audi once again entered the Audi R18 e-tron quattro chassis into the 2013 WEC and the team won the first five consecutive races, including the 2013 24 Hours of Le Mans. The victory at Round 5, Circuit of the Americas, was of particular significance as it marked the 100th win for Audi in Le Mans prototypes. Audi secured their second consecutive WEC Manufacturers' Championship at Round 6 after taking second place and half points in the red-flagged Fuji race. 2014 For the 2014 season, Audi entered a redesigned and upgraded R18 e-tron quattro which featured a 2 MJ energy recovery system. As defending champions, Audi would once again face a challenge in LMP1 from Toyota, and additionally from Porsche, who returned to endurance racing after a 16-year absence. The season-opening 6 Hours of Silverstone was a disaster for Audi, who saw both cars retire from the race, marking the first time that an Audi had failed to score a podium in a World Endurance Championship race. Results Formula E Audi provide factory support to Abt Sportsline in the FIA Formula E Championship. The team competed under the title of Audi Sport Abt Formula E Team in the inaugural 2014–15 Formula E season. On 13 February 2014 the team announced its driver line-up as Daniel Abt and World Endurance Championship driver Lucas di Grassi. 
Formula One Audi has been linked to Formula One in recent years but has long resisted entering, due to the company's opinion that the sport is not relevant to road cars; however, hybrid power unit technology has since been adopted into the sport, swaying the company's view and encouraging research into the programme by former Ferrari team principal Stefano Domenicali. Marketing Branding The Audi emblem is four overlapping rings that represent the four marques of Auto Union. The Audi emblem symbolises the amalgamation of Audi with DKW, Horch and Wanderer: the first ring from the left represents Audi, the second represents DKW, the third Horch, and the fourth and last ring Wanderer. The design is popularly believed to have been the idea of Klaus von Oertzen, the director of sales at Wanderer: when Berlin was chosen as the host city for the 1936 Summer Olympics, a logo echoing the form of the Olympic rings symbolised the newly established Auto Union's desire to succeed. Somewhat ironically, the International Olympic Committee later sued Audi in the International Trademark Court in 1995, where the IOC lost. The original "Audi" script, with the distinctive slanted tails on the "A" and "d", was created for the historic Audi company in 1920 by the famous graphic designer Lucian Bernhard, and was resurrected when Volkswagen revived the brand in 1965. Following the demise of NSU in 1977, less prominence was given to the four rings, in preference to the "Audi" script encased within a black (later red) ellipse, which was commonly displayed next to the Volkswagen roundel when the two brands shared a dealer network under the V.A.G banner. The ellipse (known as the Audi Oval) was phased out after 1994, when Audi formed its own independent dealer network, and prominence was given back to the four rings; at the same time Audi Sans (a derivative of Univers) was adopted as the font for all marketing materials and corporate communications, and was also used in the vehicles themselves. 
As part of Audi's centennial celebration in 2009, the company updated the logo, changing the font to left-aligned Audi Type and altering the shading of the overlapping rings. The revised logo was designed by Rayan Abdullah. Audi developed a Corporate Sound concept, with an Audi Sound Studio designed for producing the Corporate Sound. The Corporate Sound project began with sound agency Klangerfinder GmbH & Co KG and s12 GmbH. Audio samples were created in Klangerfinder's sound studio in Stuttgart, becoming part of the Audi Sound Studio collection. Other Audi Sound Studio components include The Brand Music Pool and The Brand Voice. Audi also developed a Sound Branding Toolkit including certain instruments, sound themes, rhythms and car sounds, all of which are supposed to reflect the Audi sound character. Audi started using a beating-heart sound trademark in 1996. An updated heartbeat sound logo, developed by agencies Klangerfinder GmbH & Co KG of Stuttgart and S12 GmbH of Munich, was first used in 2010 in an Audi A8 commercial with the slogan The Art of Progress. Slogans Audi's corporate tagline is Vorsprung durch Technik, meaning "Progress through Technology". The German-language tagline is used in many European countries, including the United Kingdom (but not in Italy, where a different tagline is used), and in other markets, such as Latin America, Oceania, Africa and parts of Asia including Japan. Originally, the American tagline was Innovation through technology, but in Canada Vorsprung durch Technik was used. Since 2007, Audi has used the slogan Truth in Engineering in the U.S. However, since the Audi emissions testing scandal came to light in September 2015, this slogan has been lambasted for being discordant with reality. In fact, just hours after disgraced Volkswagen CEO Martin Winterkorn admitted to cheating on emissions data, an advertisement during the 2015 Primetime Emmy Awards promoted Audi's latest advances in low-emissions technology with Kermit the Frog stating, "It's not that easy being green." 
Vorsprung durch Technik was first used in English-language advertising after Sir John Hegarty of the Bartle Bogle Hegarty advertising agency visited the Audi factory in 1982. In the original British television commercials, the phrase was voiced by Geoffrey Palmer. After its repeated use in advertising campaigns, the phrase found its way into popular culture, including the British comedy Only Fools and Horses, the U2 song "Zooropa" and the Blur song "Parklife". Similar-sounding phrases have also been used, including as the punchline for a joke in the movie Lock, Stock and Two Smoking Barrels and in the British TV series Peep Show. Typography Audi Sans (based on Univers Extended) was originally created in 1997 by Ole Schäfer for MetaDesign. MetaDesign was later commissioned to produce a new corporate typeface called Audi Type, designed by Paul van der Laan and Pieter van Rosmalen of Bold Monday. The font began to appear in Audi's products and marketing materials in 2009. Sponsorships Audi is a strong partner of many kinds of sport. In football, long partnerships exist between Audi and domestic clubs including Bayern Munich, Hamburger SV, 1. FC Nürnberg, Hertha BSC and Borussia Mönchengladbach, and international clubs including Chelsea, Real Madrid, FC Barcelona, A.C. Milan, AFC Ajax and Persepolis. Audi also sponsors winter sports: the Audi FIS Alpine Ski World Cup is named after the company. Additionally, Audi supports the German Ski Association (DSV) as well as the alpine skiing national teams of Switzerland, Sweden, Finland, France, Liechtenstein, Italy, Austria and the U.S. For almost two decades, Audi has fostered golf, for example with the Audi quattro Cup and the HypoVereinsbank Ladies German Open presented by Audi. In sailing, Audi is engaged in the Medcup regatta, supports the team Luna Rossa during the Louis Vuitton Pacific Series and is also the primary sponsor of the Melges 20 sailboat. 
Further, Audi sponsors the regional teams ERC Ingolstadt (hockey) and FC Ingolstadt 04 (soccer). In 2009, the year of Audi's 100th anniversary, the company organised the Audi Cup for the first time. Audi also sponsors the New York Yankees. In October 2010 they agreed a three-year sponsorship deal with Everton. Audi also sponsors the England Polo Team and holds the Audi Polo Awards. Marvel Cinematic Universe Since the start of the Marvel Cinematic Universe, Audi has had a deal to sponsor, promote and provide vehicles for several films. So far these have been Iron Man, Iron Man 2, Iron Man 3, Avengers: Age of Ultron, Captain America: Civil War, Spider-Man: Homecoming, Avengers: Endgame and Spider-Man: Far From Home. The R8 supercar became the personal vehicle of Tony Stark (played by Robert Downey Jr.) in six of these films. The e-tron vehicles were promoted in Endgame and Far From Home. Several commercials were co-produced by Marvel and Audi to promote several new concepts and some of the latest vehicles, such as the A8, SQ7 and the e-tron fleet. Multitronic campaign In 2001, Audi promoted the new multitronic continuously variable transmission with television commercials throughout Europe, featuring an impersonator of musician and actor Elvis Presley. A prototypical dashboard figure – later named "Wackel-Elvis" ("Wobble Elvis" or "Wobbly Elvis") – appeared in the commercials to demonstrate the smooth ride in an Audi equipped with the multitronic transmission. The dashboard figure was originally intended for use in the commercials only, but after they aired, demand for Wackel-Elvis grew among fans and the figure was mass-produced in China and marketed by Audi in their factory outlet store. Audi TDI As part of Audi's attempt to promote its diesel technology in 2009, the company began the Audi Mileage Marathon. 
The driving tour featured a fleet of 23 Audi TDI vehicles from 4 models (Audi Q7 3.0 TDI, Audi Q5 3.0 TDI, Audi A4 3.0 TDI, Audi A3 Sportback 2.0 TDI with S tronic transmission) travelling across the American continent from New York to Los Angeles, passing major cities like Chicago, Dallas and Las Vegas during the 13 daily stages, as well as natural wonders including the Rocky Mountains, Death Valley and the Grand Canyon. Audi e-tron The next phase of technology Audi is developing is the e-tron electric drive powertrain system. They have shown several concept cars, each with different levels of size and performance. The original e-tron concept, shown at the 2009 Frankfurt motor show, is based on the platform of the R8 and has been scheduled for limited production. Power is provided by electric motors at all four wheels. The second concept was shown at the 2010 Detroit Motor Show. Power is provided by two electric motors at the rear axle. This concept is also considered to be the direction for a future mid-engined gas-powered two-seat performance coupé. The Audi A1 e-tron concept, based on the Audi A1 production model, is a hybrid vehicle with a range-extending Wankel rotary engine to provide power after the initial charge of the battery is depleted. It is the only concept of the three to have range-extending capability. The car is powered through the front wheels, always using electric power. It was set to be displayed at the Auto Expo 2012 in New Delhi, India, from 5 January. Powered by a 1.4-litre engine, it can cover a distance of up to 54 km on a single charge. The e-tron was also shown in the 2013 blockbuster film Iron Man 3 and was driven by Tony Stark (Iron Man). In video games Audi has supported the European version of PlayStation Home, the PlayStation 3's online community-based service, by releasing a dedicated Home space. Audi is the first carmaker to develop such a space for Home. 
On 17 December 2009, Audi released two spaces; the Audi Home Terminal and the Audi Vertical Run. The Audi Home Terminal features an Audi TV channel delivering video content, an Internet Browser feature, and a view of a city. The Audi Vertical Run is where users can access the mini-game Vertical Run, a futuristic mini-game featuring Audi's e-tron concept. Players collect energy and race for the highest possible speeds and the fastest players earn a place in the Audi apartments located in a large tower in the centre of the Audi Space. In both the Home Terminal and Vertical Run spaces, there are teleports where users can teleport
having classified the model internally as the F103, sold it simply as the "Audi". Later developments of the model were named after their horsepower ratings and sold as the Audi 60, 75, 80, and Super 90, selling until 1972. Initially, Volkswagen was hostile to the idea of Auto Union as a standalone entity producing its own models, having acquired the company merely to boost its own production capacity through the Ingolstadt assembly plant – to the point where Volkswagen executives ordered that the Auto Union name and flags bearing the four rings be removed from the factory buildings. Then-VW chief Heinz Nordhoff explicitly forbade Auto Union from any further product development. Fearing that Volkswagen had no long-term ambition for the Audi brand, Auto Union engineers under the leadership of Ludwig Kraus developed the first Audi 100 in secret, without Nordhoff's knowledge. When presented with a finished prototype, Nordhoff was so impressed he authorised the car for production; launched in 1968, it went on to be a huge success. With this, the resurrection of the Audi brand was complete, and it was followed by the first-generation Audi 80 in 1972, which would in turn provide a template for VW's new front-wheel-drive water-cooled range that debuted from the mid-1970s onward. In 1969, Auto Union merged with NSU, based in Neckarsulm, near Stuttgart. In the 1950s, NSU had been the world's largest manufacturer of motorcycles, but had moved on to produce small cars like the NSU Prinz, the TT and TTS versions of which are still popular as vintage race cars. NSU then focused on new rotary engines based on the ideas of Felix Wankel. In 1967, the new NSU Ro 80 was a car well ahead of its time in technical details such as aerodynamics, light weight, and safety. However, teething problems with the rotary engines put an end to the independence of NSU. The Neckarsulm plant is now used to produce the larger Audi models A6 and A8. 
The Neckarsulm factory is also home of the "quattro GmbH" (from November 2016 "Audi Sport GmbH"), a subsidiary responsible for development and production of Audi high-performance models: the R8 and the RS model range. Modern era The new merged company was incorporated on 1 January 1969 and was known as Audi NSU Auto Union AG, with its headquarters at NSU's Neckarsulm plant, and saw the emergence of Audi as a separate brand for the first time since the pre-war era. Volkswagen introduced the Audi brand to the United States for the 1970 model year. That same year, the mid-sized car that NSU had been working on, the K70, originally intended to slot between the rear-engined Prinz models and the futuristic NSU Ro 80, was instead launched as a Volkswagen. After the launch of the Audi 100 of 1968, the Audi 80/Fox (which formed the basis for the 1973 Volkswagen Passat) followed in 1972 and the Audi 50 (later rebadged as the Volkswagen Polo) in 1974. The Audi 50 was a seminal design because it was the first incarnation of the Golf/Polo concept, one that led to a hugely successful world car. Ultimately, the Audi 80 and 100 (progenitors of the A4 and A6, respectively) became the company's biggest sellers, whilst little investment was made in the fading NSU range; the Prinz models were dropped in 1973 whilst the fatally flawed NSU Ro80 went out of production in 1977, spelling the effective end of the NSU brand. Production of the Audi 100 had been steadily moved from Ingolstadt to Neckarsulm as the 1970s had progressed, and by the appearance of the second generation C2 version in 1976, all production was now at the former NSU plant. Neckarsulm from that point onward would produce Audi's higher-end models. The Audi image at this time was a conservative one, and so, a proposal from chassis engineer Jörg Bensinger was accepted to develop the four-wheel drive technology in Volkswagen's Iltis military vehicle for an Audi performance car and rally racing car. 
The performance car, introduced in 1980, was named the "Audi Quattro", a turbocharged coupé which was also the first German large-scale production vehicle to feature permanent all-wheel drive through a centre differential. Commonly referred to as the "Ur-Quattro" (the "Ur-" prefix is a German augmentative used, in this case, to mean "original", and is also applied to the first generation of Audi's S4 and S6 Sport Saloons, as in "UrS4" and "UrS6"), few of these vehicles were produced (all hand-built by a single team), but the model was a great success in rallying. Prominent wins proved the viability of all-wheel-drive racecars, and the Audi name became associated with advances in automotive technology. In 1985, with the Auto Union and NSU brands effectively dead, the company's official name was shortened to simply Audi AG. At the same time the company's headquarters moved back to Ingolstadt, and two new wholly owned subsidiaries, Auto Union GmbH and NSU GmbH, were formed to own and manage the historical trademarks and intellectual property of the original constituent companies (the exception being Horch, which had been retained by Daimler-Benz after the VW takeover) and to operate Audi's heritage operations. In 1986, as the Passat-based Audi 80 was beginning to develop a kind of "grandfather's car" image, the type 89 was introduced. This completely new development sold extremely well. However, its modern and dynamic exterior belied the low performance of its base engine, and its base package was quite spartan (even the passenger-side mirror was an option). In 1987, Audi put forward a new and very elegant Audi 90, which had a much superior set of standard features. In the early 1990s, sales began to slump for the Audi 80 series, and some basic construction problems started to surface. In the early part of the 21st century, Audi set forth on a German racetrack to claim and maintain several world records, such as top speed endurance. 
This effort was in-line with the company's heritage from the 1930s racing era Silver Arrows. Through the early 1990s, Audi began to shift its target market upscale to compete against German automakers Mercedes-Benz and BMW. This began with the release of the Audi V8 in 1990. It was essentially a new engine fitted to the Audi 100/200, but with noticeable bodywork differences. Most obvious was the new grille that was now incorporated in the bonnet. By 1991, Audi had the four-cylinder Audi 80, the 5-cylinder Audi 90 and Audi 100, the turbocharged Audi 200 and the Audi V8. There was also a coupé version of the 80/90 with both four- and five-cylinder engines. Although the five-cylinder engine was a successful and robust powerplant, it was still a little too different for the target market. With the introduction of an all-new Audi 100 in 1992, Audi introduced a 2.8L V6 engine. This engine was also fitted to a face-lifted Audi 80 (all 80 and 90 models were now badged 80 except for the USA), giving this model a choice of four-, five-, and six-cylinder engines, in saloon, coupé and convertible body styles. The five-cylinder was soon dropped as a major engine choice; however, a turbocharged version remained. The engine, initially fitted to the 200 quattro 20V of 1991, was a derivative of the engine fitted to the Sport Quattro. It was fitted to the Audi Coupé, named the S2, and also to the Audi 100 body, and named the S4. These two models were the beginning of the mass-produced S series of performance cars. Audi 5000 unintended acceleration allegations Sales in the United States fell after a series of recalls from 1982 to 1987 of Audi 5000 models associated with reported incidents of sudden unintended acceleration linked to six deaths and 700 accidents. At the time, NHTSA was investigating 50 car models from 20 manufacturers for sudden surges of power. 
A 60 Minutes report aired 23 November 1986, featuring interviews with six people who had sued Audi after reporting unintended acceleration, showing an Audi 5000 ostensibly suffering a problem when the brake pedal was pushed. Subsequent investigation revealed that 60 Minutes had engineered the failure – fitting a canister of compressed air on the passenger-side floor, linked via a hose to a hole drilled into the transmission. Audi contended, prior to findings by outside investigators, that the problems were caused by driver error, specifically pedal misapplication. Subsequently, the National Highway Traffic Safety Administration (NHTSA) concluded that the majority of unintended acceleration cases, including all the ones that prompted the 60 Minutes report, were caused by driver error such as confusion of pedals. CBS did not acknowledge the test results of involved government agencies, but did acknowledge the similar results of another study. In a review study published in 2012, NHTSA summarized its past findings about the Audi unintended acceleration problems: "Once an unintended acceleration had begun, in the Audi 5000, due to a failure in the idle-stabilizer system (producing an initial acceleration of 0.3g), pedal misapplication resulting from panic, confusion, or unfamiliarity with the Audi 5000 contributed to the severity of the incident." This summary is consistent with the conclusions of NHTSA's most technical analysis at the time: "Audi idle-stabilization systems were prone to defects which resulted in excessive idle speeds and brief unanticipated accelerations of up to 0.3g [which is similar in magnitude to an emergency stop in a subway car]. These accelerations could not be the sole cause of [(long-duration) sudden acceleration incidents (SAI)], but might have triggered some SAIs by startling the driver. The defective idle-stabilization system performed a type of electronic throttle control. 
Significantly: multiple "intermittent malfunctions of the electronic control unit were observed and recorded ... and [were also observed and] reported by Transport Canada." With a series of recall campaigns, Audi made several modifications; the first adjusted the distance between the brake and accelerator pedals on automatic-transmission models. Later repairs, of 250,000 cars dating back to 1978, added a device requiring the driver to press the brake pedal before shifting out of park. A legacy of the Audi 5000 and other reported cases of sudden unintended acceleration is the intricate gear-stick patterns and brake interlock mechanisms now used to prevent inadvertent shifting into forward or reverse. It is unclear how the defects in the idle-stabilization system were addressed. Audi's U.S. sales, which had reached 74,061 in 1985, dropped to 12,283 in 1991 and remained level for three years, with resale values falling dramatically. Audi subsequently offered increased warranty protection and renamed the affected models – the 5000 becoming the 100 and 200 in 1989 – and reached the same sales levels again only by model year 2000. A 2010 BusinessWeek article – outlining possible parallels between Audi's experience and the 2009–2010 Toyota vehicle recalls – noted that a class-action lawsuit filed in 1987 by about 7,500 Audi 5000-model owners remains unsettled and contested in Chicago's Cook County after appeals at the Illinois state and U.S. federal levels.

Model introductions

In the mid-to-late 1990s, Audi introduced new technologies including the use of aluminium construction. Produced from 1999 to 2005, the Audi A2 was a futuristic supermini, born from the Al2 concept, with many features that helped regain consumer confidence, such as the aluminium space frame, a first in production car design. With the A2, Audi further expanded its TDI technology through the use of frugal three-cylinder engines. The A2 was extremely aerodynamic, its shape honed in the wind tunnel.
The Audi A2 was criticised for its high price and was never a strong seller, but it established Audi's reputation as a cutting-edge manufacturer. The model, a Mercedes-Benz A-Class competitor, sold relatively well in Europe. However, the A2 was discontinued in 2005 and Audi decided not to develop an immediate replacement. The next major model change came in 1995, when the Audi A4 replaced the Audi 80. The new nomenclature scheme was applied to the Audi 100 to become the Audi A6 (with a minor facelift). This also meant the S4 became the S6 and a new S4 was introduced in the A4 body. The S2 was discontinued. The Audi Cabriolet continued (based on the Audi 80 platform) until 1999, gaining engine upgrades along the way. A new A3 hatchback model (sharing the Volkswagen Golf Mk4's platform) was introduced to the range in 1996, and the radical Audi TT coupé and roadster debuted in 1998 based on the same underpinnings. The engines available throughout the range were now a 1.4 L, 1.6 L and 1.8 L four-cylinder, 1.8 L four-cylinder turbo, 2.6 L and 2.8 L V6, 2.2 L turbocharged five-cylinder and the 4.2 L V8 engine. The V6s were replaced by new 2.4 L and 2.8 L 30V V6s in 1998, with marked improvements in power, torque and smoothness. Further engines were added along the way, including a 3.7 L V8 and 6.0 L W12 engine for the A8.

Audi AG today

Audi's sales grew strongly in the 2000s, with deliveries to customers increasing from 653,000 in 2000 to 1,003,000 in 2008. The largest sales increases came from Eastern Europe (+19.3%), Africa (+17.2%) and the Middle East (+58.5%). China in particular has become a key market, representing 108,000 out of 705,000 cars delivered in the first three quarters of 2009. One factor in its popularity in China is that Audis have become the car of choice for purchase by the Chinese government for officials, and government purchases are responsible for 20% of its sales in China.
As of late 2009, Audi's operating profit of €1.17 billion ($1.85 billion) made it the biggest contributor to parent Volkswagen Group's nine-month operating profit of €1.5 billion, while other Group marques such as Bentley and SEAT had suffered considerable losses. May 2011 saw record sales for Audi of America with the new Audi A7 and Audi A3 TDI Clean Diesel. In May 2012, Audi reported a 10% increase in its sales—from 408 units to 480 in the last year alone. Audi manufactures vehicles in seven plants around the world, some of which are shared with other VW Group marques, although many sub-assemblies such as engines and transmissions are manufactured within other Volkswagen Group plants. Audi's two principal assembly plants are:
Ingolstadt, opened by Auto Union in 1964 (A3, A4, A5, Q5)
Neckarsulm, acquired from NSU in 1969 (A4, A6, A7, A8, R8, and all RS variants)
Outside of Germany, Audi produces vehicles at:
Aurangabad, India, since 2006
Bratislava, Slovakia, shared with Volkswagen, SEAT, Škoda and Porsche (Q7 and Q8)
Brussels, Belgium, acquired from Volkswagen in 2007 (e-tron)
Changchun, China, since 1995
Győr, Hungary (TT and some A3 variants)
Jakarta, Indonesia, since 2011
Martorell, Spain, shared with SEAT and Volkswagen (A1)
San José Chiapa, Mexico (2nd gen Q5)
In September 2012, Audi announced the construction of its first North American manufacturing plant in Puebla, Mexico. This plant became operative in 2016 and produces the second-generation Q5. From 2002 to 2003, Audi headed the Audi Brand Group, a subdivision of the Volkswagen Group's Automotive Division consisting of Audi, Lamborghini and SEAT, which was focused on sporty values, with the marques' product vehicles and performance being under the higher responsibility of the Audi brand. In January 2014, Audi, along with the Wireless Power Consortium, operated a booth which demonstrated a phone compartment using the Qi open interface standard at the Consumer Electronics Show (CES).
In May 2014, most Audi dealers in the UK falsely claimed that the Audi A7, A8, and R8 had been Euro NCAP safety tested, all achieving five out of five stars. In fact, none had been tested. In 2015, Audi admitted that at least 2.1 million Audi cars had been involved in the Volkswagen emissions testing scandal, in which software installed in the cars manipulated emissions data to fool regulators and allowed the cars to pollute at higher than government-mandated levels. The A1, A3, A4, A5, A6, TT, Q3 and Q5 models were implicated in the scandal. Audi promised to quickly find a technical solution and upgrade the cars so they could function within emissions regulations. Ulrich Hackenberg, the head of research and development at Audi, was suspended in relation to the scandal. Despite widespread media coverage of the scandal throughout September, Audi reported that its U.S. sales for the month had increased by 16.2%. Audi's parent company Volkswagen announced on 18 June 2018 that Audi chief executive Rupert Stadler had been arrested. In November 2015, the U.S. Environmental Protection Agency implicated the 3-liter diesel engine versions of the 2016 Audi A6 Quattro, A7 Quattro, A8, A8L and Q5 as further models with emissions-regulation defeat-device software installed; these models emitted nitrogen oxides at up to nine times the legal limit when the car detected that it was not hooked up to emissions testing equipment. In November 2016, Audi expressed an intention to establish an assembly factory in Pakistan, with the company's local partner acquiring land for a plant in Korangi Creek Industrial Park in Karachi. Approval of the plan would lead to an investment of $30 million in the new plant. Audi planned to cut 9,500 jobs in Germany from 2020 to 2025 to fund electric vehicles and digital working.
In February 2020, Volkswagen AG announced that it planned to take over all Audi shares it did not own (totalling 0.36%) via a squeeze-out under German stock corporation law, making Audi a fully owned subsidiary of the Volkswagen Group. The change took effect on 16 November 2020, when Audi became a wholly owned subsidiary of the Volkswagen Group. In January 2021, Audi announced that it planned to sell 1 million vehicles in China in 2023, compared to 726,000 vehicles in 2020.

Technology

Audi AI

Audi AI is a driver-assist feature offered by Audi. The company's stated intent is to offer fully autonomous driving at a future time, acknowledging that legal, regulatory and technical hurdles must be overcome to achieve this goal. On 4 June 2017, Audi stated that its new A8 would be fully self-driving at speeds up to 60 km/h using its Audi AI. Unlike in other cars, the driver will not have to perform safety checks, such as touching the steering wheel every 15 seconds, to use this feature. The Audi A8 would therefore be the first production car to reach level 3 autonomous driving, meaning that the driver can safely turn their attention away from driving tasks, e.g. to text or watch a movie. Audi would also be the first manufacturer to use a 3D lidar system in addition to cameras and ultrasonic sensors for its AI.

Bodyshells

Audi produces 100% galvanised cars to prevent corrosion, and was the first mass-market manufacturer to do so, following the introduction of the process by Porsche, c. 1975. Along with other precautionary measures, the full-body zinc coating has proved to be very effective in preventing rust. The body's resulting durability even surpassed Audi's own expectations, causing the manufacturer to extend its original 10-year warranty against corrosion perforation to the current 12 years (except for aluminium bodies, which do not rust).
Space frame

Audi introduced a new series of vehicles in the mid-1990s and continues to pursue new technology and high performance. Audi's first all-aluminium car, the Audi A8, was launched in 1994; it introduced aluminium space frame technology (called Audi Space Frame or ASF), which saves weight and improves torsional rigidity compared to a conventional steel frame. Prior to that effort, Audi used examples of the Type 44 chassis fabricated out of aluminium as test-beds for the technique. The disadvantage of the aluminium frame is that it is very expensive to repair and requires a specialized aluminium bodyshop. The weight reduction is somewhat offset by the quattro four-wheel-drive system which is standard in most markets. Nonetheless, the A8 is usually the lightest all-wheel-drive car in the full-size luxury segment, also having best-in-class fuel economy. The Audi A2, Audi TT and Audi R8 also use Audi Space Frame designs.

Drivetrains

Layout

For most of its lineup (excluding the A3, A1, and TT models), Audi has not adopted the transverse engine layout typically found in economy cars (such as those from Peugeot and Citroën), since that would limit the type and power of engines that can be installed. To be able to mount powerful engines (such as the V8 in the Audi S4 and Audi RS4, and the W12 in the Audi A8L W12), Audi has usually engineered its more expensive cars with a longitudinally front-mounted engine in an "overhung" position, over the front wheels in front of the axle line; this layout dates back to the DKW and Auto Union saloons of the 1950s. While this allows for the easy adoption of all-wheel drive, it works against the ideal 50:50 weight distribution. In all its post-Volkswagen-era models, Audi has firmly refused to adopt the traditional rear-wheel-drive layout favored by its two archrivals Mercedes-Benz and BMW, favoring either front-wheel drive or all-wheel drive.
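The 50:50 weight-distribution trade-off mentioned above can be made concrete with a small sketch; the axle loads below are invented for illustration and are not Audi specifications:

```python
# Hypothetical axle loads illustrating the front-heavy bias of an
# "overhung" longitudinal engine versus the 50:50 ideal.
# All figures are invented for illustration, not manufacturer data.

def weight_distribution(front_kg: float, rear_kg: float) -> tuple[float, float]:
    """Return (front %, rear %) of total weight carried by each axle."""
    total = front_kg + rear_kg
    return (100 * front_kg / total, 100 * rear_kg / total)

# A nose-heavy 1,600 kg car with 960 kg on the front axle:
front, rear = weight_distribution(960, 640)
print(f"{front:.0f}:{rear:.0f}")  # prints "60:40"
```

A split like 60:40 is typical of a front-overhung layout, whereas a rear-drive car with the engine behind the front axle can come much closer to 50:50.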
The majority of Audi's lineup in the United States features all-wheel drive as standard on most of its more expensive vehicles (only the entry-level trims of the A4 and A6 are available with front-wheel drive), in contrast to Mercedes-Benz and BMW, whose lineups treat all-wheel drive as an option. BMW did not offer all-wheel drive on its V8-powered cars (as opposed to crossover SUVs) until the 2010 BMW 7 Series and 2011 BMW 5 Series, while the Audi A8 has had all-wheel drive available or standard since the 1990s. Regarding high-performance variants, Audi S and RS models have always had all-wheel drive, unlike their direct rivals from BMW M and Mercedes-AMG, whose cars are rear-wheel drive only (although their performance crossover SUVs are all-wheel drive). Audi has recently applied the quattro badge to models such as the A3 and TT which do not use the Torsen-based mechanical center differential of prior years, but rather the Haldex Traction electro-mechanical clutch AWD system.

Engines

Prior to the introduction of the Audi 80 and Audi 50 in 1972 and 1974, respectively, Audi had led the development of the EA111 and EA827 inline-four engine families. These new power units underpinned the water-cooled revival of parent company Volkswagen (in the Polo, Golf, Passat and Scirocco), whilst the many derivatives and descendants of these two basic engine designs have appeared in every generation of VW Group vehicles right up to the present day. In the 1980s, Audi, along with Volvo, was the champion of the inline five-cylinder 2.1/2.2 L engine as a longer-lasting alternative to more traditional six-cylinder engines. This engine was used not only in production cars but also in their race cars. The 2.1 L inline five-cylinder engine was used as a base for the rally cars in the 1980s, providing well over after modification. Before 1990, there were engines produced with a displacement between 2.0 L and 2.3 L.
This range of engine capacity allowed for both fuel economy and power. For the ultra-luxury version of its A8 flagship sedan, the Audi A8L W12, Audi uses the Volkswagen Group W12 engine instead of the conventional V12 engine favored by rivals Mercedes-Benz and BMW. The W12 engine configuration (also known as a "WR12") is created by combining two imaginary narrow-angle 15° VR6 engines at an angle of 72°; the narrow angle of each set of cylinders allows just two overhead camshafts to drive each pair of banks, so only four are needed in total. The advantage of the W12 engine is its compact packaging, allowing Audi to build a 12-cylinder sedan with all-wheel drive, whereas a conventional V12 engine could have only
programs became available. The first heavier-than-air craft capable of controlled free-flight were gliders. A glider designed by George Cayley carried out the first true manned, controlled flight in 1853. The practical, powered, fixed-wing aircraft (the airplane or aeroplane) was invented by Wilbur and Orville Wright. Besides the method of propulsion, fixed-wing aircraft are in general characterized by their wing configuration. The most important wing characteristics are:
Number of wings — monoplane, biplane, etc.
Wing support — braced or cantilever, rigid or flexible.
Wing planform — including aspect ratio, angle of sweep, and any variations along the span (including the important class of delta wings).
Location of the horizontal stabilizer, if any.
Dihedral angle — positive, zero, or negative (anhedral).
A variable geometry aircraft can change its wing configuration during flight. A flying wing has no fuselage, though it may have small blisters or pods. The opposite of this is a lifting body, which has no wings, though it may have small stabilizing and control surfaces. Wing-in-ground-effect vehicles are generally not considered aircraft. They "fly" efficiently close to the surface of the ground or water, like conventional aircraft during takeoff. An example is the Russian ekranoplan nicknamed the "Caspian Sea Monster". Man-powered aircraft also rely on ground effect to remain airborne with minimal pilot power, but this is only because they are so underpowered—in fact, the airframe is capable of flying higher.

Rotorcraft

Rotorcraft, or rotary-wing aircraft, use a spinning rotor with aerofoil section blades (a rotary wing) to provide lift. Types include helicopters, autogyros, and various hybrids such as gyrodynes and compound rotorcraft. Helicopters have a rotor turned by an engine-driven shaft. The rotor pushes air downward to create lift. By tilting the rotor forward, the downward flow is tilted backward, producing thrust for forward flight.
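The statement that the rotor pushes air downward to create lift can be sketched with simple actuator-disc (momentum) theory; the helicopter mass and rotor radius below are assumptions chosen purely for illustration:

```python
import math

# Actuator-disc (momentum-theory) sketch of rotor lift in hover:
# thrust T = 2 * rho * A * v^2, where v is the induced velocity of
# air through the rotor disc of area A. Figures are illustrative,
# not taken from any specific helicopter.

RHO = 1.225  # sea-level air density, kg/m^3
G = 9.81     # standard gravity, m/s^2

def induced_velocity(thrust_n: float, rotor_radius_m: float) -> float:
    """Downwash velocity (m/s) needed to produce the given hover thrust."""
    area = math.pi * rotor_radius_m ** 2
    return math.sqrt(thrust_n / (2 * RHO * area))

# A hypothetical 2,000 kg helicopter with a 5 m rotor radius:
v = induced_velocity(2000 * G, 5.0)
print(round(v, 1))  # ~10.1 m/s of downwash
```

The same relation shows why large rotor discs are efficient: doubling the radius quadruples the disc area, so far less downwash velocity (and hence induced power) is needed for the same lift.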
Some helicopters have more than one rotor, and a few have rotors turned by gas jets at the tips. Autogyros have unpowered rotors, with a separate power plant to provide thrust. The rotor is tilted backward. As the autogyro moves forward, air blows upward across the rotor, making it spin. This spinning increases the speed of airflow over the rotor, to provide lift. Rotor kites are unpowered autogyros, which are towed to give them forward speed or tethered to a static anchor in a high wind for kited flight. Cyclogyros rotate their wings about a horizontal axis. Compound rotorcraft have wings that provide some or all of the lift in forward flight. They are nowadays classified as powered lift types and not as rotorcraft. Tiltrotor aircraft (such as the Bell Boeing V-22 Osprey), tiltwing, tail-sitter, and coleopter aircraft have their rotors/propellers horizontal for vertical flight and vertical for forward flight.

Other methods of lift

A lifting body is an aircraft body shaped to produce lift. If there are any wings, they are too small to provide significant lift and are used only for stability and control. Lifting bodies are not efficient: they suffer from high drag, and must also travel at high speed to generate enough lift to fly. Many of the research prototypes, such as the Martin Marietta X-24, which led up to the Space Shuttle, were lifting bodies, though the Space Shuttle is not, and some supersonic missiles obtain lift from the airflow over a tubular body. Powered lift types rely on engine-derived lift for vertical takeoff and landing (VTOL). Most types transition to fixed-wing lift for horizontal flight. Classes of powered lift types include VTOL jet aircraft (such as the Harrier Jump Jet) and tiltrotors, such as the Bell Boeing V-22 Osprey, among others. A few experimental designs rely entirely on engine thrust to provide lift throughout the whole flight, including personal fan-lift hover platforms and jetpacks.
VTOL research designs include the Rolls-Royce Thrust Measuring Rig. The Flettner airplane uses a rotating cylinder in place of a fixed wing, obtaining lift from the Magnus effect. The ornithopter obtains thrust by flapping its wings.

Size and speed extremes

Size

The smallest aircraft are toys/recreational items, and nano aircraft. The largest aircraft by dimensions and volume (as of 2016) is the long British Airlander 10, a hybrid blimp with helicopter and fixed-wing features, reportedly capable of speeds up to , and an airborne endurance of two weeks with a payload of up to . The largest aircraft by weight and largest regular fixed-wing aircraft ever built, , is the Antonov An-225 Mriya. That Ukrainian-built six-engine transport of the 1980s is long, with an wingspan. It holds the world payload record, after transporting of goods, and has recently flown loads commercially. With a maximum loaded weight of , it is also the heaviest aircraft built to date. It can cruise at . The largest military airplanes are the Ukrainian Antonov An-124 Ruslan (world's second-largest airplane, also used as a civilian transport) and the American Lockheed C-5 Galaxy transport, weighing, loaded, over . The 8-engine, piston/propeller Hughes H-4 Hercules "Spruce Goose" — an American World War II wooden flying boat transport with a greater wingspan (94m/260ft) than any current aircraft and a tail height equal to the tallest (Airbus A380-800 at 24.1m/78ft) — flew only one short hop in the late 1940s and never flew out of ground effect.
The largest civilian airplanes, apart from the above-noted An-225 and An-124, are the Airbus Beluga cargo transport derivative of the Airbus A300 jet airliner, the Boeing Dreamlifter cargo transport derivative of the Boeing 747 jet airliner/transport (the 747-200B was, at its creation in the 1960s, the heaviest aircraft ever built, with a maximum weight of over ), and the double-decker Airbus A380 "super-jumbo" jet airliner (the world's largest passenger airliner).

Speeds

The fastest recorded powered aircraft flight and fastest recorded aircraft flight of an air-breathing powered aircraft was of the NASA X-43A Pegasus, a scramjet-powered, hypersonic, lifting body experimental research aircraft, at Mach 9.6, exactly . The X-43A set that new mark, and broke its own world record of Mach 6.3, exactly , set in March 2004, on its third and final flight on 16 November 2004. Prior to the X-43A, the fastest recorded powered airplane flight (and still the record for the fastest manned, powered airplane / fastest manned, non-spacecraft aircraft) was of the North American X-15A-2, rocket-powered airplane at Mach 6.72, or , on 3 October 1967. On one flight it reached an altitude of . The fastest known production aircraft (other than rockets and missiles) currently or formerly operational (as of 2016) are: The fastest fixed-wing aircraft, and fastest glider, is the Space Shuttle, a rocket-glider hybrid, which has re-entered the atmosphere as a fixed-wing glider at more than Mach 25, equal to . The fastest military airplane ever built: Lockheed SR-71 Blackbird, a U.S. reconnaissance jet fixed-wing aircraft, known to fly beyond Mach 3.3, equal to . On 28 July 1976, an SR-71 set the record for the fastest and highest-flying operational aircraft with an absolute speed record of and an absolute altitude record of . At its retirement in January 1990, it was the fastest air-breathing aircraft / fastest jet aircraft in the world, a record still standing .
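Mach numbers like those above translate to airspeed only relative to the local speed of sound, which varies with air temperature and hence altitude; the sketch below uses the sea-level standard value purely for illustration, even though record flights occur at altitude where sound travels more slowly:

```python
# Rough Mach-to-speed conversion. The speed of sound depends on air
# temperature, so a single constant is an approximation; the ISA
# sea-level value (15 °C) is used here for illustration only.

SPEED_OF_SOUND_SEA_LEVEL = 340.3  # m/s, ISA sea level

def mach_to_kmh(mach: float, a: float = SPEED_OF_SOUND_SEA_LEVEL) -> float:
    """Convert a Mach number to km/h at the given speed of sound (m/s)."""
    return mach * a * 3.6

print(round(mach_to_kmh(3.3)))   # SR-71 class: ~4043 km/h at sea-level sound speed
print(round(mach_to_kmh(9.6)))   # X-43A class: ~11761 km/h
```

At stratospheric cruise altitudes the speed of sound drops to roughly 295 m/s, so the true ground-referenced speeds of these records are somewhat lower than the sea-level conversion suggests.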
Note: Some sources refer to the above-mentioned X-15 as the "fastest military airplane" because it was partly a project of the U.S. Navy and Air Force; however, the X-15 was not used in non-experimental actual military operations. The fastest current military aircraft are the Soviet/Russian Mikoyan-Gurevich MiG-25 — capable of Mach 3.2, equal to , at the expense of engine damage, or Mach 2.83, equal to , normally — and the Russian Mikoyan MiG-31E (also capable of Mach 2.83 normally). Both are fighter-interceptor jet airplanes, in active operations as of 2016. The fastest civilian airplane ever built, and fastest passenger airliner ever built: the briefly operated Tupolev Tu-144 supersonic jet airliner (Mach 2.35, 1,600 mph, 2,587 km/h), which was believed to cruise at about Mach 2.2. The Tu-144 (officially operated from 1968 to 1978, ending after two crashes of the small fleet) was outlived by its rival, the Concorde (Mach 2.23), a French/British supersonic airliner, known to cruise at Mach 2.02 (1,450 mph, 2,333 km/h at cruising altitude), operating from 1976 until the small Concorde fleet was grounded permanently in 2003, following the crash of one in the early 2000s. The fastest civilian airplane currently flying: the Cessna Citation X, an American business jet, capable of Mach 0.935, or . Its rival, the American Gulfstream G650 business jet, can reach Mach 0.925, or . The fastest airliner currently flying is the Boeing 747, quoted as being capable of cruising over Mach 0.885, . Previously, the fastest were the troubled, short-lived Russian (Soviet Union) Tupolev Tu-144 SST (Mach 2.35; equal to ) and the French/British Concorde, with a maximum speed of Mach 2.23 or and a normal cruising speed of Mach 2 or . Before them, the Convair 990 Coronado jet airliner of the 1960s flew at over .

Propulsion

Unpowered aircraft

Gliders are heavier-than-air aircraft that do not employ propulsion once airborne.
Take-off may be by launching forward and downward from a high location, or by pulling into the air on a tow-line, either by a ground-based winch or vehicle, or by a powered "tug" aircraft. For a glider to maintain its forward air speed and lift, it must descend in relation to the air (but not necessarily in relation to the ground). Many gliders can "soar", i.e., gain height from updrafts such as thermal currents. The first practical, controllable example was designed and built by the British scientist and pioneer George Cayley, whom many recognise as the first aeronautical engineer. Common examples of gliders are sailplanes, hang gliders and paragliders. Balloons drift with the wind, though normally the pilot can control the altitude, either by heating the air or by releasing ballast, giving some directional control (since the wind direction changes with altitude). A wing-shaped hybrid balloon can glide directionally when rising or falling; but a spherically shaped balloon does not have such directional control. Kites are aircraft that are tethered to the ground or other object (fixed or mobile) that maintains tension in the tether or kite line; they rely on virtual or real wind blowing over and under them to generate lift and drag. Kytoons are balloon-kite hybrids that are shaped and tethered to obtain kiting deflections, and can be lighter-than-air, neutrally buoyant, or heavier-than-air.

Powered aircraft

Powered aircraft have one or more onboard sources of mechanical power, typically aircraft engines, although rubber and manpower have also been used. Most aircraft engines are either lightweight reciprocating engines or gas turbines. Engine fuel is stored in tanks, usually in the wings, but larger aircraft also have additional fuel tanks in the fuselage.

Propeller aircraft

Propeller aircraft use one or more propellers (airscrews) to create thrust in a forward direction.
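Propellers and jets alike generate thrust by accelerating a mass of air rearwards; a minimal momentum sketch (figures illustrative, not for any particular engine):

```python
# Momentum sketch of air-breathing thrust: air enters at the flight
# speed v0 and is exhausted at ve, producing net thrust
# F = mdot * (ve - v0). Figures below are illustrative only.

def net_thrust(mdot_kg_s: float, v_exhaust: float, v_flight: float) -> float:
    """Net thrust (N) from the change in momentum of the airflow."""
    return mdot_kg_s * (v_exhaust - v_flight)

# 100 kg/s of air, exhausted at 600 m/s while flying at 250 m/s:
print(net_thrust(100, 600, 250))  # prints 35000.0 (newtons)
```

The same relation explains the propeller/jet trade-off described in this section: a propeller gives a small velocity increase to a large mass flow (efficient at low speed), while a jet gives a large velocity increase to a smaller mass flow (effective at high speed).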
The propeller is usually mounted in front of the power source in tractor configuration but can be mounted behind in pusher configuration. Variations of propeller layout include contra-rotating propellers and ducted fans. Many kinds of power plant have been used to drive propellers. Early airships used manpower or steam engines. The more practical internal combustion piston engine was used for virtually all fixed-wing aircraft until World War II and is still used in many smaller aircraft. Some types use turbine engines to drive a propeller in the form of a turboprop or propfan. Human-powered flight has been achieved, but has not become a practical means of transport. Unmanned aircraft and models have also used power sources such as electric motors and rubber bands.

Jet aircraft

Jet aircraft use airbreathing jet engines, which take in air, burn fuel with it in a combustion chamber, and accelerate the exhaust rearwards to provide thrust. Different jet engine configurations include the turbojet and turbofan, sometimes with the addition of an afterburner. Those with no rotating turbomachinery include the pulsejet and ramjet. These mechanically simple engines produce no thrust when stationary, so the aircraft must be launched to flying speed using a catapult, like the V-1 flying bomb, or a rocket, for example. Other engine types include the motorjet and the dual-cycle Pratt & Whitney J58. Compared to engines using propellers, jet engines can provide much higher thrust, higher speeds and, above about , greater efficiency. They are also much more fuel-efficient than rockets. As a consequence nearly all large, high-speed or high-altitude aircraft use jet engines.

Rotorcraft

Some rotorcraft, such as helicopters, have a powered rotary wing or rotor, where the rotor disc can be angled slightly forward so that a proportion of its lift is directed forwards. The rotor may, like a propeller, be powered by a variety of methods such as a piston engine or turbine.
Experiments have also used jet nozzles at the rotor blade tips.

Other types of powered aircraft

Rocket-powered aircraft have occasionally been experimented with, and the Messerschmitt Me 163 Komet fighter even saw action in the Second World War. Since then, they have been restricted to research aircraft, such as the North American X-15, which traveled up into space where air-breathing engines cannot work (rockets carry their own oxidant). Rockets have more often been used as a supplement to the main power plant, typically for the rocket-assisted take off of heavily loaded aircraft, but also to provide high-speed dash capability in some hybrid designs such as the Saunders-Roe SR.53. The ornithopter obtains thrust by flapping its wings. It has found practical use in a model hawk used to freeze prey animals into stillness so that they can be captured, and in toy birds.

Design and construction

Aircraft are designed according to many factors such as customer and manufacturer demand, safety protocols and physical and economic constraints. For many types of aircraft the design process is regulated by national airworthiness authorities. The key parts of
Powered lift types rely on engine-derived lift for vertical takeoff and landing (VTOL). Most types transition to fixed-wing lift for horizontal flight. Classes of powered lift types include VTOL jet aircraft (such as the Harrier Jump Jet) and tiltrotors, such as the Bell Boeing V-22 Osprey, among others. A few experimental designs rely entirely on engine thrust to provide lift throughout the whole flight, including personal fan-lift hover platforms and jetpacks. VTOL research designs include the Rolls-Royce Thrust Measuring Rig. The Flettner airplane uses a rotating cylinder in place of a fixed wing, obtaining lift from the Magnus effect. The ornithopter obtains thrust by flapping its wings. Size and speed extremes Size The smallest aircraft are toys/recreational items, and nano aircraft. The largest aircraft by dimensions and volume (as of 2016) is the long British Airlander 10, a hybrid blimp, with helicopter and fixed-wing features, and reportedly capable of speeds up to , and an airborne endurance of two weeks with a payload of up to . The largest aircraft by weight and largest regular fixed-wing aircraft ever built, , is the Antonov An-225 Mriya. That Ukrainian-built six-engine Russian transport of the 1980s is long, with an wingspan. It holds the world payload record, after transporting of goods, and has recently flown loads commercially. With a maximum loaded weight of , it is also the heaviest aircraft built to date. It can cruise at . The largest military airplanes are the Ukrainian Antonov An-124 Ruslan (world's second-largest airplane, also used as a civilian transport), and American Lockheed C-5 Galaxy transport, weighing, loaded, over . 
The 8-engine, piston/propeller Hughes H-4 Hercules "Spruce Goose" — an American World War II wooden flying boat transport with a greater wingspan (94m/260ft) than any current aircraft and a tail height equal to the tallest (Airbus A380-800 at 24.1m/78ft) — flew only one short hop in the late 1940s and never flew out of ground effect. The largest civilian airplanes, apart from the above-noted An-225 and An-124, are the Airbus Beluga cargo transport derivative of the Airbus A300 jet airliner, the Boeing Dreamlifter cargo transport derivative of the Boeing 747 jet airliner/transport (the 747-200B was, at its creation in the 1960s, the heaviest aircraft ever built, with a maximum weight of over ), and the double-decker Airbus A380 "super-jumbo" jet airliner (the world's largest passenger airliner). Speeds The fastest recorded powered aircraft flight and fastest recorded aircraft flight of an air-breathing powered aircraft was of the NASA X-43A Pegasus, a scramjet-powered, hypersonic, lifting body experimental research aircraft, at Mach 9.6, exactly . The X-43A set that new mark, and broke its own world record of Mach 6.3, exactly , set in March 2004, on its third and final flight on 16 November 2004. Prior to the X-43A, the fastest recorded powered airplane flight (and still the record for the fastest manned, powered airplane / fastest manned, non-spacecraft aircraft) was of the North American X-15A-2, rocket-powered airplane at Mach 6.72, or , on 3 October 1967. On one flight it reached an altitude of . The fastest known, production aircraft (other than rockets and missiles) currently or formerly operational (as of 2016) are: The fastest fixed-wing aircraft, and fastest glider, is the Space Shuttle, a rocket-glider hybrid, which has re-entered the atmosphere as a fixed-wing glider at more than Mach 25, equal to . The fastest military airplane ever built: Lockheed SR-71 Blackbird, a U.S. reconnaissance jet fixed-wing aircraft, known to fly beyond Mach 3.3, equal to . 
On 28 July 1976, an SR-71 set the record for the fastest and highest-flying operational aircraft with an absolute speed record of and an absolute altitude record of . At its retirement in January 1990, it was the fastest air-breathing aircraft / fastest jet aircraft in the world, a record still standing . Note: Some sources refer to the above-mentioned X-15 as the "fastest military airplane" because it was partly a project of the U.S. Navy and Air Force; however, the X-15 was not used in non-experimental actual military operations.

The fastest current military aircraft are the Soviet/Russian Mikoyan-Gurevich MiG-25 — capable of Mach 3.2, equal to , at the expense of engine damage, or Mach 2.83, equal to , normally — and the Russian Mikoyan MiG-31E (also capable of Mach 2.83 normally). Both are fighter-interceptor jet airplanes, in active operations as of 2016.

The fastest civilian airplane ever built, and fastest passenger airliner ever built: the briefly operated Tupolev Tu-144 supersonic jet airliner (Mach 2.35, 1,600 mph, 2,587 km/h), which was believed to cruise at about Mach 2.2. The Tu-144 (officially operated from 1968 to 1978, ending after two crashes of the small fleet) was outlived by its rival, the Concorde (Mach 2.23), a French/British supersonic airliner, known to cruise at Mach 2.02 (1,450 mph, 2,333 km/h at cruising altitude), operating from 1976 until the small Concorde fleet was grounded permanently in 2003, following the crash of one in 2000.

The fastest civilian airplane currently flying: the Cessna Citation X, an American business jet, capable of Mach 0.935, or . Its rival, the American Gulfstream G650 business jet, can reach Mach 0.925, or .

The fastest airliner currently flying is the Boeing 747, quoted as being capable of cruising over Mach 0.885, .
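Because the Mach numbers quoted above are ratios to the local speed of sound, the same Mach figure maps to different km/h values at different altitudes: the speed of sound falls with air temperature. A minimal Python sketch of the conversion, using the ideal-gas relation (the temperature values are standard-atmosphere approximations, not measurements from any particular flight):

```python
import math

GAMMA = 1.4      # ratio of specific heats for dry air
R_AIR = 287.05   # specific gas constant for dry air, J/(kg*K)

def speed_of_sound(temp_k):
    """Speed of sound in air (m/s): a = sqrt(gamma * R * T)."""
    return math.sqrt(GAMMA * R_AIR * temp_k)

def mach_to_kmh(mach, temp_k):
    """Convert a Mach number to speed in km/h at a given air temperature."""
    return mach * speed_of_sound(temp_k) * 3.6

# At sea level (~288 K), Mach 1 is about 1225 km/h. In the stratosphere
# (~217 K, above ~11 km), Mach 3.3 -- the SR-71 figure quoted above --
# works out to roughly 3500 km/h.
sr71_kmh = mach_to_kmh(3.3, 216.65)
```

This is why sources quote slightly different km/h equivalents for the same record: the conversion depends on the assumed temperature at the record altitude.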
Previously, the fastest were the troubled, short-lived Soviet Tupolev Tu-144 SST (Mach 2.35; equal to ) and the French/British Concorde, with a maximum speed of Mach 2.23 or and a normal cruising speed of Mach 2 or . Before them, the Convair 990 Coronado jet airliner of the 1960s flew at over .

Propulsion

Unpowered aircraft

Gliders are heavier-than-air aircraft that do not employ propulsion once airborne. Take-off may be by launching forward and downward from a high location, or by pulling into the air on a tow-line, either by a ground-based winch or vehicle, or by a powered "tug" aircraft. For a glider to maintain its forward air speed and lift, it must descend in relation to the air (but not necessarily in relation to the ground). Many gliders can "soar", i.e., gain height from updrafts such as thermal currents. The first practical, controllable example was designed and built by the British scientist and pioneer George Cayley, whom many recognise as the first aeronautical engineer. Common examples of gliders are sailplanes, hang gliders and paragliders.

Balloons drift with the wind, though normally the pilot can control the altitude, either by heating the air or by releasing ballast, giving some directional control (since the wind direction changes with altitude). A wing-shaped hybrid balloon can glide directionally when rising or falling; but a spherically shaped balloon does not have such directional control.

Kites are aircraft that are tethered to the ground or to another object (fixed or mobile) that maintains tension in the tether or kite line; they rely on virtual or real wind blowing over and under them to generate lift and drag. Kytoons are balloon-kite hybrids that are shaped and tethered to obtain kiting deflections, and can be lighter-than-air, neutrally buoyant, or heavier-than-air.
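The requirement that a glider descend relative to the air is usually quantified as a glide ratio (equal to the lift-to-drag ratio in steady flight): the horizontal distance covered per unit of height lost. A minimal sketch, with typical order-of-magnitude ratios rather than figures for specific models:

```python
def glide_distance_m(altitude_m, glide_ratio):
    """Still-air horizontal range (m) from a given altitude.

    glide_ratio is metres travelled forward per metre of height lost;
    updrafts ("soaring") can extend this range indefinitely.
    """
    return altitude_m * glide_ratio

# Rough, typical ratios: paraglider ~9:1, hang glider ~15:1,
# high-performance sailplane ~50:1.
sailplane_range_km = glide_distance_m(1000, 50) / 1000
```

Under these assumed ratios, a sailplane released at 1000 m can cover on the order of 50 km in still air, while a paraglider from the same height manages under 10 km.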
Powered aircraft

Powered aircraft have one or more onboard sources of mechanical power, typically aircraft engines, although rubber bands and human power have also been used. Most aircraft engines are either lightweight reciprocating engines or gas turbines. Engine fuel is stored in tanks, usually in the wings, but larger aircraft also have additional fuel tanks in the fuselage.

Propeller aircraft

Propeller aircraft use one or more propellers (airscrews) to create thrust in a forward direction. The propeller is usually mounted in front of the power source in tractor configuration but can be mounted behind in pusher configuration. Variations of propeller layout include contra-rotating propellers and ducted fans. Many kinds of power plant have been used to drive propellers. Early airships used human power or steam engines. The more practical internal combustion piston engine was used for virtually all fixed-wing aircraft until World War II and is still used in many smaller aircraft. Some types use turbine engines to drive a propeller in the form of a turboprop or propfan. Human-powered flight has been achieved, but has not become a practical means of transport. Unmanned aircraft and models have also used power sources such as electric motors and rubber bands.

Jet aircraft

Jet aircraft use airbreathing jet engines, which take in air, burn fuel with it in a combustion chamber, and accelerate the
last residence in Sweden and has functioned as a museum since his death. Alfred Nobel died on 10 December 1896, in Sanremo, Italy, at his very last residence, Villa Nobel, overlooking the Mediterranean Sea.

Scientific career

As a young man, Nobel studied with chemist Nikolai Zinin; then, in 1850, went to Paris to further the work. There he met Ascanio Sobrero, who had invented nitroglycerin three years before. Sobrero strongly opposed the use of nitroglycerin because it was unpredictable, exploding when subjected to variable heat or pressure. But Nobel became interested in finding a way to control and use nitroglycerin as a commercially usable explosive; it had much more power than gunpowder. In 1851, at age 18, he went to the United States for one year to study, working for a short period under Swedish-American inventor John Ericsson, who designed the American Civil War ironclad USS Monitor. Nobel filed his first patent, an English patent for a gas meter, in 1857, while his first Swedish patent, which he received in 1863, was on "ways to prepare gunpowder". The family factory produced armaments for the Crimean War (1853–1856), but had difficulty switching back to regular domestic production when the fighting ended, and the firm filed for bankruptcy. In 1859, Nobel's father left his factory in the care of the second son, Ludvig Nobel (1831–1888), who greatly improved the business. Nobel and his parents returned to Sweden from Russia, and Nobel devoted himself to the study of explosives, especially the safe manufacture and use of nitroglycerin. Nobel invented a detonator in 1863, and in 1865 designed the blasting cap. On 3 September 1864, a shed used for the preparation of nitroglycerin exploded at the factory in Heleneborg, Stockholm, Sweden, killing five people, including Nobel's younger brother Emil. Fazed by the accident, Nobel founded the company Nitroglycerin Aktiebolaget AB in Vinterviken so that he could continue to work in a more isolated area.
Nobel invented dynamite in 1867, a substance easier and safer to handle than the more unstable nitroglycerin. Dynamite was patented in the US and the UK and was used extensively in mining and the building of transport networks internationally. In 1875, Nobel invented gelignite, more stable and powerful than dynamite, and in 1887, patented ballistite, a predecessor of cordite. Nobel was elected a member of the Royal Swedish Academy of Sciences in 1884, the same institution that would later select laureates for two of the Nobel prizes, and he received an honorary doctorate from Uppsala University in 1893. Nobel's brothers Ludvig and Robert founded the oil company Branobel and became hugely rich in their own right. Nobel invested in these ventures and amassed great wealth through the development of these new oil regions. During his life, Nobel was issued 355 patents internationally, and by his death, his business had established more than 90 armaments factories, despite his apparently pacifist character.

Inventions

Nobel found that when nitroglycerin was incorporated in an absorbent inert substance like kieselguhr (diatomaceous earth) it became safer and more convenient to handle, and this mixture he patented in 1867 as "dynamite". Nobel demonstrated his explosive for the first time that year, at a quarry in Redhill, Surrey, England. In order to help reestablish his name and improve the image of his business from the earlier controversies associated with dangerous explosives, Nobel had also considered naming the highly powerful substance "Nobel's Safety Powder", but settled on Dynamite instead, referring to the Greek word for "power" (). Nobel later combined nitroglycerin with various nitrocellulose compounds, similar to collodion, but settled on a more efficient recipe combining another nitrate explosive, and obtained a transparent, jelly-like substance, which was a more powerful explosive than dynamite.
Gelignite, or blasting gelatine, as it was named, was patented in 1876, and was followed by a host of similar combinations, modified by the addition of potassium nitrate and various other substances. Gelignite was more stable, transportable and conveniently formed to fit into bored holes, like those used in drilling and mining, than the previously used compounds. It was adopted as the standard technology for mining in the "Age of Engineering", bringing Nobel a great amount of financial success, though at a cost to his health. An offshoot of this research resulted in Nobel's invention of ballistite, the precursor of many modern smokeless powder explosives and still used as a rocket propellant.

Nobel Prize

In 1888, the death of his brother Ludvig caused several newspapers to publish obituaries of Alfred in error. One French newspaper condemned him for his invention of military explosives—not, as is commonly quoted, dynamite, which was mainly used for civilian applications—a condemnation said to have brought about his decision to leave a better legacy after his death. The obituary stated, ("The merchant of death is dead"), and went on to say, "Dr. Alfred Nobel, who became rich by finding ways to kill more people faster than ever before, died yesterday." Nobel read the obituary and was appalled at the idea that he would be remembered in this way; his decision to posthumously donate the majority of his wealth to found the Nobel Prize has been credited at least in part to his wanting to leave behind a better legacy. On 27 November 1895, at the Swedish-Norwegian Club in Paris, Nobel signed his last will and testament and set aside the bulk of his estate to establish the Nobel Prizes, to be awarded annually without distinction of nationality. After taxes and bequests to individuals, Nobel's will allocated 94% of his total assets, 31,225,000 Swedish kronor, to establish the five Nobel Prizes. This converted to £1,687,837 (GBP) at the time.
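As a quick check of the sums above, the quoted kronor and sterling amounts imply an exchange rate of roughly 18.5 SEK to the pound at the time of the bequest; a small Python sketch (the rate is derived from the quoted figures, not an independently sourced number):

```python
# Sums quoted above for Nobel's bequest.
endowment_sek = 31_225_000
endowment_gbp = 1_687_837

# Implied exchange rate in SEK per pound (~18.5).
implied_sek_per_gbp = endowment_sek / endowment_gbp
```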
In 2012, the capital was worth around SEK 3.1 billion (US$472 million, EUR 337 million), which is almost twice the amount of the initial capital, taking inflation into account. The first three of these prizes are awarded for eminence in physical science, in chemistry and in medical science or physiology; the fourth is for literary work "in an ideal direction" and the fifth prize is to be given to the person or society that renders the greatest service to the cause of international fraternity, in the suppression or reduction of standing armies, or in the establishment or furtherance of peace congresses. The formulation for the literary prize being given for a work "in an ideal direction"
an intrusion on his real work as a scientist and refused to have a telephone in his study. Many other inventions marked Bell's later life, including groundbreaking work in optical telecommunications, hydrofoils, and aeronautics. Although Bell was not one of the 33 founders of the National Geographic Society, he had a strong influence on the magazine while serving as its second president from January 7, 1898, until 1903. Beyond his work in engineering, Bell had a deep interest in the emerging science of heredity.

Early life

Alexander Bell was born in Edinburgh, Scotland, on March 3, 1847. The family home was at South Charlotte Street, and has a stone inscription marking it as Alexander Graham Bell's birthplace. He had two brothers: Melville James Bell (1845–1870) and Edward Charles Bell (1848–1867), both of whom would die of tuberculosis. His father was Professor Alexander Melville Bell, a phonetician, and his mother was Eliza Grace Bell (née Symonds). Born as just "Alexander Bell", at age 10, he made a plea to his father to have a middle name like his two brothers. For his 11th birthday, his father acquiesced and allowed him to adopt the name "Graham", chosen out of respect for Alexander Graham, a Canadian being treated by his father who had become a family friend. To close relatives and friends he remained "Aleck".

First invention

As a child, young Bell displayed a curiosity about his world; he gathered botanical specimens and ran experiments at an early age. His best friend was Ben Herdman, a neighbour whose family operated a flour mill. At the age of 12, Bell built a homemade device that combined rotating paddles with sets of nail brushes, creating a simple dehusking machine that was put into operation at the mill and used steadily for a number of years. In return, Ben's father John Herdman gave both boys the run of a small workshop in which to "invent".
From his early years, Bell showed a sensitive nature and a talent for art, poetry, and music that was encouraged by his mother. With no formal training, he mastered the piano and became the family's pianist. Despite being normally quiet and introspective, he revelled in mimicry and "voice tricks" akin to ventriloquism that continually entertained family guests during their occasional visits. Bell was also deeply affected by his mother's gradual deafness (she began to lose her hearing when he was 12), and learned a manual finger language so he could sit at her side and tap out silently the conversations swirling around the family parlour. He also developed a technique of speaking in clear, modulated tones directly into his mother's forehead, so that she could hear him with reasonable clarity. Bell's preoccupation with his mother's deafness led him to study acoustics. His family was long associated with the teaching of elocution: his grandfather, Alexander Bell, in London, his uncle in Dublin, and his father, in Edinburgh, were all elocutionists. His father published a variety of works on the subject, several of which are still well known, especially his The Standard Elocutionist (1860), which appeared in Edinburgh in 1868. The Standard Elocutionist appeared in 168 British editions and sold over a quarter of a million copies in the United States alone. In this treatise, his father explains his methods of how to instruct deaf-mutes (as they were then known) to articulate words and read other people's lip movements to decipher meaning. Bell's father taught him and his brothers not only to write Visible Speech but to identify any symbol and its accompanying sound. Bell became so proficient that he became a part of his father's public demonstrations and astounded audiences with his abilities.
He could decipher Visible Speech representing virtually every language, including Latin, Scottish Gaelic, and even Sanskrit, accurately reciting written tracts without any prior knowledge of their pronunciation.

Education

As a young child, Bell, like his brothers, received his early schooling at home from his father. At an early age, he was enrolled at the Royal High School, Edinburgh, Scotland, which he left at the age of 15, having completed only the first four forms. His school record was undistinguished, marked by absenteeism and lacklustre grades. His main interest remained in the sciences, especially biology, while he treated other school subjects with indifference, to the dismay of his father. Upon leaving school, Bell travelled to London to live with his grandfather, Alexander Bell, on Harrington Square. During the year he spent with his grandfather, a love of learning was born, with long hours spent in serious discussion and study. The elder Bell took great efforts to have his young pupil learn to speak clearly and with conviction, the attributes that his pupil would need to become a teacher himself. At the age of 16, Bell secured a position as a "pupil-teacher" of elocution and music, in Weston House Academy at Elgin, Moray, Scotland. Although he was enrolled as a student in Latin and Greek, he instructed classes himself in return for board and £10 per session. The following year, he attended the University of Edinburgh, joining his older brother Melville, who had enrolled there the previous year. In 1868, not long before he departed for Canada with his family, Bell completed his matriculation exams and was accepted for admission to University College London.

First experiments with sound

His father encouraged Bell's interest in speech and, in 1863, took his sons to see a unique automaton developed by Sir Charles Wheatstone based on the earlier work of Baron Wolfgang von Kempelen. The rudimentary "mechanical man" simulated a human voice.
Bell was fascinated by the machine and after he obtained a copy of von Kempelen's book, published in German, and had laboriously translated it, he and his older brother Melville built their own automaton head. Their father, highly interested in their project, offered to pay for any supplies and spurred the boys on with the enticement of a "big prize" if they were successful. While his brother constructed the throat and larynx, Bell tackled the more difficult task of recreating a realistic skull. His efforts resulted in a remarkably lifelike head that could "speak", albeit only a few words. The boys would carefully adjust the "lips" and when a bellows forced air through the windpipe, a very recognizable "Mama" ensued, to the delight of neighbours who came to see the Bell invention. Intrigued by the results of the automaton, Bell continued to experiment with a live subject, the family's Skye Terrier, "Trouve". After he taught it to growl continuously, Bell would reach into its mouth and manipulate the dog's lips and vocal cords to produce a crude-sounding "Ow ah oo ga ma ma". With little convincing, visitors believed his dog could articulate "How are you, grandmama?" Indicative of his playful nature, his experiments convinced onlookers that they saw a "talking dog". These initial forays into experimentation with sound led Bell to undertake his first serious work on the transmission of sound, using tuning forks to explore resonance. At age 19, Bell wrote a report on his work and sent it to philologist Alexander Ellis, a colleague of his father. Ellis immediately wrote back indicating that the experiments were similar to existing work in Germany, and also lent Bell a copy of Hermann von Helmholtz's work, The Sensations of Tone as a Physiological Basis for the Theory of Music. 
Dismayed to find that groundbreaking work had already been undertaken by Helmholtz, who had conveyed vowel sounds by means of a similar tuning fork "contraption", Bell pored over the German scientist's book. Working from his own mistranslation of a French edition, Bell then fortuitously made a deduction that would be the underpinning of all his future work on transmitting sound, reporting: "Without knowing much about the subject, it seemed to me that if vowel sounds could be produced by electrical means, so could consonants, so could articulate speech." He also later remarked: "I thought that Helmholtz had done it ... and that my failure was due only to my ignorance of electricity. It was a valuable blunder ... If I had been able to read German in those days, I might never have commenced my experiments!"

Family tragedy

In 1865, when the Bell family moved to London, Bell returned to Weston House as an assistant master and, in his spare hours, continued experiments on sound using a minimum of laboratory equipment. Bell concentrated on experimenting with electricity to convey sound and later installed a telegraph wire from his room in Somerset College to that of a friend. Throughout late 1867, his health faltered, mainly through exhaustion. His younger brother, Edward "Ted," was similarly bed-ridden, suffering from tuberculosis. While Bell recovered (by then referring to himself in correspondence as "A. G. Bell") and served the next year as an instructor at Somerset College, Bath, England, his brother's condition deteriorated. Edward would never recover. Upon his brother's death in 1867, Bell returned home. His older brother Melville had married and moved out. With aspirations to obtain a degree at University College London, Bell considered his next years as preparation for the degree examinations, devoting his spare time at his family's residence to studying. Helping his father in Visible Speech demonstrations and lectures brought Bell to Susanna E.
Hull's private school for the deaf in South Kensington, London. His first two pupils were deaf-mute girls who made remarkable progress under his tutelage. While his older brother seemed to achieve success on many fronts, including opening his own elocution school, applying for a patent on an invention, and starting a family, Bell continued as a teacher. However, in May 1870, Melville died from complications due to tuberculosis, causing a family crisis. His father had also suffered a debilitating illness earlier in life and had been restored to health by a convalescence in Newfoundland. Bell's parents embarked upon a long-planned move when they realized that their remaining son was also sickly. Acting decisively, Alexander Melville Bell asked Bell to arrange for the sale of all the family property, conclude all of his brother's affairs (Bell took over his last student, curing a pronounced lisp), and join his father and mother in setting out for the "New World". Reluctantly, Bell also had to conclude a relationship with Marie Eccleston, who, as he had surmised, was not prepared to leave England with him.

Canada

In 1870, 23-year-old Bell travelled with his parents and his brother's widow, Caroline Margaret Ottaway, to Paris, Ontario, to stay with Thomas Henderson, a Baptist minister and family friend. The Bell family soon purchased a farm of at Tutelo Heights (now called Tutela Heights), near Brantford, Ontario. The property consisted of an orchard, large farmhouse, stable, pigsty, hen-house, and a carriage house, which bordered the Grand River. At the homestead, Bell set up his own workshop in the converted carriage house near what he called his "dreaming place", a large hollow nestled in trees at the back of the property above the river. Despite his frail condition upon arriving in Canada, Bell found the climate and environs to his liking, and rapidly improved.
He continued his interest in the study of the human voice and when he discovered the Six Nations Reserve across the river at Onondaga, he learned the Mohawk language and translated its unwritten vocabulary into Visible Speech symbols. For his work, Bell was awarded the title of Honorary Chief and participated in a ceremony where he donned a Mohawk headdress and danced traditional dances. After setting up his workshop, Bell continued experiments based on Helmholtz's work with electricity and sound. He also modified a melodeon (a type of pump organ) so that it could transmit its music electrically over a distance. Once the family was settled in, both Bell and his father made plans to establish a teaching practice, and in 1871, he accompanied his father to Montreal, where Melville was offered a position to teach his System of Visible Speech.

Work with the deaf

Bell's father was invited by Sarah Fuller, principal of the Boston School for Deaf Mutes (which continues today as the public Horace Mann School for the Deaf), in Boston, Massachusetts, United States, to introduce the Visible Speech System by providing training for Fuller's instructors, but he declined the post in favour of his son. Travelling to Boston in April 1871, Bell proved successful in training the school's instructors. He was subsequently asked to repeat the programme at the American Asylum for Deaf-mutes in Hartford, Connecticut, and the Clarke School for the Deaf in Northampton, Massachusetts. Returning home to Brantford after six months abroad, Bell continued his experiments with his "harmonic telegraph". The basic concept behind his device was that messages could be sent through a single wire if each message was transmitted at a different pitch, but work on both the transmitter and receiver was needed. Unsure of his future, he first contemplated returning to London to complete his studies, but decided to return to Boston as a teacher.
His father helped him set up his private practice by contacting Gardiner Greene Hubbard, the president of the Clarke School for the Deaf, for a recommendation. Teaching his father's system, in October 1872, Alexander Bell opened his "School of Vocal Physiology and Mechanics of Speech" in Boston, which attracted a large number of deaf pupils, with his first class numbering 30 students. While he was working as a private tutor, one of his pupils was Helen Keller, who came to him as a young child unable to see, hear, or speak. She was later to say that Bell dedicated his life to the penetration of that "inhuman silence which separates and estranges". In 1893, Keller performed the sod-breaking ceremony for the construction of Bell's new Volta Bureau, dedicated to "the increase and diffusion of knowledge relating to the deaf". Throughout his lifetime, Bell sought to integrate the deaf and hard of hearing with the hearing world. To achieve complete assimilation in society, Bell encouraged speech therapy and lip reading as well as sign language. He outlined this in an 1898 paper detailing his belief that with resources and effort, the deaf could be taught to read lips and speak (known as oralism), thus enabling their integration within the wider society from which many were often excluded. Owing to his efforts to balance oralism with the teaching of sign language, Bell is often viewed negatively by those embracing Deaf culture. Ironically, Bell's last words to his deaf wife, Mabel, were signed.

Continuing experimentation

In 1872, Bell became professor of Vocal Physiology and Elocution at the Boston University School of Oratory. During this period, he alternated between Boston and Brantford, spending summers in his Canadian home. At Boston University, Bell was "swept up" by the excitement engendered by the many scientists and inventors residing in the city.
He continued his research in sound and endeavored to find a way to transmit musical notes and articulate speech, but although absorbed by his experiments, he found it difficult to devote enough time to experimentation. While days and evenings were occupied by his teaching and private classes, Bell began to stay awake late into the night, running experiment after experiment in rented facilities at his boarding house. Keeping "night owl" hours, he worried that his work would be discovered and took great pains to lock up his notebooks and laboratory equipment. Bell had a specially made table where he could place his notes and equipment inside a locking cover. Worse still, his health deteriorated as he suffered severe headaches. Returning to Boston in fall 1873, Bell made a far-reaching decision to concentrate on his experiments in sound. Deciding to give up his lucrative private Boston practice, Bell retained only two students, six-year-old "Georgie" Sanders, deaf from birth, and 15-year-old Mabel Hubbard. Each pupil would play an important role in the next developments. George's father, Thomas Sanders, a wealthy businessman, offered Bell a place to stay in nearby Salem with Georgie's grandmother, complete with a room to "experiment". Although the offer was made by George's mother and followed the year-long arrangement in 1872 where her son and his nurse had moved to quarters next to Bell's boarding house, it was clear that Mr. Sanders was backing the proposal. The arrangement was for teacher and student to continue their work together, with free room and board thrown in. Mabel was a bright, attractive girl who was ten years Bell's junior but became the object of his affection. Having lost her hearing after a near-fatal bout of scarlet fever close to her fifth birthday, she had learned to read lips but her father, Gardiner Greene Hubbard, Bell's benefactor and personal friend, wanted her to work directly with her teacher. 
The telephone By 1874, Bell's initial work on the harmonic telegraph had entered a formative stage, with progress made both at his new Boston "laboratory" (a rented facility) and at his family home in Canada. While working that summer in Brantford, Bell experimented with a "phonautograph", a pen-like machine that could draw shapes of sound waves on smoked glass by tracing their vibrations. Bell thought it might be possible to generate undulating electrical currents that corresponded to sound waves. Bell also thought that multiple metal reeds tuned to different frequencies like a harp would be able to convert the undulating currents back into sound. But he had no working model to demonstrate the feasibility of these ideas. In 1874, telegraph message traffic was rapidly expanding and, in the words of Western Union President William Orton, had become "the nervous system of commerce". Orton had contracted with inventors Thomas Edison and Elisha Gray to find a way to send multiple telegraph messages on each telegraph line to avoid the great cost of constructing new lines. When Bell mentioned to Gardiner Hubbard and Thomas Sanders that he was working on a method of sending multiple tones on a telegraph wire using a multi-reed device, the two wealthy patrons began to financially support Bell's experiments. Patent matters would be handled by Hubbard's patent attorney, Anthony Pollok. In March 1875, Bell and Pollok visited the scientist Joseph Henry, who was then director of the Smithsonian Institution, and asked Henry's advice on the electrical multi-reed apparatus that Bell hoped would transmit the human voice by telegraph. Henry replied that Bell had "the germ of a great invention". When Bell said that he did not have the necessary knowledge, Henry replied, "Get it!" That declaration greatly encouraged Bell to keep trying, even though he did not have the equipment needed to continue his experiments, nor the ability to create a working model of his ideas. 
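The multi-reed scheme described above is, in modern terms, frequency-division multiplexing: each telegraph channel keys a tone of its own pitch onto the shared wire, and a receiver tuned to that pitch responds only to its own channel. A minimal numerical sketch, with assumed tone frequencies and no claim to represent Bell's actual apparatus:

```python
import math

# Hypothetical sketch of the harmonic-telegraph idea (assumed frequencies,
# not Bell's apparatus): several channels share one wire as tones of
# different pitches, and each receiving "reed" responds only to its own
# pitch -- frequency-division multiplexing in modern terms.

RATE = 8000                # samples per second
N = 800                    # 0.1 s of signal: a whole number of periods of every tone
FREQS = [400, 600, 800]    # one tone per telegraph channel (assumed values)

def line_signal(active):
    """Sum the tones of the channels currently keyed 'on' onto the shared wire."""
    return [
        sum(math.sin(2 * math.pi * f * t / RATE) for f, on in zip(FREQS, active) if on)
        for t in range(N)
    ]

def reed_response(signal, freq):
    """Correlate with a reference tone, as a reed tuned to that pitch would."""
    c = sum(s * math.sin(2 * math.pi * freq * t / RATE) for t, s in enumerate(signal))
    return abs(2 * c / len(signal))   # ~1.0 when the tone is keyed, ~0.0 when not

wire = line_signal(active=[True, False, True])   # key channels 1 and 3
for f in FREQS:
    print(f, round(reed_response(wire, f), 2))
```

Over a window containing a whole number of periods of each tone, the cross-correlation of distinct tones cancels exactly, which is why tuned resonators can cleanly separate the channels on one wire.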
However, a chance meeting in 1874 between Bell and Thomas A. Watson, an experienced electrical designer and mechanic at the electrical machine shop of Charles Williams, changed all that. With financial support from Sanders and Hubbard, Bell hired Thomas Watson as his assistant, and the two of them experimented with acoustic telegraphy. On June 2, 1875, Watson accidentally plucked one of the reeds and Bell, at the receiving end of the wire, heard the overtones of the reed; overtones that would be necessary for transmitting speech. That demonstrated to Bell that only one reed or armature was necessary, not multiple reeds. This led to the "gallows" sound-powered telephone, which could transmit indistinct, voice-like sounds, but not clear speech. The race to the patent office In 1875, Bell developed an acoustic telegraph and drew up a patent application for it. Since he had agreed to share U.S. profits with his investors Gardiner Hubbard and Thomas Sanders, Bell requested that an associate in Ontario, George Brown, attempt to patent it in Britain, instructing his lawyers to apply for a patent in the U.S. only after they received word from Britain (Britain would issue patents only for discoveries not previously patented elsewhere). Meanwhile, Elisha Gray was also experimenting with acoustic telegraphy and thought of a way to transmit speech using a water transmitter. On February 14, 1876, Gray filed a caveat with the U.S. Patent Office for a telephone design that used a water transmitter. That same morning, Bell's lawyer filed Bell's application with the patent office. There is considerable debate about whose filing arrived first, and Gray later challenged the primacy of Bell's patent. Bell was in Boston on February 14 and did not arrive in Washington until February 26. Bell's patent, No. 174,465, was issued on March 7, 1876, by the U.S. Patent Office. Bell's patent covered "the method of, and apparatus for, transmitting vocal or other sounds telegraphically ... 
by causing electrical undulations, similar in form to the vibrations of the air accompanying the said vocal or other sound". Bell returned to Boston the same day and the next day resumed work, drawing in his notebook a diagram similar to that in Gray's patent caveat. On March 10, 1876, three days after his patent was issued, Bell succeeded in getting his telephone to work, using a liquid transmitter similar to Gray's design. Vibration of the diaphragm caused a needle to vibrate in the water, varying the electrical resistance in the circuit. When Bell spoke the sentence "Mr. Watson—Come here—I want to see you" into the liquid transmitter, Watson, listening at the receiving end in an adjoining room, heard the words clearly. Although Bell was, and still is, accused of stealing the telephone from Gray, Bell used Gray's water transmitter design only after Bell's patent had been granted, and only as a proof-of-concept scientific experiment, to prove to his own satisfaction that intelligible "articulate speech" (Bell's words) could be electrically transmitted. After March 1876, Bell focused on improving the electromagnetic telephone and never used Gray's liquid transmitter in public demonstrations or commercial use. The question of priority for the variable resistance feature of the telephone was raised by the examiner before he approved Bell's patent application. He told Bell that his claim for the variable resistance feature was also described in Gray's caveat. Bell pointed to a variable resistance device in his previous application, in which he described a cup of mercury, not water. He had filed the mercury application at the patent office a year earlier, on February 25, 1875, long before Elisha Gray described the water device. In addition, Gray abandoned his caveat, and because he did not contest Bell's priority, the examiner approved Bell's patent on March 3, 1876. 
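The variable-resistance principle described above can be illustrated numerically. In this sketch (all component values are assumed for illustration; it is not a model of the actual 1876 instrument), the diaphragm's displacement changes the resistance of the needle-and-water path, and Ohm's law converts that into an undulating line current that mirrors the sound wave:

```python
import math

# Minimal sketch of the variable-resistance principle (all component values
# assumed; this is not a model of the actual 1876 instrument). Sound moves
# the diaphragm, the needle's depth in the water varies the circuit
# resistance, and Ohm's law turns that into an undulating line current.

V = 6.0      # battery voltage, volts (assumed)
R0 = 100.0   # resting resistance of the needle-and-water path, ohms (assumed)
K = 20.0     # change in resistance per unit of diaphragm displacement (assumed)

def transmit(sound):
    """Map each sound sample to a line current via the varying resistance."""
    return [V / (R0 + K * s) for s in sound]

sound = [math.sin(2 * math.pi * t / 32) for t in range(64)]  # a pure tone
current = transmit(sound)

# The current undulates around the resting value V/R0 = 0.06 A, falling as
# the needle dips deeper (more resistance) and rising as it withdraws.
print(round(min(current), 4), round(max(current), 4))
```

Because current varies as 1/R, the swings are not perfectly symmetric about the resting value; for small displacements the distortion is minor, which is consistent with the liquid transmitter carrying intelligible speech.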
Gray had reinvented the variable resistance telephone, but Bell was the first to write down the idea and the first to test it in a telephone. The patent examiner, Zenas Fisk Wilber, later stated in an affidavit that he was an alcoholic who was much in debt to Bell's lawyer, Marcellus Bailey, with whom he had served in the Civil War. He claimed he showed Gray's patent caveat to Bailey. Wilber also claimed (after Bell arrived in Washington D.C. from Boston) that he showed Gray's caveat to Bell and that Bell paid him $100. Bell claimed they discussed the patent only in general terms, although in a letter to Gray, Bell admitted that he learned some of the technical details. Bell denied in an affidavit that he ever gave Wilber any money. Later developments On March 10, 1876, Bell used "the instrument" in Boston to call Thomas Watson who was in another room but out of earshot. He said, "Mr. Watson, come here – I want to see you" and Watson soon appeared at his side. Continuing his experiments in Brantford, Bell brought home a working model of his telephone. On August 3, 1876, from the telegraph office in Brantford, Ontario, Bell sent a tentative telegram to the nearby village of Mount Pleasant, indicating that he was ready. He made a telephone call via telegraph wires and faint voices were heard replying. The following night, he amazed guests as well as his family with a call between the Bell Homestead and the office of the Dominion Telegraph Company in Brantford along an improvised wire strung up along telegraph lines and fences, and laid through a tunnel. This time, guests at the household distinctly heard people in Brantford reading and singing. The third test, on August 10, 1876, was made via the telegraph line between Brantford and Paris, Ontario. This test was said by many sources to be the "world's first long-distance call". The final test certainly proved that the telephone could work over long distances, at least as a one-way call. 
The first two-way (reciprocal) conversation over a line occurred between Cambridge and Boston (roughly 2.5 miles) on October 9, 1876. During that conversation, Bell was on Kilby Street in Boston and Watson was at the offices of the Walworth Manufacturing Company. Bell and his partners, Hubbard and Sanders, offered to sell the patent outright to Western Union for $100,000. The president of Western Union balked, countering that the telephone was nothing but a toy. Two years later, he told colleagues that if he could get the patent for $25 million he would consider it a bargain. By then, the Bell company no longer wanted to sell the patent. Bell's investors would become millionaires while he fared well from residuals and at one point had assets of nearly one million dollars. Bell began a series of public demonstrations and lectures to introduce the new invention to the scientific community as well as the general public. A short time later, his demonstration of an early telephone prototype at the 1876 Centennial Exposition in Philadelphia brought the telephone to international attention. Influential visitors to the exhibition included Emperor Pedro II of Brazil. One of the judges at the Exhibition, Sir William Thomson (later, Lord Kelvin), a renowned Scottish scientist, described the telephone as "the greatest by far of all the marvels of the electric telegraph". On January 14, 1878, at Osborne House, on the Isle of Wight, Bell demonstrated the device to Queen Victoria, placing calls to Cowes, Southampton and London. These were the first publicly witnessed long-distance telephone calls in the UK. The queen considered the process to be "quite extraordinary" although the sound was "rather faint". She later asked to buy the equipment that was used, but Bell offered to make "a set of telephones" specifically for her. The Bell Telephone Company was created in 1877, and by 1886, more than 150,000 people in the U.S. owned telephones. 
Bell Company engineers made numerous other improvements to the telephone, which emerged as one of the most successful products ever. In 1879, the Bell company acquired Edison's patents for the carbon microphone from Western Union. This made the telephone practical for longer distances, and it was no longer necessary to shout to be heard at the receiving telephone. Emperor Pedro II of Brazil was the first person to buy stock in Bell's company, the Bell Telephone Company. One of the first telephones in a private residence was installed in his palace in Petrópolis, his summer retreat from Rio de Janeiro. In January 1915, Bell made the first ceremonial transcontinental telephone call. Calling from the AT&T head office at 15 Dey Street in New York City, Bell was heard by Thomas Watson at 333 Grant Avenue in San Francisco. The New York Times reported on the event. Competitors As is common in scientific discoveries, simultaneous developments can occur, as evidenced by the number of inventors who were at work on the telephone. Over a period of 18 years, the Bell Telephone Company faced 587 court challenges to its patents, including five that went to the U.S. Supreme Court, but none was successful in establishing priority over the original Bell patent, and the Bell Telephone Company never lost a case that had proceeded to a final trial stage. Bell's laboratory notes and family letters were the key to establishing a long lineage to his experiments. The Bell company lawyers successfully fought off myriad lawsuits generated initially around the challenges by Elisha Gray and Amos Dolbear. In personal correspondence to Bell, both Gray and Dolbear had acknowledged his prior work, which considerably weakened their later claims. On January 13, 1887, the U.S. Government moved to annul the patent issued to Bell on the grounds of fraud and misrepresentation. 
After a series of decisions and reversals, the Bell company won a decision in the Supreme Court, though a couple of the original claims from the lower court cases were left undecided. By the time the trial had wound its way through nine years of legal battles, the U.S. prosecuting attorney had died and the two Bell patents (No. 174,465 dated March 7, 1876, and No. 186,787 dated January 30, 1877) were no longer in effect, although the presiding judges agreed to continue the proceedings due to the case's importance as a precedent. With a change in administration and charges of conflict of interest (on both sides) arising from the original trial, the US Attorney General dropped the lawsuit on November 30, 1897, leaving several issues undecided on the merits. During a deposition filed for the 1887 trial, Italian inventor Antonio Meucci also claimed to have created the first working model of a telephone in Italy in 1834. In 1886, in the first of three cases in which he was involved, Meucci took the stand as a witness in the hope of establishing his invention's priority. Meucci's testimony in this case was disputed due to a lack of material evidence for his inventions, as his working models were purportedly lost at the laboratory of American District Telegraph (ADT) of New York, which was later incorporated as a subsidiary of Western Union in 1901. Meucci's work, like that of many other inventors of the period, was based on earlier acoustic principles, and despite evidence of earlier experiments, the final case involving Meucci was eventually dropped upon Meucci's death. However, due to the efforts of Congressman Vito Fossella, the U.S. House of Representatives on June 11, 2002, stated that Meucci's "work in the invention of the telephone should be acknowledged". This did not put an end to the still-contentious issue. Some modern scholars do not agree with the claims that Bell's work on the telephone was influenced by Meucci's inventions. 
The value of the Bell patent was acknowledged throughout the world, and patent applications were made in most major countries, but when Bell delayed the German patent application, the electrical firm of Siemens & Halske set up a rival manufacturer of Bell telephones under their own patent. The Siemens company produced near-identical copies of the Bell telephone without having to pay royalties. The establishment of the International Bell Telephone Company in Brussels, Belgium in 1880, as well as a series of agreements in other countries, eventually consolidated a global telephone operation. The strain put on Bell by his constant appearances in court, necessitated by the legal battles, eventually resulted in his resignation from the company. Family life On July 11, 1877, a few days after the Bell Telephone Company was established, Bell married Mabel Hubbard (1857–1923) at the Hubbard estate in Cambridge, Massachusetts. His wedding present to his bride was to turn over 1,487 of his 1,497 shares in the newly formed Bell Telephone Company. Shortly thereafter, the newlyweds embarked on a year-long honeymoon in Europe. During that excursion, Bell took a handmade model of his telephone with him, making it a "working holiday". The courtship had begun years earlier; however, Bell waited until he was more financially secure before marrying. Although the telephone appeared to be an "instant" success, it was not initially a profitable venture and Bell's main sources of income were from lectures until after 1897. One unusual request exacted by his fiancée was that he use "Alec" rather than the family's earlier familiar name of "Aleck". From 1876, he would sign his name "Alec Bell". They had four children: Elsie May Bell (1878–1964), who married Gilbert Hovey Grosvenor of National Geographic fame; Marian Hubbard Bell (1880–1962), known as "Daisy", who married David Fairchild; and two sons who died in infancy (Edward in 1881 and Robert in 1883). 
The Bell family home was in Cambridge, Massachusetts, until 1880, when Bell's father-in-law bought a house in Washington, D.C.; in 1882 he bought a home in the same city for Bell's family, so they could be with him while he attended to the numerous court cases involving patent disputes. Bell was a British subject throughout his early life in Scotland and later in Canada until 1882, when he became a naturalized citizen of the United States. In 1915, he characterized his status as: "I am not one of those hyphenated Americans who claim allegiance to two countries." Despite this declaration, Bell has been proudly claimed as a "native son" by all three countries he resided in: the United States, Canada, and the United Kingdom. By 1885, a new summer retreat was contemplated. That summer, the Bells vacationed on Cape Breton Island in Nova Scotia, spending time at the small village of Baddeck. Returning in 1886, Bell started building an estate on a point across from Baddeck, overlooking Bras d'Or Lake. By 1889, a large house, christened The Lodge, was completed, and two years later a larger complex of buildings, including a new laboratory, was begun that the Bells would name Beinn Bhreagh (Gaelic: Beautiful Mountain) after Bell's ancestral Scottish highlands. Bell also built the Bell Boatyard on the estate, employing up to 40 people building experimental craft as well as wartime lifeboats and workboats for the Royal Canadian Navy and pleasure craft for the Bell family. He was an enthusiastic boater, and Bell and his family sailed or rowed a long series of vessels on Bras d'Or Lake, ordering additional vessels from the H.W. Embree and Sons boatyard in Port Hawkesbury, Nova Scotia. In his final, and some of his most productive, years, Bell split his residency between Washington, D.C., where he and his family initially resided for most of the year, and Beinn Bhreagh, where they spent increasing amounts of time. 
Until the end of his life, Bell and his family would alternate between the two homes, but Beinn Bhreagh would, over the next 30 years, become more than a summer home as Bell became so absorbed in his experiments that his annual stays lengthened. Both Mabel and Bell became immersed in the Baddeck community and were accepted by the villagers as "their own". The Bells were still in residence at Beinn Bhreagh when the Halifax Explosion occurred on December 6, 1917. Mabel and Bell mobilized the community to help victims in Halifax. Later inventions Although Alexander Graham Bell is most often associated with the invention of the telephone, his interests were extremely varied. According to one of his biographers, Charlotte Gray, Bell's work ranged "unfettered across the scientific landscape" and he often went to bed voraciously reading the Encyclopædia Britannica, scouring it for new areas of interest. The range of Bell's inventive genius is represented only in part by the 18 patents granted in his name alone and the 12 he shared with his collaborators. These included 14 for the telephone and telegraph, four for the photophone, one for the phonograph, five for aerial vehicles, four for "hydroairplanes", and two for selenium cells. Bell's inventions spanned a wide range of interests and included a metal jacket to assist in breathing, the audiometer to detect minor hearing problems, a device to locate icebergs, investigations on how to separate salt from seawater, and work on finding alternative fuels. Bell worked extensively in medical research and invented techniques for teaching speech to the deaf. During his Volta Laboratory period, Bell and his associates considered impressing a magnetic field on a record as a means of reproducing sound. Although the trio briefly experimented with the concept, they could not develop a workable prototype. 
They abandoned the idea, never realizing they had glimpsed a basic principle which would one day find its application in the tape recorder, the hard disc and floppy disc drive, and other magnetic media. Bell's own home used a primitive form of air conditioning, in which fans blew currents of air across great blocks of ice. He also anticipated modern concerns with fuel shortages and industrial pollution. Methane gas, he reasoned, could be produced from the waste of farms and factories. At his Canadian estate in Nova Scotia, he experimented with composting toilets and devices to capture water from the atmosphere. In a magazine interview published shortly before his death, he reflected on the possibility of using solar panels to heat houses.

Photophone

Bell and his assistant Charles Sumner Tainter jointly invented a wireless telephone, named a photophone, which allowed for the transmission of both sounds and normal human conversations on a beam of light. Both men later became full associates in the Volta Laboratory Association. On June 21, 1880, Bell's assistant transmitted a wireless voice telephone message a considerable distance, from the roof of the Franklin School in Washington, D.C., to Bell at the window of his laboratory, some away, 19 years before the first voice radio transmissions. Bell believed the photophone's principles were his life's "greatest achievement", telling a reporter shortly before his death that the photophone was "the greatest invention [I have] ever made, greater than the telephone". The photophone was a precursor to the fiber-optic communication systems which achieved popular worldwide usage in the 1980s. Its master patent was issued in December 1880, many decades before the photophone's principles came into popular use.

Metal detector

Bell is also credited with developing one of the early versions of a metal detector through the use of an induction balance, after the shooting of U.S. President James A. Garfield in 1881.
According to some accounts, the metal detector worked flawlessly in tests but did not find the assassin Charles Guiteau's bullet, partly because the metal bed frame on which the President was lying disturbed the instrument, resulting in static. Garfield's surgeons, led by self-appointed chief physician Doctor Willard Bliss, were skeptical of the device, and ignored Bell's requests to move the President to a bed not fitted with metal springs. Alternatively, although Bell had detected a slight sound on his first test, the bullet may have been lodged too deeply to be detected by the crude apparatus. Bell's own detailed account, presented to the American Association for the Advancement of Science in 1882, differs in several particulars from most of the many and varied versions now in circulation, by concluding that extraneous metal was not to blame for failure to locate the bullet. Perplexed by the peculiar results he had obtained during an examination of Garfield, Bell "proceeded to the Executive Mansion the next morning ... to ascertain from the surgeons whether they were perfectly sure that all metal had been removed from the neighborhood of the bed. It was then recollected that underneath the horse-hair mattress on which the President lay was another mattress composed of steel wires. Upon obtaining a duplicate, the mattress was found to consist of a sort of net of woven steel wires, with large meshes. The extent of the [area that produced a response from the detector] having been so small, as compared with the area of the bed, it seemed reasonable to conclude that the steel mattress had produced no detrimental effect." In a footnote, Bell adds, "The death of President Garfield and the subsequent post-mortem examination, however, proved that the bullet was at too great a distance from the surface to have affected our apparatus."

Hydrofoils

The March 1906 Scientific American article by American pioneer William E. Meacham explained the basic principle of hydrofoils and hydroplanes.
Bell considered the invention of the hydroplane a very significant achievement. Based on information gained from that article, he began to sketch concepts of what is now called a hydrofoil boat. Bell and assistant Frederick W. "Casey" Baldwin began hydrofoil experimentation in the summer of 1908 as a possible aid to airplane takeoff from water. Baldwin studied the work of the Italian inventor Enrico Forlanini and began testing models. This led him and Bell to the development of practical hydrofoil watercraft. During his world tour of 1910–11, Bell and Baldwin met with Forlanini in France. They had rides in the Forlanini hydrofoil boat over Lake Maggiore. Baldwin described it as being as smooth as flying. On returning to Baddeck, a number of initial concepts were built as experimental models, including the Dhonnas Beag (Scottish Gaelic for little devil), the first self-propelled Bell-Baldwin hydrofoil. The experimental boats were essentially proof-of-concept prototypes that culminated in the more substantial HD-4, powered by Renault engines. A top speed of was achieved, with the hydrofoil exhibiting rapid acceleration, good stability, and steering, along with the ability to take waves without difficulty. In 1913, Dr. Bell hired Walter Pinaud, a Sydney yacht designer and builder as well as the proprietor of Pinaud's Yacht Yard in Westmount, Nova Scotia, to work on the pontoons of the HD-4. Pinaud soon took over the boatyard at Bell Laboratories on Beinn Bhreagh, Bell's estate near Baddeck, Nova Scotia. Pinaud's experience in boat-building enabled him to make useful design changes to the HD-4. After the First World War, work began again on the HD-4. Bell's report to the U.S. Navy permitted him to obtain two engines in July 1919. On September 9, 1919, the HD-4 set a world marine speed record of , a record which stood for ten years.

Aeronautics

In 1891, Bell had begun experiments to develop motor-powered heavier-than-air aircraft.
The AEA was formed after Bell shared his vision of flight with his wife, who advised him to seek "young" help, as Bell was then 60 years of age. In 1898, Bell experimented with tetrahedral box kites and wings constructed of multiple compound tetrahedral kites covered in maroon silk. The tetrahedral wings were named Cygnet I, II, and III, and were flown both unmanned and manned (Cygnet I crashed during a flight carrying Selfridge) in the period from 1907 to 1912. Some of Bell's kites are on display at the Alexander Graham Bell National Historic Site. Bell was a supporter of aerospace engineering research through the Aerial Experiment Association (AEA), officially formed at Baddeck, Nova Scotia, in October 1907 at the suggestion of his wife Mabel and with her financial support after the sale of some of her real estate. The AEA was headed by Bell, and the founding members were four young men. The American Glenn H. Curtiss was a motorcycle manufacturer who held the title "world's fastest man", having ridden his self-constructed motor bicycle around in the shortest time; he was later awarded the Scientific American Trophy for the first official one-kilometre flight in the Western hemisphere, and went on to become a world-renowned airplane manufacturer. Lieutenant Thomas Selfridge was an official observer from the U.S. Federal government and one of the few people in the army who believed that aviation was the future. Frederick W. Baldwin was the first Canadian and first British subject to pilot a public flight, in Hammondsport, New York. Baldwin and J. A. D. McCurdy were new engineering graduates from the University of Toronto. The AEA's work progressed to heavier-than-air machines, applying their knowledge of kites to gliders. Moving to Hammondsport, the group then designed and built the Red Wing, framed in bamboo and covered in red silk and powered by a small air-cooled engine.
On March 12, 1908, over Keuka Lake, the biplane lifted off on the first public flight in North America. The innovations that were incorporated into this design included a cockpit enclosure and tail rudder (later variations on the original design would
in Mesopotamia at a much earlier date) as a medium of exchange, some time in the 7th century BCE in Lydia. The use of minted coins continued to flourish during the Greek and Roman eras. During the 6th century BCE, all of Anatolia was conquered by the Persian Achaemenid Empire, the Persians having usurped the Medes as the dominant dynasty in Iran. In 499 BCE, the Ionian city-states on the west coast of Anatolia rebelled against Persian rule. The Ionian Revolt, as it became known, though quelled, initiated the Greco-Persian Wars, which ended in a Greek victory in 449 BCE, and the Ionian cities regained their independence. By the Peace of Antalcidas (387 BCE), which ended the Corinthian War, Persia regained control over Ionia. In 334 BCE, the Macedonian Greek king Alexander the Great conquered the peninsula from the Achaemenid Persian Empire. Alexander's conquest opened up the interior of Asia Minor to Greek settlement and influence. Following the death of Alexander and the breakup of his empire, Anatolia was ruled by a series of Hellenistic kingdoms, such as the Attalids of Pergamum and the Seleucids, the latter controlling most of Anatolia. A period of peaceful Hellenization followed, such that the local Anatolian languages had been supplanted by Greek by the 1st century BCE. In 133 BCE the last Attalid king bequeathed his kingdom to the Roman Republic, and western and central Anatolia came under Roman control, but Hellenistic culture remained predominant. Further annexations by Rome, in particular of the Kingdom of Pontus by Pompey, brought all of Anatolia under Roman control, except for the eastern frontier with the Parthian Empire, which remained unstable for centuries, causing a series of wars, culminating in the Roman-Parthian Wars.

Early Christian Period

After the division of the Roman Empire, Anatolia became part of the East Roman, or Byzantine Empire.
Anatolia was one of the first places where Christianity spread, so that by the 4th century CE, western and central Anatolia were overwhelmingly Christian and Greek-speaking. For the next 600 years, while Imperial possessions in Europe were subjected to barbarian invasions, Anatolia would be the center of the Hellenic world. It was one of the wealthiest and most densely populated places in the Late Roman Empire. Anatolia's wealth grew during the 4th and 5th centuries thanks, in part, to the Pilgrim's Road that ran through the peninsula. Literary evidence about the rural landscape stems from the hagiographies of 6th century Nicholas of Sion and 7th century Theodore of Sykeon. Large urban centers included Ephesus, Pergamum, Sardis and Aphrodisias. Scholars continue to debate the cause of urban decline in the 6th and 7th centuries, variously attributing it to the Plague of Justinian (541), and the 7th century Persian incursion and Arab conquest of the Levant. In the ninth and tenth centuries a resurgent Byzantine Empire regained its lost territories, including even long lost territory such as Armenia and Syria (ancient Aram).

Medieval Period

In the 10 years following the Battle of Manzikert in 1071, the Seljuk Turks from Central Asia migrated over large areas of Anatolia, with particular concentrations around the northwestern rim. The Turkish language and the Islamic religion were gradually introduced as a result of the Seljuk conquest, and this period marks the start of Anatolia's slow transition from predominantly Christian and Greek-speaking, to predominantly Muslim and Turkish-speaking (although ethnic groups such as Armenians, Greeks, and Assyrians remained numerous and retained Christianity and their native languages). In the following century, the Byzantines managed to reassert their control in western and northern Anatolia.
Control of Anatolia was then split between the Byzantine Empire and the Seljuk Sultanate of Rûm, with the Byzantine holdings gradually being reduced. In 1255, the Mongols swept through eastern and central Anatolia, and would remain until 1335. The Ilkhanate garrison was stationed near Ankara. After the decline of the Ilkhanate from 1335 to 1353, the Mongol Empire's legacy in the region was the Uyghur Eretna Dynasty, which was overthrown by Kadi Burhan al-Din in 1381. By the end of the 14th century, most of Anatolia was controlled by various Anatolian beyliks. Smyrna fell in 1330, and the last Byzantine stronghold in Anatolia, Philadelphia, fell in 1390. The Turkmen Beyliks were under the control of the Mongols, at least nominally, through declining Seljuk sultans. The Beyliks did not mint coins in the names of their own leaders while they remained under the suzerainty of the Mongol Ilkhanids. The Osmanli ruler Osman I was the first Turkish ruler to mint coins in his own name, in the 1320s; they bear the legend "Minted by Osman son of Ertugrul". Since the minting of coins was a prerogative accorded in Islamic practice only to a sovereign, it can be considered that the Osmanli, or Ottoman Turks, had become formally independent from the Mongol Khans.

Ottoman Empire

Among the Turkish leaders, the Ottomans emerged as a great power under Osman I and his son Orhan I. The Anatolian beyliks were successively absorbed into the rising Ottoman Empire during the 15th century. It is not well understood how the Osmanlı, or Ottoman Turks, came to dominate their neighbours, as the history of medieval Anatolia is still little known. The Ottomans completed the conquest of the peninsula in 1517 with the taking of Halicarnassus (modern Bodrum) from the Knights of Saint John.
Modern times

With the acceleration of the decline of the Ottoman Empire in the early 19th century, and as a result of the expansionist policies of the Russian Empire in the Caucasus, many Muslim nations and groups in that region, mainly Circassians, Tatars, Azeris, Lezgis, Chechens and several Turkic groups left their homelands and settled in Anatolia. As the Ottoman Empire further shrank in the Balkan regions and then fragmented during the Balkan Wars, many of the non-Christian populations of its former possessions, mainly Balkan Muslims (Bosnian Muslims, Albanians, Turks, Muslim Bulgarians and Greek Muslims such as the Vallahades from Greek Macedonia), were resettled in various parts of Anatolia, mostly in formerly Christian villages throughout Anatolia. A continuous reverse migration had occurred since the early 19th century, as Greeks from Anatolia, Constantinople and the Pontus area migrated toward the newly independent Kingdom of Greece, and also towards the United States, the southern part of the Russian Empire, Latin America, and the rest of Europe. Following the Russo-Persian Treaty of Turkmenchay (1828) and the incorporation of Eastern Armenia into the Russian Empire, another migration involved the large Armenian population of Anatolia, which recorded significant migration rates from Western Armenia (Eastern Anatolia) toward the Russian Empire, especially toward its newly established Armenian provinces. Anatolia remained multi-ethnic until the early 20th century (see the rise of nationalism under the Ottoman Empire). During World War I, the Armenian genocide, the Greek genocide (especially in Pontus), and the Assyrian genocide almost entirely removed the ancient indigenous communities of Armenian, Greek, and Assyrian populations in Anatolia and surrounding regions. Following the Greco-Turkish War of 1919–1922, most remaining ethnic Anatolian Greeks were forced out during the 1923 population exchange between Greece and Turkey.
Of the remainder, most have left Turkey since then, leaving fewer than 5,000 Greeks in Anatolia today.

Geology

Anatolia's terrain is structurally complex. A central massif composed of uplifted blocks and downfolded troughs, covered by recent deposits and giving the appearance of a plateau with rough terrain, is wedged between two folded mountain ranges that converge in the east. True lowland is confined to a few narrow coastal strips along the Aegean, Mediterranean, and the Black Sea coasts. Flat or gently sloping land is rare and largely confined to the deltas of the Kızıl River, the coastal plains of Çukurova and the valley floors of the Gediz River and the Büyük Menderes River as well as some interior high plains in Anatolia, mainly around Lake Tuz (Salt Lake) and the Konya Basin (Konya Ovasi). There are two mountain ranges in southern Anatolia: the Taurus and the Zagros mountains.

Climate

Anatolia has a varied range of climates. The central plateau is characterized by a continental climate, with hot summers and cold snowy winters. The south and west coasts enjoy a typical Mediterranean climate, with mild rainy winters, and warm dry summers. The Black Sea and Marmara coasts have a temperate oceanic climate, with cool foggy summers and much rainfall throughout the year.

Ecoregions

There is a diverse range of plant and animal communities. The mountains and coastal plain of northern Anatolia experience a humid and mild climate. There are temperate broadleaf, mixed and coniferous forests. The central and eastern plateau, with its drier continental climate, has deciduous forests and forest steppes. Western and southern Anatolia, which have a Mediterranean climate, contain Mediterranean forests, woodlands, and scrub ecoregions.

Euxine-Colchic deciduous forests: These temperate broadleaf and mixed forests extend across northern Anatolia, lying between the mountains of northern Anatolia and the Black Sea.
They include the enclaves of temperate rainforest lying along the southeastern coast of the Black Sea in eastern Turkey and Georgia.

Northern Anatolian conifer and deciduous forests: These forests occupy the mountains of northern Anatolia, running east and west between the coastal Euxine-Colchic forests and the drier, continental climate forests of central and eastern Anatolia.

Central Anatolian deciduous forests: These forests of deciduous oaks and evergreen pines cover the plateau of central Anatolia.

Central Anatolian steppe: These dry grasslands cover the drier valleys and surround the saline lakes of central Anatolia, and include halophytic (salt tolerant) plant communities.

Eastern Anatolian deciduous forests: This ecoregion occupies the plateau of eastern Anatolia. The drier and more continental climate is beneficial for steppe-forests dominated by deciduous oaks, with areas of shrubland, montane forest, and valley forest.

Anatolian conifer and deciduous mixed forests: These forests occupy the western, Mediterranean-climate portion of the Anatolian plateau. Pine forests and mixed pine and oak woodlands and shrublands are predominant.

Aegean and Western Turkey sclerophyllous and mixed forests: These Mediterranean-climate forests occupy the coastal lowlands and valleys of western Anatolia bordering the Aegean Sea. The ecoregion has forests of Turkish pine (Pinus brutia), oak forests and woodlands, and maquis shrubland of Turkish pine and evergreen sclerophyllous trees and shrubs, including Olive (Olea europaea), Strawberry Tree (Arbutus unedo), Arbutus andrachne, Kermes Oak (Quercus coccifera), and Bay Laurel (Laurus nobilis).

Southern Anatolian montane conifer and deciduous forests: These mountain forests occupy the Mediterranean-climate Taurus Mountains of southern Anatolia. Conifer forests are predominant, chiefly Anatolian black pine (Pinus nigra), Cedar of Lebanon (Cedrus libani), Taurus fir (Abies cilicica), and juniper (Juniperus foetidissima and J.
excelsa). Broadleaf trees include oaks, hornbeam, and maples.

Eastern Mediterranean conifer-sclerophyllous-broadleaf forests: This ecoregion occupies the coastal strip of southern Anatolia between the Taurus Mountains and the Mediterranean Sea. Plant communities include broadleaf sclerophyllous maquis shrublands, forests of Aleppo Pine (Pinus halepensis) and Turkish Pine (Pinus brutia), and dry oak (Quercus spp.) woodlands and steppes.

See also

Aeolis
Anatolian hypothesis
Anatolianism
Anatolian leopard
Anatolian Plate
Anatolian Shepherd
Ancient kingdoms of Anatolia
Antigonid dynasty
Doris (Asia Minor)
Empire of Nicaea
Empire of Trebizond
Gordium
Lycaonia
Midas
Miletus
Myra
Pentarchy
Pontic Greeks
Rumi
Saint Anatolia
Saint John
Saint Nicholas
Saint Paul
Seleucid Empire
Seven churches of Asia
Seven Sleepers
Tarsus
Troad
Turkic migration
use of Anatolian designations has varied over time, perhaps originally referring to the Aeolian, Ionian and Dorian colonies situated along the eastern coasts of the Aegean Sea, but also encompassing eastern regions in general. Such use of Anatolian designations was employed during the reign of Roman Emperor Diocletian (284–305), who created the Diocese of the East, known in Greek as the Eastern (Ανατολής / Anatolian) Diocese, but completely unrelated to the regions of Asia Minor. In their widest territorial scope, Anatolian designations were employed during the reign of Roman Emperor Constantine I (306–337), who created the Praetorian prefecture of the East, known in Greek as the Eastern (Ανατολής / Anatolian) Prefecture, encompassing all eastern regions of the Late Roman Empire and spanning from Thrace to Egypt. Only after the loss of other eastern regions during the 7th century and the reduction of Byzantine eastern domains to Asia Minor did that region become the only remaining part of the Byzantine East, and thus it was commonly referred to (in Greek) as the Eastern (Ανατολής / Anatolian) part of the Empire. At the same time, the Anatolic Theme (Ἀνατολικὸν θέμα / "the Eastern theme") was created, as a province (theme) covering the western and central parts of Turkey's present-day Central Anatolia Region, centered around Iconium, but ruled from the city of Amorium. The Latinized form "Anatolia," with its -ia ending, is probably a Medieval Latin innovation. The modern Turkish form Anadolu derives directly from the Greek name Aνατολή (Anatolḗ). The Russian male name Anatoly, the French Anatole and plain Anatol, all stemming from saints Anatolius of Laodicea (d. 283) and Anatolius of Constantinople (d. 458; the first Patriarch of Constantinople), share the same linguistic origin.
Names

The oldest known name for any region within Anatolia is related to its central area, known as the "Land of Hatti" – a designation that was initially used for the land of ancient Hattians, but later became the most common name for the entire territory under the rule of ancient Hittites. The first recorded name the Greeks used for the Anatolian peninsula, though not particularly popular at the time, was Ἀσία (Asía), perhaps from an Akkadian expression for the "sunrise" or possibly echoing the name of the Assuwa league in western Anatolia. The Romans used it as the name of their province, comprising the west of the peninsula plus the nearby Aegean Islands. As the name "Asia" broadened its scope to apply to the vaster region east of the Mediterranean, some Greeks in Late Antiquity came to use the name Asia Minor (Μικρὰ Ἀσία, Mikrà Asía), meaning "Lesser Asia" to refer to present-day Anatolia, whereas the administration of the Empire preferred the description Ἀνατολή (Anatolḗ "the East"). The endonym Ῥωμανία (Rōmanía "the land of the Romans, i.e. the Eastern Roman Empire") was understood as another name for the province by the invading Seljuq Turks, who founded a Sultanate of Rûm in 1077. Thus (land of the) Rûm became another name for Anatolia. By the 12th century Europeans had started referring to Anatolia as Turchia. During the era of the Ottoman Empire, mapmakers outside the Empire referred to the mountainous plateau in eastern Anatolia as Armenia. Other contemporary sources called the same area Kurdistan. Geographers have variously used the terms East Anatolian Plateau and Armenian Plateau to refer to the region, although the territory encompassed by each term largely overlaps with the other. According to archaeologist Lori Khatchadourian, this difference in terminology "primarily result[s] from the shifting political fortunes and cultural trajectories of the region since the nineteenth century."
Turkey's First Geography Congress in 1941 created two geographical regions of Turkey to the east of the Gulf of Iskenderun-Black Sea line, the Eastern Anatolia Region and the Southeastern Anatolia Region, the former largely corresponding to the western part of the Armenian Highlands, the latter to the northern part of the Mesopotamian plain. According to Richard Hovannisian, this changing of toponyms was "necessary to obscure all evidence" of the Armenian presence as part of the policy of Armenian genocide denial embarked upon by the newly established Turkish government and what Hovannisian calls its "foreign collaborators."

History

Prehistoric Anatolia

Human habitation in Anatolia dates back to the Paleolithic. Neolithic settlements include Çatalhöyük, Çayönü, Nevali Cori, Aşıklı Höyük, Boncuklu Höyük, Hacilar, Göbekli Tepe, Norşuntepe, Kosk, and Mersin. Çatalhöyük (7000 BCE) is considered the most advanced of these. Neolithic Anatolia has been proposed as the homeland of the Indo-European language family, although linguists tend to favour a later origin in the steppes north of the Black Sea. However, it is clear that the Anatolian languages, the earliest attested branch of Indo-European, have been spoken in Anatolia since at least the 19th century BCE.

Ancient Anatolia

The earliest historical data related to Anatolia appear during the Bronze Age and continue throughout the Iron Age. The most ancient period in the history of Anatolia spans from the emergence of ancient Hattians, up to the conquest of Anatolia by the Achaemenid Empire in the 6th century BCE.

Hattians and Hurrians

The earliest historically attested populations of Anatolia were the Hattians in central Anatolia, and Hurrians further to the east. The Hattians were an indigenous people, whose main center was the city of Hattush. The affiliation of the Hattian language remains unclear, while the Hurrian language belongs to a distinctive family of Hurro-Urartian languages.
All of those languages are extinct; relationships with indigenous languages of the Caucasus have been proposed, but are not generally accepted. The region became famous for exporting raw materials. Organized trade between Anatolia and Mesopotamia started to emerge during the period of the Akkadian Empire, and was continued and intensified during the period of the Old Assyrian Empire, between the 21st and the 18th centuries BCE. Assyrian traders brought tin and textiles in exchange for copper, silver or gold. Cuneiform records, dated circa 20th century BCE, found in Anatolia at the Assyrian colony of Kanesh, use an advanced system of trading computations and credit lines.

Hittite Anatolia (18th–12th century BCE)

Unlike the Akkadians and Assyrians, whose Anatolian trading posts were peripheral to their core lands in Mesopotamia, the Hittites were centered at Hattusa (modern Boğazkale) in north-central Anatolia by the 17th century BCE. They were speakers of an Indo-European language, the Hittite language, or nesili (the language of Nesa) in Hittite. The Hittites originated from local ancient cultures that grew in Anatolia, in addition to the arrival of Indo-European languages. Attested for the first time in the Assyrian tablets of Nesa around 2000 BCE, they conquered Hattusa in the 18th century BCE, imposing themselves over Hattian- and Hurrian-speaking populations. According to the widely accepted Kurgan theory on the Proto-Indo-European homeland, however, the Hittites (along with the other Indo-European ancient Anatolians) were themselves relatively recent immigrants to Anatolia from the north. However, they did not necessarily displace the population genetically; they assimilated into the former peoples' culture, preserving the Hittite language. The Hittites adopted the Mesopotamian cuneiform script. In the Late Bronze Age, the Hittite New Kingdom (c.
1650 BCE) was founded, becoming an empire in the 14th century BCE after the conquest of Kizzuwatna in the south-east and the defeat of the Assuwa league in western Anatolia. The empire reached its height in the 13th century BCE, controlling much of Asia Minor, northwestern Syria, and northwest upper Mesopotamia. However, the Hittite advance toward the Black Sea coast was halted by the semi-nomadic pastoralist and tribal Kaskians, a non-Indo-European people who had earlier displaced the Palaic-speaking Indo-Europeans. Much of the history of the Hittite Empire concerned war with the rival empires of Egypt, Assyria and the Mitanni. The Egyptians eventually withdrew from the region after failing to gain the upper hand over the Hittites and becoming wary of the power of Assyria, which had destroyed the Mitanni Empire. The Assyrians and Hittites were then left to battle over control of eastern and southern Anatolia and colonial territories in Syria. The Assyrians had better success than the Egyptians, annexing much Hittite (and Hurrian) territory in these regions.

Post-Hittite Anatolia (12th–6th century BCE)

After 1180 BCE, during the Late Bronze Age collapse, the Hittite empire disintegrated into several independent Syro-Hittite states, subsequent to losing much territory to the Middle Assyrian Empire and being finally overrun by the Phrygians, another Indo-European people who are believed to have migrated from the Balkans. The Phrygian expansion into southeast Anatolia was eventually halted by the Assyrians, who controlled that region.

Luwians

Another Indo-European people, the Luwians, rose to prominence in central and western Anatolia circa 2000 BCE. Their language belonged to the same linguistic branch as Hittite.
The general consensus amongst scholars is that Luwian was spoken across a large area of western Anatolia, including (possibly) Wilusa (Troy), the Seha River Land (to be identified with the Hermos and/or Kaikos valley), and the kingdom of Mira-Kuwaliya with its core territory of the Maeander valley. From the 9th century BCE, Luwian regions coalesced into a number of states such as Lydia, Caria, and Lycia, all of which had Hellenic influence.

Arameans

Arameans encroached over the borders of south-central Anatolia in the century or so after the fall of the Hittite empire, and some of the states in this region became an amalgam of Hittites and Arameans, which became known as Syro-Hittite states.

Neo-Assyrian Empire

From the 10th to late 7th centuries BCE, much of Anatolia (particularly the southeastern regions) fell to the Neo-Assyrian Empire, including all of the Syro-Hittite states, Tabal, Kingdom of Commagene, the Cimmerians and Scythians and swathes of Cappadocia. The Neo-Assyrian empire collapsed due to a bitter series of civil wars followed by a combined attack by Medes, Persians, Scythians and their own Babylonian relations. The last Assyrian city to fall was Harran in southeast Anatolia. This city was the birthplace of the last king of Babylon, the Assyrian Nabonidus and his son and regent Belshazzar. Much of the region then fell to the short-lived Iran-based Median Empire, with the Babylonians and Scythians briefly appropriating some territory.

Cimmerian and Scythian invasions

From the late 8th century BCE, a new wave of Indo-European-speaking raiders entered northern and northeast Anatolia: the Cimmerians and Scythians. The Cimmerians overran Phrygia and the Scythians threatened to do the same to Urartu and Lydia, before both were finally checked by the Assyrians.
Early Greek presence The north-western coast of Anatolia was inhabited by Greeks of the Achaean/Mycenaean culture from the 20th century BCE, related to the Greeks of southeastern Europe and the Aegean. Beginning with the Bronze Age collapse at the end of the 2nd millennium BCE, the west coast of Anatolia was settled by Ionian Greeks, usurping the area of the related but earlier Mycenaean Greeks. Over several centuries, numerous Ancient Greek city-states were established on the coasts of Anatolia. Western philosophy began on the western coast of Anatolia with the Pre-Socratic Greek philosophers. Classical Anatolia In classical antiquity, Anatolia was described by Herodotus and later historians as divided into regions that were diverse in culture, language and religious practices. The northern regions included Bithynia, Paphlagonia and Pontus; to the west were Mysia, Lydia and Caria; and Lycia, Pamphylia and Cilicia belonged to the southern shore. There were also several inland regions: Phrygia, Cappadocia, Pisidia and Galatia. Languages spoken included the late surviving Anatolian languages Isaurian and Pisidian, Greek in western and coastal regions, Phrygian spoken until the 7th century CE, local variants of Thracian in the northwest, the Galatian variant of Gaulish in Galatia until the 6th century CE, Cappadocian and Armenian in the east, and Kartvelian languages in the northeast. Anatolia is known as the birthplace of minted coinage (as opposed to unminted coinage, which first appears in Mesopotamia at a much earlier date) as a medium of exchange, some time in the 7th century BCE in Lydia. The use of minted coins continued to flourish during the Greek and Roman eras. During the 6th century BCE, all of Anatolia was conquered by the Persian Achaemenid Empire, the Persians having usurped the Medes as the dominant dynasty in Iran. In 499 BCE, the Ionian city-states on the west coast of Anatolia rebelled against
means of transportation, a VW Bus, for a few hundred dollars, and Wozniak sold his HP-65 calculator. Wozniak debuted the first prototype Apple I at the Homebrew Computer Club in July 1976. The Apple I was sold as a motherboard with CPU, RAM, and basic textual-video chips—a base kit rather than what would now be considered a complete personal computer. It went on sale soon after debut for $666.66. Wozniak later said he was unaware of the coincidental mark of the beast in the number 666, and that he came up with the price because he liked "repeating digits". Apple Computer, Inc. was incorporated on January 3, 1977, without Wayne, who had left and sold his share of the company back to Jobs and Wozniak for $800 only twelve days after having co-founded Apple. Multimillionaire Mike Markkula provided essential business expertise and funding to Jobs and Wozniak during the incorporation of Apple. During the first five years of operations, revenues grew exponentially, doubling about every four months. Between September 1977 and September 1980, yearly sales grew from $775,000 to $118 million, an average annual growth rate of 533%. The Apple II, also invented by Wozniak, was introduced on April 16, 1977, at the first West Coast Computer Faire. It differed from its major rivals, the TRS-80 and Commodore PET, because of its character cell-based color graphics and open architecture. While the Apple I and early Apple II models used ordinary audio cassette tapes as storage devices, they were superseded by the introduction of a 5¼-inch floppy disk drive and interface called the Disk II in 1978. The Apple II was chosen to be the desktop platform for the first "killer application" of the business world: VisiCalc, a spreadsheet program released in 1979. VisiCalc created a business market for the Apple II and gave home users an additional reason to buy an Apple II: compatibility with the office. Before VisiCalc, Apple had been a distant third place competitor to Commodore and Tandy.
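A quick sanity check of these growth figures, reading the 533% as each year's revenue expressed as a percentage of the prior year's (a hedged sketch, not a claim about how the original statistic was computed):

```python
import math

# Apple's reported revenue growth, September 1977 to September 1980.
start, end, years = 775_000, 118_000_000, 3

# Average annual multiplier: each year's sales as a multiple of the prior year's.
annual_multiplier = (end / start) ** (1 / years)
print(round(annual_multiplier, 2))  # ~5.34, i.e. ~533-534% of the prior year

# Implied doubling time in months -- close to the "about every four months"
# figure, though slightly nearer five.
doubling_months = 12 * math.log(2) / math.log(annual_multiplier)
print(round(doubling_months, 1))  # ~5.0
```

So the two figures in the text are mutually consistent to within rounding: revenue multiplying roughly 5.3× per year corresponds to doubling roughly every four to five months.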
By the end of the 1970s, Apple had become the leading computer manufacturer in the United States. On December 12, 1980, Apple (ticker symbol "AAPL") went public selling 4.6 million shares at $22 per share ($.39 per share when adjusting for stock splits), generating over $100 million, which was more capital than any IPO since Ford Motor Company in 1956. By the end of the day, 300 millionaires were created, from a stock price of $29 per share and a market cap of $1.778 billion. 1980–1990: Success with Macintosh A critical moment in the company's history came in December 1979 when Jobs and several Apple employees, including human–computer interface expert Jef Raskin, visited Xerox PARC to see a demonstration of the Xerox Alto, a computer using a graphical user interface. Xerox granted Apple engineers three days of access to the PARC facilities in return for the option to buy 100,000 shares (5.6 million split-adjusted shares) of Apple at the pre-IPO price of $10 a share. After the demonstration, Jobs was immediately convinced that all future computers would use a graphical user interface, and development of a GUI began for the Apple Lisa, named after Jobs's daughter. The Lisa division would be plagued by infighting, and in 1982 Jobs was pushed off the project. The Lisa launched in 1983 and became the first personal computer sold to the public with a GUI, but was a commercial failure due to its high price and limited software titles. Jobs, angered by being pushed off the Lisa team, took over the company's Macintosh division. Wozniak and Raskin had envisioned the Macintosh as a low-cost computer with a text-based interface like the Apple II, but a plane crash in 1981 forced Wozniak to step back from the project. Jobs quickly redefined the Macintosh as a graphical system that would be cheaper than the Lisa, undercutting his former division. Jobs was also hostile to the Apple II division, which, at the time, generated most of the company's revenue.
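The split-adjusted figure quoted above can be checked against Apple's well-documented split history (drawn from the public record, not from this article): 2:1 splits in 1987, 2000, and 2005, a 7:1 split in 2014, and later a 4:1 split in 2020.

```python
# Check the ~$0.39 split-adjusted IPO price against Apple's stock splits.
ipo_price = 22.00
splits_through_2014 = 2 * 2 * 2 * 7          # cumulative factor of 56
adjusted = ipo_price / splits_through_2014   # matches the figure in the text
print(f"${adjusted:.2f}")                    # prints "$0.39"
```

Including the 2020 4-for-1 split, the cumulative factor becomes 224 and the adjusted price falls to roughly $0.10, which suggests the $.39 figure was computed before August 2020.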
In 1984, Apple launched the Macintosh, the first personal computer to be sold without a programming language. Its debut was signified by "1984", a $1.5 million television advertisement directed by Ridley Scott that aired during the third quarter of Super Bowl XVIII on January 22, 1984. This is now hailed as a watershed event for Apple's success and was called a "masterpiece" by CNN and one of the greatest TV advertisements of all time by TV Guide. The advertisement created great interest in the original Macintosh, and sales were initially good, but began to taper off dramatically after the first three months as reviews started to come in. Jobs had made the decision to equip the original Macintosh with 128 kilobytes of RAM in order to reach a lower price point, a choice which limited its speed and the software that could be used. The Macintosh would eventually ship at a price panned by critics in light of its slow performance. In early 1985, this sales slump triggered a power struggle between Steve Jobs and CEO John Sculley, who had been hired away from Pepsi two years earlier by Jobs using the famous line, "Do you want to sell sugar water for the rest of your life or come with me and change the world?" Sculley decided to remove Jobs as the head of the Macintosh division, with unanimous support from the Apple board of directors. The board of directors instructed Sculley to contain Jobs and his ability to launch expensive forays into untested products. Rather than submit to Sculley's direction, Jobs attempted to oust him from his leadership role at Apple. Informed by Jean-Louis Gassée, Sculley found out that Jobs had been attempting to organize a boardroom coup and called an emergency meeting at which Apple's executive staff sided with Sculley and stripped Jobs of all operational duties. Jobs resigned from Apple in September 1985 and took a number of Apple employees with him to found NeXT.
Wozniak had also quit his active employment at Apple earlier in 1985 to pursue other ventures, expressing his frustration with Apple's treatment of the Apple II division and stating that the company had "been going in the wrong direction for the last five years". Despite Wozniak's grievances, he officially remained employed by Apple, and to this day continues to work for the company as a representative, receiving a stipend estimated to be $120,000 per year for this role. Both Jobs and Wozniak remained Apple shareholders after their departures. After the departures of Jobs and Wozniak, Sculley worked to improve the Macintosh in 1985 by quadrupling the RAM and introducing the LaserWriter, the first reasonably priced PostScript laser printer. PageMaker, an early desktop publishing application taking advantage of the PostScript language, was also released by Aldus Corporation in July 1985. It has been suggested that the combination of Macintosh, LaserWriter and PageMaker was responsible for the creation of the desktop publishing market. This dominant position in the desktop publishing market allowed the company to focus on higher price points, the so-called "high-right policy", named for the position on a chart of price vs. profits. Newer models selling at higher price points offered higher profit margins, and appeared to have no effect on total sales as power users snapped up every increase in speed. Although some worried about pricing themselves out of the market, the high-right policy was in full force by the mid-1980s, notably due to Jean-Louis Gassée's mantra of "fifty-five or die", referring to the 55% profit margins of the Macintosh II. This policy began to backfire in the last years of the decade as desktop publishing programs appeared on PC clones that offered some or much of the same functionality of the Macintosh, but at far lower price points.
The company lost its dominant position in the desktop publishing market and estranged many of its original consumer customer base who could no longer afford their high-priced products. The Christmas season of 1989 was the first in the company's history to have declining sales, which led to a 20% drop in Apple's stock price. During this period, the relationship between Sculley and Gassée deteriorated, leading Sculley to effectively demote Gassée in January 1990 by appointing Michael Spindler as the chief operating officer. Gassée left the company later that year. 1990–1997: Decline and restructuring The company pivoted strategy and in October 1990 introduced three lower-cost models, the Macintosh Classic, the Macintosh LC, and the Macintosh IIsi, all of which saw significant sales due to pent-up demand. In 1991, Apple introduced the hugely successful PowerBook with a design that set the current shape for almost all modern laptops. The same year, Apple introduced System 7, a major upgrade to the Macintosh operating system, adding color to the interface and introducing new networking capabilities. The success of the lower-cost Macs and PowerBook brought increasing revenue. For some time, Apple was doing incredibly well, introducing fresh new products and generating increasing profits in the process. The magazine MacAddict named the period between 1989 and 1991 as the "first golden age" of the Macintosh. The success of Apple's lower-cost consumer models, especially the LC, also led to the cannibalization of their higher-priced machines. To address this, management introduced several new brands, selling largely identical machines at different price points, aimed at different markets: the high-end Quadra models, the mid-range Centris line, and the consumer-marketed Performa series. This led to significant market confusion, as customers did not understand the difference between models. 
The early 1990s also saw the discontinuation of the Apple II series, which was expensive to produce and which the company felt was still taking sales away from lower-cost Macintosh models. After the launch of the LC, Apple began encouraging developers to create applications for Macintosh rather than Apple II, and authorized salespersons to direct consumers towards Macintosh and away from Apple II. The Apple IIe was discontinued in 1993. Throughout this period, Microsoft continued to gain market share with its Windows graphical user interface, which it sold to manufacturers of generally less expensive PC clones. While the Macintosh was more expensive, it offered a more tightly integrated user experience, but the company struggled to make the case to consumers. Apple also experimented with a number of other unsuccessful consumer-targeted products during the 1990s, including digital cameras, portable CD audio players, speakers, video game consoles, the eWorld online service, and TV appliances. Most notably, enormous resources were invested in the problem-plagued Newton tablet division, based on John Sculley's unrealistic market forecasts. Microsoft, meanwhile, kept gaining ground with inexpensive commodity personal computers, while Apple was delivering a richly engineered but expensive experience. Apple relied on high profit margins and never developed a clear response; instead, they sued Microsoft for using a GUI similar to the Apple Lisa in Apple Computer, Inc. v. Microsoft Corp. The lawsuit dragged on for years before it was finally dismissed. The major product flops and the rapid loss of market share to Windows sullied Apple's reputation, and in 1993 Sculley was replaced as CEO by Michael Spindler. With Spindler at the helm, Apple, IBM, and Motorola formed the AIM alliance in 1994 with the goal of creating a new computing platform (the PowerPC Reference Platform; PReP), which would use IBM and Motorola hardware coupled with Apple software.
The AIM alliance hoped that PReP's performance and Apple's software would leave the PC far behind and thus counter the dominance of Windows. The same year, Apple introduced the Power Macintosh, the first of many Apple computers to use Motorola's PowerPC processor. In the wake of the alliance, Apple opened up to the idea of allowing Motorola and other companies to build Macintosh clones. Over the next two years, 75 distinct Macintosh clone models were introduced. However, by 1996 Apple executives were worried that the clones were cannibalizing sales of their own high-end computers, where profit margins were highest. In 1996, Spindler was replaced by Gil Amelio as CEO. Hired for his reputation as a corporate rehabilitator, Amelio made deep changes, including extensive layoffs and cost-cutting. This period was also marked by numerous failed attempts to modernize the Macintosh operating system (Mac OS). The original Macintosh operating system (System 1) was not built for multitasking (running several applications at once). The company attempted to correct this by introducing cooperative multitasking in System 5, but the company still felt it needed a more modern approach. This led to the Pink project in 1988, A/UX that same year, Copland in 1994, and the attempted purchase of BeOS in 1996. Talks with Be stalled when its CEO, former Apple executive Jean-Louis Gassée, demanded $300 million instead of the $125 million Apple wanted to pay. Only weeks away from bankruptcy, Apple's board decided NeXTSTEP was a better choice for its next operating system and purchased NeXT in late 1996 for $429 million, bringing back Apple co-founder Steve Jobs. 1997–2007: Return to profitability The NeXT acquisition was finalized on February 9, 1997, and the board brought Jobs back to Apple as an advisor. On July 9, 1997, Jobs staged a boardroom coup that resulted in Amelio's resignation after overseeing a three-year record-low stock price and crippling financial losses.
The board named Jobs as interim CEO and he immediately began a review of the company's products. Jobs would order 70% of the company's products to be cancelled, resulting in the loss of 3,000 jobs, and taking Apple back to the core of its computer offerings. The next month, in August 1997, Steve Jobs convinced Microsoft to make a $150 million investment in Apple and a commitment to continue developing software for the Mac. The investment was seen as an "antitrust insurance policy" for Microsoft, which had recently settled with the Department of Justice over anti-competitive practices. Jobs also ended the Mac clone deals and in September 1997, purchased the largest clone maker, Power Computing. On November 10, 1997, Apple introduced the Apple Store website, which was tied to a new build-to-order manufacturing strategy that had been successfully used by PC manufacturer Dell. The moves paid off for Jobs: at the end of his first year as CEO, the company turned a $309 million profit. On May 6, 1998, Apple introduced a new all-in-one computer reminiscent of the original Macintosh: the iMac. The iMac was a huge success for Apple, selling 800,000 units in its first five months, and ushered in major shifts in the industry by abandoning legacy technologies like the 3½-inch diskette, being an early adopter of the USB connector, and coming pre-installed with internet connectivity (the "i" in iMac) via Ethernet and a dial-up modem. The device also had a striking teardrop shape and translucent materials, designed by Jonathan Ive, who, although hired by Amelio, would go on to work collaboratively with Jobs for the next decade to chart a new course for the design of Apple's products. A little more than a year later on July 21, 1999, Apple introduced the iBook, a laptop for consumers.
It was the culmination of a strategy established by Jobs to produce only four products: refined versions of the Power Macintosh G3 desktop and PowerBook G3 laptop for professionals, along with the iMac desktop and iBook laptop for consumers. Jobs felt the small product line allowed for a greater focus on quality and innovation. At around the same time, Apple also completed numerous acquisitions to create a portfolio of digital media production software for both professionals and consumers. Apple acquired Macromedia's Key Grip digital video editing software project, which was renamed Final Cut Pro when it was launched on the retail market in April 1999. The development of Key Grip also led to Apple's release of the consumer video-editing product iMovie in October 1999. Next, Apple acquired the German company Astarte in April 2000; Astarte had developed the DVD authoring software DVDirector, which Apple would sell as the professional-oriented DVD Studio Pro software product, using the same technology to create iDVD for the consumer market. In 2000, Apple purchased the SoundJam MP audio player software from Casady & Greene. Apple renamed the program iTunes, while simplifying the user interface and adding the ability to burn CDs. 2001 would be a pivotal year for Apple, with the company making three announcements that would change its course. The first announcement came on March 24, 2001, that Apple was nearly ready to release a new modern operating system, Mac OS X. The announcement came after numerous failed attempts in the early 1990s, and several years of development. Mac OS X was based on NeXTSTEP, OPENSTEP, and BSD Unix, with Apple aiming to combine the stability, reliability, and security of Unix with the ease of use afforded by an overhauled user interface, heavily influenced by NeXTSTEP.
To aid users in migrating from Mac OS 9, the new operating system allowed the use of OS 9 applications within Mac OS X via the Classic Environment. In May 2001 the company opened its first two Apple Store retail locations in Virginia and California, offering an improved presentation of the company's products. At the time, many speculated that the stores would fail, but they went on to become highly successful, and the first of more than 500 stores around the world. On October 23, 2001, Apple debuted the iPod portable digital audio player. The product, which was first sold on November 10, 2001, was phenomenally successful with over 100 million units sold within six years. In 2003, Apple's iTunes Store was introduced. The service offered music downloads for $0.99 a song and integration with the iPod. The iTunes Store quickly became the market leader in online music services, with over five billion downloads by June 19, 2008. Two years later, the iTunes Store was the world's largest music retailer. In 2002, Apple purchased Nothing Real for their advanced digital compositing application Shake, as well as Emagic for the music productivity application Logic. The purchase of Emagic made Apple the first computer manufacturer to own a music software company. The acquisition was followed by the development of Apple's consumer-level GarageBand application. The release of iPhoto in the same year completed the iLife suite. At the Worldwide Developers Conference keynote address on June 6, 2005, Jobs announced that Apple would move away from PowerPC processors, and the Mac would transition to Intel processors in 2006. On January 10, 2006, the new MacBook Pro and iMac became the first Apple computers to use Intel's Core Duo CPU. By August 7, 2006, Apple made the transition to Intel chips for the entire Mac product line—over one year sooner than announced. 
The Power Mac, iBook, and PowerBook brands were retired during the transition; the Mac Pro, MacBook, and MacBook Pro became their respective successors. On April 29, 2009, The Wall Street Journal reported that Apple was building its own team of engineers to design microchips. Apple also introduced Boot Camp in 2006 to help users install Windows XP or Windows Vista on their Intel Macs alongside Mac OS X. Apple's success during this period was evident in its stock price. Between early 2003 and 2006, the price of Apple's stock increased more than tenfold, from around $6 per share (split-adjusted) to over $80. When Apple surpassed Dell's market cap in January 2006, Jobs sent an email to Apple employees saying Dell's CEO Michael Dell should eat his words. Nine years prior, Dell had said that if he ran Apple he would "shut it down and give the money back to the shareholders". 2007–2011: Success with mobile devices During his keynote speech at the Macworld Expo on January 9, 2007, Jobs announced that Apple Computer, Inc. would thereafter be known as "Apple Inc.", because the company had shifted its emphasis from computers to consumer electronics. This event also saw the announcement of the iPhone and the Apple TV. The company sold 270,000 iPhone units during the first 30 hours of sales, and the device was called "a game changer for the industry". In an article posted on Apple's website on February 6, 2007, Jobs wrote that Apple would be willing to sell music on the iTunes Store without digital rights management (DRM), thereby allowing tracks to be played on third-party players, if record labels would agree to drop the technology. On April 2, 2007, Apple and EMI jointly announced the removal of DRM technology from EMI's catalog in the iTunes Store, effective in May 2007. Other record labels eventually followed suit, and Apple published a press release in January 2009 announcing that all songs on the iTunes Store were available without FairPlay DRM.
In July 2008, Apple launched the App Store to sell third-party applications for the iPhone and iPod Touch. Within a month, the store sold 60 million applications and registered an average daily revenue of $1 million, with Jobs speculating in August 2008 that the App Store could become a billion-dollar business for Apple. By October 2008, Apple was the third-largest mobile handset supplier in the world due to the popularity of the iPhone. On January 14, 2009, Jobs announced in an internal memo that he would be taking a six-month medical leave of absence from Apple until the end of June 2009 and would spend the time focusing on his health. In the email, Jobs stated that "the curiosity over my personal health continues to be a distraction not only for me and my family, but everyone else at Apple as well", and explained that the break would allow the company "to focus on delivering extraordinary products". Though Jobs was absent, Apple recorded its best non-holiday quarter (Q1 FY 2009) during the recession, with revenue of $8.16 billion and profit of $1.21 billion. After years of speculation and multiple rumored "leaks", Apple unveiled a large-screen, tablet-like media device known as the iPad on January 27, 2010. The iPad ran the same touch-based operating system as the iPhone, and all iPhone apps were compatible with the iPad. This gave the iPad a large app catalog on launch, despite the very little development time available before its release. Later that year on April 3, 2010, the iPad was launched in the US. It sold more than 300,000 units on its first day, and 500,000 by the end of the first week. In May of the same year, Apple's market cap exceeded that of competitor Microsoft for the first time since 1989. In June 2010, Apple released the iPhone 4, which introduced video calling using FaceTime, multitasking, and a new uninsulated stainless steel design that acted as the phone's antenna.
Later that year, Apple again refreshed its iPod line of MP3 players by introducing a multi-touch iPod Nano, an iPod Touch with FaceTime, and an iPod Shuffle that brought back the clickwheel buttons of earlier generations. It also introduced the smaller, cheaper second generation Apple TV, which allowed renting of movies and shows. On January 17, 2011, Jobs announced in an internal Apple memo that he would take another medical leave of absence for an indefinite period to allow him to focus on his health. Chief Operating Officer Tim Cook assumed Jobs's day-to-day operations at Apple, although Jobs would still remain "involved in major strategic decisions". Apple became the most valuable consumer-facing brand in the world. In June 2011, Jobs made a surprise appearance on stage and unveiled iCloud, an online storage and syncing service for music, photos, files, and software which replaced MobileMe, Apple's previous attempt at content syncing. This would be the last product launch Jobs would attend before his death. On August 24, 2011, Jobs resigned his position as CEO of Apple. He was replaced by Cook, and Jobs became Apple's chairman. Apple did not have a chairman at the time and instead had two co-lead directors, Andrea Jung and Arthur D. Levinson, who continued with those titles until Levinson replaced Jobs as chairman of the board in November after Jobs's death. 2011–present: Post–Jobs era, Tim Cook's leadership On October 5, 2011, Steve Jobs died, marking the end of an era for Apple. The first major product announcement by Apple following Jobs's passing occurred on January 19, 2012, when Apple's Phil Schiller introduced iBooks textbooks for iOS and iBooks Author for Mac OS X in New York City. Jobs had stated in his biography that he wanted to reinvent the textbook industry and education.
From 2011 to 2012, Apple released the iPhone 4S and iPhone 5, which featured improved cameras, an intelligent software assistant named Siri, and cloud-synced data with iCloud; the third and fourth generation iPads, which featured Retina displays; and the iPad Mini, which featured a 7.9-inch screen in contrast to the iPad's 9.7-inch screen. These launches were successful, with the iPhone 5 (released September 21, 2012) becoming Apple's biggest iPhone launch with over two million pre-orders and sales of three million iPads in three days following the launch of the iPad Mini and fourth generation iPad (released November 3, 2012). Apple also released a third-generation 13-inch MacBook Pro with a Retina display and new iMac and Mac Mini computers. On August 20, 2012, Apple's rising stock price increased the company's market capitalization to a then-record $624 billion. This beat the non-inflation-adjusted record for market capitalization previously set by Microsoft in 1999. On August 24, 2012, a US jury ruled that Samsung should pay Apple $1.05 billion (£665m) in damages in an intellectual property lawsuit. Samsung appealed the damages award, which the court reduced by $450 million; the court further granted Samsung's request for a new trial. On November 10, 2012, Apple confirmed a global settlement that dismissed all existing lawsuits between Apple and HTC
since 2005. The New York Times in 1985 stated that "Apple above all else is a marketing company". John Sculley agreed, telling The Guardian newspaper in 1997 that "People talk about technology, but Apple was a marketing company. It was the marketing company of the decade." Research in 2002 by NetRatings indicated that the average Apple consumer was usually more affluent and better educated than other PC company consumers. The research indicated that this correlation could stem from the fact that on average Apple Inc. products were more expensive than other PC products. There are an estimated 1.65 billion Apple products in active use. Headquarters and major facilities Apple Inc.'s world corporate headquarters are located in Cupertino, in the middle of California's Silicon Valley, at Apple Park, a massive circular groundscraper building. The building opened in April 2017 and houses more than 12,000 employees. Apple co-founder Steve Jobs wanted Apple Park to look less like a business park and more like a nature refuge, and personally appeared before the Cupertino City Council in June 2011 to make the proposal, in his final public appearance before his death. Apple also operates from the Apple Campus (also known by its address, 1 Infinite Loop), a grouping of six buildings in Cupertino located to the west of Apple Park. The Apple Campus was the company's headquarters from its opening in 1993, until the opening of Apple Park in 2017. The buildings, located at 1–6 Infinite Loop, are arranged in a circular pattern around a central green space, in a design that has been compared to that of a university.
In addition to Apple Park and the Apple Campus, Apple occupies an additional thirty office buildings scattered throughout the city of Cupertino, including three buildings that also served as prior headquarters: "Stephens Creek Three" (1977–1978), "Bandley One" (1978–1982), and "Mariani One" (1982–1993). In total, Apple occupies almost 40% of the available office space in the city. Apple's headquarters for Europe, the Middle East and Africa (EMEA) are located in Cork in the south of Ireland, called the Hollyhill campus. The facility, which opened in 1980, houses 5,500 people and was Apple's first location outside of the United States. Apple's international sales and distribution arms operate out of the campus in Cork. Apple has two campuses near Austin, Texas: a campus opened in 2014 that houses 500 engineers who work on Apple silicon, and a campus opened in 2021 where 6,000 people work in technical support, supply chain management, online store curation, and Apple Maps data management. The company also has several other locations in Boulder, Colorado; Culver City, California; Herzliya, Israel; London; New York; Pittsburgh; San Diego; and Seattle that each employ hundreds of people. Stores The first Apple Stores were opened as two locations in May 2001 by then-CEO Steve Jobs, after years of unsuccessful store-within-a-store concepts. Seeing a need for improved retail presentation of the company's products, he began an effort in 1997 to revamp the retail program to build a better relationship with consumers, and hired Ron Johnson in 2000. Jobs relaunched Apple's online store in 1997, and opened the first two physical stores in 2001. The media initially speculated that Apple would fail, but its stores were highly successful, bypassing the sales numbers of competing nearby stores and within three years reached US$1 billion in annual sales, becoming the fastest retailer in history to do so.
Over the years, Apple has expanded the number of retail locations and its geographical coverage, with 499 stores across 22 countries worldwide. Strong product sales have placed Apple among the top-tier retail stores, with sales over $16 billion globally in 2011. In May 2016, Angela Ahrendts, Apple's then Senior Vice President of Retail, unveiled a significantly redesigned Apple Store in Union Square, San Francisco, featuring large glass doors for the entry, open spaces, and re-branded rooms. In addition to purchasing products, consumers can get advice and help from "Creative Pros" – individuals with specialized knowledge of creative arts; get product support in a tree-lined Genius Grove; and attend sessions, conferences and community events, with Ahrendts commenting that the goal is to make Apple Stores into "town squares", a place where people naturally meet up and spend time. The new design will be applied to all Apple Stores worldwide, a process that has seen stores temporarily relocate or close. Many Apple Stores are located inside shopping malls, but Apple has built several stand-alone "flagship" stores in high-profile locations. It has been granted design patents and received architectural awards for its stores' designs and construction, specifically for its use of glass staircases and cubes. The success of Apple Stores has had significant influence over other consumer electronics retailers, who have lost traffic, control and profits due to a perceived higher quality of service and products at Apple Stores. Apple's notable brand loyalty among consumers causes long lines of hundreds of people at new Apple Store openings or product releases. Due to the popularity of the brand, Apple receives a large number of job applications, many of which come from young workers. Although Apple Store employees receive above-average pay, are offered money toward education and health care, and receive product discounts, there are limited or no paths of career advancement.
A May 2016 report with an anonymous retail employee highlighted a hostile work environment with harassment from customers, intense internal criticism, and a lack of significant bonuses for securing major business contracts. Due to the COVID-19 pandemic, Apple closed its stores outside China until March 27, 2020. Despite the stores being closed, hourly workers continue to be paid. Workers across the company are allowed to work remotely if their jobs permit it. On March 24, 2020, in a memo, Senior Vice President of People and Retail Deirdre O’Brien announced that some of its retail stores are expected to reopen at the beginning of April. Corporate affairs Corporate culture Apple is one of several highly successful companies founded in the 1970s that bucked the traditional notions of corporate culture. Jobs often walked around the office barefoot even after Apple became a Fortune 500 company. By the time of the "1984" television advertisement, Apple's informal culture had become a key trait that differentiated it from its competitors. According to a 2011 report in Fortune, this has resulted in a corporate culture more akin to a startup rather than a multinational corporation. In a 2017 interview, Wozniak credited watching Star Trek and attending Star Trek conventions while in his youth as a source of inspiration for his co-founding Apple. As the company has grown and been led by a series of differently opinionated chief executives, it has arguably lost some of its original character. Nonetheless, it has maintained a reputation for fostering individuality and excellence that reliably attracts talented workers, particularly after Jobs returned to the company. Numerous Apple employees have stated that projects without Jobs's involvement often took longer than projects with it. 
To recognize the best of its employees, Apple created the Apple Fellows program, which awards individuals who make extraordinary technical or leadership contributions to personal computing while at the company. The Apple Fellowship has so far been awarded to individuals including Bill Atkinson, Steve Capps, Rod Holt, Alan Kay, Guy Kawasaki, Al Alcorn, Don Norman, Rich Page, Steve Wozniak, and Phil Schiller. At Apple, employees are intended to be specialists who are not exposed to functions outside their area of expertise. Jobs saw this as a means of having "best-in-class" employees in every role. For instance, Ron Johnson—Senior Vice President of Retail Operations until November 1, 2011—was responsible for site selection, in-store service, and store layout, yet had no control of the inventory in his stores. Inventory was instead managed by Tim Cook, who had a background in supply-chain management. Apple is known for strictly enforcing accountability. Each project has a "directly responsible individual" or "DRI" in Apple jargon. As an example, when iOS senior vice president Scott Forstall refused to sign Apple's official apology for numerous errors in the redesigned Maps app, he was forced to resign. Unlike other major U.S. companies, Apple provides a relatively simple compensation policy for executives that does not include perks enjoyed by other CEOs like country club fees or private use of company aircraft. The company typically grants stock options to executives every other year. In 2015, Apple had 110,000 full-time employees. This increased to 116,000 full-time employees the next year, a notable hiring decrease, largely due to its first revenue decline. Apple does not specify how many of its employees work in retail, though its 2014 SEC filing put the number at approximately half of its employee base. In September 2017, Apple announced that it had over 123,000 full-time employees.
Apple has a strong culture of corporate secrecy, and has an anti-leak Global Security team that recruits from the National Security Agency, the Federal Bureau of Investigation, and the United States Secret Service. In December 2017, Glassdoor said Apple was the 48th best place to work, having originally entered at rank 19 in 2009, peaking at rank 10 in 2012, and falling down the ranks in subsequent years. Lack of innovation An editorial article in The Verge in September 2016 by technology journalist Thomas Ricker explored the public perception of a lack of innovation at Apple in recent years, specifically stating that Samsung has "matched and even surpassed Apple in terms of smartphone industrial design" and citing the belief that Apple is incapable of producing another breakthrough moment in technology with its products. He goes on to write that the criticism focuses on individual pieces of hardware rather than the ecosystem as a whole, stating "Yes, iteration is boring. But it's also how Apple does business. [...] It enters a new market and then refines and refines and continues refining until it yields a success". He acknowledges that people are wishing for the "excitement of revolution", but argues that people want "the comfort that comes with harmony".
Furthermore, he writes that "a device is only the starting point of an experience that will ultimately be ruled by the ecosystem in which it was spawned", referring to how decent hardware products can still fail without a proper ecosystem (specifically mentioning that Walkman did not have an ecosystem to keep users from leaving once something better came along), but how Apple devices in different hardware segments are able to communicate and cooperate through the iCloud cloud service with features including Universal Clipboard (in which text copied on one device can be pasted on a different device) as well as inter-connected device functionality including Auto Unlock (in which an Apple Watch can unlock a Mac in close proximity). He argues that Apple's ecosystem is its greatest innovation. The Wall Street Journal reported in June 2017 that Apple's increased reliance on Siri, its virtual personal assistant, has raised questions about how much Apple can actually accomplish in terms of functionality. Whereas Google and Amazon make use of big data and analyze customer information to personalize results, Apple has a strong pro-privacy stance, intentionally not retaining user data. "Siri is a textbook of leading on something in tech and then losing an edge despite having all the money and the talent and sitting in Silicon Valley", Holger Mueller, a technology analyst, told the Journal. The report further claims that development on Siri has suffered due to team members and executives leaving the company for competitors, a lack of ambitious goals, and shifting strategies. Though Apple switched Siri's functions to machine learning and algorithms, dramatically cutting its error rate, the company reportedly still failed to anticipate the popularity of Amazon's Echo, which features the Alexa personal assistant. Improvements to Siri stalled, executives clashed, and there were disagreements over the restrictions imposed on third-party app interactions.
While Apple acquired an England-based startup specializing in conversational assistants, Google's Assistant had already become capable of helping users select Wi-Fi networks by voice, and Siri was lagging in functionality. In December 2017, two articles from The Verge and ZDNet detailed what had been a particularly devastating week for Apple's macOS and iOS software platforms. The former had experienced a severe security vulnerability, in which Macs running the then-latest macOS High Sierra software were vulnerable to a bug that let anyone gain administrator privileges by entering "root" as the username in system prompts, leaving the password field empty and twice clicking "unlock", gaining full access. The bug was publicly disclosed on Twitter, rather than through proper bug bounty programs. Apple released a security fix within a day and issued an apology, stating that "regrettably we stumbled" in regard to the security of the latest updates. After installing the security patch, however, file sharing was broken for users, with Apple releasing a support document with instructions to separately fix that issue. Though Apple publicly stated the promise of "auditing our development processes to help prevent this from happening again", users who installed the security update while running the older 10.13.0 version of the High Sierra operating system rather than the then-newest 10.13.1 release found that the "root" security vulnerability was re-introduced and persisted even after fully updating their systems. On iOS, a date bug caused iOS devices that received local app notifications at 12:15am on December 2, 2017, to repeatedly restart. Users were recommended to turn off notifications for their apps. Apple quickly released an update overnight, Cupertino time, outside of its usual software release window, with one of the headlining features of the update needing to be delayed for a few days.
The combined problems of the week on both macOS and iOS caused The Verge's Tom Warren to call it a "nightmare" for Apple's software engineers and to describe it as a significant lapse in Apple's ability to protect its more than 1 billion devices. ZDNet's Adrian Kingsley-Hughes wrote that "it's hard to not come away from the last week with the feeling that Apple is slipping". Kingsley-Hughes also concluded his piece by referencing an earlier article, in which he wrote that "As much as I don't want to bring up the tired old 'Apple wouldn't have done this under Steve Jobs's watch' trope, a lot of what's happening at Apple lately is different from what they came to expect under Jobs. Not to say that things didn't go wrong under his watch, but product announcements and launches felt a lot tighter for sure, as did the overall quality of what Apple was releasing." He did, however, also acknowledge that such failures "may indeed have happened" with Jobs in charge, though returning to the previous praise for his demands of quality, stating "it's almost guaranteed that given his personality that heads would have rolled, which limits future failures". Manufacturing and assembling The company's manufacturing, procurement, and logistics enable it to execute massive product launches without having to maintain large, profit-sapping inventories. In 2011, Apple's profit margins were 40 percent, compared with between 10 and 20 percent for most other hardware companies. Cook's catchphrase to describe his focus on the company's operational arm is: "Nobody wants to buy sour milk". In May 2017, the company announced a $1 billion funding project for "advanced manufacturing" in the United States, and subsequently invested $200 million in Corning Inc., a manufacturer of toughened Gorilla Glass technology used in its iPhone devices.
The following December, Apple's chief operating officer, Jeff Williams, told CNBC that the "$1 billion" amount was "absolutely not" the final limit on its spending, elaborating that "We're not thinking in terms of a fund limit. ... We're thinking about, where are the opportunities across the U.S. to help nurture companies that are making the advanced technology — and the advanced manufacturing that goes with that — that quite frankly is essential to our innovation". As of 2021, Apple uses components from 43 different countries. The majority of assembly is done by Taiwanese original design manufacturer firms Foxconn, Pegatron, Wistron and Compal Electronics, mostly in factories located in China, but also in Brazil and India. During the Mac's early history, Apple generally refused to adopt prevailing industry standards for hardware, instead creating its own. This trend was largely reversed in the late 1990s, beginning with Apple's adoption of the PCI bus in the 7500/8500/9500 Power Macs. Apple has since joined industry standards groups to influence the future direction of technology standards such as USB, AGP, HyperTransport, Wi-Fi, NVMe, PCIe and others in its products. FireWire is an Apple-originated standard that was widely adopted across the industry after it was standardized as IEEE 1394, and is a legally mandated port in all cable TV boxes in the United States. Apple has gradually expanded its efforts in getting its products into the Indian market. In July 2012, during a conference call with investors, CEO Tim Cook said that he "[loves] India", but that Apple saw larger opportunities outside the region. India's requirement that 30% of products sold be manufactured in the country was described by Cook as something that "really adds cost to getting product to market". In May 2016, Apple opened an iOS app development center in Bangalore and a maps development office for 4,000 staff in Hyderabad.
In March 2017, The Wall Street Journal reported that Apple would begin manufacturing iPhone models in India "over the next two months", and in May, the Journal wrote that an Apple manufacturer had begun production of the iPhone SE in the country, while Apple told CNBC that the manufacturing was for a "small number" of units. In April 2019, Apple initiated manufacturing of the iPhone 7 at its Bengaluru facility, keeping in mind demand from local customers even as it sought more incentives from the government of India. At the beginning of 2020, Tim Cook announced that Apple planned to open its first physical outlet in India in 2021, with an online store to be launched by the end of the year. Labor practices The company advertised its products as being made in America until the late 1990s; however, as a result of outsourcing initiatives in the 2000s, almost all of its manufacturing is now handled abroad. According to a report by The New York Times, Apple insiders "believe the vast scale of overseas factories, as well as the flexibility, diligence and industrial skills of foreign workers, have so outpaced their American counterparts that "Made in the USA" is no longer a viable option for most Apple products". In 2006, one complex of factories in Shenzhen, China that assembled the iPod and other items had over 200,000 workers living and working within it. Employees regularly worked more than 60 hours per week and made around $100 per month. A little over half of the workers' earnings was required to pay for rent and food from the company. Apple immediately launched an investigation after the 2006 media report, and worked with its manufacturers to ensure acceptable working conditions. In 2007, Apple started yearly audits of all its suppliers regarding workers' rights, slowly raising standards and pruning suppliers that did not comply. Yearly progress reports have been published since 2008.
In 2011, Apple admitted that its suppliers' child labor practices in China had worsened. The Foxconn suicides occurred between January and November 2010, when 18 Foxconn (Chinese: 富士康) employees attempted suicide, resulting in 14 deaths—the company was the world's largest contract electronics manufacturer, for clients including Apple, at the time. The suicides drew media attention, and employment practices at Foxconn were investigated by Apple. Apple issued a public statement about the suicides through company spokesperson Steven Dowling. The statement was released after the results from the company's probe into its suppliers' labor practices were published in early 2010. Foxconn was not specifically named in the report, but Apple identified a series of serious violations of labor laws and of Apple's own rules, and found that some child labor existed in a number of factories. Apple committed to the implementation of changes following the suicides. Also in 2010, workers in China planned to sue iPhone contractors over poisoning by a cleaner used to clean LCD screens. One worker claimed that he and his coworkers had not been informed of possible occupational illnesses. After a high suicide rate in a Foxconn facility in China making iPads and iPhones, albeit a lower rate than that of China as a whole, workers were forced to sign a legally binding document guaranteeing that they would not kill themselves. Workers in factories producing Apple products have also been exposed to hexane, a neurotoxin that is a cheaper alternative to alcohol for cleaning the products. A 2014 BBC investigation found excessive hours and other problems persisted, despite Apple's promise to reform factory practice after the 2010 Foxconn suicides. The Pegatron factory was once again the subject of review, as reporters gained access to the working conditions inside through recruitment as employees.
While the BBC maintained that the experiences of its reporters showed that labor violations were continuing since 2010, Apple publicly disagreed with the BBC and stated: "We are aware of no other company doing as much as Apple to ensure fair and safe working conditions". In December 2014, the Institute for Global Labour and Human Rights published a report which documented inhumane conditions for the 15,000 workers at a Zhen Ding Technology factory in Shenzhen, China, which serves as a major supplier of circuit boards for Apple's iPhone and iPad. According to the report, workers are pressured into 65-hour work weeks which leave them so exhausted that they often sleep during lunch breaks. They are also made to reside in "primitive, dark and filthy dorms" where they sleep "on plywood, with six to ten workers in each crowded room." Omnipresent security personnel also routinely harass and beat the workers. In 2019, there were reports stating that some of Foxconn's managers had used rejected parts to build iPhones and that Apple was investigating the issue. Environmental practices and initiatives Apple Energy Apple Energy, LLC is a wholly owned subsidiary of Apple Inc. that sells solar energy. Apple's solar farms in California and Nevada have been declared to provide 217.9 megawatts of solar generation capacity. In addition to the company's solar energy production, Apple has received regulatory approval to construct a landfill gas energy plant in North Carolina. Apple will use the methane emissions to generate electricity. Apple's North Carolina data center is already powered entirely with energy from renewable sources. Energy and resources Following a Greenpeace protest, Apple released a statement on April 17, 2012, committing to ending its use of coal and shifting to 100% renewable clean energy. By 2013, Apple was using 100% renewable energy to power its data centers. Overall, 75% of the company's power came from clean renewable sources.
In 2010, Climate Counts, a nonprofit organization dedicated to directing consumers toward the greenest companies, gave Apple a score of 52 points out of a possible 100, which put Apple in its top category, "Striding". This was an increase from May 2008, when Climate Counts gave Apple only 11 points out of 100, which placed the company last among electronics companies; at that time Climate Counts also labeled Apple with a "stuck icon", adding that Apple was "a choice to avoid for the climate-conscious consumer". In May 2015, Greenpeace evaluated the state of the Green Internet and commended Apple on its environmental practices, saying, "Apple's commitment to renewable energy has helped set a new bar for the industry, illustrating in very concrete terms that a 100% renewable Internet is within its reach, and providing several models of intervention for other companies that want to build a sustainable Internet." Apple states that 100% of its U.S. operations run on renewable energy, 100% of Apple's data centers run on renewable energy and 93% of Apple's global operations run on renewable energy. However, the facilities are connected to the local grid, which usually contains a mix of fossil and renewable sources, so Apple carbon offsets its electricity use. The Electronic Product Environmental Assessment Tool (EPEAT) allows consumers to see the effect a product has on the environment. Each product receives a Gold, Silver, or Bronze rank depending on its efficiency and sustainability. Every Apple tablet, notebook, desktop computer, and display that EPEAT ranks achieves a Gold rating, the highest possible. Although Apple's data centers recycle water 35 times, the increased activity in retail, corporate and data centers also increased the company's water use in 2015. During an event on March 21, 2016, Apple provided a status update on its environmental initiative to be 100% renewable in all of its worldwide operations. Lisa P.
Jackson, Apple's vice president of Environment, Policy and Social Initiatives, who reports directly to CEO Tim Cook, announced that 93% of Apple's worldwide operations were powered with renewable energy. Also featured were the company's efforts to use sustainable paper in its product packaging; 99% of all paper used by Apple in product packaging comes from post-consumer recycled paper or sustainably managed forests, as the company continues its move to all-paper packaging for all of its products. Apple, working in partnership with The Conservation Fund, has preserved 36,000 acres of working forests in Maine and North Carolina. Another partnership announced is with the World Wildlife Fund to preserve forests in China. Also featured was the company's installation of a 40 MW solar power plant in the Sichuan province of China that was tailor-made to coexist with the indigenous yaks that eat hay produced on the land, by raising the panels several feet off the ground so the yaks and their feed would be unharmed grazing beneath the array. This installation alone compensates for more than all of the energy used in Apple's stores and offices in the whole of China, negating the company's energy carbon footprint in the country. In Singapore, Apple has worked with the Singaporean government to cover the rooftops of 800 buildings in the city-state with solar panels, allowing Apple's Singapore operations to be run on 100% renewable energy. Also introduced was Liam, an advanced robotic disassembler and sorter designed by Apple engineers in California specifically for recycling outdated or broken iPhones, reusing and recycling parts from traded-in products. Apple announced on August 16, 2016, that Lens Technology, one of its major suppliers in China, had committed to power all its glass production for Apple with 100 percent renewable energy by 2018. The commitment is a large step in Apple's efforts to help manufacturers lower their carbon footprint in China.
Apple also announced that all 14 of its final assembly sites in China are now compliant with UL's Zero Waste to Landfill validation. The standard, which started in January 2015, certifies that all manufacturing waste is reused, recycled, composted, or converted into energy (when necessary). Since the program began, nearly 140,000 metric tons of waste have been diverted from landfills. On July 21, 2020, Apple announced its plan to become carbon neutral across its entire business, manufacturing supply chain, and product life cycle by 2030. In the next 10 years, Apple will try to lower emissions with a series of innovative actions, including: low carbon product design, expanding energy efficiency, renewable energy, process and material innovations, and carbon removal. In April 2021, Apple said that it had started a $200 million fund in order to combat climate change by removing 1 million metric tons of carbon dioxide from the atmosphere each year. Toxins Following further campaigns by Greenpeace, in 2008, Apple became the first electronics manufacturer to fully eliminate all polyvinyl chloride (PVC) and brominated flame retardants (BFRs) in its complete product line. In June 2007, Apple began replacing the cold cathode fluorescent lamp (CCFL) backlit LCD displays in its computers with mercury-free LED-backlit LCD displays and arsenic-free glass, starting with the upgraded MacBook Pro. Apple offers comprehensive and transparent information about the CO2e emissions, materials, and electrical usage of every product it currently produces or has sold in the past (for which it has enough data to produce a report) in the portfolio on its homepage, allowing consumers to make informed purchasing decisions on the products offered for sale. In June 2009, Apple's iPhone 3GS was free of PVC, arsenic, and BFRs. All Apple products now have mercury-free LED-backlit LCD displays, arsenic-free glass, and non-PVC cables.
All Apple products have EPEAT Gold status and beat the latest Energy Star guidelines in each product's respective regulatory category. In November 2011, Apple was featured in Greenpeace's Guide to Greener Electronics, which ranks electronics manufacturers on sustainability, climate and energy policy, and how "green" their products are. The company ranked fourth of fifteen electronics companies (moving up five places from the previous year) with a score of 4.6/10. Greenpeace praised Apple's sustainability, noting that the company exceeded its 70% global recycling goal in 2010. It continued to score well on the products rating, with all Apple products now being free of PVC plastic and BFRs. However, the guide criticized Apple on the Energy criteria for not seeking external verification of its greenhouse gas emissions data and for not setting out any targets to reduce emissions. In January 2012, Apple requested that its cable maker, Volex, begin producing halogen-free USB and power cables. Green bonds In February 2016, Apple issued a US$1.5 billion green bond (climate bond), the first ever of its kind by a U.S. tech company. The green bond proceeds are dedicated to the financing of environmental projects. Racial Justice and Equality Initiatives In June 2020, Apple committed $100 million to its Racial Equity and Justice Initiative (REJI), and in January 2021 announced various projects as part of the initiative. Finance Apple is the world's largest information technology company by revenue, the world's largest technology company by total assets, and the world's second-largest mobile phone manufacturer after Samsung. In its fiscal year ending in September 2011, Apple Inc. reported a total of $108 billion in annual revenues—a significant increase from its 2010 revenues of $65 billion—and nearly $82 billion in cash reserves. On March 19, 2012, Apple announced plans for a $2.65-per-share dividend beginning in the fourth quarter of 2012, subject to approval by its board of directors.
The company's worldwide annual revenue in 2013 totaled $170 billion. In May 2013, Apple entered the top ten of the Fortune 500 list of companies for the first time, rising 11 places above its 2012 ranking to take the sixth position. Apple has around US$234 billion of cash and marketable securities, of which 90% is located outside the United States for tax purposes. Apple amassed 65% of all profits made by the eight largest worldwide smartphone manufacturers in the first quarter of 2014, according to a report by Canaccord Genuity. In the first quarter of 2015, the company garnered 92% of all earnings. On April 30, 2017, The Wall Street Journal reported that Apple had cash reserves of $250 billion, officially confirmed by Apple as specifically $256.8 billion a few days later. Apple was the largest publicly traded corporation in the world by market capitalization. On August 2, 2018, Apple became the first publicly traded U.S. company to reach a $1 trillion market value. Apple was ranked No. 4 on the 2018 Fortune 500 rankings of the largest United States corporations by total revenue. Tax practices Apple has created subsidiaries in low-tax places such as Ireland, the Netherlands, Luxembourg, and the British Virgin Islands to cut the taxes it pays around the world. According to The New York Times, in the 1980s Apple was among the first tech companies to designate overseas salespeople in high-tax countries in a manner that allowed the company to sell on behalf of low-tax subsidiaries on other continents, sidestepping income taxes. In the late 1980s, Apple was a pioneer of an accounting technique known as the "Double Irish with a Dutch sandwich," which reduces taxes by routing profits through Irish subsidiaries and the Netherlands and then to the Caribbean.
British Conservative Party Member of Parliament Charlie Elphicke published research on October 30, 2012, which showed that some multinational companies, including Apple Inc., were making billions of pounds of profit in the UK, but were paying an effective tax rate to the UK Treasury of only 3 percent, well below standard corporation tax. He followed this research by calling on the Chancellor of the Exchequer George Osborne to force these multinationals, which also included Google and The Coca-Cola Company, to state the effective rate of tax they pay on their UK revenues. Elphicke also said that government contracts should be withheld from multinationals who do not pay their fair share of UK tax. Apple Inc. claims to be the single largest taxpayer to the Department of the Treasury of the United States of America, with an effective tax rate of approximately 26% as of the second quarter of the Apple fiscal year 2016. In an interview with the German newspaper FAZ in October 2017, Tim Cook stated that Apple was the biggest taxpayer worldwide. In 2015, Reuters reported that Apple had earnings abroad of $54.4 billion which were untaxed by the IRS of the United States. Under U.S. tax law governed by the IRC, corporations do not pay income tax on overseas profits unless the profits are repatriated into the United States, and as such Apple argues that, to benefit its shareholders, it will leave the money overseas until a repatriation holiday or comprehensive tax reform takes place in the United States. On July 12, 2016, the Central Statistics Office of Ireland announced that 2015 Irish GDP had grown by 26.3%, and 2015 Irish GNP had grown by 18.7%. The figures attracted international scorn, and were labelled by Nobel Prize-winning economist Paul Krugman as "leprechaun economics".
It was not until 2018 that Irish economists could definitively prove that the 2015 growth was due to Apple restructuring its controversial double Irish subsidiaries (Apple Sales International), which Apple converted into a new Irish capital allowances for intangible assets tax scheme (expiring in January 2020). The affair required the Central Bank of Ireland to create a new measure of Irish economic growth, Modified GNI*, to replace Irish GDP, given the distortion of Apple's tax schemes. Irish GDP is 143% of Irish Modified GNI*. On August 30, 2016, after a two-year investigation, the EU Competition Commissioner concluded that Apple had received "illegal state aid" from Ireland. The EU ordered Apple to pay 13 billion euros ($14.5 billion), plus interest, in unpaid Irish taxes for 2004–2014. It is the largest tax fine in history. The Commission found that Apple had benefited from a private Irish Revenue Commissioners tax ruling regarding its double Irish tax structure, Apple Sales International (ASI). Instead of using two companies for its double Irish structure, Apple was given a ruling to split ASI into two internal "branches". The Chancellor of Austria, Christian Kern, put this decision into perspective by stating that "every Viennese cafe, every sausage stand pays more tax in Austria than a multinational corporation". Apple agreed to start paying €13 billion in back taxes to the Irish government; the repayments are held in an escrow account while Apple and the Irish government continue their appeals in EU courts. On July 15, 2020, the EU General Court annulled the European Commission's decision in the Apple state aid case: Apple will not have to repay €13 billion to Ireland. Board of directors The following individuals sit on the board of Apple Inc.: Arthur D. Levinson (chairman) Tim Cook (executive director and CEO) James A.
Bell (non-executive director) Al Gore (non-executive director) Andrea Jung (non-executive director) Ronald Sugar (non-executive director) Susan Wagner (non-executive director) Executive management The management of Apple Inc. includes: Tim Cook (chief executive officer) Jeff Williams (chief operating officer) Luca Maestri (senior vice president and chief financial officer) Katherine L. Adams (senior vice president and general counsel) Eddy Cue (senior vice president – Internet Software and Services) Craig Federighi (senior vice president – Software Engineering) John Giannandrea (senior vice president – Machine Learning and AI Strategy) Deirdre O'Brien (senior vice president – Retail + People) John Ternus (senior vice president – Hardware Engineering) Greg Joswiak (senior vice president – Worldwide Marketing) Johny Srouji (senior vice president – Hardware Technologies) Sabih Khan (senior vice president – Operations) Lisa P. Jackson (vice president – Environment, Policy, and Social Initiatives) Isabel Ge Mahe (vice president and managing director – Greater China) Tor Myhren (vice president – Marketing Communications) Adrian Perica (vice president – Corporate Development) List of chief executives Michael Scott (1977–1981) Mike Markkula (1981–1983) John Sculley (1983–1993) Michael Spindler (1993–1996) Gil Amelio (1996–1997) Steve Jobs (1997–2011) Tim Cook (2011–present) List of chairmen The role of chairman of the board has not always been in use, notably between 1981 and 1985, and between 1997 and 2011. Mike Markkula (1977–1981) Steve Jobs (1985) Mike Markkula (1985–1993); second term John Sculley (1993) Mike Markkula (1993–1997); third term Steve Jobs (2011); second term Arthur D. Levinson (2011–present) Litigation Apple has been a participant in various legal proceedings and claims since it began operation. In particular, Apple is known for and promotes itself as actively and aggressively enforcing its intellectual property interests.
Some litigation examples include Apple v. Samsung, Apple v. Microsoft, Motorola Mobility v. Apple Inc., and Apple Corps v. Apple Computer. Apple has also had to defend itself on numerous occasions against charges of violating intellectual property rights. Most of these claims have been brought by shell companies known as patent trolls and have been dismissed in the courts, with no evidence of actual use of the patents in question. On December 21, 2016, Nokia announced that it had filed suit against Apple in the U.S. and Germany, claiming that the latter's products infringe on Nokia's patents. Most recently, in November 2017, the United States International Trade Commission announced an investigation into allegations of patent infringement with regard to Apple's remote desktop technology; Aqua Connect, a company that builds remote desktop software, has claimed that Apple infringed on two of its patents. Privacy stance Apple has a notable pro-privacy stance, actively making privacy-conscious features and settings part of its conferences, promotional campaigns, and public image. With its iOS 8 mobile operating system in 2014, the company started encrypting all contents of iOS devices through users' passcodes, making it impossible at the time for the company to provide customer data to law enforcement requests seeking such information. With the rise in popularity of cloud storage solutions, Apple began in 2016 to use on-device deep learning to scan for facial data in photos, encrypting the content before uploading it to Apple's iCloud storage system. It also introduced "differential privacy", a way to collect crowdsourced data from many users while keeping individual users anonymous, in a system that Wired described as "trying to learn as much as possible about a group while learning as little as possible about any individual in it". Users are explicitly asked if they want to participate, and can actively opt in or opt out.
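The idea behind this kind of local privacy guarantee can be illustrated with the classic "randomized response" technique: each user perturbs their own answer before reporting it, so no individual report is trustworthy, yet the aggregate rate is still recoverable. The sketch below is a deliberately simplified illustration of the principle, not Apple's actual mechanism, which is considerably more elaborate.

```python
import random

def randomized_response(true_answer: bool, p_truth: float = 0.75) -> bool:
    """Report the true answer with probability p_truth; otherwise report a fair coin flip.
    Each individual report is deniable, but the noise is statistically invertible."""
    if random.random() < p_truth:
        return true_answer
    return random.random() < 0.5

def estimate_true_rate(reports, p_truth: float = 0.75) -> float:
    """Invert the noise: observed = p_truth * true_rate + (1 - p_truth) * 0.5."""
    observed = sum(reports) / len(reports)
    return (observed - (1 - p_truth) * 0.5) / p_truth

# Simulate 100,000 users, 30% of whom truly have the sensitive attribute.
random.seed(0)
truths = [random.random() < 0.30 for _ in range(100_000)]
reports = [randomized_response(t) for t in truths]
print(round(estimate_true_rate(reports), 2))  # close to 0.30
```

No single report reveals whether a given user has the attribute, yet the estimated group rate converges on the true rate as the number of participants grows, which is the trade-off the Wired quotation describes.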
With Apple's release of an update to iOS 14, Apple required all developers of iPhone, iPad, and iPod touch applications to directly ask iPhone users permission
House of Balliol, Clan Bruce, and Clan Cumming (Comyn). When the fighting amongst these newcomers resulted in the Scottish Wars of Independence, the English king Edward I travelled across the area twice, in 1296 and 1303. In 1307, Robert the Bruce was victorious near Inverurie. Along with his victory came new families, namely the Forbeses and the Gordons. These new families set the stage for the rivalries of the 14th and 15th centuries. These rivalries grew worse during and after the Protestant Reformation, when religion became another reason for conflict between the clans: the Gordon family adhered to Catholicism and the Forbeses to Protestantism. Aberdeenshire was the historic seat of the Clan Dempster. Three universities were founded in the area prior to the 17th century: King's College in Old Aberdeen (1494), Marischal College in Aberdeen (1593), and the University of Fraserburgh (1597). During the 17th century, Aberdeenshire was the location of more fighting, centred on the Marquess of Montrose and the English Civil Wars. This period also saw increased wealth due to the increase in trade with Germany, Poland, and the Low Countries. After the end of the Revolution of 1688, an extended peaceful period was interrupted only by such fleeting events as the Rising of 1715 and the Rising of 1745. The latter resulted in the end of the ascendancy of Episcopalianism and the feudal power of landowners, and an era of increased agricultural and industrial progress began. The present council area is named after the historic county of Aberdeenshire, which has different boundaries and was abandoned as an administrative area in 1975 under the Local Government (Scotland) Act 1973. It was replaced by Grampian Regional Council and five district councils: Banff and Buchan, Gordon, Kincardine and Deeside, Moray and the City of Aberdeen. Local government functions were shared between the two levels. In 1996, under the Local Government, etc.
(Scotland) Act 1994, the Banff and Buchan District, Gordon District and Kincardine and Deeside District were merged to form the present Aberdeenshire Council area. Moray and the City of Aberdeen were made their own council areas. The present Aberdeenshire Council area consists of all of the historic counties of Aberdeenshire and Kincardineshire (except the area of those two counties making up the City of Aberdeen), as well as north-east portions of Banffshire. Demographics The population of the council area has risen over 50% since 1971 to approximately , representing 4.7% of Scotland's total. Aberdeenshire's population has increased by 9.1% since 2001, while Scotland's total population grew by 3.8%. The census lists a relatively high proportion of under 16s and slightly fewer people of working age compared with the Scottish average. Aberdeenshire is one of the most homogeneous/indigenous regions of the UK. In 2011, 82.2% of residents identified as 'White Scottish', followed by 12.3% who are 'White British', whilst ethnic minorities constitute only 0.9% of the population. The largest ethnic minority group are Asian Scottish/British at 0.8%. In addition to the English language, 48.8% of residents reported being able to speak and understand the Scots language. The fourteen biggest settlements in Aberdeenshire (with 2011 population estimates) are: Peterhead (17,790) Fraserburgh (12,540) Inverurie (11,529) Westhill (11,220) Stonehaven (10,820) Ellon (9,910) Portlethen (7,327) Banchory (7,111) Turriff (4,804) Kintore (4,476) Huntly (4,461) Banff (3,931) Kemnay (3,830) Macduff (3,711) Economy Aberdeenshire's Gross Domestic Product (GDP) is estimated at £3,496M (2011), representing 5.2% of the Scottish total. Aberdeenshire's economy is closely linked to Aberdeen City's (GDP £7,906M), and in 2011, the region as a whole was calculated to contribute 16.8% of Scotland's GDP. 
Between 2012 and 2014, the combined Aberdeenshire and Aberdeen City economic forecast GDP growth rate is 8.6%, the highest growth rate of any local council area in the UK and above the Scottish rate of 4.8%. A significant proportion of Aberdeenshire's working residents commute to Aberdeen City for work, varying from 11.5% from Fraserburgh to 65% from Westhill. Average Gross Weekly Earnings (for full-time employees employed in workplaces in Aberdeenshire in 2011) are £572.60. This is lower than the Scottish average by £2.10 and a fall of 2.6% on the 2010 figure. The average gross weekly pay of people resident in Aberdeenshire is much higher, at £741.90, as many people commute out of Aberdeenshire, principally into Aberdeen City. Total employment (excluding farm data) in Aberdeenshire is estimated at 93,700 employees (Business Register and Employment Survey 2009). The majority of employees work within the service sector, predominantly in public administration, education and health. Almost 19% of employment is within the public sector. Aberdeenshire's economy remains closely linked to Aberdeen City's and the North Sea oil industry, with many employees in oil-related jobs. The average monthly unemployment (claimant count) rate for Aberdeenshire in 2011 was 1.5%. This is lower than the average rate of Aberdeen City (2.3%), Scotland (4.2%) and the UK (3.8%). Major industries Energy – There are significant energy-related infrastructure, presence and expertise in Aberdeenshire. Peterhead is an important centre for the energy industry. Peterhead Port, which includes
an extensive new quay with adjacent lay down area at Smith Quay, is a major support location for North Sea oil and gas exploration and production and the fast-growing global sub-sea sector. The Gas Terminal at St Fergus handles around 15% of the UK's natural gas requirements, and the Peterhead power station is looking to host Britain's first carbon capture and storage power generation project. There are numerous offshore wind turbines near the coast. Fishing – Aberdeenshire is Scotland's foremost fishing area.
In 2010, catches landed at Aberdeenshire's ports accounted for over half the total fish landings of Scotland and almost 45% of those of the UK. Peterhead and Fraserburgh ports, alongside Aberdeen City, provide much of the employment in these sectors. The River Dee is also rich in salmon. Agriculture – Aberdeenshire is rich in arable land, with an estimated 9,000 people employed in the sector, and is best known for rearing livestock, mainly cattle. Sheep are important on the higher ground. Tourism – this sector continues to grow, with a range of sights to be seen in the area. From the lively Cairngorm Mountain range to the bustling fishing ports on the
performed in Canada, Australia, and Venezuela. The band has been recognized for their music with nominations in the New Times 1998 "Best Latin Influenced" category, the BAM Magazine 1999 "Best Rock en Español" category, and the LA Weekly 1999 "Best Hip Hop" category. The release of their eponymous third album on August 29, 2009 was met with positive reviews and earned the band four Native American Music Award (NAMMY) nominations in 2010. Discography Decolonize Year:1995 "Teteu Innan" "Killing Season" "Lost Souls" "My Blood Is Red" "Natural Enemy" "Sacred Circle" "Blood On Your Hands" "Interlude" "Aug 2 the 9" "Indigena" "Lyrical Drive By" Sub-Verses Year:1998 "Permiso" "They Move In Silence" "No Soy Animal" "Killing Season" "Blood On Your Hands" "Reality Check" "Lemon Pledge" "Revolution" "Preachers of the Blind State" "Lyrical Drive-By" "Nahui Ollin" "How to Catch a Bullet" "Ik Otik" "Obsolete Man" "Decolonize" "War Flowers" Aztlan Underground Year: 2009 "Moztlitta" "Be God" "Light Shines" "Prey" "In the Field" "9 10 11 12" "Smell the Dead" "Sprung" "Medicine" "Acabando" "Crescent Moon" See also Chicano rap Native American hip hop Rapcore Chicano rock References External
in 1998. The band was featured in the independent films Algun Dia and Frontierland in the 1990s, and on the upcoming Studio 49. The band has been mentioned or featured in various newspapers and magazines: the Vancouver Sun, New Times, BLU Magazine (an underground hip hop magazine), BAM Magazine, La Banda Elastica Magazine, and the Los Angeles Times calendar section. The band is also the subject of a chapter in the book It's Not About a Salary, by Brian Cross. Aztlan Underground remains active in the community, lending their voice to annual events such as The Farce of July, and the recent movement to recognize Indigenous People's Day in Los Angeles and beyond. In addition to forming their own label, Xicano Records and Film, Aztlan Underground were signed to the Basque record label Esan Ozenki in 1999, which enabled them to tour Spain extensively and perform in France and Portugal.
men while the Confederates lost only 174. Fort Pulaski on the Georgia coast was an early target for the Union navy. Following the capture of Port Royal, an expedition was organized with engineer troops under the command of Captain Quincy A. Gillmore, forcing a Confederate surrender. The Union army occupied the fort for the rest of the war after repairing it. In April 1862, a Union naval task force commanded by Commander David D. Porter attacked Forts Jackson and St. Philip, which guarded the river approach to New Orleans from the south. While part of the fleet bombarded the forts, other vessels forced a break in the obstructions in the river and enabled the rest of the fleet to steam upriver to the city. A Union army force commanded by Major General Benjamin Butler landed near the forts and forced their surrender. Butler's controversial command of New Orleans earned him the nickname "Beast." The following year, the Union Army of the Gulf commanded by Major General Nathaniel P. Banks laid siege to Port Hudson for nearly eight weeks, the longest siege in US military history. The Confederates attempted to defend with the Bayou Teche Campaign but surrendered after Vicksburg. These two surrenders gave the Union control over the entire Mississippi. Several small skirmishes were fought in Florida, but no major battles. The biggest was the Battle of Olustee in early 1864. Pacific Coast theater The Pacific Coast theater refers to military operations on the Pacific Ocean and in the states and Territories west of the Continental Divide. Conquest of Virginia At the beginning of 1864, Lincoln made Grant commander of all Union armies. Grant made his headquarters with the Army of the Potomac and put Maj. Gen. William Tecumseh Sherman in command of most of the western armies. Grant understood the concept of total war and believed, along with Lincoln and Sherman, that only the utter defeat of Confederate forces and their economic base would end the war. 
This was total war not in killing civilians but rather in taking provisions and forage and destroying homes, farms, and railroads, that Grant said "would otherwise have gone to the support of secession and rebellion. This policy I believe exercised a material influence in hastening the end." Grant devised a coordinated strategy that would strike at the entire Confederacy from multiple directions. Generals George Meade and Benjamin Butler were ordered to move against Lee near Richmond, General Franz Sigel (and later Philip Sheridan) were to attack the Shenandoah Valley, General Sherman was to capture Atlanta and march to the sea (the Atlantic Ocean), Generals George Crook and William W. Averell were to operate against railroad supply lines in West Virginia, and Maj. Gen. Nathaniel P. Banks was to capture Mobile, Alabama. Grant's Overland Campaign Grant's army set out on the Overland Campaign intending to draw Lee into a defense of Richmond, where they would attempt to pin down and destroy the Confederate army. The Union army first attempted to maneuver past Lee and fought several battles, notably at the Wilderness, Spotsylvania, and Cold Harbor. These battles resulted in heavy losses on both sides and forced Lee's Confederates to fall back repeatedly. At the Battle of Yellow Tavern, the Confederates lost Jeb Stuart. An attempt to outflank Lee from the south failed under Butler, who was trapped inside the Bermuda Hundred river bend. Each battle resulted in setbacks for the Union that mirrored what they had suffered under prior generals, though, unlike those prior generals, Grant fought on rather than retreat. Grant was tenacious and kept pressing Lee's Army of Northern Virginia back to Richmond. While Lee was preparing for an attack on Richmond, Grant unexpectedly turned south to cross the James River and began the protracted Siege of Petersburg, where the two armies engaged in trench warfare for over nine months. 
Sheridan's Valley Campaign Grant finally found a commander, General Philip Sheridan, aggressive enough to prevail in the Valley Campaigns of 1864. Union forces under Franz Sigel had initially been repelled at the Battle of New Market by former U.S. vice president and Confederate Gen. John C. Breckinridge. The Battle of New Market was the Confederacy's last major victory of the war and included a charge by teenage VMI cadets. Sheridan went on to defeat Maj. Gen. Jubal A. Early in a series of battles, including a final decisive defeat at the Battle of Cedar Creek. He then proceeded to destroy the agricultural base of the Shenandoah Valley, a strategy similar to the tactics Sherman later employed in Georgia. Sherman's March to the Sea Meanwhile, Sherman maneuvered from Chattanooga to Atlanta, defeating Confederate Generals Joseph E. Johnston and John Bell Hood along the way. The fall of Atlanta on September 2, 1864, guaranteed the reelection of Lincoln as president. Hood left the Atlanta area to swing around and menace Sherman's supply lines and invade Tennessee in the Franklin–Nashville Campaign. Union Maj. Gen. John Schofield defeated Hood at the Battle of Franklin, and George H. Thomas dealt Hood a massive defeat at the Battle of Nashville, effectively destroying Hood's army. Leaving Atlanta, and his base of supplies, Sherman's army marched with an unknown destination, laying waste to about 20 percent of the farms in Georgia in his "March to the Sea". He reached the Atlantic Ocean at Savannah, Georgia, in December 1864. Sherman's army was followed by thousands of freed slaves; there were no major battles along the March. Sherman turned north through South Carolina and North Carolina to approach the Confederate Virginia lines from the south, increasing the pressure on Lee's army. The Waterloo of the Confederacy Lee's army, thinned by desertion and casualties, was now much smaller than Grant's.
One last Confederate attempt to break the Union hold on Petersburg failed at the decisive Battle of Five Forks (sometimes called "the Waterloo of the Confederacy") on April 1. This meant that the Union now controlled the entire perimeter surrounding Richmond-Petersburg, completely cutting it off from the Confederacy. Realizing that the capital was now lost, Lee decided to evacuate his army. The Confederate capital fell to the Union XXV Corps, composed of black troops. The remaining Confederate units fled west after a defeat at Sayler's Creek. Confederacy surrenders Initially, Lee did not intend to surrender but planned to regroup at the village of Appomattox Court House, where supplies were to be waiting and then continue the war. Grant chased Lee and got in front of him so that when Lee's army reached Appomattox Court House, they were surrounded. After an initial battle, Lee decided that the fight was now hopeless, and surrendered his Army of Northern Virginia on April 9, 1865, at the McLean House. In an untraditional gesture and as a sign of Grant's respect and anticipation of peacefully restoring Confederate states to the Union, Lee was permitted to keep his sword and his horse, Traveller. His men were paroled, and a chain of Confederate surrenders began. On April 14, 1865, President Lincoln was shot by John Wilkes Booth, a Confederate sympathizer. Lincoln died early the next morning. Lincoln's vice president, Andrew Johnson, was unharmed, because his would-be assassin, George Atzerodt, lost his nerve, so Johnson was immediately sworn in as president. Meanwhile, Confederate forces across the South surrendered as news of Lee's surrender reached them. On April 26, 1865, the same day Boston Corbett killed Booth at a tobacco barn, General Joseph E. Johnston surrendered nearly 90,000 men of the Army of Tennessee to Major General William Tecumseh Sherman at Bennett Place near present-day Durham, North Carolina. 
It proved to be the largest surrender of Confederate forces. On May 4, all remaining Confederate forces in Alabama and Mississippi surrendered. President Johnson officially declared an end to the insurrection on May 9, 1865; Confederate president Jefferson Davis was captured the following day. On June 2, Kirby Smith officially surrendered his troops in the Trans-Mississippi Department. On June 23, Cherokee leader Stand Watie became the last Confederate general to surrender his forces. The final Confederate surrender was by the crew of the CSS Shenandoah on November 6, 1865, bringing all hostilities of the four-year war to a close. Home fronts Union victory and aftermath Explaining the Union victory The causes of the war, the reasons for its outcome, and even the name of the war itself are subjects of lingering contention today. The North and West grew rich while the once-rich South became poor for a century. The national political power of the slaveowners and rich Southerners ended. Historians are less sure about the results of the postwar Reconstruction, especially regarding the second-class citizenship of the freedmen and their poverty. Historians have debated whether the Confederacy could have won the war. Most scholars, including James McPherson, argue that Confederate victory was at least possible. McPherson argues that the North's advantage in population and resources made Northern victory likely but not guaranteed. He also argues that if the Confederacy had fought using unconventional tactics, it would more easily have been able to hold out long enough to exhaust the Union. Confederates did not need to invade and hold enemy territory to win but only needed to fight a defensive war to convince the North that the cost of winning was too high. The North needed to conquer and hold vast stretches of enemy territory and defeat Confederate armies to win.
Lincoln was not a military dictator and could continue to fight the war only as long as the American public supported a continuation of the war. The Confederacy sought to win independence by outlasting Lincoln; however, after Atlanta fell and Lincoln defeated McClellan in the election of 1864, all hope for a political victory for the South ended. At that point, Lincoln had secured the support of the Republicans, War Democrats, the border states, emancipated slaves, and the neutrality of Britain and France. By defeating the Democrats and McClellan, he also defeated the Copperheads and their peace platform. Some scholars argue that the Union held an insurmountable long-term advantage over the Confederacy in industrial strength and population. Confederate actions, they argue, only delayed defeat. Civil War historian Shelby Foote expressed this view succinctly: "I think that the North fought that war with one hand behind its back .... If there had been more Southern victories, and a lot more, the North simply would have brought that other hand out from behind its back. I don't think the South ever had a chance to win that War." A minority view among historians is that the Confederacy lost because, as E. Merton Coulter put it, "people did not will hard enough and long enough to win." However, most historians reject the argument. McPherson, after reading thousands of letters written by Confederate soldiers, found strong patriotism that continued to the end; they truly believed they were fighting for freedom and liberty. Even as the Confederacy was visibly collapsing in 1864–65, he says most Confederate soldiers were fighting hard. Historian Gary Gallagher cites General Sherman who in early 1864 commented, "The devils seem to have a determination that cannot but be admired." Despite their loss of slaves and wealth, with starvation looming, Sherman continued, "yet I see no sign of let-up—some few deserters—plenty tired of war, but the masses determined to fight it out." 
Also important were Lincoln's eloquence in rationalizing the national purpose and his skill in keeping the border states committed to the Union cause. The Emancipation Proclamation was an effective use of the President's war powers. The Confederate government failed in its attempt to get Europe involved in the war militarily, particularly Britain and France. Southern leaders needed to get European powers to help break up the blockade the Union had created around the Southern ports and cities. Lincoln's naval blockade was 95% effective at stopping trade goods; as a result, imports and exports to the South declined significantly. The abundance of European cotton and Britain's hostility to the institution of slavery, along with Lincoln's Atlantic and Gulf of Mexico naval blockades, severely decreased any chance that either Britain or France would enter the war. Historian Don Doyle has argued that the Union victory had a major impact on the course of world history. The Union victory energized popular democratic forces. A Confederate victory, on the other hand, would have meant a new birth of slavery, not freedom. Historian Fergus Bordewich, following Doyle, argues along similar lines. Scholars have debated what the effects of the war were on political and economic power in the South. The prevailing view is that the southern planter elite retained its powerful position in the South. However, a 2017 study challenges this, noting that while some Southern elites retained their economic status, the turmoil of the 1860s created greater opportunities for economic mobility in the South than in the North. Casualties The war resulted in at least 1,030,000 casualties (3 percent of the population), including about 620,000 soldier deaths—two-thirds by disease—and 50,000 civilians. Binghamton University historian J. David Hacker believes the number of soldier deaths was approximately 750,000, 20 percent higher than traditionally estimated, and possibly as high as 850,000.
A novel way of calculating casualties by looking at the deviation of the death rate of men of fighting age from the norm through analysis of census data found that at least 627,000 and at most 888,000 people, but most likely 761,000, died through the war. As historian McPherson notes, the war's "cost in American lives was as great as in all of the nation's other wars combined through Vietnam" (referring to the Vietnam War). Based on 1860 census figures, 8 percent of all white men aged 13 to 43 died in the war, including 6 percent in the North and 18 percent in the South. About 56,000 soldiers died in prison camps during the war. An estimated 60,000 men lost limbs in the war. Of the 359,528 Union army dead, amounting to 15 percent of the over two million who served: 110,070 were killed in action (67,000) or died of wounds (43,000). 199,790 died of disease (75 percent were due to the war; the remainder would have occurred in civilian life anyway) 24,866 died in Confederate prison camps 9,058 were killed by accidents or drowning 15,741 other/unknown deaths In addition there were 4,523 deaths in the Navy (2,112 in battle) and 460 in the Marines (148 in battle). Black troops made up 10 percent of the Union death toll; they amounted to 15 percent of disease deaths but less than 3 percent of those killed in battle. Losses among African Americans were high. In the last year and a half, and from all reported casualties, approximately 20 percent of all African Americans enrolled in the military lost their lives during the Civil War. Notably, their mortality rate was significantly higher than that of white soldiers. While 15.2% of United States Volunteers and just 8.6% of white Regular Army troops died, 20.5% of United States Colored Troops died. Confederate records compiled by historian William F. Fox list 74,524 killed and died of wounds and 59,292 died of disease.
Including Confederate estimates of battle losses where no records exist would bring the Confederate death toll to 94,000 killed and died of wounds. However, this excludes the 30,000 deaths of Confederate troops in prisons, which would raise the minimum number of deaths to 290,000. The United States National Park Service uses the following figures in its official tally of war losses:
Union: 853,838
110,100 killed in action
224,580 disease deaths
275,154 wounded in action
211,411 captured (including 30,192 who died as POWs)
Confederate: 914,660
94,000 killed in action
164,000 disease deaths
194,026 wounded in action
462,634 captured (including 31,000 who died as POWs)
While the figures of 360,000 army deaths for the Union and 260,000 for the Confederacy remained commonly cited, they are incomplete. In addition to many Confederate records being missing, partly as a result of Confederate widows not reporting deaths due to being ineligible for benefits, both armies only counted troops who died during their service and not the tens of thousands who died of wounds or diseases after being discharged. This often happened only a few days or weeks later. Francis Amasa Walker, superintendent of the 1870 census, used census and surgeon general data to estimate a minimum of 500,000 Union military deaths and 350,000 Confederate military deaths, for a total death toll of 850,000 soldiers. While Walker's estimates were originally dismissed because of the 1870 census's undercounting, it was later found that the census was only off by 6.5% and that the data Walker used would be roughly accurate. Analyzing the number of dead by using census data to calculate the deviation of the death rate of men of fighting age from the norm suggests that at least 627,000 and at most 888,000, but most likely 761,000, soldiers died in the war. This would break down to approximately 350,000 Confederate and 411,000 Union military deaths, going by the proportion of Union to Confederate battle losses.
Deaths among former slaves have proven much harder to estimate, due to the lack of reliable census data at the time, though they were known to be considerable, as former slaves were set free or escaped in massive numbers in an area where the Union army did not have sufficient shelter, doctors, or food for them. University of Connecticut Professor James Downs states that tens to hundreds of thousands of slaves died during the war from disease, starvation, or exposure, and that if these deaths are counted in the war's total, the death toll would exceed 1 million. Losses were far higher than during the recent defeat of Mexico, which saw roughly thirteen thousand American deaths, including fewer than two thousand killed in battle, between 1846 and 1848. One reason for the high number of battle deaths during the war was the continued use of tactics similar to those of the Napoleonic Wars at the turn of the century, such as charging. With the advent of more accurate rifled barrels, Minié balls, and (near the end of the war for the Union army) repeating firearms such as the Spencer Repeating Rifle and the Henry Repeating Rifle, soldiers were mowed down when standing in lines in the open. This led to the adoption of trench warfare, a style of fighting that defined much of World War I.

Emancipation

Abolishing slavery was not a Union war goal from the outset, but it quickly became one. Lincoln's initial claims were that preserving the Union was the central goal of the war. In contrast, the South saw itself as fighting to preserve slavery. While not all Southerners saw themselves as fighting for slavery, most of the officers and over a third of the rank and file in Lee's army had close family ties to slavery. To Northerners, in contrast, the motivation was primarily to preserve the Union, not to abolish slavery.
However, as the war dragged on it became clear that slavery was the central factor of the conflict, and that emancipation was (to quote the Emancipation Proclamation) "a fit and necessary war measure for suppressing [the] rebellion." Lincoln and his cabinet made ending slavery a war goal, culminating in the Emancipation Proclamation. Lincoln's decision to issue the Emancipation Proclamation angered both Peace Democrats ("Copperheads") and War Democrats, but energized most Republicans. By warning that free blacks would flood the North, Democrats made gains in the 1862 elections, but they did not gain control of Congress. The Republicans' counterargument that slavery was the mainstay of the enemy steadily gained support, with the Democrats losing decisively in the 1863 elections in the northern state of Ohio when they tried to resurrect anti-black sentiment.

Emancipation Proclamation

Slavery for the Confederacy's 3.5 million blacks effectively ended in each area when Union armies arrived; nearly all were freed by the Emancipation Proclamation. The last Confederate slaves were freed on June 19, 1865, celebrated as the modern holiday of Juneteenth. Slaves in the border states and those located in some former Confederate territory occupied before the Emancipation Proclamation were freed by state action or (on December 6, 1865) by the Thirteenth Amendment. The Emancipation Proclamation enabled African Americans, both free blacks and escaped slaves, to join the Union Army. About 190,000 volunteered, further enhancing the numerical advantage the Union armies enjoyed over the Confederates, who did not dare emulate the equivalent manpower source for fear of fundamentally undermining the legitimacy of slavery. During the Civil War, sentiment concerning slaves, enslavement and emancipation in the United States was divided.
Lincoln's fears of making slavery a war issue were based on a harsh reality: abolition did not enjoy wide support in the west, the territories, and the border states. In 1861, Lincoln worried that premature attempts at emancipation would mean the loss of the border states, and that "to lose Kentucky is nearly the same as to lose the whole game." Copperheads and some War Democrats opposed emancipation, although the latter eventually accepted it as part of the total war needed to save the Union. At first, Lincoln reversed attempts at emancipation by Secretary of War Simon Cameron and Generals John C. Frémont (in Missouri) and David Hunter (in South Carolina, Georgia and Florida) to keep the loyalty of the border states and the War Democrats. Lincoln warned the border states that a more radical type of emancipation would happen if his gradual plan based on compensated emancipation and voluntary colonization was rejected. But only the District of Columbia accepted Lincoln's gradual plan, which was enacted by Congress. When Lincoln told his cabinet about his proposed emancipation proclamation, Seward advised Lincoln to wait for a victory before issuing it, as to do otherwise would seem like "our last shriek on the retreat". Lincoln laid the groundwork for public support in an open letter published in response to Horace Greeley's "The Prayer of Twenty Millions." He also laid the groundwork at a meeting at the White House with five African American representatives on August 14, 1862. Arranging for a reporter to be present, he urged his visitors to agree to the voluntary colonization of black people, apparently to make his forthcoming preliminary emancipation proclamation more palatable to racist white people. A Union victory in the Battle of Antietam on September 17, 1862, provided Lincoln with an opportunity to issue the preliminary Emancipation Proclamation, and the subsequent War Governors' Conference added support for the proclamation. 
Lincoln issued his preliminary Emancipation Proclamation on September 22, 1862, and his final Emancipation Proclamation on January 1, 1863. In his letter to Albert G. Hodges, Lincoln explained his belief that "If slavery is not wrong, nothing is wrong .... And yet I have never understood that the Presidency conferred upon me an unrestricted right to act officially upon this judgment and feeling .... I claim not to have controlled events, but confess plainly that events have controlled me." Lincoln's moderate approach succeeded in inducing the border states to remain in the Union and War Democrats to support the Union. The border states (Kentucky, Missouri, Maryland, Delaware) and Union-controlled regions around New Orleans, Norfolk, and elsewhere, were not covered by the Emancipation Proclamation. All abolished slavery on their own, except Kentucky and Delaware. Still, the proclamation did not enjoy universal support. It caused much unrest in what were then considered western states, where racist sentiments led to a great fear of abolition. There was some concern that the proclamation would lead to the secession of western states, and its issuance prompted the stationing of Union troops in Illinois in case of rebellion. Since the Emancipation Proclamation was based on the President's war powers, it applied only in territory held by Confederates at the time. However, the Proclamation became a symbol of the Union's growing commitment to add emancipation to the Union's definition of liberty. The Emancipation Proclamation greatly reduced the Confederacy's hope of being recognized or otherwise aided by Britain or France. By late 1864, Lincoln was playing a leading role in getting Congress to vote for the Thirteenth Amendment, which made emancipation universal and permanent unless repealed by another constitutional amendment.

Reconstruction

The war had utterly devastated the South and posed serious questions of how the South would be re-integrated into the Union.
The war destroyed much of the wealth that had existed in the South. All accumulated investment in Confederate bonds was forfeit; most banks and railroads were bankrupt. The income per person in the South dropped to less than 40 percent of that of the North, a condition that lasted until well into the 20th century. Southern influence in the U.S. federal government, previously considerable, was greatly diminished until the latter half of the 20th century. Reconstruction began during the war, with the Emancipation Proclamation of January 1, 1863, and it continued until 1877. It comprised multiple complex methods to resolve the outstanding issues of the war's aftermath, the most important of which were the three "Reconstruction Amendments" to the Constitution: the 13th outlawing slavery (1865), the 14th guaranteeing citizenship to former slaves (1868), and the 15th ensuring voting rights to former slaves (1870). From the Union perspective, the goals of Reconstruction were to consolidate the Union victory on the battlefield by reuniting the Union; to guarantee a "republican form of government" for the ex-Confederate states; and to permanently end slavery—and prevent semi-slavery status. President Johnson took a lenient approach and saw the achievement of the main war goals as realized in 1865, when each ex-rebel state repudiated secession and ratified the Thirteenth Amendment. Radical Republicans demanded proof that Confederate nationalism was dead and that the slaves were truly free. They came to the fore after the 1866 elections and undid much of Johnson's work.

Territorial expansion repeatedly reignited the dispute over slavery, as each new territory acquired had to face the thorny question of whether to allow or disallow the "peculiar institution". Between 1803 and 1854, the United States achieved a vast expansion of territory through purchase, negotiation, and conquest. At first, the new states carved out of these territories entering the union were apportioned equally between slave and free states. Pro- and anti-slavery forces collided over the territories west of the Mississippi. The Mexican–American War and its aftermath were key territorial events in the leadup to the war. As the Treaty of Guadalupe Hidalgo finalized the conquest of northern Mexico west to California in 1848, slaveholding interests looked forward to expanding into these lands and perhaps Cuba and Central America as well. Prophetically, Ralph Waldo Emerson wrote that "Mexico will poison us", referring to the ensuing divisions around whether the newly conquered lands would end up slave or free. Northern "free soil" interests vigorously sought to curtail any further expansion of slave territory. The Compromise of 1850 balanced the admission of California as a free-soil state against stronger fugitive slave laws, a political settlement after four years of strife in the 1840s. But the states admitted following California were all free: Minnesota (1858), Oregon (1859), and Kansas (1861). In the Southern states, the question of the territorial expansion of slavery westward again became explosive. Both the South and the North drew the same conclusion: "The power to decide the question of slavery for the territories was the power to determine the future of slavery itself." By 1860, four doctrines had emerged to answer the question of federal control in the territories, and they all claimed they were sanctioned by the Constitution, implicitly or explicitly.
The first of these "conservative" theories, represented by the Constitutional Union Party, argued that the Missouri Compromise apportionment of territory north for free soil and south for slavery should become a Constitutional mandate. The Crittenden Compromise of 1860 was an expression of this view. The second doctrine, of Congressional preeminence, championed by Abraham Lincoln and the Republican Party, insisted that the Constitution did not bind legislators to a policy of balance—that slavery could be excluded in a territory, as it was done in the Northwest Ordinance of 1787, at the discretion of Congress; thus Congress could restrict human bondage, but never establish it. The ill-fated Wilmot Proviso announced this position in 1846. The Proviso was a pivotal moment in national politics, as it was the first time slavery had become a major congressional issue based on sectionalism instead of party lines. Its bipartisan support by northern Democrats and Whigs, and bipartisan opposition by southerners, was a dark omen of coming divisions. Senator Stephen A. Douglas proclaimed the third doctrine: territorial or "popular" sovereignty, which asserted that the settlers in a territory had the same rights as states in the Union to establish or disestablish slavery as a purely local matter. The Kansas–Nebraska Act of 1854 legislated this doctrine. In the Kansas Territory, years of pro- and anti-slavery violence and political conflict erupted; the U.S. House of Representatives voted to admit Kansas as a free state in early 1860, but its admission did not pass the Senate until January 1861, after the departure of Southern senators. The fourth doctrine, advocated by Mississippi Senator Jefferson Davis, was one of state sovereignty ("states' rights"), also known as the "Calhoun doctrine", named after the South Carolinian political theorist and statesman John C. Calhoun.
Rejecting the arguments for federal authority or self-government, state sovereignty would empower states to promote the expansion of slavery as part of the federal union under the U.S. Constitution. "States' rights" was an ideology formulated and applied as a means of advancing slave state interests through federal authority. As historian Thomas L. Krannawitter points out, the "Southern demand for federal slave protection represented a demand for an unprecedented expansion of Federal power." These four doctrines comprised the dominant ideologies presented to the American public on the matters of slavery, the territories, and the U.S. Constitution before the 1860 presidential election.

States' rights

A long-running dispute over the origin of the Civil War is to what extent states' rights triggered the conflict. The consensus among historians is that the Civil War was not fought over states' rights. But the issue is frequently referenced in popular accounts of the war and has much traction among Southerners. The South argued that just as each state had decided to join the Union, a state had the right to secede—leave the Union—at any time. Northerners (including pro-slavery President Buchanan) rejected that notion as opposed to the will of the Founding Fathers, who said they were setting up a perpetual union. Historian James McPherson points out that even if Confederates genuinely fought over states' rights, it boiled down to states' right to slavery, and he has written at length against states' rights and other non-slavery explanations. Before the Civil War, the Southern states used federal powers in enforcing and extending slavery at the national level, with the Fugitive Slave Act of 1850 and the Dred Scott v. Sandford decision. The faction that pushed for secession often infringed on states' rights. Because of the overrepresentation of pro-slavery factions in the federal government, many Northerners, even non-abolitionists, feared the Slave Power conspiracy.
Some Northern states resisted the enforcement of the Fugitive Slave Act. Historian Eric Foner stated the act "could hardly have been designed to arouse greater opposition in the North. It overrode numerous state and local laws and legal procedures and 'commanded' individual citizens to assist, when called upon, in capturing runaways." He continues, "It certainly did not reveal, on the part of slaveholders, sensitivity to states' rights." According to historian Paul Finkelman, "the southern states mostly complained that the northern states were asserting their states' rights and that the national government was not powerful enough to counter these northern claims." The Confederate constitution also "federally" required slavery to be legal in all Confederate states and claimed territories.

Sectionalism

Sectionalism resulted from the different economies, social structure, customs, and political values of the North and South. Regional tensions came to a head during the War of 1812, resulting in the Hartford Convention, which manifested Northern dissatisfaction with a foreign trade embargo that affected the industrial North disproportionately, the Three-Fifths Compromise, dilution of Northern power by new states, and a succession of Southern presidents. Sectionalism increased steadily between 1800 and 1860 as the North, which phased slavery out of existence, industrialized, urbanized, and built prosperous farms, while the deep South concentrated on plantation agriculture based on slave labor, together with subsistence agriculture for poor whites. In the 1840s and 1850s, the issue of accepting slavery (in the guise of rejecting slave-owning bishops and missionaries) split the nation's largest religious denominations (the Methodist, Baptist, and Presbyterian churches) into separate Northern and Southern denominations. Historians have debated whether economic differences between the mainly industrial North and the mainly agricultural South helped cause the war.
Most historians now disagree with the economic determinism of historian Charles A. Beard in the 1920s and emphasize that Northern and Southern economies were largely complementary. While socially different, the sections economically benefited each other.

Protectionism

Owners of slaves preferred low-cost manual labor with no mechanization. Northern manufacturing interests supported tariffs and protectionism while Southern planters demanded free trade. The Democrats in Congress, controlled by Southerners, wrote the tariff laws in the 1830s, 1840s, and 1850s, and kept reducing rates so that the 1857 rates were the lowest since 1816. The Republicans called for an increase in tariffs in the 1860 election. The increases were only enacted in 1861 after Southerners resigned their seats in Congress. The tariff issue was a Northern grievance. However, neo-Confederate writers have claimed it as a Southern grievance. In 1860–61, none of the groups that proposed compromises to head off secession raised the tariff issue. Pamphleteers North and South rarely mentioned the tariff.

Nationalism and honor

Nationalism was a powerful force in the early 19th century, with famous spokesmen such as Andrew Jackson and Daniel Webster. While practically all Northerners supported the Union, Southerners were split between those loyal to the entirety of the United States (called "Southern Unionists") and those loyal primarily to the Southern region and then the Confederacy. Perceived insults to Southern collective honor included the enormous popularity of Uncle Tom's Cabin and the actions of abolitionist John Brown in trying to incite a rebellion of slaves in 1859. While the South moved towards a Southern nationalism, leaders in the North were also becoming more nationally minded, and they rejected any notion of splitting the Union. The Republican national electoral platform of 1860 warned that Republicans regarded disunion as treason and would not tolerate it.
The South ignored the warnings; Southerners did not realize how ardently the North would fight to hold the Union together.

Lincoln's election

The election of Abraham Lincoln in November 1860 was the final trigger for secession. Efforts at compromise, including the Corwin Amendment and the Crittenden Compromise, failed. Southern leaders feared that Lincoln would stop the expansion of slavery and put it on a course toward extinction. When Lincoln won the presidential election in 1860, the South lost any hope of compromise. Jefferson Davis claimed that all the cotton states would secede from the Union. The Confederacy was formed of seven states of the Deep South: Alabama, Florida, Georgia, Louisiana, Mississippi, South Carolina, and Texas, in January and February 1861. They wrote the Confederate Constitution, which provided greater states' rights than the Constitution of the United States. Until elections were held, Davis was the provisional president. Lincoln was inaugurated on March 4, 1861. According to Lincoln, the American people had shown that they had been successful in establishing and administering a republic, but a third challenge faced the nation: maintaining a republic based on the people's vote, in the face of an attempt to destroy it.

Outbreak of the war

Secession crisis

The election of Lincoln provoked the legislature of South Carolina to call a state convention to consider secession. Before the war, South Carolina did more than any other Southern state to advance the notion that a state had the right to nullify federal laws, and even to secede from the United States. The convention unanimously voted to secede on December 20, 1860, and adopted a secession declaration. It argued for states' rights for slave owners in the South, but contained a complaint about states' rights in the North in the form of opposition to the Fugitive Slave Act, claiming that Northern states were not fulfilling their federal obligations under the Constitution.
The "cotton states" of Mississippi, Florida, Alabama, Georgia, Louisiana, and Texas followed suit, seceding in January and February 1861. Among the ordinances of secession passed by the individual states, those of three—Texas, Alabama, and Virginia—specifically mentioned the plight of the "slaveholding states" at the hands of Northern abolitionists. The rest make no mention of the slavery issue and are often brief announcements of the dissolution of ties by the legislatures. However, at least four states—South Carolina, Mississippi, Georgia, and Texas—also passed lengthy and detailed explanations of their causes for secession, all of which laid the blame squarely on the movement to abolish slavery and that movement's influence over the politics of the Northern states. The Southern states believed slaveholding was a constitutional right because of the Fugitive Slave Clause of the Constitution. These states agreed to form a new federal government, the Confederate States of America, on February 4, 1861. They took control of federal forts and other properties within their boundaries with little resistance from outgoing President James Buchanan, whose term ended on March 4, 1861. Buchanan said that the Dred Scott decision was proof that the South had no reason for secession, and that the Union "was intended to be perpetual", but that "The power by force of arms to compel a State to remain in the Union" was not among the "enumerated powers granted to Congress". One-quarter of the U.S. Army—the entire garrison in Texas—was surrendered in February 1861 to state forces by its commanding general, David E. Twiggs, who then joined the Confederacy. As Southerners resigned their seats in the Senate and the House, Republicans were able to pass projects that had been blocked by Southern senators before the war. 
These included the Morrill Tariff, land grant colleges (the Morrill Act), a Homestead Act, a transcontinental railroad (the Pacific Railroad Acts), the National Bank Act, the authorization of United States Notes by the Legal Tender Act of 1862, and the ending of slavery in the District of Columbia. The Revenue Act of 1861 introduced the income tax to help finance the war. In December 1860, the Crittenden Compromise was proposed to re-establish the Missouri Compromise line by constitutionally banning slavery in territories to the north of the line while guaranteeing it to the south. The adoption of this compromise likely would have prevented the secession of the Southern states, but Lincoln and the Republicans rejected it. Lincoln stated that any compromise that would extend slavery would in time bring down the Union. A pre-war February Peace Conference of 1861 met in Washington, proposing a solution similar to that of the Crittenden compromise; it was rejected by Congress. The Republicans proposed an alternative compromise to not interfere with slavery where it existed but the South regarded it as insufficient. Nonetheless, the remaining eight slave states rejected pleas to join the Confederacy following a two-to-one no-vote in Virginia's First Secessionist Convention on April 4, 1861. On March 4, 1861, Abraham Lincoln was sworn in as president. In his inaugural address, he argued that the Constitution was a more perfect union than the earlier Articles of Confederation and Perpetual Union, that it was a binding contract, and called any secession "legally void". He had no intent to invade Southern states, nor did he intend to end slavery where it existed, but said that he would use force to maintain possession of Federal property, including forts, arsenals, mints, and customhouses that had been seized by the Southern states. The government would make no move to recover post offices, and if resisted, mail delivery would end at state lines. 
Where popular conditions did not allow peaceful enforcement of Federal law, U.S. marshals and judges would be withdrawn. No mention was made of bullion lost from U.S. mints in Louisiana, Georgia, and North Carolina. He stated that it would be U.S. policy to only collect import duties at its ports; there could be no serious injury to the South to justify armed revolution during his administration. His speech closed with a plea for restoration of the bonds of union, famously calling on "the mystic chords of memory" binding the two regions. The Davis government of the new Confederacy sent three delegates to Washington to negotiate a peace treaty with the United States of America. Lincoln rejected any negotiations with Confederate agents because he claimed the Confederacy was not a legitimate government, and that making any treaty with it would be tantamount to recognition of it as a sovereign government. Lincoln instead attempted to negotiate directly with the governors of individual seceded states, whose administrations he continued to recognize. Complicating Lincoln's attempts to defuse the crisis were the actions of the new Secretary of State, William Seward. Seward had been Lincoln's main rival for the Republican presidential nomination. Shocked and deeply embittered by this defeat, Seward agreed to support Lincoln's candidacy only after he was guaranteed the office of Secretary of State, considered at that time to be by far the most powerful and important post after the presidency itself. Even in the early stages of Lincoln's presidency, Seward held little regard for the new chief executive due to his perceived inexperience, and therefore viewed himself as the de facto head of government, a "prime minister" behind the throne of Lincoln. In this role, Seward attempted to engage in unauthorized and indirect negotiations that failed.
However, President Lincoln was determined to hold all remaining Union-occupied forts in the Confederacy: Fort Monroe in Virginia; Fort Pickens, Fort Jefferson, and Fort Taylor in Florida; and Fort Sumter, located at the cockpit of secession in Charleston, South Carolina.

Battle of Fort Sumter

Fort Sumter is located in the middle of the harbor of Charleston, South Carolina. Its garrison had recently moved there to avoid incidents with local militias in the streets of the city. Lincoln told its commander, Major Robert Anderson, to hold on until fired upon. Confederate president Jefferson Davis ordered the surrender of the fort. Anderson gave a conditional reply, which the Confederate government rejected, and Davis ordered General P. G. T. Beauregard to attack the fort before a relief expedition could arrive. Beauregard bombarded Fort Sumter on April 12–13, forcing its capitulation. The attack on Fort Sumter enormously invigorated the North to the defense of American nationalism. On April 15, 1861, Lincoln called on all the states to send forces to recapture the fort and other federal properties. The scale of the rebellion appeared to be small, so he called for only 75,000 volunteers for 90 days. In western Missouri, local secessionists seized Liberty Arsenal. On May 3, 1861, Lincoln called for an additional 42,000 volunteers for a period of three years. Shortly after this, Virginia, Tennessee, Arkansas, and North Carolina seceded and joined the Confederacy. To reward Virginia, the Confederate capital was moved to Richmond.

Attitude of the border states

Maryland, Delaware, Missouri, and Kentucky were slave states that had divided loyalties to Northern and Southern businesses and family members. Some men enlisted in the Union Army and others in the Confederate Army. West Virginia separated from Virginia and was admitted to the Union on June 20, 1863. Maryland's territory surrounded the United States' capital of Washington, D.C., and could cut it off from the North.
It had numerous anti-Lincoln officials who tolerated anti-army rioting in Baltimore and the burning of bridges, both aimed at hindering the passage of troops to the South. Maryland's legislature voted overwhelmingly (53–13) to stay in the Union, but also rejected hostilities with its southern neighbors, voting to close Maryland's rail lines to prevent them from being used for war. Lincoln responded by establishing martial law and unilaterally suspending habeas corpus in Maryland, along with sending in militia units from the North. Lincoln rapidly took control of Maryland and the District of Columbia by seizing many prominent figures, including arresting one-third of the members of the Maryland General Assembly on the day it reconvened. All were held without trial, ignoring a ruling by Chief Justice of the U.S. Supreme Court Roger Taney, a Maryland native, that only Congress (and not the president) could suspend habeas corpus (Ex parte Merryman). Federal troops imprisoned a prominent Baltimore newspaper editor, Frank Key Howard, Francis Scott Key's grandson, after he criticized Lincoln in an editorial for ignoring the Chief Justice's ruling. In Missouri, an elected convention on secession voted decisively to remain within the Union. When pro-Confederate Governor Claiborne F. Jackson called out the state militia, it was attacked by federal forces under General Nathaniel Lyon, who chased the governor and the rest of the State Guard to the southwestern corner of the state (see also: Missouri secession). In the resulting vacuum, the convention on secession reconvened and took power as the Unionist provisional government of Missouri. Kentucky did not secede; for a time, it declared itself neutral. When Confederate forces entered the state in September 1861, neutrality ended and the state reaffirmed its Union status while maintaining slavery.
During a brief invasion by Confederate forces in 1861, Confederate sympathizers organized a secession convention, formed the shadow Confederate Government of Kentucky, inaugurated a governor, and gained recognition from the Confederacy. Its jurisdiction extended only as far as Confederate battle lines in the Commonwealth, and it went into exile for good after October 1862. After Virginia's secession, a Unionist government in Wheeling asked 48 counties to vote on an ordinance to create a new state on October 24, 1861. With a voter turnout of 34 percent, the statehood ordinance was approved, with 96 percent voting in favor. Twenty-four secessionist counties were included in the new state, and the ensuing guerrilla war engaged about 40,000 Federal troops for much of the war. Congress admitted West Virginia to the Union on June 20, 1863. West Virginia provided about 20,000–22,000 soldiers to both the Confederacy and the Union. A Unionist secession attempt occurred in East Tennessee, but was suppressed by the Confederacy, which arrested over 3,000 men suspected of being loyal to the Union. They were held without trial.

General features of the war

The Civil War was a contest marked by the ferocity and frequency of battle. Over four years, 237 named battles were fought, as were many more minor actions and skirmishes, which were often characterized by their bitter intensity and high casualties. In his book The American Civil War, John Keegan writes that "The American Civil War was to prove one of the most ferocious wars ever fought". In many cases, without geographic objectives, the only target for each side was the enemy's soldier.

Mobilization

As the first seven states began organizing a Confederacy in Montgomery, the entire U.S. Army numbered 16,000. However, Northern governors had begun to mobilize their militias. The Confederate Congress authorized the new nation up to 100,000 troops sent by governors as early as February.
By May, Jefferson Davis was pushing for 100,000 men under arms for one year or the duration, and that was answered in kind by the U.S. Congress. In the first year of the war, both sides had far more volunteers than they could effectively train and equip. After the initial enthusiasm faded, reliance on the cohort of young men who came of age every year and wanted to join was not enough. Both sides used a draft law—conscription—as a device to encourage or force volunteering; relatively few were drafted and served. The Confederacy passed a draft law in April 1862 for young men aged 18 to 35; overseers of slaves, government officials, and clergymen were exempt. The U.S. Congress followed in July, authorizing a militia draft within a state when it could not meet its quota with volunteers. European immigrants joined the Union Army in large numbers, including 177,000 born in Germany and 144,000 born in Ireland. When the Emancipation Proclamation went into effect in January 1863, ex-slaves were energetically recruited by the states and used to meet the state quotas. States and local communities offered higher and higher cash bonuses for white volunteers. Congress tightened the law in March 1863. Men selected in the draft could provide substitutes or, until mid-1864, pay commutation money. Many eligibles pooled their money to cover the cost of anyone drafted. Families used the substitute provision to select which man should go into the army and which should stay home. There was much evasion and overt resistance to the draft, especially in Catholic areas. The draft riot in New York City in July 1863 involved Irish immigrants who had been signed up as citizens to swell the vote of the city's Democratic political machine, not realizing it made them liable for the draft. Of the 168,649 men procured for the Union through the draft, 117,986 were substitutes, leaving only 50,663 who had their services conscripted. In both the North and South, the draft laws were highly unpopular. 
In the North, some 120,000 men evaded conscription, many of them fleeing to Canada, and another 280,000 soldiers deserted during the war. At least 100,000 Southerners deserted, or about 10 percent; Southern desertion was high because, according to one historian writing in 1991, the highly localized Southern identity meant that many Southern men had little investment in the outcome of the war, with individual soldiers caring more about the fate of their local area than any grand ideal. In the North, "bounty jumpers" enlisted to get the generous bonus, deserted, then went back to a second recruiting station under a different name to sign up again for a second bonus; 141 were caught and executed. From a tiny frontier force in 1860, the Union and Confederate armies had grown into the "largest and most efficient armies in the world" within a few years. Some European observers at the time dismissed them as amateur and unprofessional, but British historian John Keegan concluded that each outmatched the French, Prussian, and Russian armies of the time, and without the Atlantic, would have threatened any of them with defeat. Prisoners At the start of the Civil War, a system of paroles operated. Captives agreed not to fight until they were officially exchanged. Meanwhile, they were held in camps run by their army. They were paid, but they were not allowed to perform any military duties. The system of exchanges collapsed in 1863 when the Confederacy refused to exchange black prisoners. After that, about 56,000 of the 409,000 POWs died in prisons during the war, accounting for nearly 10 percent of the conflict's fatalities. Women Historian Elizabeth D. Leonard writes that, according to various estimates, between five hundred and one thousand women enlisted as soldiers on both sides of the war, disguised as men. Women also served as spies, resistance activists, nurses, and hospital personnel. 
Women served on the Union hospital ship Red Rover and nursed Union and Confederate troops at field hospitals. Mary Edwards Walker, the only woman ever to receive the Medal of Honor, served in the Union Army and was given the medal for her efforts to treat the wounded during the war. Her name was deleted from the Army Medal of Honor Roll in 1917 (along with over 900 other, male MOH recipients); however, it was restored in 1977. Naval tactics The small U.S. Navy of 1861 was rapidly enlarged; by 1865 it numbered 6,000 officers and 45,000 men, with 671 vessels totaling 510,396 tons. Its mission was to blockade Confederate ports, take control of the river system, defend against Confederate raiders on the high seas, and be ready for a possible war with the British Royal Navy. Meanwhile, the main riverine war was fought in the West, where a series of major rivers gave access to the Confederate heartland. The U.S. Navy eventually gained control of the Red, Tennessee, Cumberland, Mississippi, and Ohio rivers. In the East, the Navy shelled Confederate forts and provided support for coastal army operations. Modern navy evolves The Civil War occurred during the early stages of the industrial revolution. Many naval innovations emerged during this time, most notably the advent of the ironclad warship. It began when the Confederacy, knowing it had to counter the Union's naval superiority, responded to the Union blockade by building or converting more than 130 vessels, including twenty-six ironclads and floating batteries. Only half of these saw active service. Many were equipped with ram bows, creating "ram fever" among Union squadrons wherever they threatened. But in the face of overwhelming Union superiority and the Union's ironclad warships, they were unsuccessful. In addition to ocean-going warships coming up the Mississippi, the Union Navy used timberclads, tinclads, and armored gunboats. Shipyards at Cairo, Illinois, and St.
Louis built new boats or modified steamboats for action. The Confederacy experimented with the submarine H. L. Hunley, which did not work satisfactorily, and with building an ironclad ship, CSS Virginia, based on rebuilding the sunken Union ship USS Merrimack. On its first foray, on March 8, 1862, Virginia inflicted significant damage to the Union's wooden fleet, but the next day the first Union ironclad, USS Monitor, arrived to challenge it in the Chesapeake Bay. The resulting three-hour Battle of Hampton Roads was a draw, but it proved that ironclads were effective warships. Not long after the battle, the Confederacy was forced to scuttle the Virginia to prevent its capture, while the Union built many copies of the Monitor. Lacking the technology and infrastructure to build effective warships, the Confederacy attempted to obtain warships from Great Britain. However, this failed, because Great Britain had no interest in selling warships to a nation at war with a far stronger enemy, and doing so could sour relations with the U.S. Union blockade By early 1861, General Winfield Scott had devised the Anaconda Plan to win the war with as little bloodshed as possible. Scott argued that a Union blockade of the main ports would weaken the Confederate economy. Lincoln adopted parts of the plan, but he overruled Scott's caution about 90-day volunteers. Public opinion, however, demanded an immediate attack by the army to capture Richmond. In April 1861, Lincoln announced the Union blockade of all Southern ports; commercial ships could not get insurance and regular traffic ended. The South blundered in embargoing cotton exports in 1861 before the blockade was effective; by the time they realized the mistake, it was too late. "King Cotton" was dead, as the South could export less than 10 percent of its cotton. The blockade shut down the ten Confederate seaports with railheads that moved almost all the cotton, especially New Orleans, Mobile, and Charleston.
By June 1861, warships were stationed off the principal Southern ports, and a year later nearly 300 ships were in service. Blockade runners The Confederates began the war short on military supplies and in desperate need of large quantities of arms which the agrarian South could not provide. Arms manufacturers in the industrial North were restricted by an arms embargo that kept shipments from going to the South and ended all existing and future contracts. The Confederacy subsequently looked to foreign sources for its enormous military needs and sought out financiers and companies like S. Isaac, Campbell & Company and the London Armoury Company in Britain, who acted as purchasing agents for the Confederacy, connecting them with Britain's
Factory" and gathered about him a wide range of artists, writers, musicians, and underground celebrities. His work became popular and controversial. In December 1962, New York City's Museum of Modern Art hosted a symposium on pop art, during which artists such as Warhol were attacked for "capitulating" to consumerism. Critics were appalled by Warhol's open acceptance of market culture, which set the tone for his reception. Warhol had his second exhibition at the Stable Gallery in the spring of 1964, which featured sculptures of commercial boxes stacked and scattered throughout the space to resemble a warehouse. For the exhibition, Warhol custom-ordered wooden boxes and silkscreened graphics onto them. The sculptures—Brillo Box, Del Monte Peach Box, Heinz Tomato Ketchup Box, Kellogg's Cornflakes Box, Campbell's Tomato Juice Box, and Mott's Apple Juice Box—sold for $200 to $400 depending on the size of the box. A pivotal event was The American Supermarket exhibition at Paul Bianchini's Upper East Side gallery in the fall of 1964. The show was presented as a typical small supermarket environment, except that everything in it—the produce, canned goods, meat, posters on the wall, and so on—was created by prominent pop artists of the time, among them sculptor Claes Oldenburg, Mary Inman and Bob Watts. Warhol designed a $12 paper shopping bag—plain white with a red Campbell's soup can. His painting of a can of Campbell's soup cost $1,500, while autographed cans sold for three for $18, or $6 each. The exhibit was one of the first mass events that directly confronted the general public with both pop art and the perennial question of what art is. As an advertisement illustrator in the 1950s, Warhol used assistants to increase his productivity. Collaboration would remain a defining (and controversial) aspect of his working methods throughout his career; this was particularly true in the 1960s.
One of the most important collaborators during this period was Gerard Malanga. Malanga assisted the artist with the production of silkscreens, films, sculpture, and other works at "The Factory", Warhol's aluminum foil-and-silver-paint-lined studio on 47th Street (later moved to Broadway). Other members of Warhol's Factory crowd included Freddie Herko, Ondine, Ronald Tavel, Mary Woronov, Billy Name, and Brigid Berlin (from whom he apparently got the idea to tape-record his phone conversations). During the 1960s, Warhol also groomed a retinue of bohemian and counterculture eccentrics upon whom he bestowed the designation "superstars", including Nico, Joe Dallesandro, Edie Sedgwick, Viva, Ultra Violet, Holly Woodlawn, Jackie Curtis, and Candy Darling. These people all participated in the Factory films, and some—like Berlin—remained friends with Warhol until his death. Important figures in the New York underground art/cinema world, such as writer John Giorno and film-maker Jack Smith, also appear in Warhol films (many premiering at the New Andy Warhol Garrick Theatre and 55th Street Playhouse) of the 1960s, revealing Warhol's connections to a diverse range of artistic scenes during this time. Less well known was his support of and collaboration with several teenagers during this era who would achieve prominence later in life, including writer David Dalton, photographer Stephen Shore, and artist Bibbe Hansen (mother of pop musician Beck). Attempted murder: 1968 On June 3, 1968, radical feminist writer Valerie Solanas shot Warhol and Mario Amaya, art critic and curator, at Warhol's studio, The Factory. Before the shooting, Solanas had been a marginal figure in the Factory scene. In 1967 she had authored the SCUM Manifesto, a separatist feminist tract that advocated the elimination of men, and she appeared in the 1968 Warhol film I, a Man. Earlier on the day of the attack, Solanas had been turned away from the Factory after asking for the return of a script she had given to Warhol.
The script had apparently been misplaced. Amaya received only minor injuries and was released from the hospital later the same day. Warhol was seriously wounded by the attack and barely survived. He suffered physical effects for the rest of his life, including being required to wear a surgical corset. The shooting had a profound effect on Warhol's life and art. Solanas was arrested the day after the assault, after turning herself in to police. By way of explanation, she said that Warhol "had too much control over my life". She was subsequently diagnosed with paranoid schizophrenia and eventually sentenced to three years under the control of the Department of Corrections. After the shooting, the Factory scene heavily increased its security, and for many the "Factory 60s" ended ("The superstars from the old Factory days didn't come around to the new Factory much"). In 1969, Warhol and British journalist John Wilcock founded Interview magazine. 1970s Warhol had a retrospective exhibition at the Whitney Museum of American Art in 1971. His famous portrait of Chinese Communist leader Mao Zedong was created in 1973. In 1975, he published The Philosophy of Andy Warhol, in which he expressed the idea that "Making money is art, and working is art and good business is the best art." Compared to the success and scandal of Warhol's work in the 1960s, the 1970s were a much quieter decade, as he became more entrepreneurial. He socialized at various nightspots in New York City, including Max's Kansas City and, later in the 1970s, Studio 54. He was generally regarded as quiet, shy, and a meticulous observer. Art critic Robert Hughes called him "the white mole of Union Square". In 1977, Warhol was commissioned by art collector Richard Weisman to create Athletes, ten portraits of the leading athletes of the day.
According to Bob Colacello, Warhol devoted much of his time to rounding up new, rich patrons for portrait commissions—including Shah of Iran Mohammad Reza Pahlavi, his wife Empress Farah Pahlavi, his sister Princess Ashraf Pahlavi, Mick Jagger, Liza Minnelli, John Lennon, Diana Ross, and Brigitte Bardot. In 1979, reviewers disliked his exhibits of portraits of 1970s personalities and celebrities, calling them superficial, facile, and commercial, with no depth or indication of the significance of the subjects. That year, Warhol and his longtime friend Stuart Pivar founded the New York Academy of Art. 1980s Warhol had a re-emergence of critical and financial success in the 1980s, partially due to his affiliation and friendships with a number of prolific younger artists who were dominating the "bull market" of 1980s New York art: Jean-Michel Basquiat, Julian Schnabel, David Salle, and other so-called Neo-Expressionists, as well as members of the Transavantgarde movement in Europe, including Francesco Clemente and Enzo Cucchi. Warhol also earned street credibility: graffiti artist Fab Five Freddy paid homage to him by painting an entire train with Campbell's soup cans. At the same time, Warhol was criticized for becoming merely a "business artist". Critics panned his 1980 exhibition Ten Portraits of Jews of the Twentieth Century at the Jewish Museum in Manhattan, which Warhol—who was uninterested in Judaism and Jews—had described in his diary with the remark "They're going to sell." In hindsight, however, some critics have come to view Warhol's superficiality and commerciality as "the most brilliant mirror of our times," contending that "Warhol had captured something irresistible about the zeitgeist of American culture in the 1970s." Warhol also had an appreciation for intense Hollywood glamour. He once said: "I love Los Angeles. I love Hollywood. They're so beautiful. Everything's plastic, but I love plastic. I want to be plastic."
Warhol occasionally walked the fashion runways and did product endorsements, represented by Zoli Agency and later Ford Models. Before the 1984 Sarajevo Winter Olympics, he teamed with 15 other artists, including David Hockney and Cy Twombly, and contributed a Speed Skater print to the Art and Sport collection. The Speed Skater was used for the official Sarajevo Winter Olympics poster. In 1984, Vanity Fair commissioned Warhol to produce a portrait of Prince, in order to accompany an article that celebrated the success of Purple Rain and its accompanying movie. Referencing the many celebrity portraits produced by Warhol across his career, Orange Prince (1984) was created using a similar composition to the Marilyn "Flavors" series from 1962, among some of Warhol's first celebrity portraits. Prince is depicted in a pop color palette commonly used by Warhol, in bright orange with highlights of bright green and blue. The facial features and hair are screen-printed in black over the orange background. In September 1985, Warhol's joint exhibition with Basquiat, Paintings, opened to negative reviews at the Tony Shafrazi Gallery. That month, despite apprehension from Warhol, his silkscreen series Reigning Queens was shown at the Leo Castelli Gallery. In the Andy Warhol Diaries, Warhol wrote, "They were supposed to be only for Europe—nobody here cares about royalty and it'll be another bad review." In January 1987, Warhol traveled to Milan for the opening of his last exhibition, Last Supper, at the Palazzo delle Stelline. The next month, Warhol and jazz musician Miles Davis modeled for Koshin Satoh's fashion show at the Tunnel in New York City on February 17, 1987. Death Warhol died in Manhattan at 6:32 a.m. on February 22, 1987, at age 58. According to news reports, he had been making a good recovery from gallbladder surgery at New York Hospital before dying in his sleep from a sudden post-operative irregular heartbeat. 
Prior to his diagnosis and operation, Warhol delayed having his recurring gallbladder problems checked, as he was afraid to enter hospitals and see doctors. His family sued the hospital for inadequate care, saying that the arrhythmia was caused by improper care and water intoxication. The malpractice case was quickly settled out of court; Warhol's family received an undisclosed sum of money. Doctors had expected Warhol to survive the surgery, though a re-evaluation of the case about thirty years after his death showed many indications that the surgery was in fact riskier than originally thought. It was widely reported at the time that Warhol died after a "routine" surgery, though when considering factors such as his age, a family history of gallbladder problems, his previous gunshot wound, and his medical state in the weeks leading up to the procedure, the potential risk of death following the surgery appeared to have been significant. Warhol's brothers took his body back to Pittsburgh, where an open-coffin wake was held at the Thomas P. Kunsak Funeral Home. The solid bronze casket had gold-plated rails and white upholstery. Warhol was dressed in a black cashmere suit, a paisley tie, a platinum wig, and sunglasses. He was laid out holding a small prayer book and a red rose. The funeral liturgy was held at the Holy Ghost Byzantine Catholic Church on Pittsburgh's North Side. The eulogy was given by Monsignor Peter Tay. Yoko Ono and John Richardson were speakers. The coffin was covered with white roses and asparagus ferns. After the liturgy, the coffin was driven to St. John the Baptist Byzantine Catholic Cemetery in Bethel Park, a south suburb of Pittsburgh. At the grave, the priest said a brief prayer and sprinkled holy water on the casket.
Before the coffin was lowered, Warhol's friend and Interview advertising director Paige Powell dropped a copy of the magazine, an Interview T-shirt, and a bottle of the Estée Lauder perfume "Beautiful" into the grave. Warhol was buried next to his mother and father. A memorial service was held for Warhol at St. Patrick's Cathedral in Manhattan on April 1, 1987. Art works Paintings By the beginning of the 1960s, pop art was an experimental form that several artists were independently adopting; some of these pioneers, such as Roy Lichtenstein, would later become synonymous with the movement. Warhol, who would become famous as the "Pope of Pop", turned to this new style, where popular subjects could be part of the artist's palette. His early paintings show images taken from cartoons and advertisements, hand-painted with paint drips. Those drips emulated the style of successful abstract expressionists such as Willem de Kooning; his pop art portrait of Marilyn Monroe would become one of his best-known and most popular works. Warhol's first pop art paintings were displayed in April 1961, serving as the backdrop for the New York department store Bonwit Teller's window display—the same stage his pop art contemporaries Jasper Johns, James Rosenquist, and Robert Rauschenberg had also once graced. It was the gallerist Muriel Latow who came up with the ideas for both the soup cans and Warhol's dollar paintings. On November 23, 1961, Warhol wrote Latow a check for $50 which, according to the 2009 Warhol biography Pop: The Genius of Warhol, was payment for coming up with the idea of the soup cans as subject matter. For his first major exhibition, Warhol painted his famous cans of Campbell's soup, which he claimed to have had for lunch for most of his life. From these beginnings, he developed his later style and subjects.
Instead of working on a signature subject matter, as he started out to do, he worked more and more on a signature style, slowly eliminating the handmade from the artistic process. Warhol frequently used silk-screening; his later drawings were traced from slide projections. At the height of his fame as a painter, Warhol had several assistants who produced his silk-screen multiples, following his directions to make different versions and variations. Warhol produced both comic and serious works; his subject could be a soup can or an electric chair. Warhol used the same techniques—silkscreens, reproduced serially, and often painted with bright colors—whether he painted celebrities, everyday objects, or images of suicide, car crashes, and disasters, as in the 1962–63 Death and Disaster series. In 1979, Warhol was commissioned to paint a BMW M1 Group 4 racing version for the fourth installment of the BMW Art Car project. He had initially been asked to paint a BMW 320i in 1978, but the car model was changed and did not qualify for the race that year. Warhol was the first artist to paint directly onto the automobile himself instead of letting technicians transfer a scale-model design to the car. Reportedly, it took him only 23 minutes to paint the entire car. Race car drivers Hervé Poulain, Manfred Winkelhock, and Marcel Mignot drove the car at the 1979 24 Hours of Le Mans. Some of Warhol's work, as well as his own personality, has been described as Keatonesque. Warhol has been described as playing dumb to the media; he sometimes refused to explain his work. He suggested that all one needs to know about his work is "already there on the surface". His Rorschach inkblots are intended as pop comments on art and what art could be. His cow wallpaper (literally, wallpaper with a cow motif) and his oxidation paintings (canvases prepared with copper paint that was then oxidized with urine) are also noteworthy in this context.
Equally noteworthy is the way these works—and their means of production—mirrored the atmosphere at Warhol's New York "Factory". Biographer Bob Colacello has provided some details on the making of these "piss paintings". Warhol's 1982 portrait of Basquiat, Jean-Michel Basquiat, is a silkscreen over an oxidized copper "piss painting". After many years of silkscreen, oxidation, photography, etc., Warhol returned to painting with a brush in hand. In 1983, Warhol began collaborating with Basquiat and Clemente. Warhol and Basquiat created a series of more than 50 large collaborative works between 1984 and 1985. Despite criticism when these were first shown, Warhol called some of them "masterpieces," and they were influential for his later work. In 1984, Warhol was commissioned by collector and gallerist Alexander Iolas to produce work based on Leonardo da Vinci's The Last Supper for an exhibition at the old refectory of the Palazzo delle Stelline in Milan, opposite the Santa Maria delle Grazie, where Leonardo's mural can be seen. Warhol exceeded the demands of the commission and produced nearly 100 variations on the theme, mostly silkscreens and paintings, among them a collaborative sculpture with Basquiat, the Ten Punching Bags (Last Supper). The Milan exhibition, which opened in January 1987 with a set of 22 silkscreens, was the last for both the artist and the gallerist. The series of The Last Supper was seen by some as "arguably his greatest," but by others as "wishy-washy, religiose" and "spiritless". It is the largest series of religious-themed works by any U.S. artist. Artist Maurizio Cattelan has described how difficult it is to separate daily encounters from the art of Andy Warhol: "That's probably the greatest thing about Warhol: the way he penetrated and summarized our world, to the point that distinguishing between him and our everyday life is basically impossible, and in any case useless."
Warhol was an inspiration for Cattelan's magazine and photography compilations, such as Permanent Food, Charley, and Toilet Paper. In the period just before his death, Warhol was working on Cars, a series of paintings for Mercedes-Benz. Art market The value of Andy Warhol's work has been on a long upward trajectory since his death in 1987. In 2014, his works generated $569 million at auction, accounting for more than a sixth of the global art market. There have, however, been some dips. According to art dealer Dominique Lévy, "The Warhol trade moves something like a seesaw being pulled uphill: it rises and falls, but each new high and low is above the last one." She attributes this to the consistent influx of new collectors intrigued by Warhol. "At different moments, you've had different groups of collectors entering the Warhol market, and that resulted in peaks in demand, then satisfaction and a slow down," before the process repeats with another demographic or the next generation. In 1998, Orange Marilyn (1964), a depiction of Marilyn Monroe, sold for $17.3 million, which at the time set a new record as the highest price paid for a Warhol artwork. In 2007, one of Warhol's 1963 paintings of Elizabeth Taylor, Liz (Colored Liz), which was owned by actor Hugh Grant, sold for $23.7 million at Christie's. That year, Stefan Edlis and Gael Neeson sold Warhol's Turquoise Marilyn (1964) to financier Steven A. Cohen for $80 million. In May 2007, Green Car Crash (1963) sold for $71.1 million and Lemon Marilyn (1962) sold for $28 million at Christie's post-war and contemporary art auction. Also in 2007, Large Campbell's Soup Can (1964) was sold at a Sotheby's auction to a South American collector for $7.4 million. In November 2009, 200 One Dollar Bills (1962) sold at Sotheby's for $43.8 million. In 2008, Eight Elvises (1963) was sold by Annibale Berlingieri for $100 million to a private buyer. The work depicts Elvis Presley in a gunslinger pose.
It was first exhibited in 1963 at the Ferus Gallery in Los Angeles. Warhol made 22 versions of the Double Elvis, nine of which are held in museums. In May 2012, Double Elvis (Ferus Type) sold at auction at Sotheby's for $37 million. In November 2014, Triple Elvis (Ferus Type) sold for $81.9 million at Christie's. In May 2010, a purple self-portrait of Warhol from 1986 that was owned by fashion designer Tom Ford sold for $32.6 million at Sotheby's. In November 2010, Men in Her Life (1962), based on Elizabeth Taylor, sold for $63.4 million at Phillips de Pury, and Coca-Cola (4) (1962) sold for $35.3 million at Sotheby's. In May 2011, Warhol's first self-portrait, from 1963–64, sold for $38.4 million and a red self-portrait from 1986 sold for $27.5 million at Christie's. That month, Liz #5 (Early Colored Liz) sold for $26.9 million at Phillips. In November 2013, Warhol's rarely seen 1963 diptych Silver Car Crash (Double Disaster) sold at Sotheby's for $105.4 million, a new record for the artist, and Coca-Cola (3) (1962) sold for $57.3 million at Christie's. In May 2014, White Marilyn (1962) sold for $41 million at Christie's. In November 2014, Four Marlons (1964), which depicts Marlon Brando, sold for $69.6 million at Christie's. In May 2015, Silver Liz (diptych), painted in 1963–65, sold for $28 million and Colored Mona Lisa (1963) sold for $56.2 million at Christie's. In May 2017, Warhol's 1962 painting Big Campbell's Soup Can With Can Opener (Vegetable) sold for $27.5 million at Christie's. Collectors Among Warhol's early collectors and influential supporters were Emily and Burton Tremaine. Of the more than 15 artworks they purchased, Marilyn Diptych (now at Tate Modern, London) and A Boy for Meg (now at the National Gallery of Art in Washington, DC) came directly out of Warhol's studio in 1962.
One Christmas, Warhol left a small Head of Marilyn Monroe by the Tremaines' door at their New York apartment in gratitude for their support and encouragement. Works Filmography Warhol attended the 1962 premiere of La Monte Young's static composition Trio for Strings and subsequently created his famous series of static films. Filmmaker Jonas Mekas, who accompanied Warhol to the Trio premiere, claims Warhol's static films were directly inspired by the performance. Between 1963 and 1968, he made more than 60 films, plus some 500 short black-and-white "screen test" portraits of Factory visitors. One of his most famous films, Sleep, monitors poet John Giorno sleeping for six hours. The 35-minute film Blow Job is one continuous shot of the face of DeVeren Bookwalter supposedly receiving oral sex from filmmaker Willard Maas, although the camera never tilts down to see this. Another, Empire (1964), consists of eight hours of footage of the Empire State Building in New York City at dusk. The film Eat consists of a man eating a mushroom for 45 minutes. Batman Dracula is a 1964 film that was produced and directed by Warhol without the permission of DC Comics. It was screened only at his art exhibits. Warhol, a fan of the Batman series, intended the movie as an "homage"; it is considered the first appearance of a blatantly campy Batman. The film was thought to have been lost until scenes from the picture were shown at some length in the 2006 documentary Jack Smith and the Destruction of Atlantis. Warhol's 1965 film Vinyl is an adaptation of Anthony Burgess' popular dystopian novel A Clockwork Orange. Others record improvised encounters between Factory regulars such as Brigid Berlin, Viva, Edie Sedgwick, Candy Darling, Holly Woodlawn, Ondine, Nico, and Jackie Curtis. Legendary underground artist Jack Smith appears in the film Camp. His most popular and critically successful film was Chelsea Girls (1966).
The film was highly innovative in that it consisted of two 16 mm films being projected simultaneously, with two different stories being shown in tandem. From the projection booth, the sound would be raised for one film to elucidate that "story" while it was lowered for the other. The multiplication of images evoked Warhol's seminal silk-screen works of the early 1960s. Warhol was a fan of filmmaker Radley Metzger's work and commented that Metzger's film The Lickerish Quartet was "an outrageously kinky masterpiece". Blue Movie—a film in which Warhol superstar Viva makes love in bed with Louis Waldon, another Warhol superstar—was Warhol's last film as director. A seminal film in the Golden Age of Porn, it was controversial at the time for its frank approach to a sexual encounter. Blue Movie was publicly screened in New York City in 2005, for the first time in more than 30 years. In the wake of the 1968 shooting, a reclusive Warhol relinquished his personal involvement in filmmaking. His acolyte and assistant director, Paul Morrissey, took over the film-making chores for the Factory collective, steering Warhol-branded cinema toward more mainstream, narrative-based, B-movie exploitation fare with Flesh, Trash, and Heat. All of these films, including the later Andy Warhol's Dracula and Andy Warhol's Frankenstein, were far more mainstream than anything Warhol as a director had attempted. These latter "Warhol" films starred Joe Dallesandro—more of a Morrissey star than a true Warhol superstar. In the early 1970s, most of the films directed by Warhol were pulled out of circulation by Warhol and the people around him who ran his business. After Warhol's death, the films were slowly restored by the Whitney Museum and are occasionally projected at museums and film festivals. Few of the Warhol-directed films are available on video or DVD.
Music In the mid-1960s, Warhol adopted the band the Velvet Underground, making them a crucial element of the Exploding Plastic Inevitable multimedia performance art show. Warhol, with Paul Morrissey, acted as the band's manager, introducing them to Nico (who would perform with the band at Warhol's request). While managing the Velvet Underground, Warhol had them dress in all black to perform in front of movies that he was also presenting. In 1966, he "produced" their first album, The Velvet Underground & Nico, as well as providing its album art. His actual participation in the album's production amounted to simply paying for the studio time. After the band's first album, Warhol and band leader Lou Reed started to disagree more about the direction the band should take, and their artistic friendship ended. In 1989, after Warhol's death, Reed and John Cale reunited for the first time since 1972 to write, perform, record and release the concept album Songs for Drella, a tribute to Warhol. In October 2019, an audio tape of publicly unknown music by Reed, based on Warhol's 1975 book The Philosophy of Andy Warhol: From A to B and Back Again, was
not well received, by 1956, he was included in his first group exhibition at the Museum of Modern Art, New York. Warhol's "whimsical" ink drawings of shoe advertisements figured in some of his earliest showings at the Bodley Gallery in New York in 1957. Warhol habitually used the expedient of tracing photographs projected with an epidiascope. Working from prints by Edward Wallowitch, his "first boyfriend", Warhol subjected the photographs to a subtle transformation during his often cursory tracing of contours and hatching of shadows. Warhol used Wallowitch's photograph Young Man Smoking a Cigarette (c. 1956) for a 1958 design for a book cover he submitted to Simon and Schuster for the Walter Ross pulp novel The Immortal, and later used others for his series of paintings. With the rapid expansion of the record industry, RCA Records hired Warhol, along with another freelance artist, Sid Maurer, to design album covers and promotional materials. 1960s Warhol was an early adopter of the silk screen printmaking process as a technique for making paintings. In 1962, Warhol was taught silk screen printmaking techniques by Max Arthur Cohn at his graphic arts business in Manhattan. In his book Popism: The Warhol Sixties, Warhol writes: "When you do something exactly wrong, you always turn up something." In May 1962, Warhol was featured in an article in Time magazine with his painting Big Campbell's Soup Can with Can Opener (Vegetable) (1962), which initiated his most sustained motif, the Campbell's soup can. That painting became Warhol's first to be shown in a museum when it was exhibited at the Wadsworth Atheneum in Hartford in July 1962. On July 9, 1962, Warhol's exhibition opened at the Ferus Gallery in Los Angeles with Campbell's Soup Cans, marking his West Coast debut of pop art. In November 1962, Warhol had an exhibition at Eleanor Ward's Stable Gallery in New York.
The exhibit included the works Gold Marilyn, eight canvases from the classic "Marilyn" series (also called the "Flavor Marilyns"), Marilyn Diptych, 100 Soup Cans, 100 Coke Bottles, and 100 Dollar Bills. The Flavor Marilyns were selected from a group of fourteen canvases in the sub-series, each measuring 20 × 16 inches. Some of the canvases were named after various Life Savers candy flavors, such as Cherry, Lemon, Mint, Lavender, Grape, and Licorice Marilyn. The others are identified by their background colors. Gold Marilyn was bought by the architect Philip Johnson and donated to the Museum of Modern Art. At the exhibit, Warhol met poet John Giorno, who would star in Warhol's first film, Sleep, in 1964. It was during the 1960s that Warhol began to make paintings of iconic American objects such as dollar bills, mushroom clouds, electric chairs, Campbell's soup cans, and Coca-Cola bottles, celebrities such as Marilyn Monroe, Elvis Presley, Marlon Brando, Troy Donahue, Muhammad Ali, and Elizabeth Taylor, as well as newspaper headlines or photographs of police dogs attacking African-American protesters during the Birmingham campaign in the civil rights movement. During these years, he founded his studio, "The Factory", and gathered about him a wide range of artists, writers, musicians, and underground celebrities. His work became popular and controversial. Warhol remarked of Coca-Cola that every Coke is the same and every Coke is good, whether it is being drunk by the president or by a bum on the corner. In December 1962, New York City's Museum of Modern Art hosted a symposium on pop art, during which artists such as Warhol were attacked for "capitulating" to consumerism. Critics were appalled by Warhol's open acceptance of market culture, which set the tone for his reception. Warhol had his second exhibition at the Stable Gallery in the spring of 1964, which featured sculptures of commercial boxes stacked and scattered throughout the space to resemble a warehouse. For the exhibition, Warhol custom ordered wooden boxes and silkscreened graphics onto them.
The sculptures—Brillo Box, Del Monte Peach Box, Heinz Tomato Ketchup Box, Kellogg's Cornflakes Box, Campbell's Tomato Juice Box, and Mott's Apple Juice Box—sold for $200 to $400 depending on the size of the box. A pivotal event was The American Supermarket exhibition at Paul Bianchini's Upper East Side gallery in the fall of 1964. The show was presented as a typical small supermarket environment, except that everything in it—the produce, canned goods, meat, posters on the wall, and so on—was created by prominent pop artists of the time, among them Claes Oldenburg, Mary Inman, and Bob Watts. Warhol designed a $12 paper shopping bag—plain white with a red Campbell's soup can. His painting of a can of Campbell's soup cost $1,500, while autographed cans sold for three for $18 ($6 each). The exhibit was one of the first mass events that directly confronted the general public with both pop art and the perennial question of what art is. As an advertising illustrator in the 1950s, Warhol used assistants to increase his productivity. Collaboration would remain a defining (and controversial) aspect of his working methods throughout his career; this was particularly true in the 1960s. One of the most important collaborators during this period was Gerard Malanga. Malanga assisted the artist with the production of silkscreens, films, sculpture, and other works at "The Factory", Warhol's aluminum foil-and-silver-paint-lined studio on 47th Street (later moved to Broadway). Other members of Warhol's Factory crowd included Freddie Herko, Ondine, Ronald Tavel, Mary Woronov, Billy Name, and Brigid Berlin (from whom he apparently got the idea to tape-record his phone conversations). During the 1960s, Warhol also groomed a retinue of bohemian and counterculture eccentrics upon whom he bestowed the designation "superstars", including Nico, Joe Dallesandro, Edie Sedgwick, Viva, Ultra Violet, Holly Woodlawn, Jackie Curtis, and Candy Darling.
These people all participated in the Factory films, and some—like Berlin—remained friends with Warhol until his death. Important figures in the New York underground art/cinema world, such as writer John Giorno and film-maker Jack Smith, also appear in Warhol films (many premiering at the New Andy Warhol Garrick Theatre and 55th Street Playhouse) of the 1960s, revealing Warhol's connections to a diverse range of artistic scenes during this time. Less well known were his support of and collaboration with several teenagers during this era who would achieve prominence later in life, including writer David Dalton, photographer Stephen Shore and artist Bibbe Hansen (mother of pop musician Beck). Attempted murder: 1968 On June 3, 1968, radical feminist writer Valerie Solanas shot Warhol and Mario Amaya, art critic and curator, at Warhol's studio, The Factory. Before the shooting, Solanas had been a marginal figure in the Factory scene. In 1967 she had authored the SCUM Manifesto, a separatist feminist tract that advocated the elimination of men, and she appeared in the 1968 Warhol film I, a Man. Earlier on the day of the attack, Solanas had been turned away from the Factory after asking for the return of a script she had given to Warhol. The script had apparently been misplaced. Amaya received only minor injuries and was released from the hospital later the same day. Warhol was seriously wounded by the attack and barely survived. He suffered physical effects for the rest of his life, including being required to wear a surgical corset. The shooting had a profound effect on Warhol's life and art. Solanas was arrested the day after the assault, after turning herself in to police. By way of explanation, she said that Warhol "had too much control over my life". She was subsequently diagnosed with paranoid schizophrenia and eventually sentenced to three years under the control of the Department of Corrections.
After the shooting, the Factory scene heavily increased its security, and for many the "Factory 60s" ended ("The superstars from the old Factory days didn't come around to the new Factory much"). In 1969, Warhol and British journalist John Wilcock founded Interview magazine. 1970s Warhol had a retrospective exhibition at the Whitney Museum of American Art in 1971. His famous portrait of Chinese Communist leader Mao Zedong was created in 1973. In 1975, he published The Philosophy of Andy Warhol. An idea expressed in the book: "Making money is art, and working is art and good business is the best art." Compared to the success and scandal of Warhol's work in the 1960s, the 1970s were a much quieter decade, as he became more entrepreneurial. He socialized at various nightspots in New York City, including Max's Kansas City and, later in the 1970s, Studio 54. He was generally regarded as quiet, shy, and a meticulous observer. Art critic Robert Hughes called him "the white mole of Union Square". In 1977, Warhol was commissioned by art collector Richard Weisman to create Athletes, ten portraits of the leading athletes of the day. According to Bob Colacello, Warhol devoted much of his time to rounding up new, rich patrons for portrait commissions—including Shah of Iran Mohammad Reza Pahlavi, his wife Empress Farah Pahlavi, his sister Princess Ashraf Pahlavi, Mick Jagger, Liza Minnelli, John Lennon, Diana Ross, and Brigitte Bardot. In 1979, reviewers disliked his exhibits of portraits of 1970s personalities and celebrities, calling them superficial, facile and commercial, with no depth or indication of the significance of the subjects. That same year, Warhol and his longtime friend Stuart Pivar founded the New York Academy of Art.
1980s Warhol had a re-emergence of critical and financial success in the 1980s, partially due to his affiliation and friendships with a number of prolific younger artists who were dominating the "bull market" of 1980s New York art: Jean-Michel Basquiat, Julian Schnabel, David Salle and other so-called Neo-Expressionists, as well as members of the Transavantgarde movement in Europe, including Francesco Clemente and Enzo Cucchi. Warhol also earned street credibility: graffiti artist Fab Five Freddy paid homage to him by painting an entire train with Campbell's soup cans. Warhol was also criticized for becoming merely a "business artist". Critics panned his 1980 exhibition Ten Portraits of Jews of the Twentieth Century at the Jewish Museum in Manhattan, which Warhol—who was uninterested in Judaism and Jews—had described in his diary with the words "They're going to sell." In hindsight, however, some critics have come to view Warhol's superficiality and commerciality as "the most brilliant mirror of our times," contending that "Warhol had captured something irresistible about the zeitgeist of American culture in the 1970s." Warhol also had an appreciation for intense Hollywood glamour. He once said: "I love Los Angeles. I love Hollywood. They're so beautiful. Everything's plastic, but I love plastic. I want to be plastic." Warhol occasionally walked the fashion runways and did product endorsements, represented by Zoli Agency and later Ford Models. Before the 1984 Sarajevo Winter Olympics, he teamed with 15 other artists, including David Hockney and Cy Twombly, and contributed a Speed Skater print to the Art and Sport collection. The Speed Skater was used for the official Sarajevo Winter Olympics poster. In 1984, Vanity Fair commissioned Warhol to produce a portrait of Prince to accompany an article celebrating the success of Purple Rain and its accompanying movie.
Referencing the many celebrity portraits produced by Warhol across his career, Orange Prince (1984) was created using a composition similar to the Marilyn "Flavors" series from 1962, which was among Warhol's first celebrity portrait series. Prince is depicted in a pop color palette commonly used by Warhol, in bright orange with highlights of bright green and blue. The facial features and hair are screen-printed in black over the orange background. In September 1985, Warhol's joint exhibition with Basquiat, Paintings, opened to negative reviews at the Tony Shafrazi Gallery. That month, despite apprehension from Warhol, his silkscreen series Reigning Queens was shown at the Leo Castelli Gallery. In the Andy Warhol Diaries, Warhol wrote, "They were supposed to be only for Europe—nobody here cares about royalty and it'll be another bad review." In January 1987, Warhol traveled to Milan for the opening of his last exhibition, Last Supper, at the Palazzo delle Stelline. The next month, on February 17, 1987, Warhol and jazz musician Miles Davis modeled for Koshin Satoh's fashion show at the Tunnel in New York City. Death Warhol died in Manhattan at 6:32 a.m. on February 22, 1987, at age 58. According to news reports, he had been making a good recovery from gallbladder surgery at New York Hospital before dying in his sleep from a sudden post-operative irregular heartbeat. Prior to his diagnosis and operation, Warhol had delayed having his recurring gallbladder problems checked, as he was afraid to enter hospitals and see doctors. His family sued the hospital for inadequate care, saying that the arrhythmia was caused by improper care and water intoxication. The malpractice case was quickly settled out of court; Warhol's family received an undisclosed sum of money.
Shortly before Warhol's death, doctors expected him to survive the surgery, though a re-evaluation of the case about thirty years after his death showed many indications that the surgery was in fact riskier than originally thought. It was widely reported at the time that Warhol had died after a "routine" surgery, though when considering factors such as his age, a family history of gallbladder problems, his previous gunshot wound, and his medical state in the weeks leading up to the procedure, the potential risk of death following the surgery appears to have been significant. Warhol's brothers took his body back to Pittsburgh, where an open-coffin wake was held at the Thomas P. Kunsak Funeral Home. The solid bronze casket had gold-plated rails and white upholstery. Warhol was dressed in a black cashmere suit, a paisley tie, a platinum wig, and sunglasses. He was laid out holding a small prayer book and a red rose. The funeral liturgy was held at the Holy Ghost Byzantine Catholic Church on Pittsburgh's North Side. The eulogy was given by Monsignor Peter Tay. Yoko Ono and John Richardson were speakers. The coffin was covered with white roses and asparagus ferns. After the liturgy, the coffin was driven to St. John the Baptist Byzantine Catholic Cemetery in Bethel Park, a south suburb of Pittsburgh. At the grave, the priest said a brief prayer and sprinkled holy water on the casket. Before the coffin was lowered, Warhol's friend and Interview advertising director Paige Powell dropped a copy of the magazine, an Interview T-shirt, and a bottle of the Estée Lauder perfume "Beautiful" into the grave. Warhol was buried next to his mother and father. A memorial service for Warhol was held at St. Patrick's Cathedral in Manhattan on April 1, 1987.
Art works Paintings By the beginning of the 1960s, pop art was an experimental form that several artists were independently adopting; some of these pioneers, such as Roy Lichtenstein, would later become synonymous with the movement. Warhol, who would become famous as the "Pope of Pop", turned to this new style, where popular subjects could be part of the artist's palette. His early paintings show images taken from cartoons and advertisements, hand-painted with paint drips. Those drips emulated the style of successful abstract expressionists such as Willem de Kooning. Warhol's first pop art paintings were displayed in April 1961, serving as the backdrop for the window display of the New York department store Bonwit Teller. This was the same stage his pop art contemporaries Jasper Johns, James Rosenquist and Robert Rauschenberg had also once graced. It was the gallerist Muriel Latow who came up with the ideas for both the soup cans and Warhol's dollar paintings. On November 23, 1961, Warhol wrote Latow a check for $50 which, according to the 2009 Warhol biography Pop, The Genius of Warhol, was payment for coming up with the idea of the soup cans as subject matter. For his first major exhibition, Warhol painted his famous cans of Campbell's soup, which he claimed to have had for lunch for most of his life. From these beginnings, he developed his later style and subjects. Instead of working on a signature subject matter, as he started out to do, he worked more and more on a signature style, slowly eliminating the handmade from the artistic process. Warhol frequently used silk-screening; his later drawings were traced from slide projections. At the height of his fame as a painter, Warhol had several assistants who produced his silk-screen multiples, following his directions to make different versions and variations.
Warhol produced both comic and serious works; his subject could be a soup can or an electric chair. Warhol used the same techniques—silkscreens, reproduced serially, and often painted with bright colors—whether he painted celebrities, everyday objects, or images of suicide, car crashes, and disasters, as in the 1962–63 Death and Disaster series. In 1979, Warhol was commissioned to paint a BMW M1 Group 4 racing version for the fourth installment of the BMW Art Car project. He had initially been asked to paint a BMW 320i in 1978, but the car model was changed and it didn't qualify for the race that year. Warhol was the first artist to paint directly onto the automobile himself instead of letting technicians transfer a scale-model design to the car. Reportedly, it took him only 23 minutes to paint the entire car. Racecar drivers Hervé Poulain, Manfred Winkelhock and Marcel Mignot drove the car at the 1979 24 Hours of Le Mans. Some of Warhol's work, as well as his own personality, has been described as Keatonesque. Warhol has been described as playing dumb to the media. He sometimes refused to explain his work. He has suggested that all one needs to know about his work is "already there, on the surface". His Rorschach inkblots are intended as pop comments on art and what art could be. His cow wallpaper (literally, wallpaper with a cow motif) and his oxidation paintings (canvases prepared with copper paint that was then oxidized with urine) are also noteworthy in this context. Equally noteworthy is the way these works—and their means of production—mirrored the atmosphere at Andy's New York "Factory". Biographer Bob Colacello has provided some details on how Andy's "piss paintings" were made. Warhol's 1982 portrait of Basquiat, Jean-Michel Basquiat, is a silkscreen over an oxidized copper "piss painting". After many years of silkscreen, oxidation, photography, etc., Warhol returned to painting with a brush in hand. In 1983, Warhol began collaborating with Basquiat and Clemente.
Warhol and Basquiat created a series of more than 50 large collaborative works between 1984 and 1985. Despite criticism when these were first shown, Warhol called some of them "masterpieces," and they were influential for his later work. In 1984, Warhol was commissioned by collector and gallerist Alexander Iolas to produce work based on Leonardo da Vinci's The Last Supper for an exhibition at the old refectory of the Palazzo delle Stelline in Milan, opposite the Santa Maria delle Grazie, where Leonardo da Vinci's mural can be seen. Warhol exceeded the demands of the commission and produced nearly 100 variations on the theme, mostly silkscreens and paintings, among them a collaborative sculpture with Basquiat, the Ten Punching Bags (Last Supper). The Milan exhibition, which opened in January 1987 with a set of 22 silkscreens, was the last exhibition for both the artist and the gallerist. The series of The Last Supper was seen by some as "arguably his greatest," but by others as "wishy-washy, religiose" and "spiritless". It is the largest series of religious-themed works by any U.S. artist. Artist
With peace and security established in his dominions, Arslan convoked an assembly of the states and in 1066 declared his son Malik Shah I his heir and successor. With the hope of capturing Caesarea Mazaca, the capital of Cappadocia, he placed himself at the head of the Turkoman cavalry, crossed the Euphrates, and entered and invaded the city. Along with Nizam al-Mulk, he then marched into Armenia and Georgia, which he conquered in 1064. After a siege of 25 days, the Seljuks captured Ani, the capital city of Armenia. An account of the sack and massacres in Ani is given by the historian Sibt ibn al-Jawzi, who quotes an eyewitness account of the scene. Byzantine struggle En route to fight the Fatimids in Syria in 1068, Alp Arslan invaded the Byzantine Empire. The Emperor Romanos IV Diogenes, assuming command in person, met the invaders in Cilicia. In three arduous campaigns, the Turks were defeated in detail and driven across the Euphrates in 1070. The first two campaigns were conducted by the emperor himself, while the third was directed by Manuel Comnenos, great-uncle of Emperor Manuel Comnenos. During this time, Arslan gained the allegiance of Rashid al-Dawla Mahmud, the Mirdasid emir of Aleppo. In 1071, Romanos again took the field and advanced into Armenia with possibly 30,000 men, including a contingent of Cuman Turks as well as contingents of Franks and Normans, under Ursel de Baieul. Alp Arslan, who had moved his troops south to fight the Fatimids, quickly reversed course to meet the Byzantines. At Manzikert, on the Murat River north of Lake Van, the two forces fought the Battle of Manzikert. The Cuman mercenaries among the Byzantine forces immediately defected to the Turkic side. Seeing this, "the Western mercenaries rode off and took no part in the battle." In fact, Romanos was betrayed by general Andronikos Doukas, son of the Caesar (Romanos's stepson), who pronounced him dead and rode off with a large part of the Byzantine forces at a critical moment. The Byzantines were totally routed. Emperor Romanos IV was himself taken prisoner and conducted into the presence of Alp Arslan. After a ritual humiliation, Arslan treated him with generosity.
After peace terms were agreed to, Arslan dismissed the Emperor, loaded with presents and respectfully attended by a military guard. A celebrated exchange is said to have taken place when Romanos was brought as a prisoner before the Sultan. Alp Arslan's victories changed the balance in Near Asia completely in favour of the Seljuq Turks and Sunni Muslims. While the Byzantine Empire was to continue for nearly four more centuries, the victory at Manzikert signalled the beginning of Turkmen ascendancy in Anatolia. The victory at Manzikert became so popular among the Turks that later every noble family in Anatolia claimed to have had an ancestor who had fought on that day. Most historians, including Edward Gibbon, regard the defeat at Manzikert as the beginning of the end of the Eastern Roman Empire. State organization Alp Arslan's strength lay in the military realm. Domestic affairs were handled by his able vizier, Nizam al-Mulk, the founder of the administrative organization that characterized and strengthened the sultanate during the reigns of Alp Arslan and his son, Malik Shah. Military fiefs, governed by Seljuq princes, were established to provide support for the soldiery and to accommodate the nomadic Turks to the established Anatolian agricultural scene. This type of military fiefdom enabled the nomadic Turks to draw on the resources of the sedentary Persians, Turks, and other established cultures within the Seljuq realm, and allowed Alp Arslan to field a huge standing army without depending on tribute from conquest to pay his soldiers. He not only had enough food from his subjects to maintain his military, but the taxes collected from traders and merchants added to his coffers sufficiently to fund his continuous wars. Suleiman ibn Qutalmish, the son of the contender for Arslan's throne, was appointed governor of the north-western provinces and assigned to complete the invasion of Anatolia.
An explanation for this choice can only be conjectured from Ibn al-Athir's account of the battle between Alp-Arslan and Kutalmish, in which he writes that Alp-Arslan wept for the latter's death and greatly mourned the loss of his kinsman. Death After Manzikert, the dominion of Alp Arslan extended over much of western Asia. He soon prepared to march for the conquest of Turkestan, the original seat of his ancestors. With a powerful army he advanced to the banks of the Oxus. Before he could pass the river with safety, however, it was necessary to subdue certain fortresses, one of which was for several days vigorously defended by the Kurdish rebel Yusuf al-Kharezmi, or Yusuf al-Harani. Perhaps over-eager to press on against his Qarakhanid enemy, Alp Arslan gained the rebel's submission by promising him "perpetual ownership of his lands". When Yusuf al-Harani was brought before him, the Sultan ordered that he be shot, but before the archers could raise their bows Yusuf seized a knife and threw himself at Alp Arslan, striking three blows before being slain. Four days later on 24
the National Endowment for the Arts, the Motion Picture Association of America and the Ford Foundation. The original 22-member Board of Trustees included actor Gregory Peck as chairman and actor Sidney Poitier as vice-chairman, as well as director Francis Ford Coppola, film historian Arthur Schlesinger, Jr., lobbyist Jack Valenti, and other representatives from the arts and academia. The institute established a training program for filmmakers known then as the Center for Advanced Film Studies. Also created in the early years were a repertory film exhibition program at the Kennedy Center for the Performing Arts and the AFI Catalog of Feature Films, a scholarly source for American film history. The institute moved to its current eight-acre Hollywood campus in 1981. The film training program grew into the AFI Conservatory, an accredited graduate school. AFI moved its presentation of first-run and auteur films from the Kennedy Center to the historic AFI Silver Theatre and Cultural Center, which hosts the AFI DOCS film festival, making AFI the largest nonprofit film exhibitor in the world. AFI educates audiences and recognizes artistic excellence through its awards programs and 10 Top 10 lists. List of programs in brief AFI educational and cultural programs include:
AFI Awards – an honor celebrating the creative ensembles of the most outstanding motion picture and television programs of the year
AFI Catalog of Feature Films and AFI Archive – the written history of all feature films during the first 100 years of the art form, accessible free online
AFI Conservatory – a film school led by master filmmakers in a graduate-level program
AFI Directing Workshop for Women – a production-based training program committed to increasing the number of women working professionally in screen directing
AFI Life Achievement Award – a tradition since 1973, a high honor for a career in film
AFI 100 Years... series – television events and movie reference lists
AFI's two film festivals – AFI Fest in Los Angeles and AFI Docs in Washington, D.C. and Silver Spring, Maryland
AFI Silver Theatre and Cultural Center – a historic theater with year-round art house, first-run and classic film programming in Silver Spring, Maryland
American Film – a magazine that explores the art of new and historic film classics, now a blog on AFI.com
AFI Conservatory In 1969, the institute established the AFI Conservatory for Advanced Film Studies at Greystone, the Doheny Mansion in Beverly Hills, California. The first class included filmmakers Terrence Malick, Caleb Deschanel, and Paul Schrader. That program grew into the AFI Conservatory, an accredited graduate film school located in the hills above Hollywood, California, providing training in six filmmaking disciplines: cinematography, directing, editing, producing, production design, and screenwriting. Mirroring a professional production environment, Fellows collaborate to make more films than any other graduate-level program. Admission to the AFI Conservatory is highly selective, with a maximum of 140 graduates per year. In 2013, Emmy- and Oscar-winning director, producer, and screenwriter James L. Brooks (As Good as It Gets, Broadcast News, Terms of Endearment) joined as the artistic director of the AFI Conservatory, where he provides leadership for the film program. Brooks' artistic role at the AFI Conservatory has a rich legacy that includes Daniel Petrie, Jr., Robert Wise, and Frank Pierson. Award-winning director Bob Mandel served as dean of the AFI Conservatory for nine years. Jan Schuette took over as dean in 2014 and served until 2017. Film producer Richard Gladstein was dean from 2017 until 2019, when Susan Ruskin was appointed. Notable alumni AFI Conservatory's alumni have careers in film, television and on the web.
They have been recognized with all of the major industry awards—Academy Award, Emmy Award, guild awards, and the Tony Award. Among
the alumni of AFI are Andrea Arnold (Red Road, Fish Tank), Darren Aronofsky (Requiem for a Dream, Black Swan), Carl Colpaert (Gas Food Lodging, Hurlyburly, Swimming with Sharks), Doug Ellin (Entourage), Todd Field (In the Bedroom, Little Children), Jack Fisk (Badlands, Days of Heaven, There Will Be Blood), Carl Franklin (One False
important figure in his development was Yamamoto. Of his 24 films as A.D., he worked on 17 under Yamamoto, many of them comedies featuring the popular actor Ken'ichi Enomoto, known as "Enoken". Yamamoto nurtured Kurosawa's talent, promoting him directly from third assistant director to chief assistant director after a year. Kurosawa's responsibilities increased, and he worked at tasks ranging from stage construction and film development to location scouting, script polishing, rehearsals, lighting, dubbing, editing, and second-unit directing. In the last of Kurosawa's films as an assistant director for Yamamoto, Horse (Uma, 1941), Kurosawa took over most of the production, as his mentor was occupied with the shooting of another film. Yamamoto advised Kurosawa that a good director needed to master screenwriting. Kurosawa soon realized that the potential earnings from his scripts were much higher than what he was paid as an assistant director. He later wrote or co-wrote all his films, and frequently penned screenplays for other directors such as Satsuo Yamamoto's film, A Triumph of Wings (Tsubasa no gaika, 1942). This outside scriptwriting would serve Kurosawa as a lucrative sideline lasting well into the 1960s, long after he became famous. Wartime films and marriage (1942–1945) In the two years following the release of Horse in 1941, Kurosawa searched for a story he could use to launch his directing career. Towards the end of 1942, about a year after the Japanese attack on Pearl Harbor, novelist Tsuneo Tomita published his Musashi Miyamoto-inspired judo novel, Sanshiro Sugata, the advertisements for which intrigued Kurosawa. He bought the book on its publication day, devoured it in one sitting, and immediately asked Toho to secure the film rights. Kurosawa's initial instinct proved correct as, within a few days, three other major Japanese studios also offered to buy the rights. Toho prevailed, and Kurosawa began pre-production on his debut work as director. 
Shooting of Sanshiro Sugata began on location in Yokohama in December 1942. Production proceeded smoothly, but getting the completed film past the censors was an entirely different matter. The censorship office considered the work to be objectionably "British-American" by the standards of wartime Japan, and it was only through the intervention of director Yasujirō Ozu, who championed the film, that Sanshiro Sugata was finally accepted for release on March 25, 1943. (Kurosawa had just turned 33.) The movie became both a critical and commercial success. Nevertheless, the censorship office would later decide to cut out some 18 minutes of footage, much of which is now considered lost. He next turned to the subject of wartime female factory workers in The Most Beautiful, a propaganda film which he shot in a semi-documentary style in early 1944. To elicit realistic performances from his actresses, the director had them live in a real factory during the shoot, eat the factory food and call each other by their character names. He would use similar methods with his performers throughout his career. During production, the actress playing the leader of the factory workers, Yōko Yaguchi, was chosen by her colleagues to present their demands to the director. She and Kurosawa were constantly at odds, and it was through these arguments that the two paradoxically became close. They married on May 21, 1945, with Yaguchi two months pregnant (she never resumed her acting career), and the couple would remain together until her death in 1985. They had two children, both surviving Kurosawa: a son, Hisao, born December 20, 1945, who served as producer on some of his father's last projects, and Kazuko, a daughter, born April 29, 1954, who became a costume designer. Shortly before his marriage, Kurosawa was pressured by the studio, against his will, to direct a sequel to his debut film.
The often blatantly propagandistic Sanshiro Sugata Part II, which premiered in May 1945, is generally considered one of his weakest pictures. Kurosawa decided to write the script for a film that would be both censor-friendly and less expensive to produce. The Men Who Tread on the Tiger's Tail, based on the Kabuki play Kanjinchō and starring the comedian Enoken, with whom Kurosawa had often worked during his assistant director days, was completed in September 1945. By this time, Japan had surrendered and the occupation of Japan had begun. The new American censors interpreted the values allegedly promoted in the picture as overly "feudal" and banned the work. It was not released until 1952, the year another Kurosawa film, Ikiru, was also released. Ironically, while in production, the film had already been savaged by Japanese wartime censors as too Western and "democratic" (they particularly disliked the comic porter played by Enoken), so the movie most probably would not have seen the light of day even if the war had continued beyond its completion. Early postwar years to Red Beard (1946–65) First postwar works (1946–50) After the war, Kurosawa, influenced by the democratic ideals of the Occupation, sought to make films that would establish a new respect towards the individual and the self. The first such film, No Regrets for Our Youth (1946), inspired by both the 1933 Takigawa incident and the Hotsumi Ozaki wartime spy case, criticized Japan's prewar regime for its political oppression. Atypically for the director, the heroic central character is a woman, Yukie (Setsuko Hara), who, born into upper-middle-class privilege, comes to question her values in a time of political crisis. The original script had to be extensively rewritten and, because of its controversial theme and gender of its protagonist, the completed work divided critics. Nevertheless, it managed to win the approval of audiences, who turned variations on the film's title into a postwar catchphrase. 
His next film, One Wonderful Sunday, premiered in July 1947 to mixed reviews. It is a relatively uncomplicated and sentimental love story dealing with an impoverished postwar couple trying to enjoy, within the devastation of postwar Tokyo, their one weekly day off. The movie bears the influence of Frank Capra, D. W. Griffith and F. W. Murnau, each of whom was among Kurosawa's favorite directors. Another film released in 1947 with Kurosawa's involvement was the action-adventure thriller, Snow Trail, directed by Senkichi Taniguchi from Kurosawa's screenplay. It marked the debut of the intense young actor Toshiro Mifune. It was Kurosawa who, with his mentor Yamamoto, had intervened to persuade Toho to sign Mifune, during an audition in which the young man greatly impressed Kurosawa, but managed to alienate most of the other judges. Drunken Angel is often considered the director's first major work. Although the script, like all of Kurosawa's occupation-era works, had to go through rewrites due to American censorship, Kurosawa felt that this was the first film in which he was able to express himself freely. A gritty story of a doctor who tries to save a gangster (yakuza) with tuberculosis, it was also the first time that Kurosawa directed Mifune, who went on to play major roles in all but one of the director's next 16 films (the exception being Ikiru). While Mifune was not cast as the protagonist in Drunken Angel, his explosive performance as the gangster so dominates the drama that it shifts the focus from the title character, the alcoholic doctor played by Takashi Shimura, who had already appeared in several Kurosawa movies. However, Kurosawa did not want to smother the young actor's immense vitality, and Mifune's rebellious character electrified audiences in much the way that Marlon Brando's defiant stance would startle American film audiences a few years later.
The film premiered in Tokyo in April 1948 to rave reviews and was chosen by the prestigious Kinema Junpo critics poll as the best film of its year, the first of three Kurosawa movies to be so honored. Kurosawa, with producer Sōjirō Motoki and fellow directors and friends Kajiro Yamamoto, Mikio Naruse and Senkichi Taniguchi, formed a new independent production unit called Film Art Association (Eiga Geijutsu Kyōkai). For this organization's debut work, and first film for Daiei studios, Kurosawa turned to a contemporary play by Kazuo Kikuta and, together with Taniguchi, adapted it for the screen. The Quiet Duel starred Toshiro Mifune as an idealistic young doctor struggling with syphilis, a deliberate attempt by Kurosawa to break the actor away from being typecast as gangsters. Released in March 1949, it was a box office success, but is generally considered one of the director's lesser achievements. His second film of 1949, also produced by Film Art Association and released by Shintoho, was Stray Dog. It is a detective movie (perhaps the first important Japanese film in that genre) that explores the mood of Japan during its painful postwar recovery through the story of a young detective, played by Mifune, and his fixation on the recovery of his handgun, which was stolen by a penniless war veteran who proceeds to use it to rob and murder. Adapted from an unpublished novel by Kurosawa in the style of a favorite writer of his, Georges Simenon, it was the director's first collaboration with screenwriter Ryuzo Kikushima, who would later help to script eight other Kurosawa films. A famous, virtually wordless sequence, lasting over eight minutes, shows the detective, disguised as an impoverished veteran, wandering the streets in search of the gun thief; it employed actual documentary footage of war-ravaged Tokyo neighborhoods shot by Kurosawa's friend, Ishirō Honda, the future director of Godzilla. 
The film is considered a precursor to the contemporary police procedural and buddy cop film genres. Scandal, released by Shochiku in April 1950, was inspired by the director's personal experiences with, and anger towards, Japanese yellow journalism. The work is an ambitious mixture of courtroom drama and social problem film about free speech and personal responsibility, but even Kurosawa regarded the finished product as dramatically unfocused and unsatisfactory, and almost all critics agree. However, it would be Kurosawa's second film of 1950, Rashomon, that would ultimately win him, and Japanese cinema, a whole new international audience. International recognition (1950–58) After finishing Scandal, Kurosawa was approached by Daiei studios to make another film for them. Kurosawa picked a script by an aspiring young screenwriter, Shinobu Hashimoto, who would eventually work on nine of his films. Their first joint effort was based on Ryūnosuke Akutagawa's experimental short story "In a Grove", which recounts the murder of a samurai and the rape of his wife from various different and conflicting points-of-view. Kurosawa saw potential in the script, and with Hashimoto's help, polished and expanded it and then pitched it to Daiei, who were happy to accept the project due to its low budget. The shooting of Rashomon began on July 7, 1950, and, after extensive location work in the primeval forest of Nara, wrapped on August 17. Just one week was spent in hurried post-production, hampered by a studio fire, and the finished film premiered at Tokyo's Imperial Theatre on August 25, expanding nationwide the following day. The movie was met by lukewarm reviews, with many critics puzzled by its unique theme and treatment, but it was nevertheless a moderate financial success for Daiei. Kurosawa's next film, for Shochiku, was The Idiot, an adaptation of the novel by the director's favorite writer, Fyodor Dostoyevsky. 
The story is relocated from Russia to Hokkaido, but otherwise adheres closely to the original, a fact seen by many critics as detrimental to the work. A studio-mandated edit shortened it from Kurosawa's original cut of 265 minutes to just 166 minutes, making the resulting narrative exceedingly difficult to follow. The severely edited film version is widely considered to be one of the director's least successful works, and the original full-length version no longer exists. Contemporary reviews of the much-shortened version were very negative, but the film was a moderate success at the box office, largely because of the popularity of one of its stars, Setsuko Hara. Meanwhile, unbeknownst to Kurosawa, Rashomon had been entered in the Venice Film Festival, due to the efforts of Giuliana Stramigioli, a Japan-based representative of an Italian film company, who had seen and admired the movie and convinced Daiei to submit it. On September 10, 1951, Rashomon was awarded the festival's highest prize, the Golden Lion, shocking not only Daiei but the international film world, which at the time was largely unaware of Japan's decades-old cinematic tradition. After Daiei briefly exhibited a subtitled print of the film in Los Angeles, RKO purchased distribution rights to Rashomon in the United States. The company was taking a considerable gamble. It had put out only one prior subtitled film in the American market, and the only previous Japanese talkie commercially released in New York had been Mikio Naruse's comedy, Wife! Be Like a Rose, in 1937: a critical and box-office flop. However, Rashomon's commercial run, greatly helped by strong reviews from critics and even the columnist Ed Sullivan, earned $35,000 in its first three weeks at a single New York theatre, an almost unheard-of sum at the time. This success in turn led to a vogue in America and the West for Japanese movies throughout the 1950s, replacing the enthusiasm for Italian neorealist cinema.
By the end of 1952, Rashomon had been released in Japan, the United States, and most of Europe. Among the Japanese film-makers whose work, as a result, began to win festival prizes and commercial release in the West were Kenji Mizoguchi (The Life of Oharu, Ugetsu, Sansho the Bailiff) and, somewhat later, Yasujirō Ozu (Tokyo Story, An Autumn Afternoon)—artists highly respected in Japan but, before this period, almost totally unknown in the West. Kurosawa's growing reputation among Western audiences in the 1950s would make them more receptive to later generations of Japanese film-makers, ranging from Kon Ichikawa, Masaki Kobayashi, Nagisa Oshima and Shohei Imamura to Juzo Itami, Takeshi Kitano and Takashi Miike. His career boosted by his sudden international fame, Kurosawa, now reunited with his original film studio, Toho (which would go on to produce his next 11 films), set to work on his next project, Ikiru. The movie stars Takashi Shimura as a cancer-ridden Tokyo bureaucrat, Watanabe, on a final quest for meaning before his death. For the screenplay, Kurosawa brought in Hashimoto as well as writer Hideo Oguni, who would go on to co-write twelve Kurosawa films. Despite the work's grim subject matter, the screenwriters took a satirical approach, which some have compared to the work of Brecht, to both the bureaucratic world of its hero and the U.S. cultural colonization of Japan. (American pop songs figure prominently in the film.) Because of this strategy, the film-makers are usually credited with saving the picture from the kind of sentimentality common to dramas about characters with terminal illnesses. Ikiru opened in October 1952 to rave reviews—it won Kurosawa his second Kinema Junpo "Best Film" award—and enormous box office success. It remains the most acclaimed of all the artist's films set in the modern era.
In December 1952, Kurosawa took his Ikiru screenwriters, Shinobu Hashimoto and Hideo Oguni, for a forty-five-day secluded residence at an inn to create the screenplay for his next movie, Seven Samurai. The ensemble work was Kurosawa's first proper samurai film, the genre for which he would become most famous. The simple story, about a poor farming village in Sengoku period Japan that hires a group of samurai to defend it against an impending attack by bandits, was given a full epic treatment, with a huge cast (largely consisting of veterans of previous Kurosawa productions) and meticulously detailed action, stretching out to almost three-and-a-half hours of screen time. Three months were spent in pre-production and a month in rehearsals. Shooting took up 148 days spread over almost a year, interrupted by production and financing troubles and Kurosawa's health problems. The film finally opened in April 1954, half a year behind its original release date and about three times over budget, making it at the time the most expensive Japanese film ever made. (However, by Hollywood standards, it was a quite modestly budgeted production, even for that time.) The film received positive critical reaction and became a big hit, quickly making back the money invested in it and providing the studio with a product that they could, and did, market internationally—though with extensive edits. Over time—and with the theatrical and home video releases of the uncut version—its reputation has steadily grown. It is now regarded by some commentators as the greatest Japanese film ever made, and in 1979, a poll of Japanese film critics also voted it the best Japanese film ever made. In the most recent (2012) version of the widely respected British Film Institute (BFI) Sight & Sound "Greatest Films of All Time" poll, Seven Samurai placed 17th among all films from all countries in both the critics' and the directors' polls, receiving a place in the Top Ten lists of 48 critics and 22 directors. 
In 1954, nuclear tests in the Pacific were causing radioactive rainstorms in Japan and one particular incident in March had exposed a Japanese fishing boat to nuclear fallout, with disastrous results. It is in this anxious atmosphere that Kurosawa's next film, Record of a Living Being, was conceived. The story concerned an elderly factory owner (Toshiro Mifune) so terrified of the prospect of a nuclear attack that he becomes determined to move his entire extended family (both legal and extra-marital) to what he imagines is the safety of a farm in Brazil. Production went much more smoothly than the director's previous film, but a few days before shooting ended, Kurosawa's composer, collaborator and close friend Fumio Hayasaka died (of tuberculosis) at the age of 41. The film's score was finished by Hayasaka's student, Masaru Sato, who would go on to score all of Kurosawa's next eight films. Record of a Living Being opened in November 1955 to mixed reviews and muted audience reaction, becoming the first Kurosawa film to lose money during its original theatrical run. Today, it is considered by many to be among the finest films dealing with the psychological effects of the global nuclear stalemate. Kurosawa's next project, Throne
"Young Guy" (Wakadaishō) series of musical comedies, so signing him to appear in the film virtually guaranteed Kurosawa strong box-office returns. The shoot, the film-maker's longest ever, lasted well over a year (after five months of pre-production), and wrapped in spring 1965, leaving the director, his crew and his actors exhausted. Red Beard premiered in April 1965, becoming the year's highest-grossing Japanese production and the third (and last) Kurosawa film to top the prestigious Kinema Junpo yearly critics poll. It remains one of Kurosawa's best-known and most-loved works in his native country. Outside Japan, critics have been much more divided. Most commentators concede its technical merits and some praise it as among Kurosawa's best, while others insist that it lacks complexity and genuine narrative power, with still others claiming that it represents a retreat from the artist's previous commitment to social and political change. The film marked something of an end of an era for its creator. The director himself recognized this at the time of its release, telling critic Donald Richie that a cycle of some kind had just come to an end and that his future films and production methods would be different. His prediction proved quite accurate. Beginning in the late 1950s, television began increasingly to dominate the leisure time of the formerly large and loyal Japanese cinema audience. And as film company revenues dropped, so did their appetite for risk—particularly the risk represented by Kurosawa's costly production methods. Red Beard also marked the midway point, chronologically, in the artist's career. During his previous twenty-nine years in the film industry (which includes his five years as assistant director), he had directed twenty-three films, while during the remaining twenty-eight years, for many and complex reasons, he would complete only seven more. Also, for reasons never adequately explained, Red Beard would be his final film starring Toshiro Mifune.
Yu Fujiki, an actor who worked on The Lower Depths, observed, regarding the closeness of the two men on the set, "Mr. Kurosawa's heart was in Mr. Mifune's body." Donald Richie has described the rapport between them as a unique "symbiosis". Hollywood ambitions to last films (1966–98) Hollywood detour (1966–68) When Kurosawa's exclusive contract with Toho came to an end in 1966, the 56-year-old director was seriously contemplating change. Observing the troubled state of the domestic film industry, and having already received dozens of offers from abroad, the idea of working outside Japan appealed to him as never before. For his first foreign project, Kurosawa chose a story based on a Life magazine article. The Embassy Pictures action thriller, to be filmed in English and called simply Runaway Train, would have been his first in color. But the language barrier proved a major problem, and the English version of the screenplay was not even finished by the time filming was to begin in autumn 1966. The shoot, which required snow, was moved to autumn 1967, then canceled in 1968. Almost two decades later, another foreign director working in Hollywood, Andrei Konchalovsky, finally made Runaway Train (1985), though from a new script loosely based on Kurosawa's. The director meanwhile had become involved in a much more ambitious Hollywood project. Tora! Tora! Tora!, produced by 20th Century Fox and Kurosawa Production, would be a portrayal of the Japanese attack on Pearl Harbor from both the American and the Japanese points of view, with Kurosawa helming the Japanese half and an Anglophonic film-maker directing the American half. He spent several months working on the script with Ryuzo Kikushima and Hideo Oguni, but very soon the project began to unravel. The director of the American sequences turned out not to be David Lean, as originally planned, but American Richard Fleischer. 
The budget was also cut, and the screen time allocated for the Japanese segment would now be no longer than 90 minutes—a major problem, considering that Kurosawa's script ran over four hours. After numerous revisions with the direct involvement of Darryl Zanuck, a more or less finalized screenplay was agreed upon in May 1968. Shooting began in early December, but Kurosawa would last only a little over three weeks as director. He struggled to work with an unfamiliar crew and the requirements of a Hollywood production, while his working methods puzzled his American producers, who ultimately concluded that the director must be mentally ill. Kurosawa was examined at Kyoto University Hospital by a neuropsychologist, Dr. Murakami, whose diagnosis of neurasthenia was forwarded to Darryl Zanuck and Richard Zanuck at Fox studios. It stated: "He is suffering from disturbance of sleep, agitated with feelings of anxiety and in manic excitement caused by the above mentioned illness. It is necessary for him to have rest and medical treatment for more than two months." On Christmas Eve 1968, the Americans announced that Kurosawa had left the production due to "fatigue", effectively firing him. He was ultimately replaced, for the film's Japanese sequences, with two directors, Kinji Fukasaku and Toshio Masuda. Tora! Tora! Tora!, finally released to unenthusiastic reviews in September 1970, was, as Donald Richie put it, an "almost unmitigated tragedy" in Kurosawa's career. He had spent years of his life on a logistically nightmarish project to which he ultimately did not contribute a foot of film shot by himself. (He had his name removed from the credits, though the script used for the Japanese half was still his and his co-writers'.) He became estranged from his longtime collaborator, writer Ryuzo Kikushima, and never worked with him again.
The project had inadvertently exposed corruption in his own production company (a situation reminiscent of his own movie, The Bad Sleep Well). His very sanity had been called into question. Worst of all, the Japanese film industry—and perhaps the man himself—began to suspect that he would never make another film. A difficult decade (1969–77) Knowing that his reputation was at stake following the much publicised Tora! Tora! Tora! debacle, Kurosawa moved quickly to a new project to prove he was still viable. To his aid came friends and famed directors Keisuke Kinoshita, Masaki Kobayashi and Kon Ichikawa, who together with Kurosawa established in July 1969 a production company called the Club of the Four Knights (Yonki no kai). Although the plan was for the four directors to create a film each, it has been suggested that the real motivation for the other three directors was to make it easier for Kurosawa to successfully complete a film, and therefore find his way back into the business. The first project proposed and worked on was a period film to be called Dora-heita, but when this was deemed too expensive, attention shifted to Dodesukaden, an adaptation of yet another Shūgorō Yamamoto work, again about the poor and destitute. The film was shot quickly (by Kurosawa's standards) in about nine weeks, with Kurosawa determined to show he was still capable of working quickly and efficiently within a limited budget. For his first work in color, the dynamic editing and complex compositions of his earlier pictures were set aside, with the artist focusing on the creation of a bold, almost surreal palette of primary colors, in order to reveal the toxic environment in which the characters live. It was released in Japan in October 1970, but though a minor critical success, it was greeted with audience indifference. The picture lost money and caused the Club of the Four Knights to dissolve. 
Initial reception abroad was somewhat more favorable, but Dodesukaden has since been typically considered an interesting experiment not comparable to the director's best work. After struggling through the production of Dodesukaden, Kurosawa turned to television work the following year for the only time in his career with Song of the Horse, a documentary about thoroughbred race horses. It featured a voice-over narrated by a fictional man and a child (voiced by the same actors as the beggar and his son in Dodesukaden). It is the only documentary in Kurosawa's filmography; the small crew included his frequent collaborator Masaru Sato, who composed the music. Song of the Horse is also unique in Kurosawa's oeuvre in that it includes an editor's credit, suggesting that it is the only Kurosawa film that he did not cut himself. Unable to secure funding for further work and allegedly suffering from health problems, Kurosawa apparently reached the breaking point: on December 22, 1971, he slit his wrists and throat multiple times. The suicide attempt proved unsuccessful and the director's health recovered fairly quickly, with Kurosawa now taking refuge in domestic life, uncertain if he would ever direct another film. In early 1973, the Soviet studio Mosfilm approached the film-maker to ask if he would be interested in working with them. Kurosawa proposed an adaptation of Russian explorer Vladimir Arsenyev's autobiographical work Dersu Uzala. The book, about a Goldi hunter who lives in harmony with nature until destroyed by encroaching civilization, was one that he had wanted to make since the 1930s. In December 1973, the 63-year-old Kurosawa set off for the Soviet Union with four of his closest aides, beginning a year-and-a-half stay in the country. Shooting began in May 1974 in Siberia, with filming in exceedingly harsh natural conditions proving very difficult and demanding. 
The picture wrapped in April 1975, with a thoroughly exhausted and homesick Kurosawa returning to Japan and his family in June. Dersu Uzala had its world premiere in Japan on August 2, 1975, and did well at the box office. While critical reception in Japan was muted, the film was better reviewed abroad, winning the Golden Prize at the 9th Moscow International Film Festival, as well as an Academy Award for Best Foreign Language Film. Today, critics remain divided over the film: some see it as an example of Kurosawa's alleged artistic decline, while others count it among his finest works. Although proposals for television projects were submitted to him, he had no interest in working outside the film world. Nevertheless, the hard-drinking director did agree to appear in a series of television ads for Suntory whiskey, which aired in 1976. While fearing that he might never be able to make another film, the director nevertheless continued working on various projects, writing scripts and creating detailed illustrations, intending to leave behind a visual record of his plans in case he would never be able to film his stories.

Two epics (1978–86)

In 1977, American director George Lucas released Star Wars, a wildly successful science fiction film influenced by Kurosawa's The Hidden Fortress, among other works. Lucas, like many other New Hollywood directors, revered Kurosawa and considered him a role model, and was shocked to discover that the Japanese film-maker was unable to secure financing for any new work. The two met in San Francisco in July 1978 to discuss the project Kurosawa considered most financially viable: Kagemusha, the epic story of a thief hired as the double of a medieval Japanese lord of a great clan.
Lucas, enthralled by the screenplay and Kurosawa's illustrations, leveraged his influence over 20th Century Fox to coerce the studio that had fired Kurosawa just ten years earlier to produce Kagemusha, then recruited fellow fan Francis Ford Coppola as co-producer. Production began the following April, with Kurosawa in high spirits. Shooting lasted from June 1979 through March 1980 and was plagued with problems, not the least of which was the firing of the original lead actor, Shintaro Katsu—creator of the very popular Zatoichi character—due to an incident in which the actor insisted, against the director's wishes, on videotaping his own performance. (He was replaced by Tatsuya Nakadai, in his first of two consecutive leading roles in a Kurosawa movie.) The film was completed only a few weeks behind schedule and opened in Tokyo in April 1980. It quickly became a massive hit in Japan. The film was also a critical and box office success abroad, winning the coveted Palme d'Or at the 1980 Cannes Film Festival in May, though some critics, then and now, have faulted the film for its alleged coldness. Kurosawa spent much of the rest of the year in Europe and America promoting Kagemusha, collecting awards and accolades, and exhibiting as art the drawings he had made to serve as storyboards for the film. The international success of Kagemusha allowed Kurosawa to proceed with his next project, Ran, another epic in a similar vein. The script, partly based on William Shakespeare's King Lear, depicted a ruthless, bloodthirsty daimyō (warlord), played by Tatsuya Nakadai, who, after foolishly banishing his one loyal son, surrenders his kingdom to his other two sons, who then betray him, thus plunging the entire kingdom into war. As Japanese studios still felt wary about producing another film that would rank among the most expensive ever made in the country, international help was again needed. 
This time it came from French producer Serge Silberman, who had produced Luis Buñuel's final movies. Filming did not begin until December 1983 and lasted more than a year. In January 1985, production of Ran was halted as Kurosawa's 64-year-old wife Yōko fell ill. She died on February 1. Kurosawa returned to finish his film and Ran premiered at the Tokyo Film Festival on May 31, with a wide release the next day. The film was a moderate financial success in Japan, but a larger one abroad and, as he had done with Kagemusha, Kurosawa embarked on a trip to Europe and America, where he attended the film's premieres in September and October. Ran won several awards in Japan, but was not quite as honored there as many of the director's best films of the 1950s and 1960s had been. The film world was surprised, however, when Japan passed over the selection of Ran in favor of another film as its official entry to compete for an Oscar nomination in the Best Foreign Film category, which was ultimately rejected for competition at the 58th Academy Awards. Both the producer and Kurosawa himself attributed the failure to even submit Ran for competition to a misunderstanding: because of the Academy's arcane rules, no one was sure whether Ran qualified as a Japanese film, a French film (due to its financing), or both, so it was not submitted at all. In response to what at least appeared to be a blatant snub by his own countrymen, the director Sidney Lumet led a successful campaign to have Kurosawa receive an Oscar nomination for Best Director that year (Sydney Pollack ultimately won the award for directing Out of Africa). Ran's costume designer, Emi Wada, won the movie's only Oscar. Kagemusha and Ran, particularly the latter, are often considered to be among Kurosawa's finest works. After Ran's release, Kurosawa would point to it as his best film, a major change of attitude for the director who, when asked which of his works was his best, had always previously answered "my next one".
Final works and last years (1987–98)

For his next movie, Kurosawa chose a subject very different from any that he had ever filmed before. While some of his previous pictures (for example, Drunken Angel and Kagemusha) had included brief dream sequences, Dreams was to be entirely based upon the director's own dreams. Significantly, for the first time in over forty years, Kurosawa, for this deeply personal project, wrote the screenplay alone. Although its estimated budget was lower than the films immediately preceding it, Japanese studios were still unwilling to back one of his productions, so Kurosawa turned to another famous American fan, Steven Spielberg, who convinced Warner Bros. to buy the international rights to the completed film. This made it easier for Kurosawa's son, Hisao, as co-producer and soon-to-be head of Kurosawa Production, to negotiate a loan in Japan that would cover the film's production costs. Shooting took more than eight months to complete, and Dreams premiered at Cannes in May 1990 to a polite but muted reception, similar to the reaction the picture would generate elsewhere in the world. In 1990, he accepted the Academy Award for Lifetime Achievement. In his acceptance speech, he famously said "I'm a little worried because I don't feel that I understand cinema yet." Kurosawa now turned to a more conventional story with Rhapsody in August—the director's first film fully produced in Japan since Dodesukaden over twenty years before—which explored the scars of the nuclear bombing which destroyed Nagasaki at the very end of World War II. It was adapted from a Kiyoko Murata novel, but the film's references to the Nagasaki bombing came from the director rather than from the book. This was his only movie to include a role for an American movie star: Richard Gere, who plays a small role as the nephew of the elderly heroine.
Shooting took place in early 1991, with the film opening on May 25 that year to a largely negative critical reaction, especially in the United States, where the director was accused of promulgating naïvely anti-American sentiments, though Kurosawa rejected these accusations. Kurosawa wasted no time moving on to his next project: Madadayo, or Not Yet. Based on autobiographical essays by Hyakken Uchida, the film follows the life of a Japanese professor of German through the Second World War and beyond. The narrative centers on yearly birthday celebrations with his former students, during which the protagonist declares his unwillingness to die just yet—a theme that was becoming increasingly relevant for the film's 81-year-old creator. Filming began in February 1992 and wrapped by the end of September. Its release on April 17, 1993, was greeted with an even more disappointed reaction than had been the case with his two preceding works. Kurosawa nevertheless continued to work. He wrote the original screenplays The Sea is Watching in 1993 and After the Rain in 1995. While putting finishing touches on the latter work in 1995, Kurosawa slipped and broke the base of his spine. Following the accident, he would use a wheelchair for the rest of his life, putting an end to any hopes of him directing another film. His longtime wish—to die on the set while shooting a movie—was never to be fulfilled. After his accident, Kurosawa's health began to deteriorate. While his mind remained sharp and lively, his body was giving up, and for the last half-year of his life, the director was largely confined to bed, listening to music and watching television at home. On September 6, 1998, Kurosawa died of a stroke in Setagaya, Tokyo, at the age of 88. At the time of his death, Kurosawa had two children, his son Hisao Kurosawa who married Hiroko Hayashi and his daughter Kazuko Kurosawa who married Harayuki Kato, along with several grandchildren. One of his grandchildren, the
other cereal grains, all of which were used to make the two main food staples of bread and beer. Flax plants, uprooted before they started flowering, were grown for the fibers of their stems. These fibers were split along their length and spun into thread, which was used to weave sheets of linen and to make clothing. Papyrus growing on the banks of the Nile River was used to make paper. Vegetables and fruits were grown in garden plots, close to habitations and on higher ground, and had to be watered by hand. Vegetables included leeks, garlic, melons, squashes, pulses, lettuce, and other crops, in addition to grapes that were made into wine.

Animals

The Egyptians believed that a balanced relationship between people and animals was an essential element of the cosmic order; thus humans, animals and plants were believed to be members of a single whole. Animals, both domesticated and wild, were therefore a critical source of spirituality, companionship, and sustenance to the ancient Egyptians. Cattle were the most important livestock; the administration collected taxes on livestock in regular censuses, and the size of a herd reflected the prestige and importance of the estate or temple that owned them. In addition to cattle, the ancient Egyptians kept sheep, goats, and pigs. Poultry, such as ducks, geese, and pigeons, were captured in nets and bred on farms, where they were force-fed with dough to fatten them. The Nile provided a plentiful source of fish. Bees were also domesticated from at least the Old Kingdom, and provided both honey and wax. The ancient Egyptians used donkeys and oxen as beasts of burden, and they were responsible for plowing the fields and trampling seed into the soil. The slaughter of a fattened ox was also a central part of an offering ritual. Horses were introduced by the Hyksos in the Second Intermediate Period. Camels, although known from the New Kingdom, were not used as beasts of burden until the Late Period.
There is also evidence to suggest that elephants were briefly utilized in the Late Period but largely abandoned due to lack of grazing land. Cats, dogs, and monkeys were common family pets, while more exotic pets imported from the heart of Africa, such as Sub-Saharan African lions, were reserved for royalty. Herodotus observed that the Egyptians were the only people to keep their animals with them in their houses. During the Late Period, the worship of the gods in their animal form was extremely popular, such as the cat goddess Bastet and the ibis god Thoth, and these animals were kept in large numbers for the purpose of ritual sacrifice.

Natural resources

Egypt is rich in building and decorative stone, copper and lead ores, gold, and semiprecious stones. These natural resources allowed the ancient Egyptians to build monuments, sculpt statues, make tools, and fashion jewelry. Embalmers used salts from the Wadi Natrun for mummification, which also provided the gypsum needed to make plaster. Ore-bearing rock formations were found in distant, inhospitable wadis in the Eastern Desert and the Sinai, requiring large, state-controlled expeditions to obtain natural resources found there. There were extensive gold mines in Nubia, and one of the first maps known is of a gold mine in this region. The Wadi Hammamat was a notable source of granite, greywacke, and gold. Flint was the first mineral collected and used to make tools, and flint handaxes are the earliest pieces of evidence of habitation in the Nile valley. Nodules of the mineral were carefully flaked to make blades and arrowheads of moderate hardness and durability even after copper was adopted for this purpose. Ancient Egyptians were among the first to use minerals such as sulfur as cosmetic substances. The Egyptians worked deposits of the lead ore galena at Gebel Rosas to make net sinkers, plumb bobs, and small figurines.
Copper was the most important metal for toolmaking in ancient Egypt and was smelted in furnaces from malachite ore mined in the Sinai. Workers collected gold by washing the nuggets out of sediment in alluvial deposits, or by the more labor-intensive process of grinding and washing gold-bearing quartzite. Iron deposits found in upper Egypt were utilized in the Late Period. High-quality building stones were abundant in Egypt; the ancient Egyptians quarried limestone all along the Nile valley, granite from Aswan, and basalt and sandstone from the wadis of the Eastern Desert. Deposits of decorative stones such as porphyry, greywacke, alabaster, and carnelian dotted the Eastern Desert and were collected even before the First Dynasty. In the Ptolemaic and Roman Periods, miners worked deposits of emeralds in Wadi Sikait and amethyst in Wadi el-Hudi.

Trade

The ancient Egyptians engaged in trade with their foreign neighbors to obtain rare, exotic goods not found in Egypt. In the Predynastic Period, they established trade with Nubia to obtain gold and incense. They also established trade with Palestine, as evidenced by Palestinian-style oil jugs found in the burials of the First Dynasty pharaohs. An Egyptian colony stationed in southern Canaan dates to slightly before the First Dynasty. Narmer had Egyptian pottery produced in Canaan and exported back to Egypt. By the Second Dynasty at latest, ancient Egyptian trade with Byblos yielded a critical source of quality timber not found in Egypt. By the Fifth Dynasty, trade with Punt provided gold, aromatic resins, ebony, ivory, and wild animals such as monkeys and baboons. Egypt relied on trade with Anatolia for essential quantities of tin as well as supplementary supplies of copper, both metals being necessary for the manufacture of bronze. The ancient Egyptians prized the blue stone lapis lazuli, which had to be imported from far-away Afghanistan.
Egypt's Mediterranean trade partners also included Greece and Crete, which provided, among other goods, supplies of olive oil.

Language

Historical development

The Egyptian language is a northern Afro-Asiatic language closely related to the Berber and Semitic languages. It has the second longest known history of any language (after Sumerian), having been written from c. 3200BC to the Middle Ages and remaining as a spoken language for longer. The phases of ancient Egyptian are Old Egyptian, Middle Egyptian (Classical Egyptian), Late Egyptian, Demotic and Coptic. Egyptian writings do not show dialect differences before Coptic, but it was probably spoken in regional dialects around Memphis and later Thebes. Ancient Egyptian was a synthetic language, but it became more analytic later on. Late Egyptian developed prefixal definite and indefinite articles, which replaced the older inflectional suffixes. There was a change from the older verb–subject–object word order to subject–verb–object. The Egyptian hieroglyphic, hieratic, and demotic scripts were eventually replaced by the more phonetic Coptic alphabet. Coptic is still used in the liturgy of the Egyptian Orthodox Church, and traces of it are found in modern Egyptian Arabic.

Sounds and grammar

Ancient Egyptian has 25 consonants similar to those of other Afro-Asiatic languages. These include pharyngeal and emphatic consonants, voiced and voiceless stops, voiceless fricatives and voiced and voiceless affricates. It has three long and three short vowels, which expanded in Late Egyptian to about nine. The basic word in Egyptian, similar to Semitic and Berber, is a triliteral or biliteral root of consonants and semiconsonants. Suffixes are added to form words. The verb conjugation corresponds to the person. For example, the triconsonantal skeleton sḏm is the semantic core of the word 'hear'; its basic conjugation is sḏm, 'he hears'. If the subject is a noun, suffixes are not added to the verb: sḏm ḥmt, 'the woman hears'.
Adjectives are derived from nouns through a process that Egyptologists call nisbation because of its similarity with Arabic. The word order is predicate–subject in verbal and adjectival sentences, and subject–predicate in nominal and adverbial sentences. The subject can be moved to the beginning of sentences if it is long and is followed by a resumptive pronoun. Verbs and nouns are negated by the particle n, but nn is used for adverbial and adjectival sentences. Stress falls on the ultimate or penultimate syllable, which can be open (CV) or closed (CVC).

Writing

Hieroglyphic writing dates from c. 3000BC, and is composed of hundreds of symbols. A hieroglyph can represent a word, a sound, or a silent determinative; and the same symbol can serve different purposes in different contexts. Hieroglyphs were a formal script, used on stone monuments and in tombs, that could be as detailed as individual works of art. In day-to-day writing, scribes used a cursive form of writing, called hieratic, which was quicker and easier. While formal hieroglyphs may be read in rows or columns in either direction (though typically written from right to left), hieratic was always written from right to left, usually in horizontal rows. A new form of writing, Demotic, became the prevalent writing style, and it is this form of writing—along with formal hieroglyphs—that accompanies the Greek text on the Rosetta Stone. Around the first century AD, the Coptic alphabet started to be used alongside the Demotic script. Coptic is a modified Greek alphabet with the addition of some Demotic signs. Although formal hieroglyphs were used in a ceremonial role until the fourth century, towards the end only a small handful of priests could still read them. As the traditional religious establishments were disbanded, knowledge of hieroglyphic writing was mostly lost.
Attempts to decipher them date to the Byzantine and Islamic periods in Egypt, but only in the 1820s, after the discovery of the Rosetta Stone and years of research by Thomas Young and Jean-François Champollion, were hieroglyphs substantially deciphered.

Literature

Writing first appeared in association with kingship on labels and tags for items found in royal tombs. It was primarily an occupation of the scribes, who worked out of the Per Ankh institution or the House of Life. The latter comprised offices, libraries (called House of Books), laboratories and observatories. Some of the best-known pieces of ancient Egyptian literature, such as the Pyramid and Coffin Texts, were written in Classical Egyptian, which continued to be the language of writing until about 1300BC. Late Egyptian was spoken from the New Kingdom onward and is represented in Ramesside administrative documents, love poetry and tales, as well as in Demotic and Coptic texts. During this period, the tradition of writing had evolved into the tomb autobiography, such as those of Harkhuf and Weni. The genre known as Sebayt ("instructions") was developed to communicate teachings and guidance from famous nobles; the Ipuwer papyrus, a poem of lamentations describing natural disasters and social upheaval, is a famous example. The Story of Sinuhe, written in Middle Egyptian, might be the classic of Egyptian literature. Also written at this time was the Westcar Papyrus, a set of stories told to Khufu by his sons relating the marvels performed by priests. The Instruction of Amenemope is considered a masterpiece of Near Eastern literature. Towards the end of the New Kingdom, the vernacular language was more often employed to write popular pieces like the Story of Wenamun and the Instruction of Any. The former tells the story of a noble who is robbed on his way to buy cedar from Lebanon and of his struggle to return to Egypt.
From about 700BC, narrative stories and instructions, such as the popular Instructions of Onchsheshonqy, as well as personal and business documents were written in the demotic script and phase of Egyptian. Many stories written in demotic during the Greco-Roman period were set in previous historical eras, when Egypt was an independent nation ruled by great pharaohs such as Ramesses II.

Culture

Daily life

Most ancient Egyptians were farmers tied to the land. Their dwellings were restricted to immediate family members, and were constructed of mudbrick designed to remain cool in the heat of the day. Each home had a kitchen with an open roof, which contained a grindstone for milling grain and a small oven for baking the bread. Ceramics served as household wares for the storage, preparation, transport, and consumption of food, drink, and raw materials. Walls were painted white and could be covered with dyed linen wall hangings. Floors were covered with reed mats, while wooden stools, beds raised from the floor and individual tables comprised the furniture. The ancient Egyptians placed a great value on hygiene and appearance. Most bathed in the Nile and used a pasty soap made from animal fat and chalk. Men shaved their entire bodies for cleanliness; perfumes and aromatic ointments covered bad odors and soothed skin. Clothing was made from simple linen sheets that were bleached white, and both men and women of the upper classes wore wigs, jewelry, and cosmetics. Children went without clothing until maturity, at about age 12, and at this age males were circumcised and had their heads shaved. Mothers were responsible for taking care of the children, while the father provided the family's income. Music and dance were popular entertainments for those who could afford them. Early instruments included flutes and harps, while instruments similar to trumpets, oboes, and pipes developed later and became popular.
In the New Kingdom, the Egyptians played on bells, cymbals, tambourines, drums, and imported lutes and lyres from Asia. The sistrum was a rattle-like musical instrument that was especially important in religious ceremonies. The ancient Egyptians enjoyed a variety of leisure activities, including games and music. Senet, a board game where pieces moved according to random chance, was particularly popular from the earliest times; another similar game was mehen, which had a circular gaming board. "Hounds and Jackals", also known as 58 holes, is another example of board games played in ancient Egypt. The first complete set of this game was discovered in a Theban tomb of the Egyptian pharaoh Amenemhat IV that dates to the 13th Dynasty. Juggling and ball games were popular with children, and wrestling is also documented in a tomb at Beni Hasan. The wealthy members of ancient Egyptian society enjoyed hunting, fishing, and boating as well. The excavation of the workers' village of Deir el-Medina has resulted in one of the most thoroughly documented accounts of community life in the ancient world, which spans almost four hundred years. There is no comparable site in which the organization, social interactions, and working and living conditions of a community have been studied in such detail.

Cuisine

Egyptian cuisine remained remarkably stable over time; indeed, the cuisine of modern Egypt retains some striking similarities to the cuisine of the ancients. The staple diet consisted of bread and beer, supplemented with vegetables such as onions and garlic, and fruit such as dates and figs. Wine and meat were enjoyed by all on feast days while the upper classes indulged on a more regular basis. Fish, meat, and fowl could be salted or dried, and could be cooked in stews or roasted on a grill.

Architecture

The architecture of ancient Egypt includes some of the most famous structures in the world: the Great Pyramids of Giza and the temples at Thebes.
Building projects were organized and funded by the state for religious and commemorative purposes, but also to reinforce the wide-ranging power of the pharaoh. The ancient Egyptians were skilled builders; using only simple but effective tools and sighting instruments, architects could build large stone structures with great accuracy and precision that is still envied today. The domestic dwellings of elite and ordinary Egyptians alike were constructed from perishable materials such as mudbricks and wood, and have not survived. Peasants lived in simple homes, while the palaces of the elite and the pharaoh were more elaborate structures. A few surviving New Kingdom palaces, such as those in Malkata and Amarna, show richly decorated walls and floors with scenes of people, birds, water pools, deities and geometric designs. Important structures such as temples and tombs that were intended to last forever were constructed of stone instead of mudbricks. The architectural elements used in the world's first large-scale stone building, Djoser's mortuary complex, include post and lintel supports in the papyrus and lotus motif. The earliest preserved ancient Egyptian temples, such as those at Giza, consist of single, enclosed halls with roof slabs supported by columns. In the New Kingdom, architects added the pylon, the open courtyard, and the enclosed hypostyle hall to the front of the temple's sanctuary, a style that was standard until the Greco-Roman period. The earliest and most popular tomb architecture in the Old Kingdom was the mastaba, a flat-roofed rectangular structure of mudbrick or stone built over an underground burial chamber. The step pyramid of Djoser is a series of stone mastabas stacked on top of each other. Pyramids were built during the Old and Middle Kingdoms, but most later rulers abandoned them in favor of less conspicuous rock-cut tombs. The use of the pyramid form continued in private tomb chapels of the New Kingdom and in the royal pyramids of Nubia. 
Art

The ancient Egyptians produced art to serve functional purposes. For over 3500 years, artists adhered to artistic forms and iconography that were developed during the Old Kingdom, following a strict set of principles that resisted foreign influence and internal change. These artistic standards—simple lines, shapes, and flat areas of color combined with the characteristic flat projection of figures with no indication of spatial depth—created a sense of order and balance within a composition. Images and text were intimately interwoven on tomb and temple walls, coffins, stelae, and even statues. The Narmer Palette, for example, displays figures that can also be read as hieroglyphs. Because of the rigid rules that governed its highly stylized and symbolic appearance, ancient Egyptian art served its political and religious purposes with precision and clarity. Ancient Egyptian artisans used stone as a medium for carving statues and fine reliefs, but used wood as a cheap and easily carved substitute. Paints were obtained from minerals such as iron ores (red and yellow ochres), copper ores (blue and green), soot or charcoal (black), and limestone (white). Paints could be mixed with gum arabic as a binder and pressed into cakes, which could be moistened with water when needed. Pharaohs used reliefs to record victories in battle, royal decrees, and religious scenes. Common citizens had access to pieces of funerary art, such as shabti statues and books of the dead, which they believed would protect them in the afterlife. During the Middle Kingdom, wooden or clay models depicting scenes from everyday life became popular additions to the tomb. In an attempt to duplicate the activities of the living in the afterlife, these models show laborers, houses, boats, and even military formations that are scale representations of the ideal ancient Egyptian afterlife.
Despite the homogeneity of ancient Egyptian art, the styles of particular times and places sometimes reflected changing cultural or political attitudes. After the invasion of the Hyksos in the Second Intermediate Period, Minoan-style frescoes were found in Avaris. The most striking example of a politically driven change in artistic forms comes from the Amarna Period, where figures were radically altered to conform to Akhenaten's revolutionary religious ideas. This style, known as Amarna art, was quickly abandoned after Akhenaten's death and replaced by the traditional forms.

Religious beliefs

Beliefs in the divine and in the afterlife were ingrained in ancient Egyptian civilization from its inception; pharaonic rule was based on the divine right of kings. The Egyptian pantheon was populated by gods who had supernatural powers and were called on for help or protection. However, the gods were not always viewed as benevolent, and Egyptians believed they had to be appeased with offerings and prayers. The structure of this pantheon changed
robbed on his way to buy cedar from Lebanon and of his struggle to return to Egypt. From about 700BC, narrative stories and instructions, such as the popular Instructions of Onchsheshonqy, as well as personal and business documents were written in the demotic script and phase of Egyptian. Many stories written in demotic during the Greco-Roman period were set in previous historical eras, when Egypt was an independent nation ruled by great pharaohs such as Ramesses II. Culture Daily life Most ancient Egyptians were farmers tied to the land. Their dwellings were restricted to immediate family members, and were constructed of mudbrick designed to remain cool in the heat of the day. Each home had a kitchen with an open roof, which contained a grindstone for milling grain and a small oven for baking the bread. Ceramics served as household wares for the storage, preparation, transport, and consumption of food, drink, and raw materials. Walls were painted white and could be covered with dyed linen wall hangings. Floors were covered with reed mats, while wooden stools, beds raised from the floor and individual tables comprised the furniture. The ancient Egyptians placed a great value on hygiene and appearance. Most bathed in the Nile and used a pasty soap made from animal fat and chalk. Men shaved their entire bodies for cleanliness; perfumes and aromatic ointments covered bad odors and soothed skin. Clothing was made from simple linen sheets that were bleached white, and both men and women of the upper classes wore wigs, jewelry, and cosmetics. Children went without clothing until maturity, at about age 12, and at this age males were circumcised and had their heads shaved. Mothers were responsible for taking care of the children, while the father provided the family's income. Music and dance were popular entertainments for those who could afford them. 
Early instruments included flutes and harps, while instruments similar to trumpets, oboes, and pipes developed later and became popular. In the New Kingdom, the Egyptians played on bells, cymbals, tambourines, drums, and imported lutes and lyres from Asia. The sistrum was a rattle-like musical instrument that was especially important in religious ceremonies. The ancient Egyptians enjoyed a variety of leisure activities, including games and music. Senet, a board game where pieces moved according to random chance, was particularly popular from the earliest times; another similar game was mehen, which had a circular gaming board. "Hounds and Jackals", also known as 58 holes, is another example of a board game played in ancient Egypt. The first complete set of this game was discovered in a Theban tomb of the Egyptian pharaoh Amenemhat IV that dates to the 13th Dynasty. Juggling and ball games were popular with children, and wrestling is also documented in a tomb at Beni Hasan. The wealthy members of ancient Egyptian society enjoyed hunting, fishing, and boating as well. The excavation of the workers' village of Deir el-Medina has resulted in one of the most thoroughly documented accounts of community life in the ancient world, which spans almost four hundred years. There is no comparable site in which the organization, social interactions, and working and living conditions of a community have been studied in such detail. Cuisine Egyptian cuisine remained remarkably stable over time; indeed, the cuisine of modern Egypt retains some striking similarities to the cuisine of the ancients. The staple diet consisted of bread and beer, supplemented with vegetables such as onions and garlic, and fruit such as dates and figs. Wine and meat were enjoyed by all on feast days while the upper classes indulged on a more regular basis. Fish, meat, and fowl could be salted or dried, and could be cooked in stews or roasted on a grill.
Architecture The architecture of ancient Egypt includes some of the most famous structures in the world: the Great Pyramids of Giza and the temples at Thebes. Building projects were organized and funded by the state for religious and commemorative purposes, but also to reinforce the wide-ranging power of the pharaoh. The ancient Egyptians were skilled builders; using only simple but effective tools and sighting instruments, architects could build large stone structures with great accuracy and precision that is still envied today. The domestic dwellings of elite and ordinary Egyptians alike were constructed from perishable materials such as mudbricks and wood, and have not survived. Peasants lived in simple homes, while the palaces of the elite and the pharaoh were more elaborate structures. A few surviving New Kingdom palaces, such as those in Malkata and Amarna, show richly decorated walls and floors with scenes of people, birds, water pools, deities and geometric designs. Important structures such as temples and tombs that were intended to last forever were constructed of stone instead of mudbricks. The architectural elements used in the world's first large-scale stone building, Djoser's mortuary complex, include post and lintel supports in the papyrus and lotus motif. The earliest preserved ancient Egyptian temples, such as those at Giza, consist of single, enclosed halls with roof slabs supported by columns. In the New Kingdom, architects added the pylon, the open courtyard, and the enclosed hypostyle hall to the front of the temple's sanctuary, a style that was standard until the Greco-Roman period. The earliest and most popular tomb architecture in the Old Kingdom was the mastaba, a flat-roofed rectangular structure of mudbrick or stone built over an underground burial chamber. The step pyramid of Djoser is a series of stone mastabas stacked on top of each other. 
Pyramids were built during the Old and Middle Kingdoms, but most later rulers abandoned them in favor of less conspicuous rock-cut tombs. The use of the pyramid form continued in private tomb chapels of the New Kingdom and in the royal pyramids of Nubia. Art The ancient Egyptians produced art to serve functional purposes. For over 3500 years, artists adhered to artistic forms and iconography that were developed during the Old Kingdom, following a strict set of principles that resisted foreign influence and internal change. These artistic standards—simple lines, shapes, and flat areas of color combined with the characteristic flat projection of figures with no indication of spatial depth—created a sense of order and balance within a composition. Images and text were intimately interwoven on tomb and temple walls, coffins, stelae, and even statues. The Narmer Palette, for example, displays figures that can also be read as hieroglyphs. Because of the rigid rules that governed its highly stylized and symbolic appearance, ancient Egyptian art served its political and religious purposes with precision and clarity. Ancient Egyptian artisans used stone as a medium for carving statues and fine reliefs, but used wood as a cheap and easily carved substitute. Paints were obtained from minerals such as iron ores (red and yellow ochres), copper ores (blue and green), soot or charcoal (black), and limestone (white). Paints could be mixed with gum arabic as a binder and pressed into cakes, which could be moistened with water when needed. Pharaohs used reliefs to record victories in battle, royal decrees, and religious scenes. Common citizens had access to pieces of funerary art, such as shabti statues and books of the dead, which they believed would protect them in the afterlife. During the Middle Kingdom, wooden or clay models depicting scenes from everyday life became popular additions to the tomb. 
In an attempt to duplicate the activities of the living in the afterlife, these models show laborers, houses, boats, and even military formations that are scale representations of the ideal ancient Egyptian afterlife. Despite the homogeneity of ancient Egyptian art, the styles of particular times and places sometimes reflected changing cultural or political attitudes. After the invasion of the Hyksos in the Second Intermediate Period, Minoan-style frescoes were found in Avaris. The most striking example of a politically driven change in artistic forms comes from the Amarna Period, where figures were radically altered to conform to Akhenaten's revolutionary religious ideas. This style, known as Amarna art, was quickly abandoned after Akhenaten's death and replaced by the traditional forms. Religious beliefs Beliefs in the divine and in the afterlife were ingrained in ancient Egyptian civilization from its inception; pharaonic rule was based on the divine right of kings. The Egyptian pantheon was populated by gods who had supernatural powers and were called on for help or protection. However, the gods were not always viewed as benevolent, and Egyptians believed they had to be appeased with offerings and prayers. The structure of this pantheon changed continually as new deities were promoted in the hierarchy, but priests made no effort to organize the diverse and sometimes conflicting myths and stories into a coherent system. These various conceptions of divinity were not considered contradictory but rather layers in the multiple facets of reality. Gods were worshiped in cult temples administered by priests acting on the king's behalf. At the center of the temple was the cult statue in a shrine. Temples were not places of public worship or congregation, and only on select feast days and celebrations was a shrine carrying the statue of the god brought out for public worship. 
Normally, the god's domain was sealed off from the outside world and was only accessible to temple officials. Common citizens could worship private statues in their homes, and amulets offered protection against the forces of chaos. After the New Kingdom, the pharaoh's role as a spiritual intermediary was de-emphasized as religious customs shifted to direct worship of the gods. As a result, priests developed a system of oracles to communicate the will of the gods directly to the people. The Egyptians believed that every human being was composed of physical and spiritual parts or aspects. In addition to the body, each person had a šwt (shadow), a ba (personality or soul), a ka (life-force), and a name. The heart, rather than the brain, was considered the seat of thoughts and emotions. After death, the spiritual aspects were released from the body and could move at will, but they required the physical remains (or a substitute, such as a statue) as a permanent home. The ultimate goal of the deceased was to rejoin his ka and ba and become one of the "blessed dead", living on as an akh, or "effective one". For this to happen, the deceased had to be judged worthy in a trial, in which the heart was weighed against a "feather of truth." If deemed worthy, the deceased could continue their existence on earth in spiritual form. If they were not deemed worthy, their heart was eaten by Ammit the Devourer and they were erased from the Universe. Burial customs The ancient Egyptians maintained an elaborate set of burial customs that they believed were necessary to ensure immortality after death. These customs involved preserving the body by mummification, performing burial ceremonies, and interring with the body goods the deceased would use in the afterlife. Before the Old Kingdom, bodies buried in desert pits were naturally preserved by desiccation. 
The arid, desert conditions were a boon throughout the history of ancient Egypt for burials of the poor, who could not afford the elaborate burial preparations available to the elite. Wealthier Egyptians began to bury their dead in stone tombs and use artificial mummification, which involved removing the internal organs, wrapping the body in linen, and burying it in a rectangular stone sarcophagus or wooden coffin. Beginning in the Fourth Dynasty, some parts were preserved separately in canopic jars. By the New Kingdom, the ancient Egyptians had perfected the art of mummification; the best technique took 70 days and involved removing the internal organs, removing the brain through the nose, and desiccating the body in a mixture of salts called natron. The body was then wrapped in linen with protective amulets inserted between layers and placed in a decorated anthropoid coffin. Mummies of the Late Period were also placed in painted cartonnage mummy cases. Actual preservation practices declined during the Ptolemaic and Roman eras, while greater emphasis was placed on the outer appearance of the mummy, which was decorated. Wealthy Egyptians were buried with larger quantities of luxury items, but all burials, regardless of social status, included goods for the deceased. Funerary texts were often included in the grave, and, beginning in the New Kingdom, so were shabti statues that were believed to perform manual labor for them in the afterlife. Rituals in which the deceased was magically re-animated accompanied burials. After burial, living relatives were expected to occasionally bring food to the tomb and recite prayers on behalf of the deceased. Military The ancient Egyptian military was responsible for defending Egypt against foreign invasion, and for maintaining Egypt's domination in the ancient Near East. The military protected mining expeditions to the Sinai during the Old Kingdom and fought civil wars during the First and Second Intermediate Periods. 
The military was responsible for maintaining fortifications along important trade routes, such as those found at the city of Buhen on the way to Nubia. Forts also were constructed to serve as military bases, such as the fortress at Sile, which was a base of operations for expeditions to the Levant. In the New Kingdom, a series of pharaohs used the standing Egyptian army to attack and conquer Kush and parts of the Levant. Typical military equipment included bows and arrows, spears, and round-topped shields made by stretching animal skin over a wooden frame. In the New Kingdom, the military began using chariots that had earlier been introduced by the Hyksos invaders. Weapons and armor continued to improve after the adoption of bronze: shields were now made from solid wood with a bronze buckle, spears were tipped with a bronze point, and the khopesh was adopted from Asiatic soldiers. The pharaoh was usually depicted in art and literature riding at the head of the army; it has been suggested that at least a few pharaohs, such as Seqenenre Tao II and his sons, did do so. However, it has also been argued that "kings of this period did not personally act as frontline war leaders, fighting alongside their troops." Soldiers were recruited from the general population, but during, and especially after, the New Kingdom, mercenaries from Nubia, Kush, and Libya were hired to fight for Egypt. Technology, medicine, and mathematics Technology In technology, medicine, and mathematics, ancient Egypt achieved a relatively high standard of productivity and sophistication. Traditional empiricism, as evidenced by the Edwin Smith and Ebers papyri (c. 1600BC), is first credited to Egypt. The Egyptians created their own alphabet and decimal system. Faience and glass Even before the Old Kingdom, the ancient Egyptians had developed a glassy material known as faience, which they treated as a type of artificial semi-precious stone. 
Faience is a non-clay ceramic made of silica, small amounts of lime and soda, and a colorant, typically copper. The material was used to make beads, tiles, figurines, and small wares. Several methods can be used to create faience, but typically production involved application of the powdered materials in the form of a paste over a clay core, which was then fired. By a related technique, the ancient Egyptians produced a pigment known as Egyptian blue, also called blue frit, which is produced by fusing (or sintering) silica, copper, lime, and an alkali such as natron. The product can be ground up and used as a pigment. The ancient Egyptians could fabricate a wide variety of objects from glass with great skill, but it is not clear whether they developed the process independently. It is also unclear whether they made their own raw glass or merely imported pre-made ingots, which they melted and finished. However, they did have technical expertise in making objects, as well as adding trace elements to control the color of the finished glass. A range of colors could be produced, including yellow, red, green, blue, purple, and white, and the glass could be made either transparent or opaque. Medicine The medical problems of the ancient Egyptians stemmed directly from their environment. Living and working close to the Nile brought hazards from malaria and debilitating schistosomiasis parasites, which caused liver and intestinal damage. Dangerous wildlife such as crocodiles and hippos were also a common threat. The lifelong labors of farming and building put stress on the spine and joints, and traumatic injuries from construction and warfare all took a significant toll on the body. The grit and sand from stone-ground flour abraded teeth, leaving them susceptible to abscesses (though caries were rare). The diets of the wealthy were rich in sugars, which promoted periodontal disease. 
Despite the flattering physiques portrayed on tomb walls, the overweight mummies of many of the upper class show the effects of a life of overindulgence. Adult life expectancy was about 35 for men and 30 for women, but reaching adulthood was difficult as about one-third of the population died in infancy. Ancient Egyptian physicians were renowned in the ancient Near East for their healing skills, and some, such as Imhotep, remained famous long after their deaths. Herodotus remarked that there was a high degree of specialization among Egyptian physicians, with some treating only the head or the stomach, while others were eye-doctors and dentists. Training of physicians took place at the Per Ankh or "House of Life" institution, most notably those headquartered in Per-Bastet during the New Kingdom and at Abydos and Saïs in the Late Period. Medical papyri show empirical knowledge of anatomy, injuries, and practical treatments. Wounds were treated by bandaging with raw meat, white linen, sutures, nets, pads, and swabs soaked with honey to prevent infection, while opium, thyme, and belladonna were used to relieve pain. The earliest records of burn treatment describe burn dressings that use the milk from mothers of male babies. Prayers were made to the goddess Isis. Moldy bread, honey, and copper salts were also used to prevent infection from dirt in burns. Garlic and onions were used regularly to promote good health and were thought to relieve asthma symptoms. Ancient Egyptian surgeons stitched wounds, set broken bones, and amputated diseased limbs, but they recognized that some injuries were so serious that they could only make the patient comfortable until death occurred. Maritime technology Early Egyptians knew how to assemble planks of wood into a ship hull and had mastered advanced forms of shipbuilding as early as 3000BC. The Archaeological Institute of America reports that the oldest planked ships known are the Abydos boats.
A group of 14 ships discovered in Abydos were constructed of wooden planks "sewn" together. The ships, discovered by Egyptologist David O'Connor of New York University, had their planks lashed together with woven straps, and reeds or grass stuffed between the planks helped to seal the seams. Because the ships are all buried together and near a mortuary belonging to Pharaoh Khasekhemwy, they were originally all thought to have belonged to him, but one of the 14 ships dates to 3000BC, and the associated pottery jars buried with the vessels also suggest earlier dating. The ship dating to 3000BC is now thought to have belonged to an earlier pharaoh, perhaps one as early as Hor-Aha. Early Egyptians also knew how to assemble planks of wood with treenails to fasten them together, using pitch for caulking the seams. The "Khufu ship", a vessel sealed into a pit in the Giza pyramid complex at the foot of the Great Pyramid of Giza in the Fourth Dynasty around 2500BC, is a full-size surviving example that may have filled the symbolic function of a solar barque. Early Egyptians also knew how to fasten the planks of this ship together with mortise and tenon joints. Large seagoing ships are known to have been heavily used by the Egyptians in their trade with the city states of the eastern Mediterranean, especially Byblos (on the coast of modern-day Lebanon), and in several expeditions down the Red Sea to the Land of Punt. In fact, one of the earliest Egyptian words for a seagoing ship is a "Byblos Ship", which originally defined a class of Egyptian seagoing ships used on the Byblos run; however, by the end of the Old Kingdom, the term had come to include large seagoing ships, whatever their destination. In 1977, an ancient north–south canal was discovered extending from Lake Timsah to the Ballah Lakes. It was dated to the Middle Kingdom of Egypt by extrapolating dates of ancient sites constructed along its course.
In 2011, archaeologists from Italy, the United States, and Egypt excavating a dried-up lagoon known as Mersa Gawasis unearthed traces of an ancient harbor that once launched early voyages like Hatshepsut's Punt expedition onto the open ocean. Some of the site's most evocative evidence for the ancient Egyptians' seafaring prowess includes large ship timbers and hundreds of feet of ropes, made from papyrus, coiled in huge bundles. In 2013, a team of Franco-Egyptian archaeologists discovered
on drums, violins and vocals, Christopher "Silver Synth" Rodgers (Black Silver) on synthesizer, lazar bell and vocals, and Rex Colonel "Rex Roland JX3P" Doby Jr. (Pimpin' Rex) on keyboards, vocals and production. Its album Pimp to Eat featured guest appearances by various members of Rhyme Syndicate, Odd Oberheim, Jacky Jasper (who appears as Jacky Jasper on the song "We Sleep Days" and H-Bomb on "War"), D.J. Cisco from S.M., Synth-A-Size Sisters and Teflon. While the group only recorded one album together as the Analog Brothers, a few bootlegs of its live concert performances, including freestyles with original lyrics, have occasionally surfaced online. After Pimp to Eat, the Analog Brothers continued performing together in various line-ups. Kool Keith and Marc Live joined with Jacky Jasper to release two albums as KHM. Marc Live rapped with Ice T's group SMG. Marc also formed a group with Black Silver called Live Black, but while five of their tracks were released on a demo CD sold at concerts, Live Black's first album has yet to be released. In 2008, Ice-T and Black Silver toured together as Black Ice, and released an album together called Urban Legends. In 2013, Black Silver and newest member to Analog Brothers, Kiew Kurzweil (Kiew Nikon of Kinetic)
They come on slowly, and worsen over the course of more than three months. Various patterns of muscle weakness are seen, and muscle cramps and spasms may occur. One can have difficulty breathing on exertion, such as when climbing stairs, difficulty breathing when lying down (orthopnea), or even respiratory failure if breathing muscles become involved. Bulbar symptoms, including difficulty speaking (dysarthria), difficulty swallowing (dysphagia), and excessive saliva production (sialorrhea), can also occur. Sensation, or the ability to feel, is typically not affected. Emotional disturbance (e.g. pseudobulbar affect) and cognitive and behavioural changes (e.g. problems in word fluency, decision-making, and memory) are also seen. There can be lower motor neuron findings (e.g. muscle wasting, muscle twitching), upper motor neuron findings (e.g. brisk reflexes, Babinski reflex, Hoffman's reflex, increased muscle tone), or both. Motor neuron diseases are seen both in children and in adults. Those that affect children tend to be inherited or familial, and their symptoms are either present at birth or appear before learning to walk. Those that affect adults tend to appear after age 40. The clinical course depends on the specific disease, but most progress or worsen over the course of months. Some are fatal (e.g. ALS), while others are not (e.g. PLS). Patterns of weakness Various patterns of muscle weakness occur in different motor neuron diseases. Weakness can be symmetric or asymmetric, and it can occur in body parts that are distal, proximal, or both. According to Statland et al., there are three main weakness patterns that are seen in motor neuron diseases, which are: Asymmetric distal weakness without sensory loss (e.g. ALS, PLS, PMA, MMA) Symmetric weakness without sensory loss (e.g. PMA, PLS) Symmetric focal midline proximal weakness (neck, trunk, bulbar involvement; e.g.
ALS, PBP, PLS) Lower and upper motor neuron findings Motor neuron diseases are on a spectrum in terms of upper and lower motor neuron involvement. Some have just lower or upper motor neuron findings, while others have a mix of both. Lower motor neuron (LMN) findings include muscle atrophy and fasciculations, and upper motor neuron (UMN) findings include hyperreflexia, spasticity, muscle spasm, and abnormal reflexes. Pure upper motor neuron diseases, or those with just UMN findings, include PLS. Pure lower motor neuron diseases, or those with just LMN findings, include PMA. Motor neuron diseases with both UMN and LMN findings include both familial and sporadic ALS. Causes Most cases are sporadic and their causes are usually not known. It is thought that environmental, toxic, viral, or genetic factors may be involved. DNA damage TARDBP (TAR DNA-binding protein 43), also referred to as TDP-43, is a critical component of the non-homologous end joining (NHEJ) enzymatic pathway that repairs DNA double-strand breaks in pluripotent stem cell-derived motor neurons. TDP-43 is rapidly recruited to double-strand breaks where it acts as a scaffold for the recruitment of the XRCC4-DNA ligase
protein complex that then acts to repair double-strand breaks. About 95% of ALS patients have abnormalities in the nucleus-cytoplasmic localization in spinal motor neurons of TDP43. In TDP-43 depleted human neural stem cell-derived motor neurons, as well as in sporadic ALS patients' spinal cord specimens, there is significant double-strand break accumulation and reduced levels of NHEJ. Associated risk factors In adults, men are more commonly affected than women.
Diagnosis Differential diagnosis can be challenging due to the number of overlapping symptoms, shared between several motor neuron diseases. Frequently, the diagnosis is based on clinical findings (i.e. LMN vs. UMN signs and symptoms, patterns of weakness), family history of MND, and a variety of tests, many of which are used to rule out disease mimics, which can manifest with identical symptoms. Classification Motor neuron disease describes a collection of clinical disorders, characterized by progressive muscle weakness and the degeneration of the motor neuron on electrophysiological testing. As discussed above, the term "motor neuron disease" has varying meanings in different countries. Similarly, the literature inconsistently classifies which degenerative motor neuron disorders can be included under the umbrella term "motor neuron disease". The four main types of MND are marked (*) in the table below. All types of MND can be differentiated by two defining characteristics: Is the disease sporadic or inherited? Is there involvement of the upper motor neurons (UMN), the lower motor neurons (LMN), or both? Sporadic or acquired MNDs occur in patients with no family history of degenerative motor neuron disease. Inherited or genetic MNDs adhere to one of the following inheritance patterns: autosomal dominant, autosomal recessive, or X-linked. Some disorders, like ALS, can occur sporadically (85%) or can have a genetic cause (15%) with the same clinical symptoms and progression of disease. UMNs are motor neurons that project from the cortex down to the brainstem or spinal cord. LMNs originate in the anterior horns of the spinal cord and synapse on peripheral muscles. Both motor neurons are necessary for the strong contraction of a muscle, but damage to a UMN can be distinguished from damage to an LMN by physical exam. Tests Cerebrospinal fluid (CSF) tests: Analysis of the fluid from around the brain and spinal cord could reveal signs
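The motor-neuron axis of the classification above lends itself to a small lookup table. The assignments below (PLS pure UMN, PMA pure LMN, ALS involving both, whether familial or sporadic) come straight from the text; the table and helper function are only an illustrative sketch, not a clinical tool.

```python
# Toy lookup table for the UMN/LMN axis described in the text.
# PLS involves only upper motor neurons, PMA only lower motor
# neurons, and ALS (sporadic or familial) involves both.
MND_NEURONS = {
    "ALS": {"UMN", "LMN"},
    "PLS": {"UMN"},
    "PMA": {"LMN"},
}

def diseases_with(neuron_type):
    """Return, sorted, the diseases whose findings include neuron_type."""
    return sorted(d for d, neurons in MND_NEURONS.items()
                  if neuron_type in neurons)

print(diseases_with("UMN"))  # ['ALS', 'PLS']
print(diseases_with("LMN"))  # ['ALS', 'PMA']
```

A full version of such a table would add the second axis (sporadic vs. inherited) per disorder, which is how the text says the four main MND types are distinguished.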
write phonetically, much as man'yōgana (kanji used solely for phonetic use) was used to represent Japanese phonetically before the invention of kana. Phoenician gave rise to a number of new writing systems, including the widely used Aramaic abjad and the Greek alphabet. The Greek alphabet evolved into the modern western alphabets, such as Latin and Cyrillic, while Aramaic became the ancestor of many modern abjads and abugidas of Asia. Impure abjads Impure abjads have characters for some vowels, optional vowel diacritics, or both. The term pure abjad refers to scripts entirely lacking in vowel indicators. However, most modern abjads, such as Arabic, Hebrew, Aramaic, and Pahlavi, are "impure" abjads; that is, they also contain symbols for some of the vowel phonemes, although the said non-diacritic vowel letters are also used to write certain consonants, particularly approximants that sound similar to long vowels. A "pure" abjad is exemplified (perhaps) by very early forms of ancient Phoenician, though at some point (at least by the 9th century BC) it and most of the contemporary Semitic abjads had begun to overload a few of the consonant symbols with a secondary function as vowel markers, called matres lectionis. This practice was at first rare and limited in scope but became increasingly common and more developed in later times. Addition of vowels In the 9th century BC the Greeks adapted the Phoenician script for use in their own language. The phonetic structure of the Greek language created too many ambiguities when vowels went unrepresented, so the script was modified. They did not need letters for the guttural sounds represented by aleph, he, heth or ayin, so these symbols were assigned vocalic values. The letters waw and yod were also adapted into vowel signs; along with he, these were already used as matres lectionis in Phoenician.
The major innovation of Greek was to dedicate these symbols exclusively and unambiguously to vowel sounds that could be combined arbitrarily with consonants (as opposed to syllabaries such as Linear B which usually have vowel symbols but cannot combine them with consonants to form arbitrary syllables). Abugidas developed along a slightly different route. The basic consonantal symbol was considered to have an inherent "a" vowel sound. Hooks or short lines attached to various parts of the basic letter modify the vowel. In this way, the South Arabian abjad evolved into the Ge'ez abugida of Ethiopia between the 5th century BC and the 5th century AD. Similarly, the Brāhmī abugida of the Indian subcontinent developed around the 3rd century BC (from the Aramaic abjad, it has been hypothesized). The other major family of abugidas, Canadian Aboriginal syllabics, was initially developed in the 1840s by missionary and linguist James Evans for the Cree and Ojibwe languages. Evans used features of Devanagari script and Pitman shorthand to create his initial abugida. Later in the 19th century, other missionaries adapted Evans's system to other Canadian aboriginal languages. Canadian syllabics differ from other abugidas in that the vowel is indicated by rotation of the consonantal symbol, with each vowel having a consistent orientation. Abjads and the structure
South Asian linguistic usage, to convey the idea that "they share features of both alphabet and syllabary."

General description

The formal definitions given by Daniels and Bright for abugida and alphasyllabary differ; some writing systems are abugidas but not alphasyllabaries, and some are alphasyllabaries but not abugidas. An abugida is defined as "a type of writing system whose basic characters denote consonants followed by a particular vowel, and in which diacritics denote other vowels". (This 'particular vowel' is referred to as the inherent or implicit vowel, as opposed to the explicit vowels marked by the 'diacritics'.) An alphasyllabary is defined as "a type of writing system in which the vowels are denoted by subsidiary symbols not all of which occur in a linear order (with relation to the consonant symbols) that is congruent with their temporal order in speech". Bright did not require that an alphabet explicitly represent all vowels. ʼPhags-pa is an example of an abugida that is not an alphasyllabary, and modern Lao is an example of an alphasyllabary that is not an abugida, for its vowels are always explicit.

This description is expressed in terms of an abugida. Formally, an alphasyllabary that is not an abugida can be converted to an abugida by adding a purely formal vowel sound that is never used and declaring that to be the inherent vowel of the letters representing consonants. This may formally make the system ambiguous, but in practice this is not a problem, for the interpretation with the never-used inherent vowel sound will always be a wrong interpretation. Note that the actual pronunciation may be complicated by interactions between the sounds apparently written, just as the sounds of the letters in the English words wan, gem and war are affected by neighbouring letters.

The fundamental principles of an abugida apply to words made up of consonant-vowel (CV) syllables. The syllables are written as linear sequences of the units of the script. Each syllable is either a letter that represents the sound of a consonant and its inherent vowel, or a letter modified to indicate the vowel, either by means of diacritics or by changes in the form of the letter itself. If all modifications are by diacritics and all diacritics follow the direction of the writing of the letters, then the abugida is not an alphasyllabary.

However, most languages have words that are more complicated than a sequence of CV syllables, even ignoring tone. The first complication is syllables that consist of just a vowel (V). This issue does not arise in languages in which every syllable starts with a consonant, as is common in Semitic languages and in the languages of mainland Southeast Asia. For some languages, a zero consonant letter is used as though every syllable began with a consonant. For other languages, each vowel has a separate letter that is used for each syllable consisting of just that vowel. These letters are known as independent vowels, and are found in most Indic scripts. They may be quite different from the corresponding diacritics, which by contrast are known as dependent vowels. As a result of the spread of writing systems, independent vowels may be used to represent syllables beginning with a glottal stop, even for non-initial syllables.

The next two complications are sequences of consonants before a vowel (CCV) and syllables ending in a consonant (CVC). The simplest solution, which is not always available, is to break with the principle of writing words as a sequence of syllables and use a unit representing just a consonant (C). This unit may be represented with a modification that explicitly indicates the lack of a vowel (virama); a lack of vowel marking (often with ambiguity between no vowel and a default inherent vowel); vowel marking for a short or neutral vowel such as schwa (with ambiguity between no vowel and that short or neutral vowel); or a visually unrelated letter. In a true abugida, the lack of distinctive marking may result from the diachronic loss of the inherent vowel, e.g. by syncope and apocope in Hindi.

When not handled by decomposition into C + CV, CCV syllables are handled by combining the two consonants. In the Indic scripts, the earliest method was simply to arrange them vertically, but the two consonants may also merge into a conjunct consonant letter, in which two or more letters are graphically joined in a ligature or otherwise change their shapes. Rarely, one of the consonants may be replaced by a gemination mark, e.g. the Gurmukhi addak. When they are arranged vertically, as in Burmese or Khmer, they are said to be 'stacked'. Often there has been a change to writing the two consonants side by side; in that case, the fact of combination may be indicated by a diacritic on one of the consonants or a change in the form of one of the consonants, e.g. the half forms of Devanagari. Generally, the reading order is top to bottom, or the general reading order of the script, but sometimes the order is reversed.

The division of a word into syllables for the purposes of writing does not always accord with the natural phonetics of the language. For example, Brahmic scripts commonly handle a phonetic sequence CVC-CV as CV-CCV or CV-C-CV. However, sometimes phonetic CVC syllables are handled as single units, and the final consonant may be represented in much the same way as the second consonant in CCV, e.g. in the Tibetan, Khmer and Tai Tham scripts (the positioning of the components may be slightly different, as in Khmer and Tai Tham); by a special dependent consonant sign, which may be a smaller or differently placed version of the full consonant letter, or a distinct sign altogether; or not at all. For example, repeated consonants need not be represented, homorganic nasals may be ignored, and in Philippine scripts the syllable-final consonant was traditionally never represented. More complicated unit structures (e.g. CC or CCVC) are handled by combining the various techniques above.

Family-specific features

There are three principal families of abugidas, depending on whether vowels are indicated by modifying consonants by diacritics, distortion, or orientation. The oldest and largest is the Brahmic family of India and Southeast Asia, in which vowels are marked with diacritics and syllable-final consonants, when they occur, are indicated with ligatures, diacritics, or a special vowel-cancelling mark. In the Ethiopic family, vowels are marked by modifying the shapes of the consonants, and one of the vowel-forms serves additionally to indicate final consonants. In Canadian Aboriginal syllabics, vowels are marked by rotating or flipping the consonants, and final consonants are indicated with either special diacritics or superscript forms of the main initial consonants. Tāna of the Maldives has dependent vowels and a zero vowel sign, but no inherent vowel.

Indic (Brahmic)

Indic scripts originated in India and spread to Southeast Asia, Bangladesh, Sri Lanka, Nepal, Bhutan, Tibet, Mongolia, and Russia. All surviving Indic scripts are descendants of the Brahmi alphabet. Today they are used in most languages of South Asia (although replaced by Perso-Arabic in Urdu, Kashmiri and some other languages of Pakistan and India), mainland Southeast Asia (Myanmar, Thailand, Laos, Cambodia, and Vietnam), Tibet (Tibetan), the Indonesian archipelago (Javanese, Balinese, Sundanese), the Philippines (Baybayin, Buhid, Hanunuo, Kulitan, and Aborlan Tagbanwa), and Malaysia (Rencong, etc.). The primary division is into the North Indic scripts used in Northern India, Nepal, Tibet, Bhutan, Mongolia, and Russia, and the Southern Indic scripts used in South India, Sri Lanka and Southeast Asia. South Indic letter forms are very rounded; North Indic forms less so, though Odia, Golmol and Litumol of Nepal script are rounded. Most North Indic scripts' full letters incorporate a horizontal line at the top, with Gujarati and Odia as exceptions; South Indic scripts do not.

Indic scripts indicate vowels through dependent vowel signs (diacritics) around the consonants, often including a sign that explicitly indicates the lack of a vowel. If a consonant has no vowel sign, this indicates a default vowel. Vowel diacritics may appear above, below, to the left, to the right, or around the consonant. The most widely used Indic script is Devanagari, shared by Hindi, Bihari, Marathi, Konkani, Nepali, and often Sanskrit. A basic letter such as क in Hindi represents a syllable with the default vowel, in this case ka. In some languages, including Hindi, it becomes a final closing consonant at the end of a word, in this case k. The inherent vowel may be changed by adding vowel marks (diacritics), producing syllables such as कि ki, कु ku, के ke, को ko.

In many of the Brahmic scripts, a syllable beginning with a cluster is treated as a single character for purposes of vowel marking, so a vowel marker like ि -i, falling before the character it modifies, may appear several positions before the place where it is pronounced. For example, the game cricket in Hindi is क्रिकेट; the diacritic for i appears before the consonant cluster kr, not before the r. A more unusual example is seen in the Batak alphabet: there the syllable bim is written ba-ma-i-(virama); that is, the vowel diacritic and virama are both written after the consonants for the whole syllable.

In many abugidas, there is also a diacritic to suppress the inherent vowel, yielding the bare consonant. In Devanagari, क् is k, and ल् is l. This is called the virāma or halantam in Sanskrit. It may be used to form consonant clusters, or to indicate that a consonant occurs at the end of a word. Thus in Sanskrit, a default vowel consonant such as क does not take on a final consonant sound; instead, it keeps its vowel. For writing two consonants without a vowel in between, instead of using diacritics on the first consonant to remove its vowel, another popular method is to use special conjunct forms in which two or more consonant characters are merged to express a cluster, such as Devanagari क्ल kla. (Note that some fonts display this as क् followed by ल, rather than forming a conjunct. This expedient is used by ISCII and the South Asian scripts of Unicode.) Thus a closed syllable such as kal requires two aksharas to write.

The Róng script used for the Lepcha language goes further than other Indic abugidas, in that a single akshara can represent a closed syllable: not only the vowel, but any final consonant is indicated by a diacritic. For example, the syllable [sok] would be written as something like s̥̽, here with an underring representing the vowel and an overcross representing the diacritic for the final consonant. Most other Indic abugidas can only indicate a very limited set of final consonants with diacritics, if they can indicate any at all.

Ethiopic

In Ethiopic or Ge'ez script, fidels (individual "letters" of the script) have "diacritics" that are fused with the consonants to the point that they must be considered modifications of the
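The Devanagari mechanics described above map directly onto Unicode, where dependent vowel signs and the virama are separate code points stored in phonetic order regardless of where they are drawn. A minimal Python sketch (illustrative, not from the source text):

```python
# Devanagari code points (standard Unicode assignments)
KA, LA = "\u0915", "\u0932"          # क ka, ल la (each carries the inherent vowel a)
SIGN_I, SIGN_U = "\u093F", "\u0941"  # dependent vowel signs: ि i, ु u
VIRAMA = "\u094D"                    # ् virama/halant: suppresses the inherent vowel

ki = KA + SIGN_I         # कि: ka with its inherent vowel replaced by i
ku = KA + SIGN_U         # कु: ka with its inherent vowel replaced by u
k = KA + VIRAMA          # क्: bare consonant k
kla = KA + VIRAMA + LA   # क्ल: virama joins ka and la into the conjunct kla

# Storage order is phonetic even when rendering is not: in क्रिकेट (cricket),
# the i sign is stored after the whole kr cluster, though drawn to its left.
cricket = "\u0915\u094D\u0930\u093F\u0915\u0947\u091F"  # क्रिकेट
assert cricket[3] == SIGN_I
```

Whether क्ल appears as a true conjunct or as क् followed by ल is left to the font; the underlying code-point sequence is the same.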
because of an embargo on the ruble. On 13 September 1979, ABBA began ABBA: The Tour at Northlands Coliseum in Edmonton, Canada, with a full house of 14,000. "The voices of the band, Agnetha's high sauciness combined with the round, rich lower tones of Anni-Frid, were excellent...Technically perfect, melodically correct and always in perfect pitch...The soft lower voice of Anni-Frid and the high, edgy vocals of Agnetha were stunning", raved the Edmonton Journal. Over the next four weeks they played a total of 17 sold-out dates, 13 in the United States and four in Canada. The last scheduled ABBA concert in the United States, in Washington, D.C., was cancelled due to the emotional distress Fältskog suffered during the flight from New York to Boston, when the group's private plane was caught in extreme weather and unable to land for an extended period; the group took the stage at the Boston Music Hall 90 minutes late. The tour ended with a show in Toronto, Canada, at Maple Leaf Gardens before a capacity crowd of 18,000. "ABBA plays with surprising power and volume; but although they are loud, they're also clear, which does justice to the signature vocal sound... Anyone who's been waiting five years to see Abba will be well satisfied", wrote Record World. On 19 October 1979, the tour resumed in Western Europe, where the band played 23 sold-out gigs, including six sold-out nights at London's Wembley Arena.

Progression

In March 1980, ABBA travelled to Japan where, upon their arrival at Narita International Airport, they were besieged by thousands of fans. The group performed eleven concerts to full houses, including six shows at Tokyo's Budokan. This tour was the last "on the road" adventure of their career. In July 1980, ABBA released the single "The Winner Takes It All", the group's eighth UK chart topper (and their first since 1978).
The song is widely misunderstood as being written about Ulvaeus and Fältskog's marital tribulations; Ulvaeus wrote the lyrics but has stated they were not about his own divorce, and Fältskog has repeatedly stated she was not the loser in their divorce. In the United States, the single peaked at number-eight on the Billboard Hot 100 chart and became ABBA's second Billboard Adult Contemporary number-one. It was also re-recorded, with a slightly different backing track by Andersson and Ulvaeus, by French chanteuse Mireille Mathieu at the end of 1980 as "Bravo tu as gagné", with French lyrics by Alain Boublil. November of the same year saw the release of ABBA's seventh album Super Trouper, which reflected a certain change in ABBA's style, with more prominent use of synthesizers and increasingly personal lyrics. It set a record for the most pre-orders ever received for a UK album after one million copies were ordered before release. The second single from the album, "Super Trouper", also hit number-one in the UK, becoming the group's ninth and final UK chart-topper. Another track from the album, "Lay All Your Love on Me", released in 1981 as a twelve-inch single only in selected territories, managed to top the Billboard Hot Dance Club Play chart and peaked at number-seven on the UK singles chart, becoming, at the time, the highest-charting 12-inch release in UK chart history. Also in 1980, ABBA recorded a compilation of Spanish-language versions of their hits called Gracias Por La Música. This was released in Spanish-speaking countries as well as in Japan and Australia. The album became a major success, and along with the Spanish version of "Chiquitita", it signalled the group's breakthrough in Latin America. ABBA Oro: Grandes Éxitos, the Spanish equivalent of ABBA Gold: Greatest Hits, was released in 1999.

1981–1982: The Visitors and later performances

In January 1981, Ulvaeus married Lena Källersjö, and manager Stig Anderson celebrated his 50th birthday with a party.
For this occasion, ABBA recorded the track "Hovas Vittne" (a pun on the Swedish name for Jehovah's Witness and Anderson's birthplace, Hova) as a tribute to him, and released it only as 200 red vinyl copies, distributed to the guests attending the party. This single has become a sought-after collectable. In mid-February 1981, Andersson and Lyngstad announced they were filing for divorce. Information surfaced that their marriage had been an uphill struggle for years, and that Andersson had already met another woman, Mona Nörklit, whom he married in November 1981. Andersson and Ulvaeus held songwriting sessions in early 1981, and recording sessions began in mid-March. At the end of April, the group recorded a TV special, Dick Cavett Meets ABBA, with the US talk show host Dick Cavett. The Visitors, ABBA's eighth studio album, showed a songwriting maturity and depth of feeling distinctly lacking from their earlier recordings, while still placing the band squarely in the pop genre, with catchy tunes and harmonies. Although not revealed at the time of its release, the album's title track, according to Ulvaeus, refers to the secret meetings held against the approval of totalitarian governments in Soviet-dominated states, while other tracks address topics like failed relationships, the threat of war, ageing, and loss of innocence. The album's only major single release, "One of Us", proved to be the last of ABBA's nine number-one singles in Germany, this being in December 1981, and the swansong of their sixteen Top 5 singles on the South African chart. "One of Us" was also ABBA's final Top 3 hit in the UK, reaching number-three on the UK Singles Chart. Although it topped the album charts across most of Europe, including Ireland, the UK and Germany, The Visitors was not as commercially successful as its predecessors, showing a commercial decline in previously loyal markets such as France, Australia and Japan.
A track from the album, "When All Is Said and Done", was released as a single in North America, Australia and New Zealand, and fittingly became ABBA's final Top 40 hit in the US (debuting on the US charts on 31 December 1981), while also reaching the US Adult Contemporary Top 10, and number-four on the RPM Adult Contemporary chart in Canada. The song's lyrics, as with "The Winner Takes It All" and "One of Us", dealt with the painful experience of separating from a long-term partner, though it looked at the trauma more optimistically. With the now publicised story of Andersson and Lyngstad's divorce, speculation increased of tension within the band. Also released in the United States was the title track of The Visitors, which hit the Top Ten on the Billboard Hot Dance Club Play chart.

Later recording sessions

In the spring of 1982, songwriting sessions started and the group came together for more recordings. Plans were not completely clear, but a new album was discussed and the prospect of a small tour suggested. The recording sessions in May and June 1982 were a struggle, and only three songs were eventually recorded: "You Owe Me One", "I Am the City" and "Just Like That". Andersson and Ulvaeus were not satisfied with the outcome, so the tapes were shelved and the group took a break for the summer. Back in the studio again in early August, the group had changed plans for the rest of the year: they settled for a Christmas release of a double-album compilation of all their past single releases, to be named The Singles: The First Ten Years. New songwriting and recording sessions took place, and during October and December, they released the singles "The Day Before You Came"/"Cassandra" and "Under Attack"/"You Owe Me One", the A-sides of which were included on the compilation album. Neither single made the Top 20 in the United Kingdom, though "The Day Before You Came" became a Top 5 hit in many European countries such as Germany, the Netherlands and Belgium.
The album went to number one in the UK and Belgium, Top 5 in the Netherlands and Germany and Top 20 in many other countries. "Under Attack", the group's final release before disbanding, was a Top 5 hit in the Netherlands and Belgium. "I Am the City" and "Just Like That" were left off The Singles: The First Ten Years for possible inclusion on the next projected studio album, though this never came to fruition. "I Am the City" was eventually released on the compilation album More ABBA Gold in 1993, while "Just Like That" has been recycled in new songs with other artists produced by Andersson and Ulvaeus. A reworked version of the verses ended up in the musical Chess. The chorus section of "Just Like That" was eventually released on a retrospective box set in 1994, as well as in the ABBA Undeleted medley featured on disc 9 of The Complete Studio Recordings. Despite a number of requests from fans, Ulvaeus and Andersson have continued to refuse to release ABBA's version of "Just Like That" in its entirety, even though the complete version has surfaced on bootlegs. The group travelled to London to promote The Singles: The First Ten Years in the first week of November 1982, appearing on Saturday Superstore and The Late, Late Breakfast Show, and also to West Germany in the second week, to perform on Show Express. On 19 November 1982, ABBA appeared for the last time in Sweden on the TV programme Nöjesmaskinen, and on 11 December 1982, they made their last performance ever, transmitted to the UK on Noel Edmonds' The Late, Late Breakfast Show, through a live link from a TV studio in Stockholm.

Later performances

Andersson and Ulvaeus began collaborating with Tim Rice in early 1983 on writing songs for the musical project Chess, while Fältskog and Lyngstad both concentrated on international solo careers.
While Andersson and Ulvaeus were working on the musical, a further co-operation among the three of them came with the musical Abbacadabra that was produced in France for television. It was a children's musical using 14 ABBA songs. Alain and Daniel Boublil, who wrote Les Misérables, had been in touch with Stig Anderson about the project, and the TV musical was aired over Christmas on French TV and later a Dutch version was also broadcast. Boublil previously also wrote the French lyric for Mireille Mathieu's version of "The Winner Takes It All". Lyngstad, who had recently moved to Paris, participated in the French version, and recorded a single, "Belle", a duet with French singer Daniel Balavoine. The song was a cover of ABBA's 1976 instrumental track "Arrival". As the single "Belle" sold well in France, Cameron Mackintosh wanted to stage an English-language version of the show in London, with the French lyrics translated by David Wood and Don Black; Andersson and Ulvaeus got involved in the project, and contributed with one new song, "I Am the Seeker". "Abbacadabra" premiered on 8 December 1983 at the Lyric Hammersmith Theatre in London, to mixed reviews and full houses for eight weeks, closing on 21 January 1984. Lyngstad was also involved in this production, recording "Belle" in English as "Time", a duet with actor and singer B. A. Robertson: the single sold well, and was produced and recorded by Mike Batt. In May 1984, Lyngstad performed "I Have a Dream" with a children's choir at the United Nations Organisation Gala, in Geneva, Switzerland. All four members made their (at the time, final) public appearance as four friends more than as ABBA in January 1986, when they recorded a video of themselves performing an acoustic version of "Tivedshambo" (which was the first song written by their manager Stig Anderson), for a Swedish TV show honouring Anderson on his 55th birthday. The four had not seen each other for more than two years. 
That same year they also performed privately at the 40th birthday of their old tour manager, Claes af Geijerstam, singing a self-written song titled "Der Kleine Franz" that later resurfaced in Chess. Also in 1986, ABBA Live was released, featuring selections of live performances from the group's 1977 and 1979 tours. The four members were guests at the 50th birthday of Görel Hanser in 1999. Hanser was a long-time friend of all four and a former secretary of Stig Anderson. Honouring her, ABBA performed the Swedish birthday song "Med en enkel tulipan" a cappella. Andersson has performed ABBA songs on several occasions. In June 1992, he and Ulvaeus appeared with U2 at a Stockholm concert, singing the chorus of "Dancing Queen", and a few years later, during the final performance of the B & B in Concert in Stockholm, Andersson joined the cast for an encore at the piano. Andersson frequently adds an ABBA song to the playlist when he performs with his BAO band. He also played the piano on new recordings of the ABBA songs "Like an Angel Passing Through My Room" with opera singer Anne Sofie von Otter, and "When All Is Said and Done" with Swedish singer Viktoria Tolstoy. In 2002, Andersson and Ulvaeus performed an a cappella rendition of the first verse of "Fernando" as they accepted their Ivor Novello award in London. Lyngstad performed and recorded an a cappella version of "Dancing Queen" with the Swedish group the Real Group in 1993, and re-recorded "I Have a Dream" with Swiss singer Dan Daniell in 2003.

Break and reunion

ABBA never officially announced the end of the group or an indefinite break, but the group was long considered dissolved after its final public performance, on the British TV programme The Late, Late Breakfast Show (live from Stockholm) on 11 December 1982; the four would not perform together again as ABBA until their 2016 reunion.
While reminiscing on "The Day Before You Came", Ulvaeus said: "we might have continued for a while longer if that had been a number one". In January 1983, Fältskog started recording sessions for a solo album, Lyngstad having successfully released her album Something's Going On some months earlier. Ulvaeus and Andersson, meanwhile, started songwriting sessions for the musical Chess. In interviews at the time, Ulvaeus and Andersson denied that ABBA had split ("Who are we without our ladies? Initials of Brigitte Bardot?"), and throughout 1983 and 1984, Lyngstad and Fältskog repeatedly claimed in interviews that ABBA would come together for a new album. Internal strife between the group and their manager escalated, and the band members sold their shares in Polar Music during 1983. Except for a TV appearance in 1986, the foursome did not come together publicly again until they were reunited at the Swedish premiere of the Mamma Mia! movie on 4 July 2008. The individual members' endeavours shortly before and after their final public performance, coupled with the collapse of both marriages and the lack of significant group activity in the following years, strongly suggested that the group had broken up. In an interview with the Sunday Telegraph following the premiere, Ulvaeus and Andersson said that nothing could entice them back on stage. Ulvaeus said: "We will never appear on stage again. [...] There is simply no motivation to re-group. Money is not a factor and we would like people to remember us as we were. Young, exuberant, full of energy and ambition. I remember Robert Plant saying Led Zeppelin were a cover band now because they cover all their own stuff. I think that hit the nail on the head." However, on 3 January 2011, Fältskog, long considered the most reclusive member of the group and a major obstacle to any reunion, raised the possibility of reuniting for a one-off engagement.
She admitted that she had not yet brought the idea up with the other three members. In April 2013, she reiterated her hopes for a reunion during an interview with Die Zeit, stating: "If they ask me, I'll say yes." In a May 2013 interview, Fältskog, aged 63 at the time, stated that an ABBA reunion would never occur: "I think we have to accept that it will not happen, because we are too old and each one of us has their own life. Too many years have gone by since we stopped, and there's really no meaning in putting us together again". Fältskog further explained that the band members remained on amicable terms: "It's always nice to see each other now and then and to talk a little and to be a little nostalgic." In an April 2014 interview, Fältskog, asked whether the band might reunite for a new recording, said: "It's difficult to talk about this because then all the news stories will be: 'ABBA is going to record another song!' But as long as we can sing and play, then why not? I would love to, but it's up to Björn and Benny."

Resurgence of public interest

The same year the members of ABBA went their separate ways, the French production of Abbacadabra, the children's TV musical built on 14 ABBA songs, spawned new interest in the group's music. After receiving little attention during the mid-to-late 1980s, ABBA's music experienced a resurgence in the early 1990s due to the UK synth-pop duo Erasure, whose Abba-esque, a four-track extended play of ABBA cover versions, topped several European charts in 1992. When U2 arrived in Stockholm for a concert in June of that year, the band paid homage to ABBA by inviting Björn Ulvaeus and Benny Andersson to join them on stage, on guitar and keyboards, for a rendition of "Dancing Queen". September 1992 saw the release of ABBA Gold: Greatest Hits, a new compilation album. The single "Dancing Queen" received radio airplay in the UK in the middle of 1992 to promote the album.
The song returned to the Top 20 of the UK singles chart in August that year, this time peaking at number 16. With sales of 30 million, Gold is the best-selling ABBA album, as well as one of the best-selling albums worldwide. With sales of 5.5 million copies, it is the second-best-selling album of all time in the UK, after Queen's Greatest Hits. More ABBA Gold: More ABBA Hits, a follow-up to Gold, was released in 1993. In 1994, two Australian cult films caught the attention of the world's media, both centred on admiration for ABBA: The Adventures of Priscilla, Queen of the Desert and Muriel's Wedding. The same year, Thank You for the Music, a four-disc box set comprising all the group's hits and stand-out album tracks, was released with the involvement of all four members. "By the end of the twentieth century," American critic Chuck Klosterman wrote a decade later, "it was far more contrarian to hate ABBA than to love them." ABBA were soon recognised and embraced by other acts: Evan Dando of the Lemonheads recorded a cover version of "Knowing Me, Knowing You"; Sinéad O'Connor and Boyzone's Stephen Gately recorded "Chiquitita"; and Tanita Tikaram, Blancmange and Steven Wilson paid tribute to "The Day Before You Came". Cliff Richard covered "Lay All Your Love on Me", while Dionne Warwick, Peter Cetera, Frank Sidebottom and Celebrity Skin recorded their versions of "SOS". US alternative-rock musician Marshall Crenshaw has also been known to play a version of "Knowing Me, Knowing You" in concert, while English songwriter Richard Daniel Roman has recognised ABBA as a major influence. Swedish metal guitarist Yngwie Malmsteen covered "Gimme! Gimme! Gimme! (A Man After Midnight)" with slightly altered lyrics. Two tribute compilation albums of ABBA songs have been released. ABBA: A Tribute coincided with the 25th anniversary celebration and featured 17 songs, some of which were recorded especially for the release.
Notable tracks include Go West's "One of Us", Army of Lovers' "Hasta Mañana", Information Society's "Lay All Your Love on Me", Erasure's "Take a Chance on Me" (with MC Kinky), and Lyngstad's a cappella duet with the Real Group on "Dancing Queen". A second, 12-track album, titled ABBAmania, was released in 1999, with proceeds going to the Youth Music charity in England. It featured all-new cover versions: notable tracks were by Madness ("Money, Money, Money"), Culture Club ("Voulez-Vous"), the Corrs ("The Winner Takes It All") and Steps ("Lay All Your Love on Me", "I Know Him So Well"), along with a medley titled "Thank ABBA for the Music" performed by several artists and featured at the Brit Awards that same year. In 1998, an ABBA tribute group was formed, the ABBA Teens, subsequently renamed the A-Teens to allow the group some independence. The group's first album, The ABBA Generation, consisting solely of ABBA covers reimagined as 1990s pop songs, was a worldwide success, as were subsequent albums. The group disbanded in 2004, citing a gruelling schedule and intentions to go solo. In Sweden, the growing recognition of the legacy of Andersson and Ulvaeus resulted in the 1998 B & B Concerts, a tribute concert (with Swedish singers who had worked with the songwriters through the years) showcasing not only their ABBA years but also hits from before and after ABBA. The concert was a success and was ultimately released on CD. It later toured Scandinavia and even went to Beijing in the People's Republic of China for two concerts. In 2000, ABBA were reported to have turned down an offer of approximately one billion US dollars to do a reunion tour of 100 concerts. For the semi-final of the Eurovision Song Contest 2004, staged in Istanbul 30 years after ABBA had won the contest in Brighton, all four members made cameo appearances in a special comedy video made for the interval act, titled Our Last Video Ever.
Other well-known stars such as Rik Mayall, Cher and Iron Maiden's Eddie also made appearances in the video. It was not included in the official DVD release of the 2004 Eurovision contest, but was issued as a separate DVD release, retitled The Last Video at the request of the former ABBA members. The video was made using puppet models of the members of the band, and has surpassed 13 million views on YouTube as of November 2020. In 2005, all four members of ABBA appeared at the Stockholm premiere of the musical Mamma Mia!. On 22 October 2005, at the 50th anniversary celebration of the Eurovision Song Contest, "Waterloo" was chosen as the best song in the competition's history. In the same month, American singer Madonna released the single "Hung Up", which contains a sample of the keyboard melody from ABBA's 1979 song "Gimme! Gimme! Gimme! (A Man After Midnight)"; the song was a smash hit, peaking at number one in at least 50 countries. On 4 July 2008, all four ABBA members were reunited at the Swedish premiere of the film Mamma Mia!. It was only the second time all of them had appeared together in public since 1986. During the appearance, they re-emphasised that they intended never to officially reunite, citing Robert Plant's remark that the re-formed Led Zeppelin was more like a cover band of itself than the original band. Ulvaeus stated that he wanted the band to be remembered as they were during the peak years of their success. Gold returned to number one on the UK album chart for the fifth time on 3 August 2008. On 14 August 2008, the Mamma Mia! The Movie film soundtrack went to number one on the US Billboard charts, ABBA's first US chart-topping album. During the band's heyday, the highest album chart position they had achieved in America was number 14. In November 2008, all eight studio albums, together with a ninth disc of rare tracks, were released as The Albums.
It hit several charts, peaking at number four in Sweden and reaching the Top 10 in several other European territories. In 2008, Sony Computer Entertainment Europe, in collaboration with Universal Music Group Sweden AB, released SingStar ABBA on both the PlayStation 2 and PlayStation 3 games consoles, as part of the SingStar series of music video games. The PS2 version features 20 ABBA songs, while 25 songs feature on the PS3 version. On 22 January 2009, Fältskog and Lyngstad appeared together on stage to receive the Swedish music award "Rockbjörnen" (for "lifetime achievement"). In an interview, the two women expressed their gratitude for the honorary award and thanked their fans. On 25 November 2009, PRS for Music announced that the British public had voted ABBA the band they would most like to see re-form. On 27 January 2010, ABBAWORLD, a 25-room touring exhibition featuring interactive and audiovisual activities, debuted at Earls Court Exhibition Centre in London. According to the exhibition's website, ABBAWORLD is "approved and fully supported" by the band members. "Mamma Mia" was released as one of the first few non-premium song selections for the online RPG game Bandmaster. On 17 May 2011, "Gimme! Gimme! Gimme!" was added as a non-premium song selection on the Bandmaster Philippines server. On 15 November 2011, Ubisoft released a dancing game called ABBA: You Can Dance for the Wii. In January 2012, Universal Music announced the re-release of ABBA's final album The Visitors, featuring a previously unheard track, "From a Twinkling Star to a Passing Angel". A book titled ABBA: The Official Photo Book was published in early 2014 to mark the 40th anniversary of the band's Eurovision victory. The book reveals that part of the reason for the band's outrageous costumes was that Swedish tax law at the time made the cost of garish outfits that were not suitable for daily wear tax deductible. A sequel to the 2008 movie Mamma Mia!
Here We Go Again, was announced in May 2017; the film was released on 20 July 2018. Cher, who appeared in the movie, also released Dancing Queen, an ABBA cover album, in September 2018. In June 2017, a blue plaque was installed outside Brighton Dome to commemorate their 1974 Eurovision win. In May 2020, it was announced that ABBA's entire studio discography would be released on coloured vinyl for the first time, in a box set titled ABBA: The Studio Albums. The initial release sold out in just a few hours.

2016–present: Reunion, Voyage and ABBAtars

On 20 January 2016, all four members of ABBA made a public appearance at Mamma Mia! The Party in Stockholm. On 6 June 2016, the quartet appeared together at a private party at Berns Salonger in Stockholm, held to celebrate the 50th anniversary of Andersson and Ulvaeus's first meeting. Fältskog and Lyngstad performed live, singing "The Way Old Friends Do" before being joined on stage by Andersson and Ulvaeus. British manager Simon Fuller announced in a statement in October 2016 that the group would be reuniting to work on a new "digital entertainment experience". The project would feature the members as "life-like" avatars, called ABBAtars, based on their late-1970s tours, and was set to launch by the spring of 2019. On 27 April 2018, all four original members of ABBA made a joint announcement that they had recorded two new songs, titled "I Still Have Faith in You" and "Don't Shut Me Down", to feature in a TV special set to air later that year. In September 2018, Ulvaeus stated that the two new songs, as well as the aforementioned TV special, now called ABBA: Thank You for the Music, An All-Star Tribute, would not be released until 2019. The TV special was scrapped by the end of 2018, after Andersson and Ulvaeus rejected Fuller's project and instead partnered with the visual effects company Industrial Light & Magic to prepare the ABBAtars for a music video and a concert.
In January 2019, it was revealed that neither song would be released before the summer. Andersson hinted at the possibility of a third song. In June 2019, Ulvaeus announced that the first new song and the video containing the ABBAtars would be released in November 2019. In September, he stated in an interview that there were now five new ABBA songs to be released in 2020. In early 2020, Andersson confirmed that he was aiming for the songs to be released in September 2020. In April 2020, Ulvaeus gave an interview saying that, in the wake of the COVID-19 pandemic, the avatar project had been delayed by six months. As of 2020, five of the eight original songs Andersson had written for the new album had been recorded by Fältskog and Lyngstad, and the release of a new music video, using previously unseen technology at a cost of £15 million, was still to be decided. In July 2020, Ulvaeus told podcaster Geoff Lloyd that the release of the new ABBA recordings had been delayed until 2021. On 22 September 2020, all four ABBA members reunited at Ealing Studios in London to continue working on the avatar project and filming for the tour. Ulvaeus said that the avatar tour would be scheduled for 2022, since the nature of the technology was complex. Asked whether the new recordings were definitely coming out in 2021, he said: "There will be new music this year, that is definite, it's not a case anymore of it might happen, it will happen." On 26 August 2021, a new website was launched under the title ABBA Voyage, prompting visitors to subscribe "to be the first in line to hear more about ABBA Voyage". Simultaneously with the launch of the webpage, new ABBA Voyage social media accounts were launched, and billboards showing the date "02.09.21" began to appear around London, building anticipation of what would be revealed on that date.
On 29 August, the band officially joined TikTok with a video of Benny Andersson playing "Dancing Queen" on the piano, and media reported that a new album would be announced on 2 September. On that date, Voyage, their first new album in 40 years, was announced for release on 5 November 2021, along with ABBA Voyage, a concert residency in London featuring motion-capture digital avatars of the four band members alongside a 10-piece live band, due to start in May 2022. Fältskog stated that the Voyage album and tour were likely to be their last. The announcement of the new album was accompanied by the release of the previously announced singles "I Still Have Faith in You" and "Don't Shut Me Down". The music video for "I Still Have Faith in You", featuring footage of the band during their performing years as well as a first look at the ABBAtars, earned over a million views in its first three hours. "Don't Shut Me Down" became the first ABBA release since October 1978 to top the singles chart in Sweden. In October 2021, the third single, "Just a Notion", was released, and it was announced that ABBA would split for good after the release of Voyage. However, in an interview with BBC Radio 2 on 11 November, Lyngstad said "don't be too sure" that Voyage is the final ABBA album, and in an interview with BBC News on 5 November, Andersson stated, "if they (the ladies) twist my arm I might change my mind." The fourth single from the album, "Little Things", was released on 3 December.

Artistry

Recording process

ABBA were perfectionists in the studio, working on tracks until they got them right rather than leaving them to come back to later. They spent the bulk of their career in the studio; in separate 2021 interviews, Ulvaeus stated they may have toured for only six months in total, while Andersson said they played fewer than 100 shows during the band's career.
The band created a basic rhythm track with a drummer, guitarist and bass player, and overlaid other arrangements and instruments. Vocals were then added, and orchestral overdubs were usually left until last. Fältskog and Lyngstad contributed ideas at the studio stage: Andersson and Ulvaeus played them the backing tracks, and they made comments and suggestions. According to Fältskog, she and Lyngstad had the final say in how the lyrics were shaped. After vocals and overdubs were done, the band took up to five days to mix a song. Their single "SOS" was "heavily influenced by Phil Spector's Wall of Sound and the melodies of the Beach Boys", according to Billboard writer Fred Bronson, who also reported that Ulvaeus had said, "Because there was the Latin-American influence, the German, the Italian, the English, the American, all of that. I suppose we were a bit exotic in every territory in an acceptable way."

Fashion, style, videos, advertising campaigns

ABBA were widely noted for the colourful and trend-setting costumes their members wore. The reason for the wild costumes was Swedish tax law: the cost of the clothes was deductible only if they could not be worn other than for performances. Choreography by Graham Tainton also contributed to their performance style. The videos that accompanied some of the band's biggest hits are often cited as being among the earliest examples of the music video genre. Most of ABBA's videos (and ABBA: The Movie) were directed by Lasse Hallström, who would later direct the films My Life as a Dog, The Cider House Rules and Chocolat. ABBA made videos because their songs were hits in many different countries and personal appearances were not always possible; videos also minimised travelling, particularly to countries that would have required extremely long flights. Fältskog and Ulvaeus had two young children, and Fältskog, who was also afraid of flying, was very reluctant to leave them for long periods.
ABBA's manager, Stig Anderson, realised the potential of a simple video clip shown on television to publicise a single or album, allowing easier and quicker exposure than a concert tour. Some of these videos have become classics because of their 1970s-era costumes and early video effects, such as grouping the band members in different combinations of pairs, overlapping one singer's profile with the other's full face, and contrasting one member against another. In 1976, ABBA participated in an advertising campaign in Australia to promote National, a brand of the Matsushita Electric Industrial Co. The campaign was also broadcast in Japan. Five commercial spots of approximately one minute each were produced, each presenting the "National Song" performed by ABBA, using the melody and instrumental arrangements of "Fernando" with revised lyrics.

Political use of ABBA's music

In September 2010, band members Andersson and Ulvaeus criticised the right-wing Danish People's Party (DF) for using the ABBA song "Mamma Mia" (with modified lyrics referencing Pia Kjærsgaard) at rallies. The band threatened to file a lawsuit against the DF, saying they had never allowed their music to be used politically and had no interest in supporting the party. Their record label, Universal Music, later said that no legal action would be taken because an agreement had been reached.

Success in the United States

During their active career, from 1972 to 1982, 20 of ABBA's singles entered the Billboard Hot 100; 14 of these made the Top 40 (13 on the Cashbox Top 100), with 10 making the Top 20 on both charts. Four of those singles reached the Top 10, including "Dancing Queen", which reached number one in April 1977. While "Fernando" and "SOS" did not break the Top 10 on the Billboard Hot 100 (reaching numbers 13 and 15 respectively), they did reach the Top 10 on the Cashbox ("Fernando") and Record World ("SOS") charts.
Both "Dancing Queen" and "Take a Chance on Me" were certified gold by the Recording Industry Association of America for sales of over one million copies each. The group also had 12 Top 20 singles on the Billboard Adult Contemporary chart, with two of them, "Fernando" and "The Winner Takes It All", reaching number one. "Lay All Your Love on Me" was ABBA's fourth number-one single on a Billboard chart, topping the Hot Dance Club Play chart. Ten ABBA albums have made their way into the top half of the Billboard 200 album chart, with eight reaching the Top 50, five reaching the Top 20 and one reaching the Top 10. In November 2021, Voyage became ABBA's highest-charting album on the Billboard 200, peaking at No. 2. Five albums received RIAA gold certification (more than 500,000 copies sold), while three acquired platinum status (selling more than one million copies). The compilation album ABBA Gold: Greatest Hits topped the Billboard Top Pop Catalog Albums chart in August 2008 (15 years after it was first released in the US in 1993), becoming the group's first number-one album on any of the Billboard album charts. It has sold 6 million copies in the US. On 15 March 2010, ABBA were inducted into the Rock and Roll Hall of Fame by Bee Gees members Barry Gibb and Robin Gibb. The ceremony was held at the Waldorf Astoria Hotel in New York City, with the group represented by Anni-Frid Lyngstad and Benny Andersson. In November 2021, ABBA received a Grammy nomination for Record of the Year for "I Still Have Faith in You", from the album Voyage, their first ever nomination.
Band members

Agnetha Fältskog – lead and backing vocals
Anni-Frid "Frida" Lyngstad – lead and backing vocals
Björn Ulvaeus – guitars, backing and lead vocals
Benny Andersson – keyboards, synthesizers, piano, accordion, guitars, backing and lead vocals

The members of ABBA were married as follows: Agnetha Fältskog and Björn Ulvaeus from 1971 to 1980; Benny Andersson and Anni-Frid Lyngstad from 1978 to 1981. In addition to the four members of ABBA, other musicians played on their studio recordings, live appearances and concert performances. These include Rutger Gunnarsson (1972–1982) bass guitar and string arrangements, Ola Brunkert (1972–1981) drums, Mike Watson (1972–1980) bass guitar, Janne Schaffer (1972–1982) lead electric guitar, Roger Palm (1972–1979) drums, Malando Gassama (1973–1979) percussion, Lasse Wellander (1974–2021) lead electric guitar, and Per Lindvall (1980–2021) drums.

ABBA-related tributes

Musical groups
Abbaesque – An Irish ABBA tribute band
A-Teens – A pop music group from Stockholm, Sweden
Björn Again – An Australian tribute band; notable as the earliest-formed ABBA tribute band (1988) and, as of 2021, still touring
Gabba – An ABBA–Ramones tribute band that covers the former in the style of the latter, the name being a reference to the Ramones catchphrase "Gabba Gabba Hey"

Media
Saturday Night (1975) (TV) – Season 1, Episode 5, hosted by Robert Klein with musical numbers by ABBA and Loudon Wainwright III
Abbacadabra – A French children's musical based on songs from ABBA
Abba-esque – A 1992 cover EP by Erasure
Abbasalutely – A tribute compilation album released in 1995
Mamma Mia! – A musical stage show based on the songs of ABBA
ABBAmania – An ITV programme and tribute album to ABBA released in 1999
Mamma Mia! – A film adaptation of the musical stage show
Mamma Mia! Here We Go Again – A prequel/sequel to the original film
ABBA: You Can Dance – A dance video game with songs from ABBA, released by Ubisoft in 2011 as a spin-off of the Just Dance video game series
Dancing Queen – A 2018 cover album by Cher

Discography

Studio albums
Ring Ring (1973)
Waterloo (1974)
ABBA (1975)
Arrival (1976)
The Album (1977)
Voulez-Vous (1979)
Super Trouper (1980)
The Visitors (1981)
Voyage (2021)

Tours
1973: Swedish Folkpark Tour
1974–1975: European Tour
1977: European & Australian Tour
1979–1980: ABBA: The Tour
2022: ABBA Voyage

See also
ABBA: The Museum
ABBA City Walks – Stockholm City Museum
ABBAMAIL
List of best-selling music artists
List of Swedes in music
Music of Sweden
Popular music in Sweden

Further reading
Benny Andersson, Björn Ulvaeus, Judy Craymer: Mamma Mia! How Can I Resist You?: The Inside Story of Mamma Mia! and the Songs of ABBA. Weidenfeld & Nicolson, 2006
Carl Magnus Palm: ABBA – The Complete Recording Sessions (1994)
Carl Magnus Palm: From "ABBA" to "Mamma Mia!" (2000)
Elisabeth Vincentelli: ABBA Treasures: A Celebration of the Ultimate
The album went to number one in the UK and Belgium, Top 5 in the Netherlands and Germany and Top 20 in many other countries. "Under Attack", the group's final release before disbanding, was a Top 5 hit in the Netherlands and Belgium. "I Am the City" and "Just Like That" were left unreleased on The Singles: The First Ten Years for possible inclusion on the next projected studio album, though this never came to fruition. "I Am the City" was eventually released on the compilation album More ABBA Gold in 1993, while "Just Like That" has been recycled in new songs with other artists produced by Andersson and Ulvaeus. A reworked version of the verses ended up in the musical Chess. The chorus section of "Just Like That" was eventually released on a retrospective box set in 1994, as well as in the ABBA Undeleted medley featured on disc 9 of The Complete Studio Recordings. Despite a number of requests from fans, Ulvaeus and Andersson are still refusing to release ABBA's version of "Just Like That" in its entirety, even though the complete version has surfaced on bootlegs. The group travelled to London to promote The Singles: The First Ten Years in the first week of November 1982, appearing on Saturday Superstore and The Late, Late Breakfast Show, and also to West Germany in the second week, to perform on Show Express. On 19 November 1982, ABBA appeared for the last time in Sweden on the TV programme Nöjesmaskinen, and on 11 December 1982, they made their last performance ever, transmitted to the UK on Noel Edmonds' The Late, Late Breakfast Show, through a live link from a TV studio in Stockholm. Later performances Andersson and Ulvaeus began collaborating with Tim Rice in early 1983 on writing songs for the musical project Chess, while Fältskog and Lyngstad both concentrated on international solo careers. 
While Andersson and Ulvaeus were working on the musical, a further co-operation among the three of them came with the musical Abbacadabra that was produced in France for television. It was a children's musical using 14 ABBA songs. Alain and Daniel Boublil, who wrote Les Misérables, had been in touch with Stig Anderson about the project, and the TV musical was aired over Christmas on French TV and later a Dutch version was also broadcast. Boublil previously also wrote the French lyric for Mireille Mathieu's version of "The Winner Takes It All". Lyngstad, who had recently moved to Paris, participated in the French version, and recorded a single, "Belle", a duet with French singer Daniel Balavoine. The song was a cover of ABBA's 1976 instrumental track "Arrival". As the single "Belle" sold well in France, Cameron Mackintosh wanted to stage an English-language version of the show in London, with the French lyrics translated by David Wood and Don Black; Andersson and Ulvaeus got involved in the project, and contributed with one new song, "I Am the Seeker". "Abbacadabra" premiered on 8 December 1983 at the Lyric Hammersmith Theatre in London, to mixed reviews and full houses for eight weeks, closing on 21 January 1984. Lyngstad was also involved in this production, recording "Belle" in English as "Time", a duet with actor and singer B. A. Robertson: the single sold well, and was produced and recorded by Mike Batt. In May 1984, Lyngstad performed "I Have a Dream" with a children's choir at the United Nations Organisation Gala, in Geneva, Switzerland. All four members made their (at the time, final) public appearance as four friends more than as ABBA in January 1986, when they recorded a video of themselves performing an acoustic version of "Tivedshambo" (which was the first song written by their manager Stig Anderson), for a Swedish TV show honouring Anderson on his 55th birthday. The four had not seen each other for more than two years. 
That same year they also performed privately at another friend's 40th birthday: their old tour manager, Claes af Geijerstam. They sang a self-written song titled "Der Kleine Franz" that was later to resurface in Chess. Also in 1986, ABBA Live was released, featuring selections of live performances from the group's 1977 and 1979 tours. The four members were guests at the 50th birthday of Görel Hanser in 1999. Hanser was a long-time friend of all four, and also former secretary of Stig Anderson. Honouring Görel, ABBA performed a Swedish birthday song "Med en enkel tulipan" a cappella. Andersson has on several occasions performed ABBA songs. In June 1992, he and Ulvaeus appeared with U2 at a Stockholm concert, singing the chorus of "Dancing Queen", and a few years later during the final performance of the B & B in Concert in Stockholm, Andersson joined the cast for an encore at the piano. Andersson frequently adds an ABBA song to the playlist when he performs with his BAO band. He also played the piano during new recordings of the ABBA songs "Like an Angel Passing Through My Room" with opera singer Anne Sofie von Otter, and "When All Is Said and Done" with Swede Viktoria Tolstoy. In 2002, Andersson and Ulvaeus both performed an a cappella rendition of the first verse of "Fernando" as they accepted their Ivor Novello award in London. Lyngstad performed and recorded an a cappella version of "Dancing Queen" with the Swedish group the Real Group in 1993, and also re-recorded "I Have a Dream" with Swiss singer Dan Daniell in 2003. Break and reunion ABBA never officially announced the end of the group or an indefinite break, but it was long considered dissolved after their final public performance together in 1982. Their final public performance together as ABBA before their 2016 reunion was on the British TV programme The Late, Late Breakfast Show (live from Stockholm) on 11 December 1982. 
While reminiscing on "The Day Before You Came", Ulvaeus said: "we might have continued for a while longer if that had been a number one". In January 1983, Fältskog started recording sessions for a solo album, as Lyngstad had successfully released her album Something's Going On some months earlier. Ulvaeus and Andersson, meanwhile, started songwriting sessions for the musical Chess. In interviews at the time, Ulvaeus and Andersson denied that ABBA had split ("Who are we without our ladies? Initials of Brigitte Bardot?"), and throughout 1983 and 1984 Lyngstad and Fältskog repeatedly claimed in interviews that ABBA would come together for a new album. Internal strife between the group and their manager escalated, and the band members sold their shares in Polar Music during 1983. Except for a TV appearance in 1986, the foursome did not come together publicly again until they were reunited at the Swedish premiere of the Mamma Mia! movie on 4 July 2008. The individual members' endeavours shortly before and after their final public performance, coupled with the collapse of both marriages and the lack of significant group activity in the following few years, strongly suggested that the group had broken up. In an interview with the Sunday Telegraph following the premiere, Ulvaeus and Andersson said that there was nothing that could entice them back on stage again. Ulvaeus said: "We will never appear on stage again. [...] There is simply no motivation to re-group. Money is not a factor and we would like people to remember us as we were. Young, exuberant, full of energy and ambition. I remember Robert Plant saying Led Zeppelin were a cover band now because they cover all their own stuff. I think that hit the nail on the head." However, on 3 January 2011, Fältskog, long considered to be the most reclusive member of the group and a major obstacle to any reunion, raised the possibility of reuniting for a one-off engagement. 
She admitted that she had not yet brought the idea up to the other three members. In April 2013, she reiterated her hopes for a reunion during an interview with Die Zeit, stating: "If they ask me, I'll say yes." In a May 2013 interview, Fältskog, aged 63 at the time, stated that an ABBA reunion would never occur: "I think we have to accept that it will not happen, because we are too old and each one of us has their own life. Too many years have gone by since we stopped, and there's really no meaning in putting us together again". Fältskog further explained that the band members remained on amicable terms: "It's always nice to see each other now and then and to talk a little and to be a little nostalgic." In an April 2014 interview, Fältskog, when asked whether the band might reunite for a new recording, said: "It's difficult to talk about this because then all the news stories will be: 'ABBA is going to record another song!' But as long as we can sing and play, then why not? I would love to, but it's up to Björn and Benny." Resurgence of public interest The same year the members of ABBA went their separate ways, the French production of a "tribute" show (a children's TV musical named Abbacadabra using 14 ABBA songs) spawned new interest in the group's music. After receiving little attention during the mid-to-late 1980s, ABBA's music experienced a resurgence in the early 1990s due to the UK synth-pop duo Erasure, who released Abba-esque, a four-track extended play of ABBA cover versions that topped several European charts in 1992. As U2 arrived in Stockholm for a concert in June of that year, the band paid homage to ABBA by inviting Björn Ulvaeus and Benny Andersson to join them on stage for a rendition of "Dancing Queen", playing guitar and keyboards. September 1992 saw the release of ABBA Gold: Greatest Hits, a new compilation album. The single "Dancing Queen" received radio airplay in the UK in the middle of 1992 to promote the album. 
The song returned to the Top 20 of the UK singles chart in August that year, this time peaking at number 16. With sales of 30 million copies, Gold is the best-selling ABBA album, as well as one of the best-selling albums worldwide. With sales of 5.5 million copies, it is the second-best-selling album of all time in the UK, after Queen's Greatest Hits. More ABBA Gold: More ABBA Hits, a follow-up to Gold, was released in 1993. In 1994, two Australian cult films caught the attention of the world's media, both focusing on admiration for ABBA: The Adventures of Priscilla, Queen of the Desert and Muriel's Wedding. The same year, Thank You for the Music, a four-disc box set comprising all the group's hits and stand-out album tracks, was released with the involvement of all four members. "By the end of the twentieth century," American critic Chuck Klosterman wrote a decade later, "it was far more contrarian to hate ABBA than to love them." ABBA were soon recognised and embraced by other acts: Evan Dando of the Lemonheads recorded a cover version of "Knowing Me, Knowing You"; Sinéad O'Connor and Boyzone's Stephen Gately have recorded "Chiquitita"; Tanita Tikaram, Blancmange and Steven Wilson paid tribute to "The Day Before You Came". Cliff Richard covered "Lay All Your Love on Me", while Dionne Warwick, Peter Cetera, Frank Sidebottom and Celebrity Skin recorded their versions of "SOS". US alternative-rock musician Marshall Crenshaw has also been known to play a version of "Knowing Me, Knowing You" in concert appearances, while English Latin pop songwriter Richard Daniel Roman has recognised ABBA as a major influence. Swedish metal guitarist Yngwie Malmsteen covered "Gimme! Gimme! Gimme! (A Man After Midnight)" with slightly altered lyrics. Two different tribute compilation albums of ABBA songs have been released. ABBA: A Tribute coincided with the 25th anniversary celebration and featured 17 songs, some of which were recorded especially for this release. 
Notable tracks include Go West's "One of Us", Army of Lovers' "Hasta Mañana", Information Society's "Lay All Your Love on Me", Erasure's "Take a Chance on Me" (with MC Kinky), and Lyngstad's a cappella duet of "Dancing Queen" with the Real Group. A second 12-track album was released in 1999, titled ABBAmania, with proceeds going to the Youth Music charity in England. It featured all new cover versions: notable tracks were by Madness ("Money, Money, Money"), Culture Club ("Voulez-Vous"), the Corrs ("The Winner Takes It All"), Steps ("Lay All Your Love on Me", "I Know Him So Well"), and a medley titled "Thank ABBA for the Music" performed by several artists, as featured at the Brit Awards that same year. In 1998, an ABBA tribute group was formed, the ABBA Teens, which was subsequently renamed the A-Teens to allow the group some independence. The group's first album, The ABBA Generation, consisting solely of ABBA covers reimagined as 1990s pop songs, was a worldwide success, as were subsequent albums. The group disbanded in 2004 due to a gruelling schedule and intentions to go solo. In Sweden, the growing recognition of the legacy of Andersson and Ulvaeus resulted in the 1998 B & B Concerts, a tribute concert (with Swedish singers who had worked with the songwriters through the years) showcasing not only their ABBA years, but hits both before and after ABBA. The concert was a success, and was ultimately released on CD. It later toured Scandinavia and even went to Beijing in the People's Republic of China for two concerts. In 2000, ABBA were reported to have turned down an offer of approximately one billion US dollars to do a reunion tour consisting of 100 concerts. For the semi-final of the Eurovision Song Contest 2004, staged in Istanbul 30 years after ABBA had won the contest in Brighton, all four members made cameo appearances in a special comedy video made for the interval act, titled Our Last Video Ever. 
Other well-known stars such as Rik Mayall, Cher and Iron Maiden's Eddie also made appearances in the video. It was not included in the official DVD release of the 2004 Eurovision contest, but was issued as a separate DVD release, retitled The Last Video at the request of the former ABBA members. The video was made using puppet models of the members of the band, and has surpassed 13 million views on YouTube as of November 2020. In 2005, all four members of ABBA appeared at the Stockholm premiere of the musical Mamma Mia!. On 22 October 2005, at the 50th anniversary celebration of the Eurovision Song Contest, "Waterloo" was chosen as the best song in the competition's history. In the same month, American singer Madonna released the single "Hung Up", which contains a sample of the keyboard melody from ABBA's 1979 song "Gimme! Gimme! Gimme! (A Man After Midnight)"; the song was a smash hit, peaking at number one in at least 50 countries. On 4 July 2008, all four ABBA members were reunited at the Swedish premiere of the film Mamma Mia!. It was only the second time all of them had appeared together in public since 1986. During the appearance, they re-emphasised that they intended never to officially reunite, citing the opinion of Robert Plant that the re-formed Led Zeppelin was more like a cover band of itself than the original band. Ulvaeus stated that he wanted the band to be remembered as they were during the peak years of their success. Gold returned to number one in the UK album charts for the fifth time on 3 August 2008. On 14 August 2008, the Mamma Mia! The Movie film soundtrack went to number one on the US Billboard charts, ABBA's first US chart-topping album. During the band's heyday, the highest album chart position they had ever achieved in America was number 14. In November 2008, all eight studio albums, together with a ninth disc of rare tracks, were released as The Albums. 
It hit several charts, peaking at number four in Sweden and reaching the Top 10 in several other European territories. In 2008, Sony Computer Entertainment Europe, in collaboration with Universal Music Group Sweden AB, released SingStar ABBA on both the PlayStation 2 and PlayStation 3 games consoles, as part of the SingStar series of music video games. The PS2 version features 20 ABBA songs, while 25 songs feature on the PS3 version. On 22 January 2009, Fältskog and Lyngstad appeared together on stage to receive the Swedish music award "Rockbjörnen" (for "lifetime achievement"). In an interview, the two women expressed their gratitude for the honorary award and thanked their fans. On 25 November 2009, PRS for Music announced that the British public had voted ABBA as the band they would most like to see re-form. On 27 January 2010, ABBAWORLD, a 25-room touring exhibition featuring interactive and audiovisual activities, debuted at Earls Court Exhibition Centre in London. According to the exhibition's website, ABBAWORLD is "approved and fully supported" by the band members. "Mamma Mia" was released as one of the first few non-premium song selections for the online RPG game Bandmaster. On 17 May 2011, "Gimme! Gimme! Gimme!" was added as a non-premium song selection for the Bandmaster Philippines server. On 15 November 2011, Ubisoft released a dancing game called ABBA: You Can Dance for the Wii. In January 2012, Universal Music announced the re-release of ABBA's then-final album The Visitors, featuring a previously unheard track "From a Twinkling Star to a Passing Angel". A book titled ABBA: The Official Photo Book was published in early 2014 to mark the 40th anniversary of the band's Eurovision victory. The book reveals that part of the reason for the band's outrageous costumes was that Swedish tax laws at the time allowed the cost of garish outfits to be deducted, provided they were not suitable for daily wear. A sequel to the 2008 movie Mamma Mia!, titled Mamma Mia! 
Here We Go Again, was announced in May 2017; the film was released on 20 July 2018. Cher, who appeared in the movie, also released Dancing Queen, an ABBA cover album, in September 2018. In June 2017, a blue plaque outside Brighton Dome was unveiled to commemorate their 1974 Eurovision win. In May 2020, it was announced that ABBA's entire studio discography would be released on coloured vinyl for the first time, in a box set titled ABBA: The Studio Albums. The initial release sold out in just a few hours. 2016–present: Reunion, Voyage and ABBAtars On 20 January 2016, all four members of ABBA made a public appearance at Mamma Mia! The Party in Stockholm. On 6 June 2016, the quartet appeared together at a private party at Berns Salonger in Stockholm, which was held to celebrate the 50th anniversary of Andersson and Ulvaeus's first meeting. Fältskog and Lyngstad performed live, singing "The Way Old Friends Do" before they were joined on stage by Andersson and Ulvaeus. British manager Simon Fuller announced in a statement in October 2016 that the group would be reuniting to work on a new 'digital entertainment experience'. The project would feature the members in their "life-like" avatar form, called ABBAtars, based on their late 1970s tour, and was set to launch by the spring of 2019. On 27 April 2018, all four original members of ABBA made a joint announcement that they had recorded two new songs, titled "I Still Have Faith in You" and "Don't Shut Me Down", to feature in a TV special set to air later that year. In September 2018, Ulvaeus stated that the two new songs, as well as the aforementioned TV special, now called ABBA: Thank You for the Music, An All-Star Tribute, would not be released until 2019. The TV special was ultimately scrapped in 2018, as Andersson and Ulvaeus rejected Fuller's project and instead partnered with the visual effects company Industrial Light & Magic to prepare the ABBAtars for a music video and a concert. 
In January 2019, it was revealed that neither song would be released before the summer. Andersson hinted at the possibility of a third song. In June 2019, Ulvaeus announced that the first new song and video containing the ABBAtars would be released in November 2019. In September, he stated in an interview that there were now five new ABBA songs to be released in 2020. In early 2020, Andersson confirmed that he was aiming for the songs to be released in September 2020. In April 2020, Ulvaeus gave an interview saying that in the wake of the COVID-19 pandemic, the avatar project had been delayed by six months. As of 2020, five of the eight original songs Andersson had written for the new album had been recorded by the two female members, and the release of a new music video, using previously unseen technology at a cost of £15 million, was yet to be decided. In July 2020, Ulvaeus told podcaster Geoff Lloyd that the release of the new ABBA recordings had been delayed until 2021. On 22 September 2020, all four ABBA members reunited at Ealing Studios in London to continue working on the avatar project and filming for the tour. Ulvaeus said that the avatar tour would be scheduled for 2022, given the complexity of the technology. When questioned whether the new recordings were definitely coming out in 2021, Ulvaeus said: "There will be new music this year, that is definite, it's not a case anymore of it might happen, it will happen." On 26 August 2021, a new website was launched, with the title ABBA Voyage. On the page, visitors were prompted to subscribe "to be the first in line to hear more about ABBA Voyage". Simultaneously with the launch of the webpage, new ABBA Voyage social media accounts were launched, and billboards around London began to appear, all showing the date "02.09.21", building anticipation of what would be revealed on that date. 
On 29 August, the band officially joined TikTok with a video of Benny Andersson playing "Dancing Queen" on the piano, and media reported on a new album to be announced on 2 September. On that date, Voyage, their first new album in 40 years, was announced to be released on 5 November 2021, along with ABBA Voyage, a concert residency in London featuring the motion-capture digital avatars of the four band members alongside a 10-piece live band, due to start in May 2022. Fältskog stated that the Voyage album and tour are likely to be their last. The announcement of the new album was accompanied by the release of the previously announced new singles "I Still Have Faith in You" and "Don't Shut Me Down". The music video for "I Still Have Faith in You", featuring footage of the band during their performing years and also a first look at the ABBAtars, earned over a million views in its first three hours. "Don't Shut Me Down" became the first ABBA release since October 1978 to top the singles chart in Sweden. In October 2021, the third single "Just a Notion" was released, and it was announced that ABBA would split for good after the release of Voyage. However, in an interview with BBC Radio 2 on 11 November, Lyngstad stated "don't be too sure" that Voyage is the final ABBA album. Also, in an interview with BBC News on 5 November, Andersson stated "if they (the ladies) twist my arm I might change my mind." The fourth single from the album, "Little Things", was released on 3 December. Artistry Recording process ABBA were perfectionists in the studio, working on tracks until they got them right rather than leaving them to come back to later. They spent the bulk of their time within the studio; in separate 2021 interviews, Ulvaeus stated that they may have toured for only six months, while Andersson said they played fewer than 100 shows during the band's career. 
The band created a basic rhythm track with a drummer, guitarist and bass player, and overlaid other arrangements and instruments. Vocals were then added, and orchestra overdubs were usually left until last. Fältskog and Lyngstad contributed ideas at the studio stage. Andersson and Ulvaeus played them the backing tracks and they made comments and suggestions. According to Fältskog, she and Lyngstad had the final say in how the lyrics were shaped. After vocals and overdubs were done, the band took up to five days to mix a song. Their single "S.O.S." was "heavily influenced by Phil Spector's Wall of Sound and the melodies of the Beach Boys", according to Billboard writer Fred Bronson, who also reported that Ulvaeus had said, "Because there was the Latin-American influence, the German, the Italian, the English, the American, all of that. I suppose we were a bit exotic in every territory in an acceptable way." Fashion, style, videos, advertising campaigns ABBA was widely noted for the colourful and trend-setting costumes its members wore. The reason for the wild costumes was Swedish tax law: the cost of the clothes was deductible only if they could not be worn other than for performances. Choreography by Graham Tainton also contributed to their performance style. The videos that accompanied some of the band's biggest hits are often cited as being among the earliest examples of the genre. Most of ABBA's videos (and ABBA: The Movie) were directed by Lasse Hallström, who would later direct the films My Life as a Dog, The Cider House Rules and Chocolat. ABBA made videos because their songs were hits in many different countries and personal appearances were not always possible. This was also done in an effort to minimise travelling, particularly to countries that would have required extremely long flights. Fältskog and Ulvaeus had two young children and Fältskog, who was also afraid of flying, was very reluctant to leave her children for such a long time. 
ABBA's manager, Stig Anderson, realised the potential of showing a simple video clip on television to publicise a single or album, thereby allowing easier and quicker exposure than a concert tour. Some of these videos have become classics because of the 1970s-era costumes and early video effects, such as the grouping of the band members in different combinations of pairs, overlapping one singer's profile with the other's full face, and the contrasting of one member against another. In 1976, ABBA participated in an advertising campaign to promote the Matsushita Electric Industrial Co.'s brand, National, in Australia. The campaign was also broadcast in Japan. Five commercial spots, each of approximately one minute, were produced, each presenting the "National Song" performed by ABBA using the melody and instrumental arrangements of "Fernando" and revised lyrics. Political use of ABBA's music In September 2010, band members Andersson and Ulvaeus criticised the right-wing Danish People's Party (DF) for using the ABBA song "Mamma Mia" (with modified lyrics referencing Pia Kjærsgaard) at rallies. The band threatened to file a lawsuit against the DF, saying they never allowed their music to be used politically and that they had absolutely no interest in supporting the party. Their record label Universal Music later said that no legal action would be taken because an agreement had been reached. Success in the United States During their active career, from 1972 to 1982, 20 of ABBA's singles entered the Billboard Hot 100; 14 of these made the Top 40 (13 on the Cashbox Top 100), with 10 making the Top 20 on both charts. A total of four of those singles reached the Top 10, including "Dancing Queen", which reached number one in April 1977. While "Fernando" and "SOS" did not break the Top 10 on the Billboard Hot 100 (reaching number 13 and 15 respectively), they did reach the Top 10 on Cashbox ("Fernando") and Record World ("SOS") charts. 
Both "Dancing Queen" and "Take a Chance on Me" were certified gold by the Recording Industry Association of America for sales of over one million copies each. The group also had 12 Top 20 singles on the Billboard Adult Contemporary chart, with two of them, "Fernando" and "The Winner Takes It All", reaching number one. "Lay All Your Love on Me" was ABBA's fourth number-one single on a Billboard chart, topping the Hot Dance Club Play chart. Ten ABBA albums have made their way into the top half of the Billboard 200 album chart, with eight reaching the Top 50, five reaching the Top 20 and one reaching the Top 10. In November 2021, Voyage became ABBA's highest-charting album on the Billboard 200, peaking at No. 2. Five albums received RIAA gold certification (more than 500,000 copies sold), while three acquired platinum status (selling more than one million copies). The compilation album ABBA Gold: Greatest Hits topped the Billboard Top Pop Catalog Albums chart in August 2008 (15 years after it was first released in the US in 1993), becoming the group's first number-one album ever on any of the Billboard album charts. It has sold 6 million copies there. On 15 March 2010, ABBA were inducted into the Rock and Roll Hall of Fame by Bee Gees members Barry Gibb and Robin Gibb. The ceremony was held at the Waldorf Astoria Hotel in New York City. The group were represented by Anni-Frid Lyngstad and Benny Andersson. In November 2021, ABBA received a Grammy nomination for Record of the Year. The single "I Still Have Faith in You", from the album Voyage, was their first-ever nomination. 
Band members Agnetha Fältskog – lead and backing vocals Anni-Frid "Frida" Lyngstad – lead and backing vocals Björn Ulvaeus – guitars, backing and lead vocals Benny Andersson – keyboards, synthesizers, piano, accordion, guitars, backing and lead vocals The members of ABBA were married as follows: Agnetha Fältskog and Björn Ulvaeus from 1971 to 1980; Benny Andersson and Anni-Frid Lyngstad from 1978 to 1981. In addition to the four members of ABBA, other musicians played on their studio recordings, live appearances and concert performances. These include Rutger Gunnarsson (1972–1982) bass guitar and string arrangements, Ola Brunkert (1972–1981) drums, Mike Watson (1972–1980) bass guitar, Janne Schaffer (1972–1982) lead electric guitar, Roger Palm (1972–1979) drums, Malando Gassama (1973–1979) percussion, Lasse Wellander (1974–2021) lead electric guitar, and Per Lindvall (1980–2021) drums. ABBA-related tributes Musical groups Abbaesque – An Irish ABBA tribute band A-Teens – A pop music group from Stockholm, Sweden Björn Again – An Australian tribute band; notable as the earliest-formed ABBA tribute band (1988) and, as of 2021, still touring. Gabba – An ABBA–Ramones tribute band that covers the former in the style of the latter, the name being a reference to the Ramones catchphrase "Gabba Gabba Hey". Media Saturday Night (1975) (TV) – Season 1, Episode 5 (hosted by Robert Klein with musical numbers by ABBA and Loudon Wainwright III) Abbacadabra – A French children's musical based on songs from ABBA Abba-esque – A 1992 cover EP by Erasure Abbasalutely – A compilation album released in 1995 as a tribute album to ABBA Mamma Mia! – A musical stage show based on songs of ABBA ABBAmania – An ITV programme and tribute album to ABBA released in 1999 Mamma Mia! – A film adaptation of the musical stage show Mamma Mia! 
Here We Go Again – A prequel/sequel to the original film ABBA: You Can Dance – A dance video game released by Ubisoft in 2011 with songs from ABBA and also a spin-off of the Just Dance video game series Dancing Queen – A 2018 cover album by Cher Discography Studio albums Ring Ring (1973) Waterloo (1974) ABBA (1975) Arrival (1976) The Album (1977) Voulez-Vous (1979) Super Trouper (1980) The Visitors (1981) Voyage (2021) Tours 1973: Swedish Folkpark Tour 1974–1975: European Tour 1977: European & Australian Tour 1979–1980: ABBA: The Tour 2022: ABBA Voyage Awards and nominations See also ABBA: The Museum ABBA City Walks – Stockholm City Museum ABBAMAIL List of best-selling music artists List of Swedes in music Music of Sweden Popular music in Sweden Citations References Bibliography Further reading Benny Andersson, Björn Ulvaeus, Judy Craymer: Mamma Mia! How Can I Resist You?: The Inside Story of Mamma Mia! and the Songs of ABBA. Weidenfeld & Nicolson, 2006. Carl Magnus Palm: ABBA – The Complete Recording Sessions (1994). Carl Magnus Palm (2000): From "ABBA" to "Mamma Mia!". Elisabeth Vincentelli: ABBA Treasures: A Celebration of the Ultimate Pop Group. Omnibus Press, 2010. Oldham, Andrew, Calder, Tony & Irvin, Colin (1995): "ABBA: The Name of the Game". Potiez, Jean-Marie (2000): ABBA – The Book. Simon Sheridan: The Complete ABBA. Titan Books, 2012. Anna Henker (ed.), Astrid Heyde (ed.): Abba – Das Lexikon. Northern Europe Institut, Humboldt-University Berlin, 2015 (in German). Steve Harnell (ed.): Classic Pop Presents Abba: A Celebration. Classic Pop Magazine (special edition), November 2016 Documentaries A for ABBA. BBC, 20 July 1993 Thierry Lecuyer, Jean-Marie Potiez: Thank You ABBA. Willow Wil Studios/A2C Video, 1993 Barry Barnes: ABBA – The History. Polar Music International AB, 1999 Chris Hunt: The Winner Takes It All – The ABBA Story. Littlestar Services/Iambic Productions, 1999 Steve Cole, Chris Hunt: Super Troupers – Thirty Years of ABBA. BBC, 2004 The Joy of ABBA. 
BBC 4, 27 December 2013. Carl Magnus Palm, Roger Backlund: ABBA –
subjects of such state, and, also, persons who, though born abroad, are British subjects by reason of parentage, may, by declarations of alienage, get rid of British nationality. Emigration to an uncivilized country left British nationality unaffected: indeed the right claimed by all states to follow with their authority their subjects so emigrating was one of the usual and recognized means of colonial expansion. United States The doctrine that no man can cast off his native allegiance without the consent of his sovereign was early abandoned in the United States, and Chief Justice John Rutledge also declared in Talbot v. Janson, "a man may, at the same time, enjoy the rights of citizenship under two governments." On July 27, 1868, the day before the Fourteenth Amendment was adopted, U.S. Congress declared in the preamble of the Expatriation Act that "the right of expatriation is a natural and inherent right of all people, indispensable to the enjoyment of the rights of life, liberty and the pursuit of happiness," and (Section I) one of "the fundamental principles of this government" (United States Revised Statutes, sec. 1999). Every natural-born citizen of a foreign state who is also an American citizen, and every natural-born American citizen who is also a citizen of a foreign land, owes a double allegiance, one to the United States, and one to their homeland (in the event of an immigrant becoming a citizen of the US) or to their adopted land (in the event of an emigrant natural-born citizen of the US becoming a citizen of another nation). If these allegiances come into conflict, the person may be guilty of treason against one or both. If the demands of these two sovereigns upon their duty of allegiance come into conflict, those of the United States have the paramount authority in American law; likewise, those of the foreign land have paramount authority in their legal system. 
In such a situation, it may be incumbent on the individual to renounce one of their citizenships, to avoid possibly being forced into situations where countervailing duties are required of them, such as might occur in the event of war. Oath of allegiance The oath of allegiance is an oath of fidelity to the sovereign taken by all persons holding important public office and as a condition of naturalization. By ancient common law, it was required of all persons above the age of 12, and it was repeatedly used as a test for the disaffected. In England, it was first imposed by statute in the reign of Elizabeth I (1558), and its form has, more than once, been altered since. Up to the time of the revolution, the promise was "to be true and faithful to the king and his heirs, and truth and faith to bear of life and limb and terrene honour, and not to know or hear of any ill or damage intended him without defending him therefrom." This was thought to favour the doctrine of absolute non-resistance, and, accordingly, the Convention Parliament enacted the form that has been in use since that time – "I do sincerely promise and swear that I will be faithful and bear true allegiance to His Majesty ..." In the United States and some other republics, the oath is known as the Pledge of Allegiance. Instead of declaring fidelity to a monarch, the pledge is made to the flag, the republic, and to the core values of the country, specifically liberty and justice. The reciting of the pledge in the United States is voluntary because
1 M & W 70; Lyons Corp v East India Co (1836) 1 Moo PCC 175; Birtwhistle v Vardill (1840) 7 Cl & Fin 895; R v Lopez, R v Sattler (1858) Dears & B 525; Ex p Brown (1864) 5 B & S 280); (a) Ligeantia naturalis, absoluta, pura et indefinita, and this originally is due by nature and birthright, and is called alta ligeantia, and those that owe this are called subditus natus; (b) Ligeantia acquisita, not by nature but by acquisition or denization, being called a denizen, or rather denizon, because they are subditus datus; (c) Ligeantia localis, by operation of law, when a friendly alien enters the country, because so long as they are in the country they are within the sovereign's protection, therefore they owe the sovereign a local obedience or allegiance (R v Cowle (1759) 2 Burr 834; Low v Routledge (1865) 1 Ch App 42; Re Johnson, Roberts v Attorney-General [1903] 1 Ch 821; Tingley v Muller [1917] 2 Ch 144; Rodriguez v Speyer [1919] AC 59; Johnstone v Pedlar [1921] 2 AC 262; R v Tucker (1694) Show Parl Cas 186; R v Keyn (1876) 2 Ex D 63; Re Stepney Election Petn, Isaacson v Durant (1886) 17 QBD 54); (d) A legal obedience, where a particular law requires the taking of an oath of allegiance by subject or alien alike. Natural allegiance was acquired by birth within the sovereign's dominions (except for the issue of diplomats or of invading forces or of an alien in an enemy occupied territory). The natural allegiance and obedience are an incident inseparable from every subject, for as soon as they are born they owe by birthright allegiance and obedience to the Sovereign (Ex p. Anderson (1861) 3 E & E 487). 
A natural-born subject owes allegiance wherever they may be, so that where territory is occupied in the course of hostilities by an enemy's force, even if the annexation of the occupied country is proclaimed by the enemy, there can be no change of allegiance during the progress of hostilities on the part of a citizen of the occupied country (R v Vermaak (1900) 21 NLR 204 (South Africa)). Acquired allegiance was acquired by naturalisation or denization. Denization, or ligeantia acquisita, appears to be threefold (Thomas v Sorrel (1673) 3 Keb 143); (a) absolute, as the common denization, without any limitation or restraint; (b) limited, as when the sovereign grants letters of denization to an alien, and the alien's male heirs, or to an alien for the term of their life; (c) It may be granted upon condition, cujus est dare, ejus est disponere, and this denization of an alien may come about three ways: by parliament; by letters patent, which was the usual manner; and by conquest. Local allegiance was due by an alien while in the protection of the crown. All friendly resident aliens incurred all the obligations of subjects (The Angelique (1801) 3 Ch Rob App 7). An alien, coming into a colony, also became, temporarily, a subject of the crown, and acquired rights both within and beyond the colony, and these latter rights could not be affected by the laws of that colony (Routledge v Low (1868) LR 3 HL 100; 37 LJ Ch 454; 18 LT 874; 16 WR 1081, HL; Reid v Maxwell (1886) 2 TLR 790; Falcon v Famous Players Film Co [1926] 2 KB 474). A resident alien owed allegiance even when the protection of the crown was withdrawn owing to the occupation of an enemy, because the absence of the crown's protection was temporary and involuntary (de Jager v Attorney-General of Natal [1907] AC 326). Legal allegiance was due when an alien took an oath of allegiance required for a particular office under the crown. 
By the Naturalisation Act 1870, it was made possible for British subjects to renounce their nationality and allegiance, and the ways in which that nationality is lost were defined. So British subjects voluntarily naturalized in a foreign state are deemed aliens from the time of such naturalization, unless, in the case of persons naturalized before the passing of the act, they had declared their desire to remain British subjects within two years from the passing of the act. Persons who, from having been born within British territory, are British subjects, but who, at birth, came under the law of any foreign state or of subjects of such state, and, also, persons who, though born abroad, are British subjects by reason of parentage, may, by declarations of alienage, get rid of British nationality.
Linz, in Upper Austria
Altenberg an der Rax, in Styria
Germany
Altenberg (Bergisches Land), an area in Odenthal, North Rhine-Westphalia, Germany
Altenberg Abbey, Cistercian monastery in Altenberg (Bergisches Land)
Altenberger Dom, sometimes called Altenberg Cathedral, the former church of this Cistercian monastery
Altenberg, Saxony, a town in the Free State of Saxony
Altenberga, a municipality in the Saale-Holzland district, Thuringia
Altenberg Abbey, Solms, a former Premonstratensian
former zinc mine in Kelmis, Moresnet, Belgium
Altenberg, a district in the city of Bern, Switzerland
Other uses
Altenberg Lieder (Five Orchestral Songs), composed by Alban Berg in 1911/12
Altenberg Publishing (1880–1934), a former Polish publishing house
Altenberg Trio, a Viennese piano trio
People with the surname
Jakob Altenberg (1875–1944), Austrian businessman
Lee Altenberg, theoretical biologist
only) and durable computer for classroom use. However, in order to achieve its low price, the eMate 300 did not have all the speed and features of the contemporary MessagePad equivalent, the MessagePad 2000. The eMate was cancelled along with the rest of the Newton products in 1998. It is the only Newton device to use the ARM710 microprocessor (running at 25 MHz), have an integrated keyboard, use Newton OS 2.2 (officially numbered 2.1), and have batteries that were not officially replaceable, although several users replaced them with longer-lasting ones without any damage to the eMate hardware. Prototypes Many prototypes of additional Newton devices were spotted. Most notable was a Newton tablet or "slate", a large, flat screen that could be written on. Others included a "Kids Newton" with side handgrips and buttons, "VideoPads" which would have incorporated a video camera and screen on their flip-top covers for two-way communications, the "Mini 2000" which would have been very similar to a Palm Pilot, and the NewtonPhone developed by Siemens, which incorporated a handset and a keyboard. Market reception Fourteen months after Sculley demonstrated it at the May 1992 Chicago CES, the MessagePad was first offered for sale on August 2, 1993, at the Boston Macworld Expo. The hottest item at the show, it cost $900. 50,000 MessagePads were sold in the device's first three months on the market. The original Apple MessagePad and MessagePad 100 were limited by the very short lifetime of their inadequate AAA batteries. Critics also panned the debut models' handwriting recognition, which had been trumpeted in the Newton's marketing campaign. This problem was skewered in the Doonesbury comic strips, in which a written text entry is (erroneously) translated as "Egg Freckles?", as well as in the animated series The Simpsons. However, the word 'freckles' was not included in the Newton dictionary, although a user could add it themselves.
Difficulties were partly caused by the long time the Calligrapher handwriting recognition software required to "learn" the user's handwriting; this process could take from two weeks to two months. Another factor that limited the early Newton devices' appeal was that desktop connectivity was not included in the basic retail package, a problem that was later solved with 2.x Newton devices, which were bundled with a serial cable and the appropriate Newton Connection Utilities software. Later versions of Newton OS offered improved handwriting recognition, quite possibly a leading reason for the continued popularity of the devices among Newton users. Even given the age of the hardware and software, Newtons still command a price on the used market far greater than that of comparably aged PDAs produced by other companies. In 2006, CNET compared an Apple MessagePad 2000 to a Samsung Q1, and the Newton was declared better. In 2009, CNET compared an Apple MessagePad 2000 to an iPhone 3GS, and the Newton was declared more innovative at its time of release. A chain of dedicated Newton-only stores called Newton Source existed from 1994 until 1998. Locations included New York, Los Angeles, San Francisco, Chicago and Boston. The Westwood Village, California, location near UCLA featured the trademark red and yellow light bulb Newton logo in neon. The stores provided a venue to learn about the Newton platform in a hands-on, relaxed fashion: they had no traditional computer retail counters, and featured oval desktops where interested users could become intimately involved with the Newton product range. The stores were a model for the later Apple Stores.
Newton device models

{| class="wikitable"
|-
! Brand
| colspan="2" | Apple || Sharp || Siemens || colspan="2" | Apple || Sharp || Apple || Digital Ocean || Motorola || Harris || Digital Ocean || colspan="4" | Apple || colspan="3" | Harris || Siemens || Schlumberger
|-
! Device
| OMP (Original Newton MessagePad) || Newton "Dummy" || ExpertPad PI-7000 || Notephone[better source needed] || MessagePad 100 || MessagePad 110 || Sharp ExpertPad PI-7100 || MessagePad 120 || Tarpon || Marco || SuperTech 2000 || Seahorse || MessagePad 130 || eMate 300 || MessagePad 2000 || MessagePad 2100 || Access Device 2000 || Access Device, GPS || Access Device, Wireline || Online Terminal, also known as Online Access Device (OAD) || Watson
|-
! Introduced
| August 3, 1993 (US), December 1993 (Germany) || ? || August 3, 1993 (US), ? (Japan) || 1993? || colspan="2" | March 1994 || April 1994 || October 1994 (Germany), January 1995 (US) || colspan="2" | January 1995 (US) || August 1995 (US) || January 1996 (US) || March 1996 || colspan="2" | March 1997 || November 1997 || colspan="3" | 1998 || Announced 1997 || ?
|-
! Discontinued
| colspan="3" | March 1994 || ? || colspan="2" | April 1995 || late 1994 || June 1996 || ? || ? || ? || ? || April 1997 || colspan="3" | February 1998 || || || || ||
|-
! Code name
| Junior || || ? || ? || Junior || Lindy || ? || Gelato || ? || ? || ? || ? || Dante || ? || Q || ? || || || || ||
|-
! Model No.
| H1000 || || ? || ? || H1000 || H0059 || ? || H0131 || ? || ? || ? || ? || H0196 || H0208 || H0136 || H0149 || || || || ||
|-
! Processor
| colspan="13" | ARM 610 (20 MHz) || ARM 710a (25 MHz) || colspan="7" | StrongARM SA-110 (162 MHz)
|-
! ROM
| colspan="7" | 4 MB || colspan="2" | 4 MB (OS 1.3) or 8 MB (OS 2.0) || 5 MB || 4 MB || colspan="5" | 8 MB || || || || ||
|-
! System Memory (RAM)
| colspan="5" | 490 KB* SRAM || 544 KB SRAM || 490 KB* SRAM || colspan="2" | 639/687 KB DRAM || 544 KB SRAM || 639 KB DRAM || colspan="2" | 1199 KB DRAM || 1 MB DRAM (Upgradable) || 1 MB DRAM || 4 MB DRAM || colspan="3" | 1 MB DRAM || ? || 1 MB DRAM
|-
! User Storage
| colspan="5" | 150 KB* SRAM || 480 KB SRAM || 150 KB* SRAM || colspan="2" | 385/1361 KB Flash RAM || 480 KB SRAM || 385 KB Flash RAM || colspan="2" | 1361 KB Flash RAM || 2 MB Flash RAM (Upgradable) || colspan="5" | 4 MB Flash RAM || ? || 4 MB Flash RAM
|-
! Total RAM
| colspan="5" | 640 KB || 1 MB || 640 KB || colspan="2" | 1.0/2.0 MB || colspan="2" | 1 MB || colspan="2" | 2.5 MB || 3 MB (Upgradable via Internal Expansion) || 5 MB || 8 MB || colspan="3" | 5 MB || ? || 5 MB
|-
! Display
| colspan="5" | 336 × 240 (B&W) || 320 × 240 (B&W) || 336 × 240 (B&W) || 320 × 240 (B&W) || 320 × 240 (B&W) w/ backlight || 320 × 240 (B&W) || colspan="3" | 320 × 240 (B&W) w/ backlight || colspan="6" | 480 × 320 grayscale (16 shades) w/ backlight || || 480 × 320 grayscale (16 shades) w/ backlight
|-
! Newton OS version
| colspan="3" | 1.0 to 1.05, or 1.10 to 1.11 || 1.11 || colspan="2" | 1.2 or 1.3 || 1.3 || colspan="2" | 1.3 or 2.0 || colspan="2" | 1.3 || colspan="2" | 2.0 || 2.1 (2.2) || colspan="2" | 2.1 || colspan="5" | 2.1
|-
! Newton OS languages
| English or German || || English or Japanese || German || English, German or French || English or French || English or Japanese || English, German or French || colspan="4" | English || English or German || colspan="2" | English || English or German || colspan="3" | English || German || French
|-
! Connectivity
| colspan="3" | RS422, LocalTalk & SHARP ASK Infrared || Modem and Telephone dock Attachment || colspan="4" | RS422, LocalTalk & SHARP ASK Infrared || RS422, LocalTalk & SHARP ASK Infrared || RS422, LocalTalk, Infrared, ARDIS Network || RS232, LocalTalk, WLAN, V.22bis modem, Analog/Digital Cellular, CDPD, RAM, ARDIS, Trunk Radio || RS232, LocalTalk, CDPD, WLAN, Optional dGPS, GSM, or IR via modular attachments || RS422, LocalTalk & SHARP ASK Infrared || IrDA, headphone port, Interconnect port, LocalTalk, Audio I/O, Autodock || Dual-mode IR; IrDA & SHARP ASK, LocalTalk, Audio I/O, Autodock, Phone I/O || Dual-mode IR; IrDA & SHARP ASK, LocalTalk, Audio I/O, Autodock || colspan="3" | Dual-mode IR; IrDA & SHARP ASK, LocalTalk, Audio I/O, Autodock, Phone I/O || ? || Dual-mode IR; IrDA & SHARP ASK, LocalTalk, Audio I/O, Autodock, Phone I/O
|-
! PCMCIA
| colspan="13" | 1 PCMCIA slot (Type II, 5 V or 12 V) || 1 PCMCIA slot (Type I/II/III, 5 V) || colspan="2" | 2 PCMCIA slots (Type II, 5 V or 12 V) || colspan="2" | 1 PCMCIA slot (Type II, 5 V or 12 V) || 1 PCMCIA slot (Type II, 5 V or 12 V), 2nd slot proprietary radio card || colspan="2" | 1 PCMCIA slot (Type II, 5 V or 12 V), 1 Smart Card reader
|-
! Power
| colspan="5" | 4 AAA or NiCd rechargeable or external power supply || 4 AA or NiCd rechargeable or external power supply || 4 AAA or NiCd rechargeable or external power supply || 4 AA or NiCd rechargeable or external power supply || colspan="2" | NiCd battery pack or external power supply || 4 AA or NiCd rechargeable or external power supply || NiCd battery pack or external power supply || 4 AA or NiCd rechargeable or external power supply || NiMH battery pack (built-in) or external power supply || colspan="2" | 4 AA or NiMH rechargeable or external power supply || colspan="3" | Custom NiMH rechargeable or external power supply || Unknown, but likely external power supply || 4 AA or NiMH rechargeable or external power supply
|-
! Dimensions (H×W×D)
| || || (lid open) || colspan="2" | || || (lid open) || || || || ? || || || || colspan="2" | || ? || ? || ? || 9 × 14.5 × 5.1 inches (23 × 37 × 13 cm) || ?
|-
! Weight
| || || with batteries installed || || || with batteries installed || with batteries installed || with batteries installed || || || ? || || with batteries installed || || colspan="2" | || ? || ? || ? || ? || ?
|}

* Varies with installed OS

Notes: The eMate 300 actually has ROM chips silk-screened with 2.2 on them. Stephanie Mak on her website discusses this: if one removes all patches to the eMate 300 (by replacing the ROM chip and then putting in the original one again, as the eMate and the MessagePad 2000/2100 devices erase their memory completely after the chip is replaced), the result will be the Newton OS reporting that its version is 2.2.00.
Also, the Original MessagePad and the MessagePad 100 share the same model number, as they only differ in the ROM chip version. (The OMP has OS versions 1.0 to 1.05, or 1.10 to 1.11, while the MP100 has 1.3 that can be upgraded with various patches.) Other uses There were
conducting interviews published in trade magazines. He added the middle name "Elton" at some point in the mid-1930s, and at least one confessional story (1937's "To Be His Keeper") was sold to the Toronto Star, which misspelled his name "Alfred Alton Bogt" in the byline. Shortly thereafter, he added the "van" to his surname, and from that point forward he used the name "A. E. van Vogt" both personally and professionally. Career By 1938, van Vogt decided to switch to writing science fiction, a genre he enjoyed reading. He was inspired by the August 1938 issue of Astounding Science Fiction, which he picked up at a newsstand. John W. Campbell's novelette "Who Goes There?" (later adapted into The Thing from Another World and The Thing) inspired van Vogt to write "Vault of the Beast", which he submitted to that same magazine. Campbell, who edited Astounding (and had written the story under a pseudonym), sent van Vogt a rejection letter, but one that encouraged him to try again. Van Vogt sent another story, entitled "Black Destroyer", which was accepted. It featured a fierce, carnivorous alien stalking the crew of a spaceship, and served as the inspiration for multiple science fiction movies, including Alien (1979). A revised version of "Vault of the Beast" was published in 1940. While still living in Winnipeg, in 1939 van Vogt married Edna Mayne Hull, a fellow Manitoban. Hull, who had previously worked as a private secretary, went on to act as van Vogt's typist, and was credited with writing several SF stories of her own throughout the early 1940s. The outbreak of World War II in September 1939 caused a change in van Vogt's circumstances. Ineligible for military service due to his poor eyesight, he accepted a clerking job with the Canadian Department of National Defence. This necessitated a move back to Ottawa, where he and his wife stayed for the next year and a half. Meanwhile, his writing career continued.
"Discord in Scarlet" was van Vogt's second story to be published, also appearing as the cover story. It was accompanied by interior illustrations created by Frank Kramer and Paul Orban. (Van Vogt and Kramer thus debuted in the issue of Astounding that is sometimes identified as the start of the Golden Age of Science Fiction.) Among his most famous works of this era, "Far Centaurus" appeared in the January 1944 edition of Astounding. Van Vogt's first completed novel, and one of his most famous, is Slan (Arkham House, 1946), which Campbell serialized in Astounding (September to December 1940). Using what became one of van Vogt's recurring themes, it told the story of a nine-year-old superman living in a world in which his kind are slain by Homo sapiens. Others saw van Vogt's talent from his first story, and in May 1941 van Vogt decided to become a full-time writer, quitting his job at the Canadian Department of National Defence. Freed from the necessity of living in Ottawa, he and his wife lived for a time in the Gatineau region of Quebec before moving to Toronto in the fall of 1941. Prolific throughout this period, van Vogt wrote many of his more famous short stories and novels in the years from 1941 through 1944. The novels The Book of Ptath and The Weapon Makers both appeared in magazines in serial form during this period; they were later published in book form after World War II. As well, several (though not all) of the stories that were compiled to make up the novels The Weapon Shops of Isher, The Mixed Men and The War Against the Rull were published during this time. California and post-war writing (1944–1950) In November 1944, van Vogt and Hull moved to Hollywood; van Vogt would spend the rest of his life in California. He had been using the name "A. E. van Vogt" in his public life for several years, and as part of the process of obtaining American citizenship in 1945 he finally and formally changed his legal name from Alfred Vogt to Alfred Elton van Vogt. 
To his friends in the California science fiction community, he was known as "Van". Method and themes Van Vogt systematized his writing method, building his fiction from scenes of about 800 words in which a new complication was added or something was resolved. Several of his stories hinge on temporal conundrums, a favorite theme. He stated that he acquired many of his writing techniques from three books: Narrative Technique by Thomas Uzzell, The Only Two Ways to Write a Story by John Gallishaw, and Twenty Problems of the Fiction Writer by Gallishaw. He also claimed many of his ideas came from dreams; throughout his writing life he arranged to be awakened every 90 minutes during his sleep period so he could write down his dreams. Van Vogt was always interested in the idea of all-encompassing systems of knowledge (akin to modern meta-systems). The characters in his very first story used a system called "Nexialism" to analyze the alien's behavior. Around this time, he became particularly interested in the general semantics of Alfred Korzybski. He subsequently wrote a novel merging these overarching themes, The World of Ā, originally serialized in Astounding in 1945. Ā (often rendered as Null-A), or non-Aristotelian logic, refers to the capacity for, and practice of, using intuitive, inductive reasoning (compare fuzzy logic), rather than reflexive, or conditioned, deductive reasoning. The novel recounts the adventures of an individual living in an apparent utopia, where those with superior brainpower make up the ruling class... though all is not as it seems. A sequel, The Players of Ā (later retitled The Pawns of Null-A), was serialized in 1948–49. At the same time, in his fiction, van Vogt was consistently sympathetic to absolute monarchy as a form of government. This was the case, for instance, in the Weapon Shop series, the Mixed Men series, and in single stories such as "Heir Apparent" (1945), whose protagonist was described as a "benevolent dictator".
These sympathies were the subject of much critical discussion during van Vogt's career and afterwards. Van Vogt published "Enchanted Village" in the July 1950 issue of Other Worlds Science Stories. It was reprinted in over 20 collections or anthologies, and appeared many times in translation. Dianetics and fix-ups (1950–1961) In 1950, van Vogt was briefly appointed as head of L. Ron Hubbard's Dianetics operation in California. Van Vogt had first met Hubbard in 1945, and became interested in his Dianetics theories, which were published shortly thereafter. Dianetics was the secular precursor to Hubbard's Church of Scientology; van Vogt would have no association with Scientology, as he did not approve of its mysticism. The California Dianetics operation ran out of money nine months later but, owing to van Vogt's arrangements with creditors, never went bankrupt. Very shortly after that, van Vogt and his wife opened their own Dianetics center, partly financed by his writings, until he "signed off" around 1961. From 1951 until 1961, van Vogt's focus was on Dianetics, and no new story ideas flowed from his typewriter. Fix-ups However, during the 1950s, van Vogt retrospectively patched together many of his previously published stories into novels, sometimes creating new interstitial material to help bridge gaps in the narrative. Van Vogt referred to the resulting books as "fix-ups", a term that entered the vocabulary of science-fiction criticism. When the original stories were closely related, this was often successful; when a fix-up threw together disparate stories that bore little relation to each other, the result was generally a less coherent plot. One of his best-known (and well-regarded) novels, The Voyage of the Space Beagle (1950), was a fix-up of four short stories including "Discord in Scarlet"; it was published in at least five European languages by 1955.
Although van Vogt averaged a new book title every ten months from 1951 to 1961, none of them were new stories; they were all fix-ups, collections of previously published stories, expansions of previously published short stories to novel length, or republications of previous books under new titles, all based on story material written and originally published between 1939 and 1950. Examples include The Weapon Shops of Isher (1951), The Mixed Men (1952), The War Against the Rull (1959), and the two "Clane" novels, Empire of the Atom (1957) and The Wizard of Linn (1962), which were inspired (like Asimov's Foundation series) by Roman imperial history; specifically, as Damon Knight wrote, the plot of Empire of the Atom was "lifted almost bodily" from that of Robert Graves' I, Claudius. (One non-fiction work, The Hypnotism Handbook, appeared in 1956, though it had apparently been written much earlier.) After more than a decade of running their Dianetics center, Hull and van Vogt closed it in 1961. Nevertheless, van Vogt maintained his association with the organization and was still president of the Californian Association of Dianetic Auditors into the 1980s.

Return to writing and later career (1962–1986)

Though the constant re-packaging of his older work meant that he had never really been away from the book publishing world, van Vogt had not published any wholly new fiction for almost 12 years when he decided to return to writing in 1962. He did not return immediately to science fiction, but instead wrote the only mainstream, non-SF novel of his career. Van Vogt was profoundly affected by revelations of totalitarian police states that emerged after World War II. Accordingly, he wrote a mainstream novel set in Communist China, The Violent Man (1962). Van Vogt explained that to research this book he had read 100 books about China.
Into this book he incorporated his view of "the violent male type", which he described as a "man who had to be right", a man who "instantly attracts women"; such men, he said, "run the world". Contemporary reviews were lukewarm at best, and van Vogt thereafter returned to science fiction. From 1963 through the mid-1980s, van Vogt once again
magazines. He added the middle name "Elton" at some point in the mid-1930s, and at least one confessional story (1937's "To Be His Keeper") was sold to the Toronto Star, which misspelled his name "Alfred Alton Bogt" in the byline. Shortly thereafter, he added the "van" to his surname, and from that point forward he used the name "A. E. van Vogt" both personally and professionally.

Career

By 1938, van Vogt decided to switch to writing science fiction, a genre he enjoyed reading. He was inspired by the August 1938 issue of Astounding Science Fiction, which he picked up at a newsstand. John W. Campbell's novelette "Who Goes There?" (later adapted into The Thing from Another World and The Thing) inspired van Vogt to write "Vault of the Beast", which he submitted to that same magazine. Campbell, who edited Astounding (and had written the story under a pseudonym), sent van Vogt a rejection letter, but one that encouraged him to try again. Van Vogt sent another story, entitled "Black Destroyer", which was accepted. It featured a fierce, carnivorous alien stalking the crew of a spaceship, and served as the inspiration for multiple science fiction movies, including Alien (1979). A revised version of "Vault of the Beast" was published in 1940.

While still living in Winnipeg, in 1939 van Vogt married Edna Mayne Hull, a fellow Manitoban. Hull, who had previously worked as a private secretary, went on to act as van Vogt's typist, and was credited with writing several SF stories of her own throughout the early 1940s. The outbreak of World War II in September 1939 caused a change in van Vogt's circumstances. Ineligible for military service due to his poor eyesight, he accepted a clerking job with the Canadian Department of National Defence. This necessitated a move back to Ottawa, where he and his wife stayed for the next year and a half. Meanwhile, his writing career continued.
"Discord in Scarlet" was van Vogt's second story to be published, also appearing as the cover story. It was accompanied by interior illustrations created by Frank Kramer and Paul Orban. (Van Vogt and Kramer thus debuted in the issue of Astounding that is sometimes identified as the start of the Golden Age of Science Fiction.) Among his most famous works of this era, "Far Centaurus" appeared in the January 1944 edition of Astounding. Van Vogt's first completed novel, and one of his most famous, is Slan (Arkham House, 1946), which Campbell serialized in Astounding (September to December 1940). Using what became one of van Vogt's recurring themes, it told the story of a nine-year-old superman living in a world in which his kind are slain by Homo sapiens. Others saw van Vogt's talent from his first story, and in May 1941 van Vogt decided to become a full-time writer, quitting his job at the Canadian Department of National Defence. Freed from the necessity of living in Ottawa, he and his wife lived for a time in the Gatineau region of Quebec before moving to Toronto in the fall of 1941. Prolific throughout this period, van Vogt wrote many of his more famous short stories and novels in the years from 1941 through 1944. The novels The Book of Ptath and The Weapon Makers both appeared in magazines in serial form during this period; they were later published in book form after World War II. As well, several (though not all) of the stories that were compiled to make up the novels The Weapon Shops of Isher, The Mixed Men and The War Against the Rull were published during this time. California and post-war writing (1944–1950) In November 1944, van Vogt and Hull moved to Hollywood; van Vogt would spend the rest of his life in California. He had been using the name "A. E. van Vogt" in his public life for several years, and as part of the process of obtaining American citizenship in 1945 he finally and formally changed his legal name from Alfred Vogt to Alfred Elton van Vogt. 
season as World No. 12. While Kournikova had a successful singles season, she was even more successful in doubles. After their victory at the Australian Open, she and Martina Hingis won tournaments in Indian Wells, Rome, Eastbourne and the WTA Tour Championships, and reached the final of the French Open, where they lost to Serena and Venus Williams. Partnering with Elena Likhovtseva, Kournikova also reached the final in Stanford. On 22 November 1999 she reached the world No. 1 ranking in doubles, and ended the season at this ranking. Anna Kournikova and Martina Hingis were presented with the WTA Award for Doubles Team of the Year.

Kournikova opened her 2000 season by winning the Gold Coast Open doubles tournament partnering with Julie Halard. She then reached the singles semi-finals at the Medibank International Sydney, losing to Lindsay Davenport. At the Australian Open, she reached the fourth round in singles and the semi-finals in doubles. That season, Kournikova reached eight semi-finals (including Sydney, Scottsdale, Stanford, San Diego, Luxembourg, Leipzig and the Tour Championships), seven quarterfinals (Gold Coast, Tokyo, Amelia Island, Hamburg, Eastbourne, Zürich and Philadelphia) and one final. On 20 November 2000 she broke into the top 10 for the first time, reaching No. 8. She was also ranked No. 4 in doubles at the end of the season.

Kournikova was once again more successful in doubles. She reached the final of the US Open in mixed doubles, partnering with Max Mirnyi, but they lost to Jared Palmer and Arantxa Sánchez Vicario. She also won six doubles titles – Gold Coast (with Julie Halard), Hamburg (with Natasha Zvereva), Filderstadt, Zürich, Philadelphia and the Tour Championships (with Martina Hingis).

2001–2003: Injuries and final years

Her 2001 season was plagued by injuries, including a left foot stress fracture which forced her to withdraw from 12 tournaments, including the French Open and Wimbledon. She underwent surgery in April.
She reached her second career Grand Slam quarterfinal, at the Australian Open. Kournikova then withdrew from several events due to continuing problems with her left foot and did not return until Leipzig. With Barbara Schett, she won the doubles title in Sydney. She then lost in the finals in Tokyo, partnering with Iroda Tulyaganova, and at San Diego, partnering with Martina Hingis. Hingis and Kournikova also won the Kremlin Cup. At the end of the 2001 season, she was ranked No. 74 in singles and No. 26 in doubles.

Kournikova regained some success in 2002. She reached the semi-finals of Auckland, Tokyo, Acapulco and San Diego, and the final of the China Open, losing to Anna Smashnova. This was Kournikova's last singles final. With Martina Hingis, she lost in the final at Sydney, but they won their second Grand Slam title together, the Australian Open. They also lost in the quarterfinals of the US Open. With Chanda Rubin, Kournikova played the semi-finals of Wimbledon, but they lost to Serena and Venus Williams. Partnering with Janet Lee, she won the Shanghai title. At the end of the 2002 season, she was ranked No. 35 in singles and No. 11 in doubles.

In 2003, Anna Kournikova achieved her first Grand Slam match victory in two years at the Australian Open. She defeated Henrieta Nagyová in the first round, and then lost to Justine Henin-Hardenne in the second round. She withdrew from Tokyo due to a sprained back suffered at the Australian Open and did not return to the Tour until Miami. On 9 April, in what would be the final WTA match of her career, Kournikova withdrew in the first round of the Family Circle Cup in Charleston due to a left adductor strain. Her singles world ranking was 67. She reached the semi-finals at the ITF tournament in Sea Island before withdrawing from her match versus Maria Sharapova due to the adductor injury. She lost in the first round of the ITF tournament in Charlottesville.
She did not compete for the rest of the season due to a continuing back injury. At the end of the 2003 season and her professional career, she was ranked No. 305 in singles and No. 176 in doubles.

Kournikova's two Grand Slam doubles titles came in 1999 and 2002, both at the Australian Open in the women's doubles event with partner Martina Hingis. Kournikova proved a successful doubles player on the professional circuit, winning 16 tournament doubles titles, including two Australian Opens, reaching the finals in mixed doubles at the US Open and at Wimbledon, and attaining the No. 1 doubles ranking on the WTA Tour. Her pro career doubles record was 200–71. However, her singles career plateaued after 1999. For the most part, she managed to keep her ranking between 10 and 15 (her career-high singles ranking was No. 8), but her expected finals breakthrough failed to occur; she reached only four finals out of 130 singles tournaments, never in a Grand Slam event, and never won one. Her singles record is 209–129. Her final playing years were marred by a string of injuries, especially back injuries, which caused her ranking to erode gradually. As a personality, Kournikova was among the most common search strings for both articles and images in her prime.

2004–present: Exhibitions and World Team Tennis

Kournikova has not played on the WTA Tour since 2003, but still plays exhibition matches for charitable causes. In late 2004, she participated in three events organized by Elton John and by fellow tennis players Serena Williams and Andy Roddick. In January 2005, she played in a doubles charity event for the Indian Ocean tsunami with John McEnroe, Andy Roddick, and Chris Evert. In November 2005, she teamed up with Martina Hingis, playing against Lisa Raymond and Samantha Stosur in the WTT finals for charity. Kournikova is also a member of the St. Louis Aces in World Team Tennis (WTT), playing doubles only.
In September 2008, Kournikova competed in the 2008 Nautica Malibu Triathlon, held at Zuma Beach in Malibu, California. The race raised funds for Children's Hospital Los Angeles, and Kournikova won it for the women's K-Swiss team. On 27 September 2008, Kournikova played exhibition mixed doubles matches in Charlotte, North Carolina, partnering with Tim Wilkison and Karel Nováček. Kournikova and Wilkison defeated Jimmy Arias and Chanda Rubin, and then Kournikova and Nováček defeated Rubin and Wilkison. On 12 October 2008, Anna Kournikova played an exhibition match at the annual charity event hosted by Billie Jean King and Elton John, which raised more than $400,000 for the Elton John AIDS Foundation and Atlanta AIDS Partnership Fund. She played doubles with Andy Roddick (they were coached by David Chang) versus Martina Navratilova and Jesse Levine (coached by Billie Jean King); Kournikova and Roddick won.

Kournikova competed alongside John McEnroe, Tracy Austin and Jim Courier at the "Legendary Night", held on 2 May 2009 at the Turning Stone Event Center in Verona, New York. The exhibition included a mixed doubles match of McEnroe and Austin against Courier and Kournikova. In 2008, she was named a spokesperson for K-Swiss. In 2005, Kournikova stated that if she were 100% fit, she would like to come back and compete again. In June 2010, Kournikova reunited with her doubles partner Martina Hingis to participate in competitive tennis for the first time in seven years, in the Invitational Ladies Doubles event at Wimbledon. On 29 June 2010 they defeated the British pair Samantha Smith and Anne Hobbs.

Playing style

Kournikova plays right-handed with a two-handed backhand. She is strong at the net, and can hit forceful groundstrokes as well as drop shots. Her playing style fits the profile of a doubles player, and is complemented by her height. She has been compared to such doubles specialists as Pam Shriver and Peter Fleming.
Personal life

Kournikova was in a relationship with fellow Russian Pavel Bure, an NHL ice hockey player. The two met in 1999, when Kournikova was still linked to Bure's former Russian teammate Sergei Fedorov. Bure and Kournikova were reported to have been engaged in 2000 after a reporter took a photo of them together in a Florida restaurant where Bure supposedly asked Kournikova to marry him. As the story made headlines in Russia, where they were both heavily followed in the media as celebrities, Bure and Kournikova both denied any engagement. Kournikova, 10 years younger than Bure, was 18 years old at the time.

Fedorov claimed that he and Kournikova were married in 2001, and divorced in 2003. Kournikova's representatives deny any marriage to Fedorov; however, Fedorov's agent Pat Brisson claims that although he does not know when they got married, he knew "Fedorov was married".

Kournikova started dating singer Enrique Iglesias in late 2001 after she had appeared in his music video for "Escape". She has consistently refused to directly confirm or deny the status of her personal relationships. In June 2008, Iglesias was quoted by the Daily Star as having married Kournikova the previous year. They reportedly split in October 2013 but reconciled. The couple have a son and daughter, Nicholas and Lucy, fraternal twins born on 16 December 2017. On 30 January 2020, their third child, a daughter, Mary, was born. It was reported in 2010 that Kournikova had become an American citizen.

Media publicity

In 2000, Kournikova became the new face of Berlei's shock absorber sports bras, and appeared in the "only the ball should bounce" billboard campaign. Following that, she was cast by the Farrelly brothers for a minor role in the 2000 film Me, Myself & Irene, starring Jim Carrey and Renée Zellweger.
Photographs of her have appeared on the covers of various publications, including men's magazines such as the much-publicized 2004 Sports Illustrated Swimsuit Issue, in which she posed in bikinis and swimsuits, as well as FHM and Maxim. Kournikova was named
their families.

Early life

Kournikova was born in Moscow, Russia on 7 June 1981. Her father, Sergei Kournikov (born 1961), a former Greco-Roman wrestling champion, eventually earned a PhD and was a professor at the University of Physical Culture and Sport in Moscow. As of 2001, he was still a part-time martial arts instructor there. Her mother Alla (born 1963) had been a 400-metre runner. Her younger half-brother, Allan, is a youth golf world champion who was featured in the 2013 documentary film The Short Game. Sergei Kournikov has said, "We were young and we liked the clean, physical life, so Anna was in a good environment for sport from the beginning".

Kournikova received her first tennis racquet as a New Year gift in 1986, at the age of five. Describing her early regimen, she said, "I played two times a week from age six. It was a children's program. And it was just for fun; my parents didn't know I was going to play professionally, they just wanted me to do something because I had lots of energy. It was only when I started playing well at seven that I went to a professional academy. I would go to school, and then my parents would take me to the club, and I'd spend the rest of the day there just having fun with the kids." In 1986, Kournikova became a member of the Spartak Tennis Club, coached by Larissa Preobrazhenskaya. In 1989, at the age of eight, Kournikova began appearing in junior tournaments, and by the following year was attracting attention from tennis scouts across the world. She signed a management deal at age ten and went to Bradenton, Florida, to train at Nick Bollettieri's celebrated tennis academy.

Tennis career

1989–1997: Early years and breakthrough

Following her arrival in the United States, she became prominent on the tennis scene. At the age of 14, she won the European Championships and the Italian Open Junior tournament.
In December 1995, she became the youngest player to win the 18-and-under division of the Junior Orange Bowl tennis tournament. By the end of the year, Kournikova was crowned the ITF Junior World Champion U-18 and Junior European Champion U-18.

Earlier, in September 1995, Kournikova, still only 14 years of age, made her WTA Tour debut when she received a wildcard into the qualifying draw of the Moscow Ladies Open, and qualified before losing in the second round of the main draw to third-seeded Sabine Appelmans. She also reached her first WTA Tour doubles final in that debut appearance — partnering with Aleksandra Olsza, the 1995 Wimbledon girls' champion in both singles and doubles, she lost the title match to Meredith McGrath and Larisa Savchenko-Neiland.

In February–March 1996, Kournikova won two ITF titles, in Midland, Michigan and Rockford, Illinois. Still only 14 years of age, in April 1996 she made her Fed Cup debut for Russia, the youngest player ever to participate and win a match. In 1996, she started playing under a new coach, Ed Nagel. Her six-year association with Nagel was successful. At 15, she made her Grand Slam debut, reaching the fourth round of the 1996 US Open, losing to Steffi Graf, the eventual champion. After this tournament, Kournikova's ranking jumped from No. 144 to No. 69, her debut in the top 100. Kournikova was a member of the Russian delegation to the 1996 Olympic Games in Atlanta, Georgia. In 1996, she was named WTA Newcomer of the Year, and was ranked No. 57 at the end of the season.

Kournikova entered the 1997 Australian Open as world No. 67, where she lost in the first round to world No. 12, Amanda Coetzer. At the Italian Open, Kournikova lost to Amanda Coetzer in the second round. She reached the semi-finals in doubles partnering with Elena Likhovtseva, before losing to the sixth seeds Mary Joe Fernández and Patricia Tarabini.
At the French Open, Kournikova made it to the third round before losing to world No. 1, Martina Hingis. She also reached the third round in doubles with Likhovtseva. At the Wimbledon Championships, Kournikova became only the second woman in the open era to reach the semi-finals in her Wimbledon debut, the first being Chris Evert in 1972. There she lost to the eventual champion, Martina Hingis. At the US Open, she lost in the second round to the eleventh seed, Irina Spîrlea. Partnering with Likhovtseva, she reached the third round of the women's doubles event. Kournikova played her last WTA Tour event of 1997 at the Porsche Tennis Grand Prix in Filderstadt, losing to Amanda Coetzer in the second round of singles, and, partnering with Likhovtseva, in the first round of doubles to Lindsay Davenport and Jana Novotná. She broke into the top 50 on 19 May, and was ranked No. 32 in singles and No. 41 in doubles at the end of the season.

1998–2000: Success and stardom

In 1998, Kournikova broke into the WTA's top 20 rankings for the first time, when she was ranked No. 16. At the Australian Open, Kournikova lost in the third round to the world No. 1 player, Martina Hingis. She also partnered with Larisa Savchenko-Neiland in women's doubles, and they lost to the eventual champions, Hingis and Mirjana Lučić, in the second round. Although she lost in the second round of the Paris Open to Anke Huber in singles, Kournikova reached her second WTA Tour doubles final, partnering with Larisa Savchenko-Neiland. They lost to Sabine Appelmans and Miriam Oremans. Kournikova and Savchenko-Neiland reached their second consecutive final at the Linz Open, losing to Alexandra Fusai and Nathalie Tauziat. At the Miami Open, Kournikova reached her first WTA Tour singles final, losing to Venus Williams. Kournikova then reached two consecutive quarterfinals, at Amelia Island and the Italian Open, losing respectively to Lindsay Davenport and Martina Hingis.
At the German Open, she reached the semi-finals in both singles and doubles, partnering with Larisa Savchenko-Neiland. At the French Open, Kournikova had her best result at this tournament, making it to the fourth round before losing to Jana Novotná. She also reached her first Grand Slam doubles semi-final, losing with Savchenko-Neiland to Lindsay Davenport and Natasha Zvereva. During her quarterfinal match against Steffi Graf at the grass-court Eastbourne Open, Kournikova injured her thumb. She won that match, but then withdrew from her semi-final against Arantxa Sánchez Vicario, and the injury eventually forced her to withdraw from the 1998 Wimbledon Championships. Kournikova returned for the Du Maurier Open
where he received his doctorate in 1908. During the following year, he began clinical work under the psychiatrist Emil Kraepelin and did laboratory work with Franz Nissl and Alois Alzheimer in Munich. In 1911, at the invitation of Wilhelm Weygandt, he relocated to Hamburg, where he worked with Theodor Kaes and eventually became head of the laboratory of anatomical pathology at the psychiatric State Hospital Hamburg-Friedrichsberg. Following the death of Kaes in 1913, Jakob succeeded him as prosector. During World War I he served as an army physician in Belgium, and afterwards returned to Hamburg. In 1919, he obtained
his habilitation for neurology and in 1924 became a professor of neurology. Under Jakob's guidance the department grew rapidly. He made significant contributions to knowledge on concussion and secondary nerve degeneration and became a doyen of neuropathology. Jakob was the author of five monographs and nearly 80 scientific papers. His neuropathological research contributed greatly to the delineation of several diseases, including multiple sclerosis and Friedreich's ataxia. He first recognised and described Alpers' disease and Creutzfeldt–Jakob disease (named along with the Munich neuropathologist Hans Gerhard Creutzfeldt). He gained experience in neurosyphilis, having a 200-bed ward devoted entirely to that disorder. Jakob made a lecture tour of the United States (1924) and South America (1928), after which he wrote a paper on the neuropathology of yellow fever. He suffered from chronic osteomyelitis for the last seven years of his life. This eventually caused a retroperitoneal abscess and paralytic ileus, from which he died following an operation.

Associated eponym

Creutzfeldt–Jakob disease: a very rare and incurable degenerative neurological disease. It is the most common form of the transmissible spongiform encephalopathies, which are caused by prions. The eponym was introduced by Walther Spielmeyer in 1922.

Bibliography

Die
creed but rather as a method of skeptical, evidence-based inquiry. The term agnostic is also cognate with the Sanskrit word Ajñasi, which translates literally to "not knowable", and relates to the ancient Indian philosophical school of Ajñana, which proposes that it is impossible to obtain knowledge of a metaphysical nature or to ascertain the truth value of philosophical propositions; and that even if knowledge were possible, it is useless and disadvantageous for final salvation. In recent years, scientific literature dealing with neuroscience and psychology has used the word to mean "not knowable". In technical and marketing literature, "agnostic" can also mean independence from some parameters—for example, "platform agnostic" (referring to cross-platform software) or "hardware-agnostic".

Qualifying agnosticism

Scottish Enlightenment philosopher David Hume contended that meaningful statements about the universe are always qualified by some degree of doubt. He asserted that the fallibility of human beings means that they cannot obtain absolute certainty except in trivial cases where a statement is true by definition (e.g. tautologies such as "all bachelors are unmarried" or "all triangles have three corners").

Types

Strong agnosticism (also called "hard", "closed", "strict", or "permanent" agnosticism): the view that the question of the existence or nonexistence of a deity or deities, and the nature of ultimate reality, is unknowable by reason of our natural inability to verify any experience with anything but another subjective experience. A strong agnostic would say, "I cannot know whether a deity exists or not, and neither can you."

Weak agnosticism (also called "soft", "open", "empirical", or "temporal" agnosticism): the view that the existence or nonexistence of any deities is currently unknown but is not necessarily unknowable; therefore, one will withhold judgment until evidence, if any, becomes available.
A weak agnostic would say, "I don't know whether any deities exist or not, but maybe one day, if there is evidence, we can find something out."

Apathetic agnosticism: the view that no amount of debate can prove or disprove the existence of one or more deities, and that if one or more deities exist, they do not appear to be concerned about the fate of humans. Therefore, their existence has little to no impact on personal human affairs and should be of little interest. An apathetic agnostic would say, "I don't know whether any deity exists or not, and I don't care if any deity exists or not."

History

Hindu philosophy

Throughout the history of Hinduism there has been a strong tradition of philosophic speculation and skepticism. The Rig Veda takes an agnostic view on the fundamental question of how the universe and the gods were created; the Nasadiya Sukta (Creation Hymn) in the tenth chapter of the Rig Veda addresses this question.

Hume, Kant, and Kierkegaard

Aristotle, Anselm, Aquinas, Descartes, and Gödel presented arguments attempting to rationally prove the existence of God. The skeptical empiricism of David Hume, the antinomies of Immanuel Kant, and the existential philosophy of Søren Kierkegaard convinced many later philosophers to abandon these attempts, regarding it as impossible to construct any unassailable proof for the existence or non-existence of God. Kierkegaard addressed the question in his 1844 book Philosophical Fragments. Hume was Huxley's favourite philosopher; Huxley called him "the Prince of Agnostics". Diderot wrote to his mistress, telling of a visit by Hume to the Baron D'Holbach, and describing how a word for the position that Huxley would later describe as agnosticism didn't seem to exist, or at least wasn't common knowledge, at the time.

United Kingdom

Charles Darwin

Raised in a religious environment, Charles Darwin (1809–1882) studied to be an Anglican clergyman. While eventually doubting parts of his faith, Darwin continued to help in church affairs, even while avoiding church attendance.
Darwin stated that it would be "absurd to doubt that a man might be an ardent theist and an evolutionist". Although reticent about his religious views, in 1879 he wrote that "I have never been an atheist in the sense of denying the existence of a God. – I think that generally ... an agnostic would be the most correct description of my state of mind." Thomas Henry Huxley Agnostic views are as old as philosophical skepticism, but the terms agnostic and agnosticism were created by Huxley (1825–1895) to sum up his thoughts on contemporary developments of metaphysics about the "unconditioned" (William Hamilton) and the "unknowable" (Herbert Spencer). Though Huxley began to use the term "agnostic" in 1869, his opinions had taken shape some time before that date. In a letter of September 23, 1860, to Charles Kingsley, Huxley discussed his views extensively: And again, to the same correspondent, May 6, 1863: Of the origin of the name agnostic to describe this attitude, Huxley gave the following account: In 1889, Huxley wrote: Therefore, although it be, as I believe, demonstrable that we have no real knowledge of the authorship, or of the date of composition of the Gospels, as they have come down to us, and that nothing better than more or less probable guesses can be arrived at on that subject. William Stewart Ross William Stewart Ross (1844–1906) wrote under the name of Saladin. He was associated with Victorian Freethinkers and the organization the British Secular Union. He edited the Secular Review from 1882; it was renamed Agnostic Journal and Eclectic Review and closed in 1907. Ross championed agnosticism as an open-ended spiritual exploration, in opposition to the atheism of Charles Bradlaugh. In Why I am an Agnostic (c. 1889) he claims that agnosticism is "the very reverse of atheism". Bertrand Russell Bertrand Russell (1872–1970) declared Why I Am Not a Christian in 1927, a classic statement of agnosticism.
He calls upon his readers to "stand on their own two feet and look fair and square at the world with a fearless attitude and a free intelligence". In 1939, Russell gave a lecture on The existence and nature of God, in which he characterized himself as an atheist. He said: However, later in the same lecture, discussing modern non-anthropomorphic concepts of God, Russell states: In Russell's 1947 pamphlet, Am I An Atheist or an Agnostic? (subtitled A Plea For Tolerance in the Face of New Dogmas), he ruminates on the problem of what to call himself: In his 1953 essay, What Is An Agnostic? Russell states: Later in the essay, Russell adds: Leslie Weatherhead In 1965, Christian theologian Leslie Weatherhead (1893–1976) published The Christian Agnostic, in which he argues: Although radical and unpalatable to conventional theologians, Weatherhead's agnosticism falls far short of Huxley's, and short even of weak agnosticism: United States Robert G. Ingersoll Robert G. Ingersoll (1833–1899), an Illinois lawyer and politician who evolved into a well-known and sought-after orator in 19th-century America, has been referred to as the "Great Agnostic". In an 1896 lecture titled Why I Am An Agnostic, Ingersoll related why he was an agnostic: In the conclusion of the speech he simply sums up the agnostic position as: In 1885, Ingersoll explained his comparative view of agnosticism and atheism as follows: Bernard Iddings Bell Canon Bernard Iddings Bell (1886–1958), a popular cultural commentator, Episcopal priest, and author, lauded the necessity of agnosticism in Beyond Agnosticism: A Book for Tired Mechanists, calling it the foundation of "all intelligent Christianity." Agnosticism was a temporary mindset in which one rigorously questioned the truths of the age, including the way in which one believed God. His view of Robert Ingersoll and Thomas Paine was that they were not denouncing true Christianity but rather "a gross perversion of it." 
Part of the misunderstanding stemmed from ignorance of the concepts of God and religion. Historically, a god was any real, perceivable force that ruled the lives of humans and inspired admiration, love, fear, and homage; religion was the practice of it. Ancient peoples worshiped gods with real counterparts, such as Mammon (money and material things), Nabu (rationality), or Ba'al (violent weather); Bell argued that modern peoples were still paying homage—with their lives and their children's lives—to these old gods of wealth, physical appetites, and self-deification. Thus, if one attempted to be agnostic passively, he or she would incidentally join the worship of the world's gods. In Unfashionable Convictions (1931), he criticized the Enlightenment's complete faith in human sensory perception, augmented by scientific instruments, as a means of accurately grasping Reality. Firstly, it was fairly new, an innovation of the Western World, which Aristotle invented and Thomas Aquinas revived among the scientific community. Secondly, the divorce of "pure" science from human experience, as manifested in American Industrialization, had completely altered the environment, often disfiguring it, so as to suggest its insufficiency to human needs. Thirdly, because scientists were constantly producing more data—to the point where no single human could grasp it all at once—it followed that human intelligence was incapable of attaining a complete understanding of the universe; therefore, to admit the mysteries of the unobserved universe was to be actually scientific. Bell believed that there were two other ways that humans could perceive and interact with the world. Artistic experience was how one expressed meaning through speaking, writing, painting, gesturing—any sort of communication which shared insight into a human's inner reality. Mystical experience was how one could "read" people and harmonize with them, being what we commonly call love.
In summary, man was a scientist, artist, and lover. Without exercising all three, a person became "lopsided." Bell considered a humanist to be a person who cannot rightly ignore the other ways of knowing. However, humanism, like agnosticism, was also temporal, and would eventually lead to either scientific materialism or theism. He lays out the following thesis: Truth cannot be discovered by reasoning on the evidence of scientific data alone. Modern peoples' dissatisfaction with life is the result of depending on such incomplete data.
Our ability to reason is not a way to discover Truth but rather a way to organize our knowledge and experiences somewhat sensibly. Without a full, human perception of the world, one's reason tends to lead them in the wrong direction. Beyond what can be measured with scientific tools, there are other types of perception, such as one's ability to know another human through loving. One's loves cannot be dissected and logged in a scientific journal, but we know them far better than we know the surface of the sun. They show us an undefinable reality that is nevertheless intimate and personal, and they reveal qualities lovelier and truer than detached facts can provide. To be religious, in the Christian sense, is to live for the Whole of Reality (God) rather than for a small part (gods). Only by treating this Whole of Reality as a person—good and true and perfect—rather than an impersonal force, can we come closer to the Truth. An ultimate Person can be loved, but a cosmic force cannot. A scientist can only discover peripheral truths, but a lover is able to get at the Truth. There are many reasons to believe in God, but they are not sufficient for an agnostic to become a theist. It is not enough to believe in an ancient holy book, even though when it is accurately analyzed without bias, it proves to be more trustworthy and admirable than what we are taught in school. Neither is it enough to realize how probable it is that a personal God would have to show human beings how to live, considering they have so much trouble on their own. Nor is it enough to believe for the reason that, throughout history, millions of people have arrived at this Wholeness of Reality only through religious experience. The aforementioned reasons may warm one toward religion, but they fall short of convincing.
However, if one presupposes that God is in fact a knowable, loving person, as an experiment, and then lives according to that religion, he or she will suddenly come face to face with experiences previously unknown. One's life becomes full, meaningful, and fearless in the face of death. It does not defy reason but exceeds it. Because God has been experienced through love, the orders of prayer, fellowship, and devotion now matter. They create order within one's life, continually renewing the "missing piece" that had previously felt lost. They empower one to be compassionate and humble, not small-minded or arrogant. No truth should be denied outright, but all should be questioned. Science reveals an ever-growing vision of our universe that should not be discounted due to bias toward older understandings. Reason is to be trusted and cultivated. To believe in God is not to forego reason or to deny scientific facts, but to step into the unknown and discover the fullness of life. Demographics Demographic research services normally do not differentiate between various types of non-religious respondents, so agnostics are often classified in the same category as atheists or other non-religious people. A 2010 survey published in Encyclopædia Britannica found that non-religious people and agnostics made up about 9.6% of the world's population. A November–December 2006 poll published in the Financial Times gives rates for the United States and five European countries. The rates of agnosticism in the United States were at 14%, while the rates of agnosticism in the European countries surveyed were considerably higher: Italy (20%), Spain (30%), Great Britain (35%), Germany (25%), and France (32%). A study conducted by the Pew Research Center found that about 16% of the world's people, the third largest group after Christianity and Islam, have no religious affiliation. According to a 2012 report by the Pew Research Center, agnostics made up 3.3% of the US adult population.
In the U.S. Religious Landscape Survey, conducted by the Pew Research Center, 55% of agnostic respondents expressed "a belief in God or a universal spirit", whereas 41% stated that they felt a tension "being non-religious in a society where most people are religious". According to the 2011 Australian Bureau of Statistics, 22% of Australians have "no religion", a category that includes agnostics. Between 64% and 65% of Japanese and up to 81% of Vietnamese are atheists, agnostics, or do not believe in a god. An official European Union survey reported that 3% of the EU population is unsure about their belief in a god or spirit. Criticism Agnosticism is criticized from a variety of standpoints. Some atheists criticize the use of the term agnosticism as functionally indistinguishable from atheism; this results in frequent criticisms of those who adopt the term as avoiding the atheist label. Theistic Theistic critics claim that agnosticism is impossible in practice, since a person can live only either as if God did not exist (etsi deus non daretur), or as if God did exist (etsi deus daretur). Christian According to Pope Benedict XVI, strong agnosticism in particular contradicts itself in affirming the power of reason to know scientific truth. He blames the exclusion of reasoning from religion and ethics for dangerous pathologies such as crimes against humanity and ecological disasters. "Agnosticism", said Benedict, "is always the fruit of a refusal of that knowledge which is in fact offered to man ... The knowledge of God has always existed". He asserted that agnosticism is a choice of comfort, pride, dominion, and utility over truth, and is opposed by the following attitudes: the keenest self-criticism, humble listening to the whole of existence, the persistent patience and self-correction of the scientific method, a readiness to be purified by the truth.
The Catholic Church sees merit in examining what it calls "partial agnosticism", specifically those systems that "do not aim at constructing a complete philosophy of the unknowable, but at excluding special kinds of truth, notably religious, from the domain of knowledge". However, the Church is historically opposed to a full denial of the capacity of human reason to know God. The Council of the Vatican declares, "God, the beginning and end of all, can, by the natural
is isolated from air by fractionation, most commonly by cryogenic fractional distillation, a process that also produces purified nitrogen, oxygen, neon, krypton and xenon. The Earth's crust and seawater contain 1.2 ppm and 0.45 ppm of argon, respectively. Isotopes The main isotopes of argon found on Earth are 40Ar (99.6%), 36Ar (0.34%), and 38Ar (0.06%). Naturally occurring 40K, with a half-life of 1.25 billion years, decays to stable 40Ar (11.2%) by electron capture or positron emission, and also to stable 40Ca (88.8%) by beta decay. These properties and ratios are used to determine the age of rocks by K–Ar dating. In the Earth's atmosphere, 39Ar is made by cosmic ray activity, primarily by neutron capture of 40Ar followed by two-neutron emission. In the subsurface environment, it is also produced through neutron capture by 39K, followed by proton emission. 37Ar is created from the neutron capture by 40Ca followed by an alpha particle emission as a result of subsurface nuclear explosions. It has a half-life of 35 days. Between locations in the Solar System, the isotopic composition of argon varies greatly. Where the major source of argon is the decay of 40K in rocks, 40Ar will be the dominant isotope, as it is on Earth. Argon produced directly by stellar nucleosynthesis is dominated by the alpha-process nuclide 36Ar. Correspondingly, solar argon contains 84.6% 36Ar (according to solar wind measurements), and the ratio of the three isotopes 36Ar : 38Ar : 40Ar in the atmospheres of the outer planets is 8400 : 1600 : 1. This contrasts with the low abundance of primordial 36Ar in Earth's atmosphere, which is only 31.5 ppmv (= 9340 ppmv × 0.337%), comparable with that of neon (18.18 ppmv) on Earth and with interplanetary gases, measured by probes. The atmospheres of Mars, Mercury and Titan (the largest moon of Saturn) contain argon, predominantly as 40Ar, and its content may be as high as 1.93% (Mars).
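The K–Ar dating mentioned above rests on the accumulation equation for radiogenic 40Ar. The following is an illustrative sketch only, not laboratory practice; the half-life and the ~10.7% electron-capture branching fraction are standard literature values, and the example ratio is invented for demonstration.

```python
import math

# Standard literature values (assumptions for this sketch):
HALF_LIFE_K40 = 1.25e9                      # years, half-life of 40K
LAMBDA_TOTAL = math.log(2) / HALF_LIFE_K40  # total decay constant, 1/yr
BRANCH_TO_AR = 0.107                        # fraction of 40K decays yielding 40Ar

def k_ar_age(ar40_per_k40: float) -> float:
    """Age in years from the measured radiogenic 40Ar/40K ratio.

    Radiogenic argon accumulates as
        N_Ar(t) = (lambda_EC / lambda) * N_K(t) * (exp(lambda * t) - 1),
    which, solved for t, gives the expression below.
    """
    return math.log(1.0 + ar40_per_k40 / BRANCH_TO_AR) / LAMBDA_TOTAL

# Example: a rock whose radiogenic 40Ar amounts to 1% of its remaining 40K
print(round(k_ar_age(0.01) / 1e6, 1), "million years")
```

In practice the measurement is complicated by atmospheric argon contamination and argon loss, which is why the Ar–Ar variant is often preferred; the sketch shows only the core decay arithmetic.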
The predominance of radiogenic 40Ar is the reason the standard atomic weight of terrestrial argon is greater than that of the next element, potassium, a fact that was puzzling when argon was discovered. Mendeleev positioned the elements on his periodic table in order of atomic weight, but the inertness of argon suggested a placement before the reactive alkali metal. Henry Moseley later solved this problem by showing that the periodic table is actually arranged in order of atomic number (see History of the periodic table). Compounds Argon's complete octet of electrons indicates full s and p subshells. This full valence shell makes argon very stable and extremely resistant to bonding with other elements. Before 1962, argon and the other noble gases were considered to be chemically inert and unable to form compounds; however, compounds of the heavier noble gases have since been synthesized. The first argon compound with tungsten pentacarbonyl, W(CO)5Ar, was isolated in 1975; however, it was not widely recognised at that time. In August 2000, another argon compound, argon fluorohydride (HArF), was formed by researchers at the University of Helsinki by shining ultraviolet light onto frozen argon containing a small amount of hydrogen fluoride with caesium iodide. This discovery led to the recognition that argon could form weakly bound compounds, even though HArF was not the first. It is stable up to 17 kelvins (−256 °C). The metastable ArCF22+ dication, which is valence-isoelectronic with carbonyl fluoride and phosgene, was observed in 2010. Argon-36, in the form of argon hydride (argonium) ions, has been detected in the interstellar medium associated with the Crab Nebula supernova; this was the first noble-gas molecule detected in outer space. Solid argon hydride (Ar(H2)2) has the same crystal structure as the MgZn2 Laves phase. It forms at pressures between 4.3 and 220 GPa, though Raman measurements suggest that the H2 molecules in Ar(H2)2 dissociate above 175 GPa.
Production Industrial Argon is extracted industrially by the fractional distillation of liquid air in a cryogenic air separation unit, a process that separates liquid nitrogen, which boils at 77.3 K, from argon, which boils at 87.3 K, and liquid oxygen, which boils at 90.2 K. About 700,000 tonnes of argon are produced worldwide every year. In radioactive decays 40Ar, the most abundant isotope of argon, is produced by the decay of 40K with a half-life of 1.25 billion years by electron capture or positron emission. Because of this, it is used in potassium–argon dating to determine the age of rocks. Applications Argon has several desirable properties: Argon is a chemically inert gas. Argon is the cheapest alternative when nitrogen is not sufficiently inert. Argon has low thermal conductivity. Argon has electronic properties (ionization and/or the emission spectrum) desirable for some applications. Other noble gases would be equally suitable for most of these applications, but argon is by far the cheapest. Argon is inexpensive, since it occurs naturally in air and is readily obtained as a byproduct of cryogenic air separation in the production of liquid oxygen and liquid nitrogen: the primary constituents of air are used on a large industrial scale. The other noble gases (except helium) are produced this way as well, but argon is the most plentiful by far. The bulk of argon applications arise simply because it is inert and relatively cheap. Industrial processes Argon is used in some high-temperature industrial processes where ordinarily non-reactive substances become reactive. For example, an argon atmosphere is used in graphite electric furnaces to prevent the graphite from burning. For some of these processes, the presence of nitrogen or oxygen gases might cause defects within the material. Argon is used in some types of arc welding such as gas metal arc welding and gas tungsten arc welding, as well as in the processing of titanium and other reactive elements.
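The separation sequence in the cryogenic column follows directly from the boiling points quoted above; a minimal sketch using only those figures:

```python
# Normal boiling points in kelvin, as quoted in the text.
boiling_points_k = {"N2": 77.3, "Ar": 87.3, "O2": 90.2}

# In fractional distillation the most volatile component (lowest boiling
# point) is drawn off first. Argon's boiling point sits between nitrogen's
# and oxygen's, which is why it is taken from an intermediate stage of
# the column rather than from either end.
draw_off_order = sorted(boiling_points_k, key=boiling_points_k.get)
print(draw_off_order)  # nitrogen first, then argon, then oxygen
```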
An argon atmosphere is also used for growing crystals of silicon and germanium. Argon is used in the poultry industry to asphyxiate birds, either for mass culling following disease outbreaks, or as a means of slaughter more humane than electric stunning. Argon is denser than air and
displaces oxygen close to the ground during inert gas asphyxiation.
Its non-reactive nature makes it suitable in a food product, and since it replaces oxygen within the dead bird, argon also enhances shelf life. Argon is sometimes used for extinguishing fires where valuable equipment may be damaged by water or foam. Scientific research Liquid argon is used as the target for neutrino experiments and direct dark matter searches. The interaction between the hypothetical WIMPs and an argon nucleus produces scintillation light that is detected by photomultiplier tubes. Two-phase detectors containing argon gas are used to detect the ionized electrons produced during the WIMP–nucleus scattering. As with most other liquefied noble gases, argon has a high scintillation light yield (about 51 photons/keV), is transparent to its own scintillation light, and is relatively easy to purify. Compared to xenon, argon is cheaper and has a distinct scintillation time profile, which allows the separation of electronic recoils from nuclear recoils. On the other hand, its intrinsic beta-ray background is larger due to 39Ar contamination, unless one uses argon from underground sources, which has much less 39Ar contamination. Most of the argon in the Earth's atmosphere was produced by electron capture of long-lived 40K (40K + e− → 40Ar + ν) present in natural potassium within the Earth. The 39Ar activity in the atmosphere is maintained by cosmogenic production through the knockout reaction 40Ar(n,2n)39Ar and similar reactions. The half-life of 39Ar is only 269 years. As a result, the underground Ar, shielded by rock and water, has much less 39Ar contamination. Dark-matter detectors currently operating with liquid argon include DarkSide, WArP, ArDM, microCLEAN and DEAP. Neutrino experiments include ICARUS and MicroBooNE, both of which use high-purity liquid argon in a time projection chamber for fine grained three-dimensional imaging of neutrino interactions. At Linköping University, Sweden, the inert gas is being utilized in a vacuum chamber in which plasma is introduced to ionize metallic films.
This process results in a film usable for manufacturing computer processors. The new process would eliminate the need for chemical baths and use of expensive, dangerous and rare materials. Preservative Argon is used to displace oxygen- and moisture-containing air in packaging material to extend the shelf-lives of the contents (argon has the European food additive code E938). Aerial oxidation, hydrolysis, and other chemical reactions that degrade the products are retarded or prevented entirely. High-purity chemicals and pharmaceuticals are sometimes packed and sealed in argon. In winemaking, argon is used in a variety of activities to provide a barrier against oxygen at the liquid surface, which can spoil wine by fueling both microbial metabolism (as with acetic acid bacteria) and standard redox chemistry. Argon is sometimes used as the propellant in aerosol cans. Argon is also used as a preservative for such products as varnish, polyurethane, and paint, by displacing air to prepare a container for storage. Since 2002, the American National Archives stores important national documents such as the Declaration of Independence and the Constitution within argon-filled cases to inhibit their degradation. Argon is preferable to the helium that had been used in the preceding five decades, because helium gas escapes through the intermolecular pores in most containers and must be regularly replaced. Laboratory equipment Argon may be used as the inert gas within Schlenk lines and gloveboxes. Argon is preferred to less expensive nitrogen in cases where nitrogen may react with the reagents or apparatus. Argon may be used as the carrier gas in gas chromatography and in electrospray ionization mass spectrometry; it is the gas of choice for the plasma used in ICP spectroscopy. Argon is preferred for the sputter coating of specimens for scanning electron microscopy. 
Argon gas is also commonly used for sputter deposition of thin films as in microelectronics and for wafer cleaning in microfabrication. Medical use Cryosurgery procedures such as cryoablation use liquid argon to destroy tissue such as cancer cells. It is used in a procedure called "argon-enhanced coagulation", a form of argon plasma beam electrosurgery. The procedure carries a risk of producing gas embolism and has resulted in the death of at least one patient. Blue argon lasers are used in surgery to weld arteries, destroy tumors, and correct eye defects. Argon has also been used experimentally to replace nitrogen in the breathing or decompression mix known as Argox, to speed the elimination of dissolved nitrogen from the blood. Lighting Incandescent lights are filled with argon, to preserve the filaments at high temperature from oxidation. It is used for the specific way it ionizes and emits light, such as in plasma globes and calorimetry in experimental particle physics. Gas-discharge lamps filled with pure argon provide lilac/violet light; with argon and some mercury, blue light. Argon is also used for blue and green argon-ion lasers. Miscellaneous uses Argon is used for thermal insulation in energy-efficient windows. Argon is also used in technical scuba diving to inflate a dry suit because it is inert and has low thermal conductivity. Argon is used as a propellant in the development of the Variable Specific Impulse Magnetoplasma Rocket (VASIMR). Compressed argon gas is allowed to expand, to cool the seeker heads of some versions of the AIM-9 Sidewinder missile and other missiles that use cooled thermal seeker heads. The gas is stored at high pressure. Argon-39, with a half-life of 269 years, has been used for a number of applications, primarily ice core and ground water dating. Also, potassium–argon dating and related argon-argon dating is used to date sedimentary, metamorphic, and igneous rocks. 
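The 269-year half-life of argon-39 quoted above is what makes it useful for dating ice cores and groundwater on century-to-millennium timescales, and is also why underground argon, shielded from cosmogenic production, is much cleaner for dark-matter detectors. A minimal sketch of the decay arithmetic, assuming pure exponential decay with no cosmogenic or subsurface replenishment:

```python
HALF_LIFE_AR39 = 269.0  # years, half-life of 39Ar (from the text)

def fraction_remaining(years: float) -> float:
    """Fraction of an initial 39Ar activity left after `years` of decay,
    assuming no replenishment (an idealization: underground production
    via 39K(n,p) can partially offset the decay in real reservoirs)."""
    return 0.5 ** (years / HALF_LIFE_AR39)

# Dating: groundwater isolated for two half-lives retains 25% of the
# atmospheric 39Ar activity.
assert abs(fraction_remaining(2 * 269) - 0.25) < 1e-12

# Detector backgrounds: argon sequestered underground for ~5000 years
# has its 39Ar activity suppressed by roughly five orders of magnitude.
print(f"{fraction_remaining(5000):.2e}")
```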
Argon has been used by athletes as a doping agent to simulate hypoxic conditions. In 2014, the World Anti-Doping Agency (WADA) added argon and xenon to the list of prohibited substances and methods, although at this time there is no reliable
of 250–300 °C decomposition to arsenic and hydrogen is rapid. Several factors, such as humidity, presence of light and certain catalysts (namely aluminium) facilitate the rate of decomposition. It oxidises readily in air to form arsenic trioxide and water, and analogous reactions take place with sulfur and selenium instead of oxygen. Arsenic forms colorless, odorless, crystalline oxides As2O3 ("white arsenic") and As2O5, which are hygroscopic and readily soluble in water to form acidic solutions. Arsenic(V) acid is a weak acid and its salts are called arsenates, the most common form of arsenic contamination of groundwater, and a problem that affects many people. Synthetic arsenates include Scheele's Green (cupric hydrogen arsenate, acidic copper arsenate), calcium arsenate, and lead hydrogen arsenate. These three have been used as agricultural insecticides and poisons. The protonation steps between the arsenate and arsenic acid are similar to those between phosphate and phosphoric acid. Unlike phosphorous acid, arsenous acid is genuinely tribasic, with the formula As(OH)3. A broad variety of sulfur compounds of arsenic are known. Orpiment (As2S3) and realgar (As4S4) are somewhat abundant and were formerly used as painting pigments. Arsenic has a formal oxidation state of +2 in As4S4, which features As–As bonds so that the total covalency of As is still 3. Both orpiment and realgar, as well as As4S3, have selenium analogs; the analogous As2Te3 is known as the mineral kalgoorlieite, and the anion As2Te− is known as a ligand in cobalt complexes. All trihalides of arsenic(III) are well known except the astatide, which is unknown. Arsenic pentafluoride (AsF5) is the only important pentahalide, reflecting the lower stability of the +5 oxidation state; even so, it is a very strong fluorinating and oxidizing agent. (The pentachloride is stable only below −50 °C, at which temperature it decomposes to the trichloride, releasing chlorine gas.)
Alloys Arsenic is used as the group 5 element in the III-V semiconductors gallium arsenide, indium arsenide, and aluminium arsenide. The valence electron count of GaAs is the same as a pair of Si atoms, but the band structure is completely different which results in distinct bulk properties. Other arsenic alloys include the II-V semiconductor cadmium arsenide. Organoarsenic compounds A large variety of organoarsenic compounds are known. Several were developed as chemical warfare agents during World War I, including vesicants such as lewisite and vomiting agents such as adamsite. Cacodylic acid, which is of historic and practical interest, arises from the methylation of arsenic trioxide, a reaction that has no analogy in phosphorus chemistry. Cacodyl was the first organometallic compound known (even though arsenic is not a true metal) and was named from the Greek κακωδία "stink" for its offensive odor; it is very poisonous. Occurrence and production Arsenic comprises about 1.5 ppm (0.00015%) of the Earth's crust, and is the 53rd most abundant element. Typical background concentrations of arsenic do not exceed 3 ng/m3 in the atmosphere; 100 mg/kg in soil; 400 μg/kg in vegetation; 10 μg/L in freshwater and 1.5 μg/L in seawater. Minerals with the formula MAsS and MAs2 (M = Fe, Ni, Co) are the dominant commercial sources of arsenic, together with realgar (an arsenic sulfide mineral) and native (elemental) arsenic. An illustrative mineral is arsenopyrite (FeAsS), which is structurally related to iron pyrite. Many minor As-containing minerals are known. Arsenic also occurs in various organic forms in the environment. In 2014, China was the top producer of white arsenic with almost 70% world share, followed by Morocco, Russia, and Belgium, according to the British Geological Survey and the United States Geological Survey. Most arsenic refinement operations in the US and Europe have closed over environmental concerns. 
Arsenic is found in the smelter dust from copper, gold, and lead smelters, and is recovered primarily from copper refinement dust. On roasting arsenopyrite in air, arsenic sublimes as arsenic(III) oxide leaving iron oxides, while roasting without air results in the production of gray arsenic. Further purification from sulfur and other chalcogens is achieved by sublimation in vacuum, in a hydrogen atmosphere, or by distillation from molten lead-arsenic mixture. History The word arsenic has its origin in the Syriac word zarniqa, from Arabic al-zarnīḵ 'the orpiment', ultimately from Persian zarnikh, based on zar 'gold', meaning "yellow" (literally "gold-colored") and hence "(yellow) orpiment". It was adopted into Greek as arsenikon, a folk-etymological form, being the neuter of the Greek word arsenikos, meaning "male", "virile". The Greek word was adopted in Latin as arsenicum, which in French became arsenic, from which the English word arsenic is taken. Arsenic sulfides (orpiment, realgar) and oxides have been known and used since ancient times. Zosimos (circa 300 AD) describes roasting sandarach (realgar) to obtain a cloud of arsenic (arsenic trioxide), which he then reduces to gray arsenic. As the symptoms of arsenic poisoning are not very specific, it was frequently used for murder until the advent of the Marsh test, a sensitive chemical test for its presence. (Another less sensitive but more general test is the Reinsch test.) Owing to its use by the ruling class to murder one another, and to its potency and discreetness, arsenic has been called the "poison of kings" and the "king of poisons". During the Bronze Age, arsenic was often included in bronze, which made the alloy harder (so-called "arsenical bronze"). The isolation of arsenic was described by Jabir ibn Hayyan before 815 AD. Albertus Magnus (Albert the Great, 1193–1280) later isolated the element from a compound in 1250, by heating soap together with arsenic trisulfide.
In 1649, Johann Schröder published two ways of preparing arsenic. Crystals of elemental (native) arsenic are found in nature, although rare. Cadet's fuming liquid (impure cacodyl), often claimed as the first synthetic organometallic compound, was synthesized in 1760 by Louis Claude Cadet de Gassicourt by the reaction of potassium acetate with arsenic trioxide. In the Victorian era, "arsenic" ("white arsenic" or arsenic trioxide) was mixed with vinegar and chalk and eaten by women to improve the complexion of their faces, making their skin paler to show they did not work in the fields. The accidental use of arsenic in the adulteration of foodstuffs led to the Bradford sweet poisoning in 1858, which resulted in 21 deaths. Wallpaper production also began to use dyes made from arsenic, which was thought to increase the pigment's brightness. Two arsenic pigments have been widely used since their discovery – Paris Green and Scheele's Green. After the toxicity of arsenic became widely known, these chemicals were used less often as pigments and more often as insecticides. In the 1860s, an arsenic byproduct of dye production, London Purple, was widely used. This was a solid mixture of arsenic trioxide, aniline, lime, and ferrous oxide, insoluble in water and very toxic by inhalation or ingestion. But it was later replaced with Paris Green, another arsenic-based pigment. With better understanding of the toxicology mechanism, two other compounds were used starting in the 1890s. Arsenite of lime and arsenate of lead were used widely as insecticides until the discovery of DDT in 1942. Applications Agricultural The toxicity of arsenic to insects, bacteria, and fungi led to its use as a wood preservative. In the 1930s, a process of treating wood with chromated copper arsenate (also known as CCA or Tanalith) was invented, and for decades, this treatment was the most extensive industrial use of arsenic.
An increased appreciation of the toxicity of arsenic led to a ban of CCA in consumer products in 2004, initiated by the European Union and United States. However, CCA remains in heavy use in other countries (such as on Malaysian rubber plantations). Arsenic was also used in various agricultural insecticides and poisons. For example, lead hydrogen arsenate was a common insecticide on fruit trees, but contact with the compound sometimes resulted in brain damage among those working the sprayers. In the second half of the 20th century, monosodium methyl arsenate (MSMA) and disodium methyl arsenate (DSMA) – less toxic organic forms of arsenic – replaced lead arsenate in agriculture. These organic arsenicals were in turn phased out by 2013 in all agricultural activities except cotton farming. The biogeochemistry of arsenic is complex and includes various adsorption and desorption processes. The toxicity of arsenic is connected to its solubility and is affected by pH. Arsenite is more soluble than arsenate and is more toxic; however, at a lower pH, arsenate becomes more mobile and toxic. It was found that addition of sulfur, phosphorus, and iron oxides to high-arsenite soils greatly reduces arsenic phytotoxicity. Arsenic is used as a feed additive in poultry and swine production, in particular in the U.S. to increase weight gain, improve feed efficiency, and prevent disease. An example is roxarsone, which had been used as a broiler starter by about 70% of U.S. broiler growers. Alpharma, a subsidiary of Pfizer Inc., which produces roxarsone, voluntarily suspended sales of the drug in response to studies showing elevated levels of inorganic arsenic, a carcinogen, in treated chickens. A successor to Alpharma, Zoetis, continues to sell nitarsone, primarily for use in turkeys. Arsenic is intentionally added to the feed of chickens raised for human consumption. Organic arsenic compounds are less toxic than pure arsenic, and promote the growth of chickens.
Under some conditions, the arsenic in chicken feed is converted to the toxic inorganic form. A 2006 study of the remains of the Australian racehorse, Phar Lap, determined that the 1932 death of the famous champion was caused by a massive overdose of arsenic. Sydney veterinarian Percy Sykes stated, "In those days, arsenic was quite a common tonic, usually given in the form of a solution (Fowler's Solution) ... It was so common that I'd reckon 90 per cent of the horses had arsenic in their system." Medical use During the 17th, 18th, and 19th centuries, a number of arsenic compounds were used as medicines, including arsphenamine (by Paul Ehrlich) and arsenic trioxide (by Thomas Fowler). Arsphenamine, as well as neosalvarsan, was indicated for syphilis, but has been superseded by modern antibiotics. However, arsenicals such as melarsoprol are still used for the treatment of trypanosomiasis, since although these drugs have the disadvantage of severe toxicity, the disease is almost uniformly fatal if untreated. Arsenic trioxide has been used in a variety of ways over the past 500 years, most commonly in the treatment of cancer, but also in medications as diverse as Fowler's solution for psoriasis. In 2000, the US Food and Drug Administration approved this compound for the treatment of patients with acute promyelocytic leukemia that is resistant to all-trans retinoic acid. A 2008 paper reports success in locating tumors using arsenic-74 (a positron emitter). This isotope produces clearer PET scan images than the previous radioactive agent, iodine-124, because the body tends to transport iodine to the thyroid gland, producing signal noise. Nanoparticles of arsenic have shown ability to kill cancer cells with lesser cytotoxicity than other arsenic formulations.
In subtoxic doses, soluble arsenic compounds act as stimulants, and were once popular in small doses as medicine in the mid-18th to 19th centuries; their use as a stimulant was especially prevalent in sport animals such as race horses and in work dogs. Alloys The main use of arsenic is in alloying with lead. Lead components in car batteries are strengthened by the presence of a very small percentage of arsenic. Dezincification of brass (a copper-zinc alloy) is greatly reduced by the addition of arsenic. "Phosphorus Deoxidized Arsenical Copper" with an arsenic content of 0.3% has an increased corrosion stability in certain environments. Gallium arsenide is an important semiconductor material, used in integrated circuits. Circuits made from GaAs are much faster (but also much more expensive) than those made from silicon. Unlike silicon, GaAs has a direct bandgap, and can be used in laser diodes and LEDs to convert electrical energy directly into light. Military After World War I, the United States built a stockpile of 20,000 tons of weaponized lewisite (ClCH=CHAsCl2), an organoarsenic vesicant (blister agent) and lung irritant. The stockpile was neutralized with bleach and dumped into the Gulf of Mexico in the 1950s. During the Vietnam War, the United States used Agent Blue, a mixture of sodium cacodylate and its acid form, as one of the rainbow herbicides to deprive North Vietnamese soldiers of foliage cover and rice. Other uses Copper acetoarsenite was used as a green pigment known under many names, including Paris Green and Emerald Green. It caused numerous arsenic poisonings. Scheele's Green, a copper arsenate, was used in the 19th century as a coloring agent in sweets. Arsenic is used in bronzing and pyrotechnics. As much as 2% of produced arsenic is used in lead alloys for lead shot and bullets. Arsenic is added in small quantities to alpha-brass to make it dezincification-resistant.
This grade of brass is used in plumbing fittings and other wet environments. Arsenic is also used for taxonomic sample preservation. Arsenic was used as an opacifier in ceramics, creating white glazes. Until recently, arsenic was used in optical glass. Modern glass manufacturers, under pressure from environmentalists, have ceased using both arsenic and lead. Biological role Bacteria Some species of bacteria obtain their energy in the absence of oxygen by oxidizing various fuels while reducing arsenate to arsenite. Under oxidative environmental conditions, some bacteria use arsenite as fuel, which they oxidize to arsenate. The enzymes involved are known as arsenate reductases (Arr). In 2008, bacteria were discovered that employ a version of photosynthesis in the absence of oxygen with arsenites as electron donors, producing arsenates (just as ordinary photosynthesis uses water as electron donor, producing molecular oxygen). Researchers conjecture that, over the course of history, these photosynthesizing organisms produced the arsenates that allowed the arsenate-reducing bacteria to thrive. One strain, PHS-1, has been isolated and is related to the gammaproteobacterium Ectothiorhodospira shaposhnikovii. The mechanism is unknown, but an encoded Arr enzyme may function in reverse to its known homologues. In 2011, it was postulated that a strain of Halomonadaceae could be grown in the absence of phosphorus if that element were substituted with arsenic, exploiting the fact that the arsenate and phosphate anions are similar structurally. The study was widely criticised and subsequently refuted by independent research groups. Essential trace element in higher animals Some evidence indicates that arsenic is an essential trace mineral in birds (chickens), and in mammals (rats, hamsters, and goats). However, the biological function is not known.
Heredity Arsenic has been linked to epigenetic changes, heritable changes in gene expression that occur without changes in DNA sequence. These include DNA methylation, histone modification, and RNA interference. Toxic levels of arsenic cause significant DNA hypermethylation of the tumor suppressor genes p16 and p53, thus increasing the risk of carcinogenesis. These epigenetic events have been studied in vitro using human kidney cells and in vivo using rat liver cells and peripheral blood leukocytes in humans. Inductively coupled plasma mass spectrometry (ICP-MS) is used to detect precise levels of intracellular arsenic and other arsenic species involved in epigenetic modification of DNA. Studies investigating arsenic as an epigenetic factor can be used to develop precise biomarkers of exposure and susceptibility. The Chinese brake fern (Pteris vittata) hyperaccumulates arsenic from the soil into its leaves and has a proposed use in phytoremediation. Biomethylation Inorganic arsenic and its compounds, upon entering the food chain, are progressively metabolized through a process of methylation. For example, the mold Scopulariopsis brevicaulis produces trimethylarsine if inorganic arsenic is present. The organic compound arsenobetaine is found in some marine foods such as fish and algae, and also in mushrooms in larger concentrations. The average person's intake is about 10–50 µg/day. Values of about 1000 µg are not unusual following consumption of fish or mushrooms, but there is little danger in eating fish because this arsenic compound is nearly non-toxic. Environmental issues Exposure Naturally occurring sources of human exposure include volcanic ash, weathering of minerals and ores, and mineralized groundwater. Arsenic is also found in food, water, soil, and air. Arsenic is absorbed by all plants, but is more concentrated in leafy vegetables, rice, apple and grape juice, and seafood. An additional route of exposure is inhalation of atmospheric gases and dusts.
During the Victorian era, arsenic was widely used in home decor, especially wallpapers. Occurrence in drinking water Extensive arsenic contamination of groundwater has led to widespread arsenic poisoning in Bangladesh and neighboring countries. It is estimated that approximately 57 million people in the Bengal basin are drinking groundwater with arsenic concentrations elevated above the World Health Organization's standard of 10 parts per billion (ppb). However, a study of cancer rates in Taiwan suggested that significant increases in cancer mortality appear only at levels above 150 ppb. The arsenic in the groundwater is of natural origin, and is released from the sediment into the groundwater because of the anoxic conditions of the subsurface. This groundwater came into use after local and western NGOs and the Bangladeshi government undertook a massive shallow tube well drinking-water program in the late twentieth century. This program was designed to prevent drinking of bacteria-contaminated surface waters, but failed to test for arsenic in the groundwater. Many other countries and districts in Southeast Asia, such as Vietnam and Cambodia, have geological environments that produce groundwater with a high arsenic content. Arsenicosis was reported in Nakhon Si Thammarat, Thailand in 1987, and the Chao Phraya River probably contains high levels of naturally occurring dissolved arsenic, but it is not a public health problem because much of the public uses bottled water. In Pakistan, more than 60 million people are exposed to arsenic-polluted drinking water, according to a recent report in the journal Science. Podgorski's team investigated more than 1200 samples, and more than 66% exceeded the WHO contamination limit. Since the 1980s, residents of the Ba Men region of Inner Mongolia, China have been chronically exposed to arsenic through drinking water from contaminated wells.
A 2009 research study observed an elevated presence of skin lesions among residents with well water arsenic concentrations between 5 and 10 µg/L, suggesting that arsenic-induced toxicity may occur at relatively low concentrations with chronic exposure. Overall, 20 of China's 34 provinces have high arsenic concentrations in the groundwater supply, potentially exposing 19 million people to hazardous drinking water. In the United States, arsenic is most commonly found in the ground waters of the southwest. Parts of New England, Michigan, Wisconsin, Minnesota and the Dakotas are also known to have significant concentrations of arsenic in ground water. Increased levels of skin cancer have been associated with arsenic exposure in Wisconsin, even at levels below the 10 part per billion drinking water standard. According to a recent film funded by the US Superfund, millions of private wells have unknown arsenic levels, and in some areas of the US, more than 20% of the wells may contain levels that exceed established limits. Low-level exposure to arsenic at concentrations of 100 parts per billion (i.e., above the 10 parts per billion drinking water standard) compromises the initial immune response to H1N1 or swine flu infection according to NIEHS-supported scientists. The study, conducted in laboratory mice, suggests that people exposed to arsenic in their drinking water may be at increased risk for more serious illness or death from the virus. Some Canadians are drinking water that contains inorganic arsenic. Water from private dug wells is most at risk for containing inorganic arsenic. Preliminary well water analysis typically does not test for arsenic. Researchers at the Geological Survey of Canada have modeled relative variation in natural arsenic hazard potential for the province of New Brunswick. This study has important implications for potable water and health concerns relating to inorganic arsenic.
Epidemiological evidence from Chile shows a dose-dependent connection between chronic arsenic exposure and various forms of cancer, in particular when other risk factors, such as cigarette smoking, are present. These effects have been demonstrated at contaminations less than 50 ppb. Arsenic is itself a constituent of tobacco smoke. Analyzing multiple epidemiological studies on inorganic arsenic exposure suggests a small but measurable increase in risk for bladder cancer at 10 ppb. According to Peter Ravenscroft of the Department of Geography at the University of Cambridge, roughly 80 million people worldwide consume between 10 and 50 ppb arsenic in their drinking water. If they all consumed exactly 10 ppb arsenic in their drinking water, the previously cited multiple epidemiological study analysis would predict an additional 2,000 cases of bladder cancer alone. This represents a clear underestimate of the overall impact, since it does not include lung or skin cancer, and explicitly underestimates the exposure. Those exposed to levels of arsenic above the current WHO standard should weigh the costs and benefits of arsenic remediation. Early (1973) evaluations of the processes for removing dissolved arsenic from drinking water demonstrated the efficacy of co-precipitation with either iron or aluminum oxides. In particular, iron as a coagulant was found to remove arsenic with an efficacy exceeding 90%. Several adsorptive media systems have been approved for use at point-of-service in a study funded by the United States Environmental Protection Agency (US EPA) and the National Science Foundation (NSF). A team of European and Indian scientists and engineers have set up six arsenic treatment plants in West Bengal based on in-situ remediation method (SAR Technology).
This technology does not use any chemicals; arsenic is left in an insoluble form (+5 state) in the subterranean zone by recharging aerated water into the aquifer and developing an oxidation zone that supports arsenic-oxidizing micro-organisms. This process does not produce any waste stream or sludge and is relatively cheap. Another effective and inexpensive method to avoid arsenic contamination is to sink wells 500 feet or deeper to reach purer waters. A 2011 study funded by the US National Institute of Environmental Health Sciences' Superfund Research Program shows that deep sediments can remove arsenic and take it out of circulation. In this process, called adsorption, arsenic sticks to the surfaces of deep sediment particles and is naturally removed from the ground water. Magnetic separations of arsenic at very low magnetic field gradients with high-surface-area and monodisperse magnetite (Fe3O4) nanocrystals have been demonstrated in point-of-use water purification. Using the high specific surface area of Fe3O4 nanocrystals, the mass of waste associated with arsenic removal from water has been dramatically reduced. Epidemiological studies have suggested a correlation between chronic consumption of drinking water contaminated with arsenic and the incidence of all leading causes of mortality. The literature indicates that arsenic exposure is causative in the pathogenesis of diabetes. Chaff-based filters have recently been shown to reduce the arsenic content of water to 3 µg/L. This may find applications in areas where the potable water is extracted from underground aquifers. San Pedro de Atacama For several centuries, the people of San Pedro de Atacama in Chile have been drinking water that is contaminated with arsenic, and some evidence suggests they have developed some immunity. Hazard maps for contaminated groundwater Around one-third of the world's population drinks water from groundwater resources.
Of this, about 10 percent, approximately 300 million people, obtain water from groundwater resources that are contaminated with unhealthy levels of arsenic or fluoride. These trace elements derive mainly from minerals and ions in the ground. Redox transformation of arsenic in natural waters Arsenic is unique among the trace metalloids and oxyanion-forming trace metals (e.g. As, Se, Sb, Mo, V, Cr, U, Re). It is sensitive to mobilization at pH values typical of natural waters (pH 6.5–8.5) under both oxidizing and reducing conditions. Arsenic can occur in the environment in several oxidation states (−3, 0, +3 and +5), but in natural waters it is mostly found in inorganic forms as oxyanions of trivalent arsenite [As(III)] or pentavalent arsenate [As(V)]. Organic forms of arsenic are produced by biological activity, mostly in surface waters, but are rarely quantitatively important. Organic arsenic compounds may, however, occur where waters are significantly impacted by industrial pollution. Arsenic may be solubilized by various processes. When pH is high, arsenic may be released from surface binding sites that lose their positive charge. When the water level drops and sulfide minerals are exposed to air, arsenic trapped in sulfide minerals can be released into water. When organic carbon is present in water, bacteria are fed by directly reducing As(V) to As(III) or by reducing the element at the binding site, releasing inorganic arsenic. The aquatic transformations of arsenic are affected by pH, reduction-oxidation potential, organic matter concentration and the concentrations and forms of other elements, especially iron and manganese. The main factors are pH and the redox potential. Generally, the main forms of arsenic under oxic conditions are H3AsO4, H2AsO4−, HAsO42−, and AsO43− at pH 2, 2–7, 7–11 and 11, respectively. Under reducing conditions, H3AsO3 is predominant at pH 2–9. Oxidation and reduction affect the migration of arsenic in subsurface environments.
Arsenite is the most stable soluble form of arsenic in reducing environments and arsenate, which is less mobile than arsenite, is dominant in oxidizing environments at neutral pH. Therefore, arsenic may be more mobile under reducing conditions. The reducing environment is also rich in organic matter which may enhance the solubility of arsenic compounds. As a result, the adsorption of arsenic is reduced and dissolved arsenic accumulates in groundwater. That is why the arsenic content is higher in reducing environments than in oxidizing environments. The presence of sulfur is another factor that affects the transformation of arsenic in natural water. Arsenic can precipitate when metal sulfides form. In this way, arsenic is removed from the water and its mobility decreases. When oxygen is present, bacteria oxidize reduced sulfur to generate energy, potentially releasing bound arsenic. Redox reactions involving Fe also appear to be essential factors in the fate of arsenic in aquatic systems. The reduction of iron oxyhydroxides plays a key role in the release of arsenic to water. So arsenic can be enriched in water with elevated Fe concentrations. Under oxidizing conditions, arsenic can be mobilized from pyrite or iron oxides especially at elevated pH. Under reducing conditions, arsenic can be mobilized by reductive desorption or dissolution when associated with iron oxides. The reductive desorption occurs under two circumstances. One is when arsenate is reduced to arsenite which adsorbs to iron oxides less strongly. The other results from a change in the charge on the mineral surface which leads to the desorption of bound arsenic. Some species of bacteria catalyze redox transformations of arsenic. Dissimilatory arsenate-respiring prokaryotes (DARP) speed up the reduction of As(V) to As(III). DARP use As(V) as the electron acceptor of anaerobic respiration and obtain energy to survive. Other organic and inorganic substances can be oxidized in this process. 
Chemoautotrophic arsenite oxidizers (CAO) and heterotrophic arsenite oxidizers (HAO) convert As(III) into As(V). CAO combine the oxidation of As(III) with the reduction of oxygen or nitrate. They use the energy obtained to fix CO2 and produce organic carbon. HAO cannot obtain energy from As(III) oxidation. This process may be an arsenic detoxification mechanism for the bacteria. Equilibrium thermodynamic calculations predict that As(V) concentrations should be greater than As(III) concentrations in all but strongly reducing conditions, i.e. where SO42− reduction is occurring. However, abiotic redox reactions of arsenic are slow. Oxidation of As(III) by dissolved O2 is a particularly slow reaction. For example, Johnson and Pilson (1975) gave half-lives for the oxygenation of As(III) in seawater ranging from several months to a year. In other studies, As(V)/As(III) ratios were stable over periods of days or weeks during water sampling when no particular care was taken to prevent oxidation, again suggesting relatively slow oxidation rates. Cherry found from experimental studies that the As(V)/As(III) ratios were stable in anoxic solutions for up to 3 weeks but that gradual changes occurred over longer timescales. Sterile water samples have been observed to be less susceptible to speciation changes than non-sterile samples. Oremland found that the reduction of As(V) to As(III) in Mono Lake was rapidly catalyzed by bacteria, with rate constants ranging from 0.02 to 0.3 day−1. Wood preservation in the US As of 2002, US-based industries consumed 19,600 metric tons of arsenic. Ninety percent of this was used for treatment of wood with chromated copper arsenate (CCA). In 2007, 50% of the 5,280 metric tons of consumption was still used for this purpose.
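The rate constants and half-lives quoted above are two views of the same first-order kinetics, related by t1/2 = ln 2 / k. A minimal sketch, using the Mono Lake rate constants given in the text:

```python
import math

def half_life_days(k_per_day):
    """First-order half-life t1/2 = ln(2) / k, for a rate constant k in day^-1."""
    return math.log(2) / k_per_day

# Bacterial As(V) -> As(III) reduction rate constants reported for Mono Lake
# (0.02 to 0.3 per day, as quoted in the text):
for k in (0.02, 0.3):
    print(f"k = {k} /day -> half-life = {half_life_days(k):.1f} days")
```

For comparison, the same formula turns the months-to-a-year half-lives reported for abiotic oxygenation of As(III) in seawater into rate constants of very roughly 0.002–0.008 day−1, an order of magnitude or more below the bacterially catalyzed constants, which is the contrast the paragraph above draws.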
In the United States, the voluntary phasing-out of arsenic in production of consumer products and residential and general consumer construction products began on 31 December 2003, and alternative chemicals are now used, such as Alkaline Copper Quaternary, borates, copper azole, cyproconazole, and propiconazole. Although discontinued, this application is also one of the most concerning to the general public. The vast majority of older pressure-treated wood was treated with CCA. CCA lumber is still in widespread use in many countries, and was heavily used during the latter half of the 20th century as a structural and outdoor building material. Although the use of CCA lumber was banned in many areas after studies showed that arsenic could leach out of the wood into the surrounding soil (from playground equipment, for instance), a risk is also presented by the burning of older CCA timber. The direct or indirect ingestion of wood ash from burnt CCA lumber has caused fatalities in animals and serious poisonings in humans; the lethal human dose is approximately 20 grams of ash. Scrap CCA lumber from construction and demolition sites may be inadvertently used in commercial and domestic fires. Protocols for safe disposal of CCA lumber are not consistent throughout the world. Widespread landfill disposal of such timber raises some concern, but other studies have shown no arsenic contamination in the groundwater. Mapping of industrial releases in the US One tool that maps the location (and other information) of arsenic releases in the United States is TOXMAP. TOXMAP is a Geographic Information System (GIS) from the Division of Specialized Information Services of the United States National Library of Medicine (NLM) funded by the US Federal Government. With marked-up maps of the United States, TOXMAP enables users to visually explore data from the United States Environmental Protection Agency's (EPA) Toxics Release Inventory and Superfund Basic Research Programs. 
TOXMAP's chemical and environmental health information is taken from NLM's Toxicology Data Network (TOXNET), PubMed, and from other authoritative sources. Bioremediation Physical, chemical, and biological methods have been used to remediate arsenic-contaminated water. Bioremediation is said to be cost-effective and environmentally friendly. Bioremediation of ground water contaminated with arsenic aims to convert arsenite, the form of arsenic more toxic to humans, to arsenate. Arsenate (+5 oxidation state) is the dominant form of arsenic in surface water, while arsenite (+3 oxidation state) is the dominant form in hypoxic to anoxic environments. Arsenite is more soluble and mobile than arsenate. Many species of bacteria can transform arsenite to arsenate in anoxic conditions by using arsenite as an electron donor. This is a useful method in ground water remediation. Another bioremediation strategy is to use plants that accumulate arsenic in their tissues via phytoremediation, but the disposal of contaminated plant material needs to be considered. Bioremediation requires careful evaluation and design in accordance with existing conditions. Some sites may require the addition of an electron acceptor while others require microbe supplementation (bioaugmentation). Regardless of the method used, only constant monitoring can prevent future contamination. Toxicity and precautions Arsenic and many of its compounds are especially potent poisons. Classification Elemental arsenic and arsenic sulfate and trioxide compounds are classified as "toxic" and "dangerous for the environment" in the European Union under directive 67/548/EEC. The International Agency for Research on Cancer (IARC) recognizes arsenic and inorganic arsenic compounds as group 1 carcinogens, and the EU lists arsenic trioxide, arsenic pentoxide, and arsenate salts as category 1 carcinogens.
Arsenic is known to cause arsenicosis when present in drinking water, "the most common species being arsenate [As(V)] and arsenite [H3AsO3; As(III)]". Legal limits, food, and drink In the United States since 2006, the maximum concentration in drinking water allowed by the Environmental Protection Agency (EPA) is 10 ppb, and the FDA set the same standard in 2005 for bottled water. The Department of Environmental Protection for New Jersey set a drinking water limit of 5 ppb in 2006. The IDLH (immediately dangerous to life and health) value for arsenic metal and inorganic arsenic compounds is 5 mg/m3. The Occupational Safety and Health Administration has set the permissible exposure limit (PEL) to a time-weighted average (TWA) of 0.01 mg/m3, and the National Institute for Occupational Safety and Health (NIOSH) has set the recommended exposure limit (REL) to a 15-minute constant exposure of 0.002 mg/m3. The PEL for organic arsenic compounds is a TWA of 0.5 mg/m3. In 2008, based on its ongoing testing of a wide variety of American foods for toxic chemicals, the U.S. Food and Drug Administration set the "level of concern" for inorganic arsenic in apple and pear juices at 23 ppb, based on non-carcinogenic effects, and began blocking importation of products in excess of this level; it also required recalls for non-conforming domestic products. In 2011, the national Dr. Oz television show broadcast a program highlighting tests performed by an independent lab hired by the producers. Though the methodology was disputed (it did not distinguish between organic and inorganic arsenic), the tests showed levels of arsenic up to 36 ppb. In response, the FDA tested the worst brand from the Dr. Oz show and found much lower levels. Ongoing testing found 95% of the apple juice samples were below the level of concern.
Later testing by Consumer Reports showed inorganic arsenic at levels slightly above 10 ppb, and the organization urged parents to reduce consumption. In July 2013, on consideration of consumption by children, chronic exposure, and carcinogenic effect, the FDA established an "action level" of 10 ppb for apple juice, the same as the drinking water standard. Concern about arsenic in rice in Bangladesh was raised in 2002, but at the time only Australia had a legal limit for food (one milligram per kilogram). In 2005, concern was raised that people eating U.S. rice were exceeding WHO standards for personal arsenic intake. In 2011, the People's Republic of China set a food standard of 150 ppb for arsenic. In the United States in 2012, testing by separate groups of researchers at the Children's Environmental Health and Disease Prevention Research Center at Dartmouth College (early in the year, focusing on urinary levels in children) and Consumer Reports (in November) found levels of arsenic in rice that resulted in calls for the FDA to set limits. The FDA released some testing results in September 2012 and, as of July 2013, was still collecting data in support of a new potential regulation. It has not recommended any changes in consumer behavior. Consumer Reports recommended: That the EPA and FDA eliminate arsenic-containing fertilizer, drugs, and pesticides in food production; That the FDA establish a legal limit for food; That industry change production practices to lower arsenic levels, especially in food for children; and That consumers test home water supplies, eat a varied diet, and cook rice with excess water, then drain it off (reducing inorganic arsenic by about one third, along with a slight reduction in vitamin content).
Evidence-based public health advocates also recommend that, given the lack of regulation or labeling for arsenic in the U.S., children should eat no more than 1.5 servings per week of rice and should not drink rice milk as part of their daily diet before age 5. They also offer recommendations for adults and infants on how to limit arsenic exposure from rice, drinking water, and fruit juice. A 2014 World Health Organization advisory conference was scheduled to consider limits of 200–300 ppb for rice. Reducing arsenic content in rice In 2020, scientists assessed multiple preparation procedures of rice for their capacity to reduce arsenic content and preserve nutrients, recommending a procedure involving parboiling and water absorption. Ecotoxicity Arsenic is bioaccumulative in many organisms, marine species in particular, but it does not appear to biomagnify significantly in food webs. In polluted areas, plant growth may be affected by root uptake of arsenate, which is a phosphate analog and therefore readily transported in plant tissues and cells. Uptake of the more toxic arsenite ion (found more particularly in reducing conditions) is likely in poorly drained soils. Toxicity in animals Biological mechanism Arsenic's toxicity comes from the affinity of arsenic(III) oxides for thiols. Thiols, in the form of cysteine residues and cofactors such as lipoic acid and coenzyme A, are situated at the active sites of many important enzymes. Arsenic disrupts ATP production through several mechanisms. At the level of the citric acid cycle, arsenic inhibits lipoic acid, which is a cofactor for pyruvate dehydrogenase. By competing with phosphate, arsenate uncouples oxidative phosphorylation, thus inhibiting energy-linked reduction of NAD+, mitochondrial respiration and ATP synthesis. Hydrogen peroxide production is also increased, which, it is speculated, can generate reactive oxygen species and oxidative stress.
These metabolic interferences lead to death from multi-system organ failure. The organ failure is presumed to be from necrotic cell death, not apoptosis, since energy reserves have been too depleted for apoptosis to occur. Exposure risks and remediation Occupational exposure and arsenic poisoning may occur in persons working in industries involving the use of inorganic arsenic and its compounds, such as wood preservation, glass production, nonferrous metal alloys, and electronic semiconductor manufacturing. Inorganic arsenic is also found in coke oven emissions associated with the smelter industry. The conversion between As(III) and As(V) is a large factor in arsenic environmental contamination. According to Croal, Gralnick, Malasarn and Newman, "[the] understanding [of] what stimulates As(III) oxidation and/or limits As(V) reduction is relevant for bioremediation of contaminated sites" (Croal). The study of chemolithoautotrophic As(III) oxidizers and heterotrophic As(V) reducers can help in understanding the oxidation and reduction of arsenic. Treatment Treatment of chronic arsenic poisoning is possible. British anti-lewisite (dimercaprol) is prescribed in doses of 5
poor, and minting was soon discontinued. Antimony is resistant to attack by acids. Four allotropes of antimony are known: a stable metallic form, and three metastable forms (explosive, black, and yellow). Elemental antimony is a brittle, silver-white, shiny metalloid. When slowly cooled, molten antimony crystallizes into a trigonal cell, isomorphic with the gray allotrope of arsenic. A rare explosive form of antimony can be formed from the electrolysis of antimony trichloride. When scratched with a sharp implement, an exothermic reaction occurs and white fumes are given off as metallic antimony forms; when rubbed with a pestle in a mortar, a strong detonation occurs. Black antimony is formed upon rapid cooling of antimony vapor. It has the same crystal structure as red phosphorus and black arsenic; it oxidizes in air and may ignite spontaneously. At 100 °C, it gradually transforms into the stable form. The yellow allotrope of antimony is the most unstable; it has been generated only by oxidation of stibine (SbH3) at −90 °C. Above this temperature and in ambient light, this metastable allotrope transforms into the more stable black allotrope. Elemental antimony adopts a layered structure (space group R3̄m, No. 166) whose layers consist of fused, ruffled, six-membered rings. The nearest and next-nearest neighbors form an irregular octahedral complex, with the three atoms in each double layer slightly closer than the three atoms in the next. This relatively close packing leads to a high density of 6.697 g/cm3, but the weak bonding between the layers leads to the low hardness and brittleness of antimony. Isotopes Antimony has two stable isotopes: 121Sb with a natural abundance of 57.36% and 123Sb with a natural abundance of 42.64%. It also has 35 radioisotopes, of which the longest-lived is 125Sb with a half-life of 2.75 years. In addition, 29 metastable states have been characterized. The most stable of these is 120m1Sb with a half-life of 5.76 days.
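The two stable-isotope abundances quoted above determine antimony's standard atomic weight as an abundance-weighted mean. A quick check (the isotopic masses of 120.904 u and 122.904 u are assumed values, not stated in the text):

```python
# Sketch: the standard atomic weight is the abundance-weighted mean of the
# stable isotopes' masses. Abundances are from the text; the isotopic masses
# (in u) are assumed standard values not given in the article.
isotopes = {
    "121Sb": (120.904, 0.5736),  # (mass in u, natural abundance)
    "123Sb": (122.904, 0.4264),
}

atomic_weight = sum(mass * abundance for mass, abundance in isotopes.values())
print(round(atomic_weight, 2))  # close to the accepted 121.76 u
```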
Isotopes that are lighter than the stable 123Sb tend to decay by β+ decay, and those that are heavier tend to decay by β− decay, with some exceptions. Occurrence The abundance of antimony in the Earth's crust is estimated to be 0.2 to 0.5 parts per million, comparable to thallium at 0.5 parts per million and silver at 0.07 ppm. Even though this element is not abundant, it is found in more than 100 mineral species. Antimony is sometimes found natively (e.g. on Antimony Peak), but more frequently it is found in the sulfide stibnite (Sb2S3), which is the predominant ore mineral. Compounds Antimony compounds are often classified according to their oxidation state: Sb(III) and Sb(V). The +5 oxidation state is more stable. Oxides and hydroxides Antimony trioxide is formed when antimony is burnt in air. In the gas phase, the molecule of the compound is Sb4O6, but it polymerizes upon condensing. Antimony pentoxide (Sb2O5) can be formed only by oxidation with concentrated nitric acid. Antimony also forms a mixed-valence oxide, antimony tetroxide (Sb2O4), which features both Sb(III) and Sb(V). Unlike oxides of phosphorus and arsenic, these oxides are amphoteric, do not form well-defined oxoacids, and react with acids to form antimony salts. Antimonous acid is unknown, but the conjugate base sodium antimonite forms upon fusing sodium oxide and Sb4O6. Transition metal antimonites are also known. Antimonic acid exists only as the hydrate HSb(OH)6, forming salts as the antimonate anion Sb(OH)6−. When a solution containing this anion is dehydrated, the precipitate contains mixed oxides. Many antimony ores are sulfides, including stibnite (Sb2S3), pyrargyrite (Ag3SbS3), zinkenite, jamesonite, and boulangerite. Antimony pentasulfide is non-stoichiometric and features antimony in the +3 oxidation state and S–S bonds. Several thioantimonides are known. Halides Antimony forms two series of halides: SbX3 and SbX5. The trihalides SbF3, SbCl3, SbBr3, and SbI3 are all molecular compounds having trigonal pyramidal molecular geometry.
The trifluoride SbF3 is prepared by the reaction of Sb2O3 with HF: Sb2O3 + 6 HF → 2 SbF3 + 3 H2O It is Lewis acidic and readily accepts fluoride ions to form the complex anions SbF4− and SbF52−. Molten SbF3 is a weak electrical conductor. The trichloride SbCl3 is prepared by dissolving Sb2S3 in hydrochloric acid: Sb2S3 + 6 HCl → 2 SbCl3 + 3 H2S The pentahalides SbF5 and SbCl5 have trigonal bipyramidal molecular geometry in the gas phase, but in the liquid phase, SbF5 is polymeric, whereas SbCl5 is monomeric. SbF5 is a powerful Lewis acid used to make the superacid fluoroantimonic acid ("H2SbF7"). Oxyhalides are more common for antimony than for arsenic and phosphorus. Antimony trioxide dissolves in concentrated acid to form oxoantimonyl compounds such as SbOCl and (SbO)2SO4. Antimonides, hydrides, and organoantimony compounds Compounds in this class generally are described as derivatives of Sb3−. Antimony forms antimonides with metals, such as indium antimonide (InSb) and silver antimonide (Ag3Sb). The alkali metal and zinc antimonides, such as Na3Sb and Zn3Sb2, are more reactive. Treating these antimonides with acid produces the highly unstable gas stibine, SbH3: Sb3− + 3 H+ → SbH3 Stibine can also be produced by treating Sb3+ salts with hydride reagents such as sodium borohydride. Stibine decomposes spontaneously at room temperature. Because stibine has a positive heat of formation, it is thermodynamically unstable and thus antimony does not react with hydrogen directly. Organoantimony compounds are typically prepared by alkylation of antimony halides with Grignard reagents. A large variety of compounds are known with both Sb(III) and Sb(V) centers, including mixed chloro-organic derivatives, anions, and cations. Examples include Sb(C6H5)3 (triphenylstibine), Sb2(C6H5)4 (with an Sb–Sb bond), and cyclic [Sb(C6H5)]n. Pentacoordinated organoantimony compounds are common, examples being Sb(C6H5)5 and several related halides. History Antimony(III) sulfide, Sb2S3, was recognized in predynastic Egypt as an eye cosmetic (kohl) as early as about 3100 BC, when the cosmetic palette was invented.
An artifact, said to be part of a vase, made of antimony dating to about 3000 BC was found at Telloh, Chaldea (part of present-day Iraq), and a copper object plated with antimony dating between 2500 BC and 2200 BC has been found in Egypt. Austen, at a lecture by Herbert Gladstone in 1892, commented that "we only know of antimony at the present day as a highly brittle and crystalline metal, which could hardly be fashioned into a useful vase, and therefore this remarkable 'find' (artifact mentioned above) must represent the lost art of rendering antimony malleable." The British archaeologist Roger Moorey was unconvinced the artifact was indeed a vase, mentioning that Selimkhanov, after his analysis of the Tello object (published in 1975), "attempted to relate the metal to Transcaucasian natural antimony" (i.e. native metal) and that "the antimony objects from Transcaucasia are all small personal ornaments." This weakens the evidence for a lost art "of rendering antimony malleable." The Roman scholar Pliny the Elder described several ways of preparing antimony sulfide for medical purposes in his treatise Natural History, around 77 AD. Pliny the Elder also made a distinction between "male" and "female" forms of antimony; the male form is probably the sulfide, while the female form, which is superior, heavier, and less friable, has been suspected to be native metallic antimony. The Greek naturalist Pedanius Dioscorides mentioned that antimony sulfide could be roasted by heating in a current of air. It is thought that this produced metallic antimony. The intentional isolation of antimony is described by Jabir ibn Hayyan before 815 AD. A description of a procedure for isolating antimony is later given in the 1540 book De la pirotechnia by Vannoccio Biringuccio, predating the more famous 1556 book by Agricola, De re metallica. In this context Agricola has often been incorrectly credited with the discovery of metallic antimony.
The book Currus Triumphalis Antimonii (The Triumphal Chariot of Antimony), describing the preparation of metallic antimony, was published in Germany in 1604. It was purported to be written by a Benedictine monk, writing under the name Basilius Valentinus in the 15th century; if it were authentic, which it is not, it would predate Biringuccio. The metal antimony was known to German chemist Andreas Libavius in 1615, who obtained it by adding iron to a molten mixture of antimony sulfide, salt and potassium tartrate. This procedure produced antimony with a crystalline or starred surface. With the advent of challenges to phlogiston theory, it was recognized that antimony is an element forming sulfides, oxides, and other compounds, as do other metals. The first discovery of naturally occurring pure antimony in the Earth's crust was described by the Swedish scientist and local mine district engineer Anton von Swab in 1783; the type-sample was collected from the Sala Silver Mine in the Bergslagen mining district of Sala, Västmanland, Sweden. Etymology The medieval Latin form, from which the modern languages and late Byzantine Greek take their names for antimony, is antimonium. The origin of this is uncertain; all suggestions have some difficulty either of form or interpretation. The popular etymology, from ἀντίμοναχός anti-monachos or French antimoine, still has adherents; this would mean "monk-killer", and is explained by many early alchemists being monks, and antimony being poisonous. However, the low toxicity of antimony (see below) makes this unlikely. Another popular etymology is the hypothetical Greek word ἀντίμονος antimonos, "against aloneness", explained as "not found as metal", or "not found unalloyed". Lippmann conjectured a hypothetical Greek word ἀνθημόνιον anthemonion, which would mean "floret", and cites several examples of related Greek words (but not that one) which describe chemical or biological efflorescence.
The early uses of antimonium include the translations, in 1050–1100, by Constantine the African of Arabic medical treatises. Several authorities believe antimonium is a scribal corruption of some Arabic form; Meyerhof derives it from ithmid; other possibilities include athimar, the Arabic name of the metalloid, and a hypothetical as-stimmi, derived from or parallel to the Greek. The standard chemical symbol for antimony (Sb) is credited to Jöns Jakob Berzelius, who derived the abbreviation from stibium. The ancient words for antimony mostly have, as their chief meaning, kohl, the sulfide of antimony. The Egyptians called antimony mśdmt; in hieroglyphs, the vowels are uncertain, but the Coptic form of the word is ⲥⲧⲏⲙ (stēm). The Greek word στίμμι (stimmi) is used by Attic tragic poets of the 5th century BC, and is possibly a loan word from Arabic or from Egyptian stm. Later Greeks also used στίβι stibi, as did Celsus and Pliny, writing in Latin, in the first century AD. Pliny also gives the names stimi, larbaris, alabaster, and the "very common" platyophthalmos, "wide-eye" (from the effect of the cosmetic). Later Latin authors adapted the word to Latin as stibium. The Arabic word for the substance, as opposed to the cosmetic, can appear as إثمد ithmid, athmoud, othmod, or uthmod. Littré suggests the first form, which is the earliest, derives from stimmida, an accusative for stimmi. Production Process The extraction of antimony from ores depends on the quality and composition of the ore. Most antimony is mined as the sulfide; lower-grade ores are concentrated by froth flotation, while higher-grade ores are heated to 500–600 °C, the temperature at which stibnite melts and separates from the gangue minerals.
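Since most antimony is mined as the sulfide, the theoretical metal yield of pure stibnite follows directly from the formula's mass fractions. A minimal sketch (the molar masses are assumed standard values, not stated in the text):

```python
# Sketch: theoretical antimony content of pure stibnite (Sb2S3),
# using standard molar masses (g/mol) that are not stated in the article.
M_SB, M_S = 121.76, 32.06

m_stibnite = 2 * M_SB + 3 * M_S       # molar mass of Sb2S3
sb_fraction = 2 * M_SB / m_stibnite   # mass fraction of antimony in the ore mineral

print(round(sb_fraction, 3))  # pure stibnite is roughly 72% antimony by mass
```

Real ores contain stibnite mixed with gangue, which is why the lower-grade material must first be concentrated by froth flotation.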
Antimony can be isolated from the crude antimony sulfide by reduction with scrap iron: Sb2S3 + 3 Fe → 2 Sb + 3 FeS The sulfide is converted to an oxide; the product is then roasted, sometimes for the purpose of vaporizing the volatile antimony(III) oxide, which is recovered. This material is often used directly for the main applications, impurities being arsenic and sulfide. Antimony is isolated from the oxide by a carbothermal reduction: 2 Sb2O3 + 3 C → 4 Sb + 3 CO2 The lower-grade ores are reduced in blast furnaces while the higher-grade ores are reduced in reverberatory furnaces. Top producers and production volumes The British Geological Survey (BGS) reported that in 2005 China was the top producer of antimony with approximately 84% of the world share, followed at a distance by South Africa, Bolivia and Tajikistan. Xikuangshan Mine in Hunan province has the largest deposits in China with an estimated
yields white-colored actinium phosphate hemihydrate (AcPO4·0.5H2O), and heating actinium oxalate with hydrogen sulfide vapors at 1400 °C for a few minutes results in a black actinium sulfide Ac2S3. It may possibly be produced by acting with a mixture of hydrogen sulfide and carbon disulfide on actinium oxide at 1000 °C. Isotopes Naturally occurring actinium is composed of two radioactive isotopes: 227Ac (from the radioactive family of 235U) and 228Ac (a granddaughter of 232Th). 227Ac decays mainly as a beta emitter with a very small energy, but in 1.38% of cases it emits an alpha particle, so it can readily be identified through alpha spectrometry. Thirty-six radioisotopes have been identified, the most stable being 227Ac with a half-life of 21.772 years, 225Ac with a half-life of 10.0 days and 226Ac with a half-life of 29.37 hours. All remaining radioactive isotopes have half-lives that are less than 10 hours and the majority of them have half-lives shorter than one minute. The shortest-lived known isotope of actinium is 217Ac (half-life of 69 nanoseconds), which decays through alpha decay. Actinium also has two known meta states. The most significant isotopes for chemistry are 225Ac, 227Ac, and 228Ac. Purified 227Ac comes into equilibrium with its decay products after about half a year. It decays according to its 21.772-year half-life, emitting mostly beta (98.62%) and some alpha particles (1.38%); the successive decay products are part of the actinium series. Owing to the low available amounts, the low energy of its beta particles (maximum 44.8 keV) and the low intensity of its alpha radiation, 227Ac is difficult to detect directly by its emission and is therefore traced via its decay products. The isotopes of actinium range in atomic weight from 205 u (205Ac) to 236 u (236Ac). Occurrence and synthesis Actinium is found only in traces in uranium ores – one tonne of uranium in ore contains about 0.2 milligrams of 227Ac – and in thorium ores, which contain about 5 nanograms of 228Ac per one tonne of thorium.
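The quoted figure of about 0.2 milligrams of 227Ac per tonne of uranium can be checked with the secular-equilibrium relation N_daughter/N_parent = T_daughter/T_parent. A sketch of that estimate (the 235U half-life and natural abundance are assumed values, not given in the text):

```python
# Sketch: secular-equilibrium estimate of 227Ac per tonne of natural uranium.
# Assumed (not in the article): 235U half-life 7.04e8 yr, abundance 0.72% by mass.
T_U235 = 7.04e8    # years, half-life of the chain parent 235U
T_AC227 = 21.772   # years, half-life of 227Ac (from the text)

grams_U = 1e6                  # one tonne of uranium
grams_U235 = grams_U * 0.0072  # mass of the 235U parent in natural uranium
# At secular equilibrium, N_Ac / N_U235 = T_Ac / T_U235;
# converting atom counts to masses brings in the mass-number ratio 227/235.
grams_Ac = grams_U235 * (T_AC227 / T_U235) * (227 / 235)

print(round(grams_Ac * 1000, 2))  # milligrams of 227Ac, ~0.2 mg as the text states
```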
The actinium isotope 227Ac is a transient member of the uranium-actinium series decay chain, which begins with the parent isotope 235U (or 239Pu) and ends with the stable lead isotope 207Pb. The isotope 228Ac is a transient member of the thorium series decay chain, which begins with the parent isotope 232Th and ends with the stable lead isotope 208Pb. Another actinium isotope (225Ac) is transiently present in the neptunium series decay chain, beginning with 237Np (or 233U) and ending with thallium (205Tl) and near-stable bismuth (209Bi); even though all primordial 237Np has decayed away, it is continuously produced by neutron knock-out reactions on natural 238U. The low natural concentration, and the close similarity of physical and chemical properties to those of lanthanum and other lanthanides, which are always abundant in actinium-bearing ores, render separation of actinium from the ore impractical, and complete separation was never achieved. Instead, actinium is prepared, in milligram amounts, by the neutron irradiation of 226Ra in a nuclear reactor: 226Ra + n → 227Ra → (β−, 42.2 min) 227Ac The reaction yield is about 2% of the radium weight. 227Ac can further capture neutrons, resulting in small amounts of 228Ac. After the synthesis, actinium is separated from radium and from the products of decay and nuclear fusion, such as thorium, polonium, lead and bismuth. The extraction can be performed with thenoyltrifluoroacetone-benzene solution from an aqueous solution of the radiation products, and the selectivity to a certain element is achieved by adjusting the pH (to about 6.0 for actinium). An alternative procedure is anion exchange with an appropriate resin in nitric acid, which can result in a separation factor of 1,000,000 for radium and actinium vs. thorium in a two-stage process.
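Separation factors of stages run in series multiply, which is how a two-stage process reaches an overall factor of 1,000,000. A minimal sketch (the equal per-stage factor of 1,000 is an illustrative assumption; the text gives only the overall figure):

```python
# Sketch: decontamination/separation factors multiply across stages in series.
# The per-stage value of 1000 is an illustrative assumption; the article
# states only the overall two-stage factor of 10^6.
def overall_separation(stage_factors):
    """Overall separation factor for purification stages run in series."""
    result = 1.0
    for factor in stage_factors:
        result *= factor
    return result

print(overall_separation([1000, 1000]))  # two equal stages reach 1e6
```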
Actinium can then be separated from radium, with a ratio of about 100, using a low cross-linking cation exchange resin and nitric acid as eluant. 225Ac was first produced artificially at the Institute for Transuranium Elements (ITU) in Germany using a cyclotron and at St George Hospital in Sydney using a linac in 2000. This rare isotope has potential applications in radiation therapy and is most efficiently produced by bombarding a radium-226 target with 20–30 MeV deuterium ions. This reaction also yields 226Ac, which, however, decays with a half-life of 29 hours and thus does not contaminate 225Ac. Actinium metal has been prepared by the reduction of actinium fluoride with lithium vapor in vacuum at a temperature between 1100 and 1300 °C. Higher temperatures resulted in evaporation of the product and lower ones led to an incomplete transformation. Lithium was chosen among other alkali metals because its fluoride is most volatile. Applications Owing to its scarcity, high price and radioactivity, 227Ac currently has no significant industrial use, but 225Ac is currently being studied for use in cancer treatments such as targeted alpha therapies. 227Ac is highly radioactive and was therefore studied for use as an active element of radioisotope thermoelectric generators, for example in spacecraft. The oxide of 227Ac pressed with beryllium is also an efficient neutron source with the activity exceeding that of the standard americium-beryllium and radium-beryllium pairs. In all those applications, 227Ac (a beta source) is merely a progenitor which generates alpha-emitting isotopes upon its decay.
Beryllium captures alpha particles and emits neutrons owing to its large cross-section for the (α,n) nuclear reaction: 9Be + 4He → 12C + n + γ The 227AcBe neutron sources can be applied in a neutron probe – a standard device for measuring the quantity of water present in soil, as well as moisture/density for quality control in highway construction. Such probes are also used in well logging applications, in neutron radiography, tomography and other radiochemical investigations. 225Ac is applied in medicine to produce 213Bi in a reusable generator or can be used alone as an agent for radiation therapy, in particular targeted alpha therapy (TAT). 225Ac has a half-life of 10 days, making it much more suitable for radiation therapy than 213Bi (half-life 46 minutes). Additionally, 225Ac decays to nontoxic 209Bi rather than stable but toxic lead, which is the final product in the decay chains of several other candidate isotopes, namely 227Th, 228Th, and 230U. Not only 225Ac itself, but also its daughters, emit alpha particles which kill cancer cells in the body. The major difficulty
further oxidation. As with most lanthanides and actinides, actinium exists in the oxidation state +3, and the Ac3+ ions are colorless in solutions. The oxidation state +3 originates from the [Rn]6d17s2 electronic configuration of actinium, with three valence electrons that are easily donated to give the stable closed-shell structure of the noble gas radon. The rare oxidation state +2 is only known for actinium dihydride (AcH2); even this may in reality be an electride compound like its lighter congener LaH2 and thus have actinium(III). Ac3+ is the largest of all known tripositive ions and its first coordination sphere contains approximately 10.9 ± 0.5 water molecules. Chemical compounds Due to actinium's intense radioactivity, only a limited number of actinium compounds are known. These include: AcF3, AcCl3, AcBr3, AcOF, AcOCl, AcOBr, Ac2S3, Ac2O3, AcPO4 and Ac(NO3)3. Except for AcPO4, they are all similar to the corresponding lanthanum compounds. They all contain actinium in the oxidation state +3. In particular, the lattice constants of the analogous lanthanum and actinium compounds differ by only a few percent. Here a, b and c are lattice constants, No is the space group number and Z is the number of formula units per unit cell. Density was not measured directly but calculated from the lattice parameters. Oxides Actinium oxide (Ac2O3) can be obtained by heating the hydroxide at 500 °C or the oxalate at 1100 °C, in vacuum. Its crystal lattice is isotypic with the oxides of most trivalent rare-earth metals. Halides Actinium trifluoride can be produced either in solution or by a solid-state reaction. The former reaction is carried out at room temperature by adding hydrofluoric acid to a solution containing actinium ions. In the latter method, actinium metal is treated with hydrogen fluoride vapors at 700 °C in an all-platinum setup. Treating actinium trifluoride with ammonium hydroxide at 900–1000 °C yields the oxyfluoride AcOF.
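The remark that densities were calculated rather than measured can be made concrete: for a hexagonal cell the density follows from ρ = Z·M/(N_A·V), with unit-cell volume V = (√3/2)·a²·c. A minimal sketch, using deliberately hypothetical cell parameters for illustration (not the measured values of any actual actinium compound):

```python
import math

AVOGADRO = 6.02214076e23  # mol^-1

def hexagonal_density(a_pm: float, c_pm: float, Z: int, molar_mass: float) -> float:
    """Density (g/cm^3) of a hexagonal cell from lattice constants given in pm.

    V = (sqrt(3)/2) * a^2 * c is the unit-cell volume; Z is the number of
    formula units per cell and molar_mass is in g/mol.
    """
    a_cm = a_pm * 1e-10  # 1 pm = 1e-10 cm
    c_cm = c_pm * 1e-10
    volume = (math.sqrt(3) / 2) * a_cm**2 * c_cm  # cm^3
    return Z * molar_mass / (AVOGADRO * volume)

# Hypothetical example: a = 400 pm, c = 1200 pm, Z = 4, M = 227 g/mol
print(round(hexagonal_density(400, 1200, 4, 227), 2))  # 9.07
```

The same formula, with the appropriate cell volume, applies to the orthorhombic and cubic structures in the tables.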
Whereas lanthanum oxyfluoride can be easily obtained by burning lanthanum trifluoride in air at 800 °C for an hour, similar treatment of actinium trifluoride yields no AcOF and only results in melting of the initial product. AcF3 + 2 NH3 + H2O → AcOF + 2 NH4F Actinium trichloride is obtained by reacting actinium hydroxide or oxalate with carbon tetrachloride vapors at temperatures above 960 °C. Similar to the oxyfluoride, actinium oxychloride can be prepared by hydrolyzing actinium trichloride with ammonium hydroxide at 1000 °C. However, in contrast to the oxyfluoride, the oxychloride could well be synthesized by igniting a solution of actinium trichloride in hydrochloric acid with ammonia. Reaction of aluminium bromide and actinium oxide yields actinium tribromide: Ac2O3 + 2 AlBr3 → 2 AcBr3 + Al2O3 and treating it with ammonium hydroxide at 500 °C results in the oxybromide AcOBr. Other compounds Actinium hydride was obtained by reduction of actinium trichloride with potassium at 300 °C, and its structure was deduced by analogy with the corresponding LaH2 hydride. The source of hydrogen in the reaction was uncertain. Mixing monosodium phosphate (NaH2PO4) with a solution of actinium in hydrochloric acid yields white-colored actinium phosphate hemihydrate (AcPO4·0.5H2O), and heating actinium oxalate with hydrogen sulfide vapors at 1400 °C for a few minutes results in black actinium sulfide, Ac2S3. It may also be produced by treating actinium oxide with a mixture of hydrogen sulfide and carbon disulfide at 1000 °C. Isotopes Naturally occurring actinium is composed of two radioactive isotopes: 227Ac (from the radioactive family of 235U) and 228Ac (a granddaughter of 232Th). 227Ac decays mainly as a beta emitter with a very small energy, but in 1.38% of cases it emits an alpha particle, so it can readily be identified through alpha spectrometry.
Thirty-six radioisotopes have been identified, the most stable being 227Ac with a half-life of 21.772 years, 225Ac with a half-life of 10.0 days and 226Ac with a half-life of 29.37 hours. All remaining radioactive isotopes have half-lives that are less than 10 hours and the majority of them have half-lives shorter than one minute. The shortest-lived known isotope of actinium is 217Ac (half-life of 69 nanoseconds), which decays through alpha decay. Actinium also has two known meta states. The most significant isotopes for chemistry are 225Ac, 227Ac, and 228Ac. Purified 227Ac comes into equilibrium with its decay products after about half a year. It decays according to its 21.772-year half-life, emitting mostly beta (98.62%) and some alpha particles (1.38%); the successive decay products are part of the actinium series. Owing to the low available amounts, the low energy of its beta particles (maximum 44.8 keV) and the low intensity of its alpha radiation, 227Ac is difficult to detect directly by its emission and is therefore traced via its decay products. The isotopes of actinium range in atomic weight from 205 u (205Ac) to 236 u (236Ac). Occurrence and synthesis Actinium is found only in traces in uranium ores – one tonne of uranium in ore contains about 0.2 milligrams of 227Ac – and in thorium ores, which contain about 5 nanograms of 228Ac per one tonne of thorium. The actinium isotope 227Ac is a transient member of the uranium-actinium series decay chain, which begins with the parent isotope 235U (or 239Pu) and ends with the stable lead isotope 207Pb. The isotope 228Ac is a transient member of the thorium series decay chain, which begins with the parent isotope 232Th and ends with the stable lead isotope 208Pb.
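The quoted figure of about 0.2 mg of 227Ac per tonne of uranium can be sanity-checked from secular equilibrium: within the 235U decay chain the activity of 227Ac equals that of 235U, so the atom ratio equals the ratio of their half-lives. A rough check, assuming the standard values of 0.711% 235U by mass in natural uranium and a 235U half-life of 7.04×10^8 years (neither is stated in the text):

```python
AVOGADRO = 6.02214076e23  # mol^-1

T_HALF_U235 = 7.04e8          # years (standard value, assumed)
T_HALF_AC227 = 21.772         # years (from the text)
U235_MASS_FRACTION = 0.00711  # natural uranium (standard value, assumed)

def ac227_per_tonne_uranium() -> float:
    """Mass (mg) of 227Ac in secular equilibrium with 1 tonne of natural uranium."""
    m_u235 = 1e6 * U235_MASS_FRACTION    # grams of 235U per tonne of uranium
    n_u235 = m_u235 / 235 * AVOGADRO     # atoms of 235U
    # Equal activities: lambda_U * N_U = lambda_Ac * N_Ac, and lambda = ln2 / T_half
    n_ac = n_u235 * T_HALF_AC227 / T_HALF_U235
    return n_ac * 227 / AVOGADRO * 1000  # grams -> milligrams

print(round(ac227_per_tonne_uranium(), 2))  # 0.21
```

The result, roughly 0.21 mg per tonne, agrees with the "about 0.2 milligrams" quoted above.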
Another actinium isotope (225Ac) is transiently present in the neptunium series decay chain, beginning with 237Np (or 233U) and ending with thallium (205Tl) and near-stable bismuth (209Bi); even though all primordial 237Np has decayed away, it is continuously produced by neutron knock-out reactions on natural 238U. The low natural concentration, and the close similarity of physical and chemical properties to those of lanthanum and other lanthanides, which are always abundant in actinium-bearing ores, render separation of actinium from the ore impractical, and complete separation was never achieved. Instead, actinium is prepared, in milligram amounts, by the neutron irradiation of 226Ra in a nuclear reactor. ^{226}_{88}Ra + ^{1}_{0}n -> ^{227}_{88}Ra ->[\beta^-][42.2 \ \ce{min}] ^{227}_{89}Ac The reaction yield is about 2% of the radium weight. 227Ac can further capture neutrons, resulting in small amounts of 228Ac. After the synthesis, actinium is separated from radium and from the products of decay and nuclear reactions, such as thorium, polonium, lead and bismuth. The extraction can be performed with thenoyltrifluoroacetone-benzene solution from an aqueous solution of the radiation products, and the selectivity to a certain element is achieved by adjusting the pH (to about 6.0 for actinium). An alternative procedure is anion exchange with an appropriate resin in nitric acid, which can result in a separation factor of 1,000,000 for radium and actinium vs. thorium in a two-stage process.
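The two-stage separation factor quoted above reflects the standard assumption that independent stages multiply: a per-stage factor of about 1,000 gives 1,000,000 overall. A small sketch (the per-stage value is hypothetical, chosen only to be consistent with the quoted overall factor):

```python
def overall_factor(stage_factors):
    """Overall decontamination factor of a multi-stage separation.

    Assumes the stages act independently, so their factors multiply
    (the standard model for cascaded ion-exchange or extraction stages).
    """
    result = 1.0
    for f in stage_factors:
        result *= f
    return result

# Two anion-exchange stages, each with a hypothetical per-stage factor of 1,000
print(overall_factor([1000, 1000]))  # 1000000.0
```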
Occurrence The longest-lived and most common isotopes of americium, 241Am and 243Am, have half-lives of 432.2 and 7,370 years, respectively. Therefore, any primordial americium (americium that was present on Earth during its formation) should have decayed by now. Trace amounts of americium probably occur naturally in uranium minerals as a result of nuclear reactions, though this has not been confirmed. Existing americium is concentrated in the areas used for the atmospheric nuclear weapons tests conducted between 1945 and 1980, as well as at the sites of nuclear incidents, such as the Chernobyl disaster. For example, the analysis of the debris at the testing site of the first U.S. hydrogen bomb, Ivy Mike (1 November 1952, Enewetak Atoll), revealed high concentrations of various actinides including americium; but due to military secrecy, this result was not published until 1956. Trinitite, the glassy residue left on the desert floor near Alamogordo, New Mexico, after the plutonium-based Trinity nuclear bomb test on 16 July 1945, contains traces of americium-241. Elevated levels of americium were also detected at the crash site of a US Boeing B-52 bomber aircraft, which carried four hydrogen bombs, in 1968 in Greenland. In other regions, the average radioactivity of surface soil due to residual americium is only about 0.01 picocuries/g (0.37 mBq/g). Atmospheric americium compounds are poorly soluble in common solvents and mostly adhere to soil particles. Soil analysis revealed about 1,900 times higher concentration of americium inside sandy soil particles than in the water present in the soil pores; an even higher ratio was measured in loam soils. Americium is produced mostly artificially in small quantities, for research purposes. A tonne of spent nuclear fuel contains about 100 grams of various americium isotopes, mostly 241Am and 243Am.
Their prolonged radioactivity is undesirable for disposal, and therefore americium, together with other long-lived actinides, must be neutralized. The associated procedure may involve several steps, where americium is first separated and then converted by neutron bombardment in special reactors to short-lived nuclides. This procedure is well known as nuclear transmutation, but it is still being developed for americium. The transuranic elements from americium to fermium occurred naturally in the natural nuclear fission reactor at Oklo, but no longer do so. Americium is also one of the elements that have been detected in Przybylski's Star. Synthesis and extraction Isotope nucleosynthesis Americium has been produced in small quantities in nuclear reactors for decades, and kilograms of its 241Am and 243Am isotopes have been accumulated by now. Nevertheless, since it was first offered for sale in 1962, its price, about US$1,500 per gram of 241Am, remains almost unchanged owing to the very complex separation procedure. The heavier isotope 243Am is produced in much smaller amounts; it is thus more difficult to separate, resulting in a higher cost on the order of US$100,000–160,000 per gram. Americium is not synthesized directly from uranium – the most common reactor material – but from the plutonium isotope 239Pu. The latter needs to be produced first, according to the following nuclear process: ^{238}_{92}U ->[\ce{(n,\gamma)}] ^{239}_{92}U ->[\beta^-][23.5 \ \ce{min}] ^{239}_{93}Np ->[\beta^-][2.3565 \ \ce{d}] ^{239}_{94}Pu The capture of two neutrons by 239Pu (a so-called (n,γ) reaction), followed by a β-decay, results in 241Am: ^{239}_{94}Pu ->[\ce{2(n,\gamma)}] ^{241}_{94}Pu ->[\beta^-][14.35 \ \ce{yr}] ^{241}_{95}Am The plutonium present in spent nuclear fuel contains about 12% of 241Pu. Because it spontaneously converts to 241Am, 241Pu can be extracted and may be used to generate further 241Am.
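The ingrowth of 241Am from 241Pu follows the standard two-member Bateman equation, N2(t) = N0 · λ1/(λ1 − λ2) · (exp(−λ2·t) − exp(−λ1·t)), which peaks at t_max = ln(λ1/λ2)/(λ1 − λ2). A minimal sketch with the half-lives given in the text, assuming essentially every 241Pu decay feeds 241Am and neglecting neutron capture:

```python
import math

T_PU241 = 14.35   # years, half-life of the parent 241Pu (from the text)
T_AM241 = 432.2   # years, half-life of the daughter 241Am (from the text)

lam1 = math.log(2) / T_PU241  # decay constant of 241Pu
lam2 = math.log(2) / T_AM241  # decay constant of 241Am

def am241_atoms(t: float, n0: float = 1.0) -> float:
    """Bateman solution: 241Am atoms at time t (years) from n0 atoms of pure 241Pu."""
    return n0 * lam1 / (lam1 - lam2) * (math.exp(-lam2 * t) - math.exp(-lam1 * t))

# Time at which the 241Am inventory peaks: t_max = ln(lam1/lam2) / (lam1 - lam2)
t_max = math.log(lam1 / lam2) / (lam1 - lam2)
print(round(t_max))  # 73
```

The computed maximum near 73 years matches the "maximum after 70 years" quoted below, and half of the 241Pu is indeed gone after one half-life of about 15 years.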
However, this process is rather slow: half of the original amount of 241Pu decays to 241Am after about 15 years, and the 241Am amount reaches a maximum after 70 years. The obtained 241Am can be used for generating heavier americium isotopes by further neutron capture inside a nuclear reactor. In a light water reactor (LWR), 79% of 241Am converts to 242Am and 10% to its nuclear isomer 242mAm. Americium-242 has a half-life of only 16 hours, which makes its further conversion to 243Am extremely inefficient. The latter isotope is produced instead in a process where 239Pu captures four neutrons under high neutron flux: ^{239}_{94}Pu ->[\ce{4(n,\gamma)}] \ ^{243}_{94}Pu ->[\beta^-][4.956 \ \ce{h}] ^{243}_{95}Am Metal generation Most synthesis routines yield a mixture of different actinide isotopes in oxide forms, from which isotopes of americium can be separated. In a typical procedure, the spent reactor fuel (e.g. MOX fuel) is dissolved in nitric acid, and the bulk of uranium and plutonium is removed using a PUREX-type extraction (Plutonium–URanium EXtraction) with tributyl phosphate in a hydrocarbon. The lanthanides and remaining actinides are then separated from the aqueous residue (raffinate) by a diamide-based extraction to give, after stripping, a mixture of trivalent actinides and lanthanides. Americium compounds are then selectively extracted using multi-step chromatographic and centrifugation techniques with an appropriate reagent. A large amount of work has been done on the solvent extraction of americium. For example, a 2003 EU-funded project codenamed "EUROPART" studied triazines and other compounds as potential extraction agents. A bis-triazinyl bipyridine complex was proposed in 2009, as such a reagent is highly selective to americium (and curium). Separation of americium from the highly similar curium can be achieved by treating a slurry of their hydroxides in aqueous sodium bicarbonate with ozone at elevated temperatures.
Both Am and Cm are mostly present in solutions in the +3 valence state; whereas curium remains unchanged, americium oxidizes to soluble Am(IV) complexes which can be washed away. Metallic americium is obtained by reduction from its compounds. Americium(III) fluoride was first used for this purpose. The reaction was conducted using elemental barium as reducing agent in a water- and oxygen-free environment inside an apparatus made of tantalum and tungsten. An alternative is the reduction of americium dioxide by metallic lanthanum or thorium. Physical properties In the periodic table, americium is located to the right of plutonium, to the left of curium, and below the lanthanide europium, with which it shares many physical and chemical properties. Americium is a highly radioactive element. When freshly prepared, it has a silvery-white metallic lustre, but then slowly tarnishes in air. With a density of 12 g/cm3, americium is less dense than both curium (13.52 g/cm3) and plutonium (19.8 g/cm3), but has a higher density than europium (5.264 g/cm3), mostly because of its higher atomic mass. Americium is relatively soft and easily deformable and has a significantly lower bulk modulus than the actinides before it: Th, Pa, U, Np and Pu. Its melting point of 1173 °C is significantly higher than that of plutonium (639 °C) and europium (826 °C), but lower than that of curium (1340 °C). At ambient conditions, americium is present in its most stable α form, which has a hexagonal crystal symmetry and a space group P63/mmc with cell parameters a = 346.8 pm and c = 1124 pm, and four atoms per unit cell. The crystal consists of a double-hexagonal close packing with the layer sequence ABAC and so is isotypic with α-lanthanum and several actinides such as α-curium. The crystal structure of americium changes with pressure and temperature.
When compressed at room temperature to 5 GPa, α-Am transforms to the β modification, which has a face-centered cubic (fcc) symmetry, space group Fm3m and lattice constant a = 489 pm. This fcc structure is equivalent to the closest packing with the sequence ABC. Upon further compression to 23 GPa, americium transforms to an orthorhombic γ-Am structure similar to that of α-uranium. There are no further transitions observed up to 52 GPa, except for the appearance of a monoclinic phase at pressures between 10 and 15 GPa. There is no consistency on the status of this phase in the literature, which also sometimes lists the α, β and γ phases as I, II and III. The β-γ transition is accompanied by a 6% decrease in the crystal volume; although theory also predicts a significant volume change for the α-β transition, it is not observed experimentally. The pressure of the α-β transition decreases with increasing temperature, and when α-americium is heated at ambient pressure, it changes at 770 °C into an fcc phase which is different from β-Am, and at 1075 °C it converts to a body-centered cubic structure. The pressure-temperature phase diagram of americium is thus rather similar to those of lanthanum, praseodymium and neodymium. As with many other actinides, self-damage of the crystal structure due to alpha-particle irradiation is intrinsic to americium. It is especially noticeable at low temperatures, where the mobility of the produced structure defects is relatively low, through the broadening of X-ray diffraction peaks. This effect makes the temperature of americium samples and some of their properties, such as electrical resistivity, somewhat uncertain. For example, for americium-241 the resistivity at 4.2 K increases with time from about 2 µOhm·cm to 10 µOhm·cm after 40 hours, and saturates at about 16 µOhm·cm after 140 hours.
This effect is less pronounced at room temperature, due to annihilation of radiation defects; heating a sample that was kept for hours at low temperatures back to room temperature also restores its resistivity. In fresh samples, the resistivity gradually increases with temperature from about 2 µOhm·cm at liquid helium temperature to 69 µOhm·cm at room temperature; this behavior is similar to that of neptunium, uranium, thorium and protactinium, but different from plutonium and curium, which show a rapid rise up to 60 K followed by saturation. The room temperature value for americium is lower than that of neptunium, plutonium and curium, but higher than for uranium, thorium and protactinium. Americium is paramagnetic in a wide temperature range, from that of liquid helium to room temperature and above. This behavior is markedly different from that of its neighbor curium, which exhibits an antiferromagnetic transition at 52 K. The thermal expansion coefficient of americium is slightly anisotropic and amounts to along the shorter a axis and for the longer c hexagonal axis. The enthalpy of dissolution of americium metal in hydrochloric acid at standard conditions is , from which the standard enthalpy change of formation (ΔfH°) of aqueous Am3+ ion is . The standard potential Am3+/Am0 is . Chemical properties Americium metal readily reacts with oxygen and dissolves in aqueous acids. The most stable oxidation state for americium is +3. The chemistry of americium(III) has many similarities to the chemistry of lanthanide(III) compounds. For example, trivalent americium forms insoluble fluoride, oxalate, iodate, hydroxide, phosphate and other salts. Compounds of americium in oxidation states 2, 4, 5, 6 and 7 have also been studied. This is the widest range that has been observed with actinide elements. The color of americium compounds in aqueous solution is as follows: Am3+ (yellow-reddish), Am4+ (yellow-reddish), AmV (yellow), AmVI (brown) and AmVII (dark green).
The absorption spectra have sharp peaks, due to f-f transitions in the visible and near-infrared regions. Typically, Am(III) has absorption maxima at ca. 504 and 811 nm, Am(V) at ca. 514 and 715 nm, and Am(VI) at ca. 666 and 992 nm. Americium compounds with oxidation state +4 and higher are strong oxidizing agents, comparable in strength to the permanganate ion (MnO4−) in acidic solutions. Whereas the Am4+ ions are unstable in solutions and readily convert to Am3+, compounds such as americium dioxide (AmO2) and americium(IV) fluoride (AmF4) are stable in the solid state. The pentavalent oxidation state of americium was first observed in 1951. In acidic aqueous solution the AmO2+ ion is unstable with respect to disproportionation. The reaction 3[AmO2]+ + 4H+ -> 2[AmO2]2+ + Am3+ + 2H2O is typical. The chemistry of Am(V) and Am(VI) is comparable to the chemistry of uranium in those oxidation states. In particular, compounds like Li3AmO4 and Li6AmO6 are comparable to uranates, and the ion AmO22+ is comparable to the uranyl ion, UO22+. Such compounds can be prepared by oxidation of Am(III) in dilute nitric acid with ammonium persulfate. Other oxidising agents that have been used include silver(I) oxide, ozone and sodium persulfate. Chemical compounds Oxygen compounds Three americium oxides are known, with the oxidation states +2 (AmO), +3 (Am2O3) and +4 (AmO2). Americium(II) oxide has been prepared only in minute amounts and has not been characterized in detail. Americium(III) oxide is a red-brown solid with a melting point of 2205 °C. Americium(IV) oxide is the main form of solid americium, used in nearly all its applications. Like most other actinide dioxides, it is a black solid with a cubic (fluorite) crystal structure. The oxalate of americium(III), vacuum dried at room temperature, has the chemical formula Am2(C2O4)3·7H2O. Upon heating in vacuum, it loses water at 240 °C and starts decomposing into AmO2 at 300 °C; the decomposition is complete at about 470 °C.
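The disproportionation reaction quoted above can be checked for element and charge balance with a short, generic script (plain Python, not tied to any chemistry library):

```python
from collections import Counter

def totals(species):
    """Sum element counts and total charge over (composition, charge, coefficient) triples."""
    elems, charge = Counter(), 0
    for comp, q, coeff in species:
        for el, n in comp.items():
            elems[el] += coeff * n
        charge += coeff * q
    return elems, charge

# 3 [AmO2]+ + 4 H+  ->  2 [AmO2]2+ + Am3+ + 2 H2O
left = [({"Am": 1, "O": 2}, +1, 3), ({"H": 1}, +1, 4)]
right = [({"Am": 1, "O": 2}, +2, 2), ({"Am": 1}, +3, 1), ({"H": 2, "O": 1}, 0, 2)]

l_elems, l_charge = totals(left)
r_elems, r_charge = totals(right)
print(l_elems == r_elems and l_charge == r_charge)  # True
```

Both sides carry 3 Am, 6 O, 4 H and a net charge of +7, so the equation is balanced.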
The initial oxalate dissolves in nitric acid with a maximum solubility of 0.25 g/L. Halides Halides of americium are known for the oxidation states +2, +3 and +4, of which +3 is the most stable, especially in solutions. Reduction of Am(III) compounds with sodium amalgam yields Am(II) salts – the black halides AmCl2, AmBr2 and AmI2. They are very sensitive to oxygen and oxidize in water, releasing hydrogen and converting back to the Am(III) state. Specific lattice constants are: orthorhombic AmCl2: a = , b = and c = ; tetragonal AmBr2: a = and c = . They can also be prepared by reacting metallic americium with an appropriate mercury halide HgX2, where X = Cl, Br or I: {Am} + \underset{mercury\ halide}{HgX2} ->[{} \atop 400 - 500 ^\circ \ce C] {AmX2} + {Hg} Americium(III) fluoride (AmF3) is poorly soluble and precipitates upon reaction of Am3+ and fluoride ions in weak acidic solutions: Am^3+ + 3F^- -> AmF3(v) The tetravalent americium(IV) fluoride (AmF4) is obtained by reacting solid americium(III) fluoride with molecular fluorine: 2AmF3 + F2 -> 2AmF4 Another known form of solid tetravalent americium fluoride is KAmF5. Tetravalent americium has also been observed in the aqueous phase. For this purpose, black Am(OH)4 was dissolved in 15-M NH4F at an americium concentration of 0.01 M. The resulting reddish solution had a characteristic optical absorption spectrum which is similar to that of AmF4 but differs from those of other oxidation states of americium. Heating the Am(IV) solution to 90 °C did not result in its disproportionation or reduction; however, a slow reduction to Am(III) was observed and assigned to self-irradiation of americium by alpha particles. Most americium(III) halides form hexagonal crystals with slight variation of the color and exact structure between the halogens. For example, the chloride (AmCl3) is reddish and has a structure isotypic to uranium(III) chloride (space group P63/m) and a melting point of 715 °C.
The fluoride is isotypic to LaF3 (space group P63/mmc) and the iodide to BiI3 (space group R3). The bromide is an exception, with the orthorhombic PuBr3-type structure and space group Cmcm. Crystals of americium trichloride hexahydrate (AmCl3·6H2O) can be prepared by dissolving americium dioxide in hydrochloric acid and evaporating the liquid. These crystals are hygroscopic and yellow-reddish in color, with a monoclinic crystal structure. Oxyhalides of americium in the form AmVIO2X2, AmVO2X, AmIVOX2 and AmIIIOX can be obtained by reacting the corresponding americium halide with oxygen or Sb2O3, and AmOCl can also be produced by vapor phase hydrolysis: AmCl3 + H2O -> AmOCl + 2HCl Chalcogenides and pnictides The known chalcogenides of americium include the sulfide AmS2, the selenides AmSe2 and Am3Se4, and the tellurides Am2Te3 and AmTe2. The pnictides of americium (243Am) of the AmX type are known for the elements phosphorus, arsenic, antimony and bismuth. They crystallize in the rock-salt lattice. Silicides and borides Americium monosilicide (AmSi) and "disilicide" (nominally AmSix with 1.87 < x < 2.0) were obtained by reduction of americium(III) fluoride with elemental silicon in vacuum at 1050 °C (AmSi) and 1150−1200 °C (AmSix). AmSi is a black solid isomorphic with LaSi, it has
only in 1945. Seaborg leaked the synthesis of the elements 95 and 96 on the U.S. radio show for children Quiz Kids five days before the official presentation at an American Chemical Society meeting on 11 November 1945, when one of the listeners asked whether any new transuranium element besides plutonium and neptunium had been discovered during the war. After the discovery of americium isotopes 241Am and 242Am, their production and compounds were patented listing only Seaborg as the inventor. The initial americium samples weighed a few micrograms; they were barely visible and were identified by their radioactivity. The first substantial amounts of metallic americium weighing 40–200 micrograms were not prepared until 1951 by reduction of americium(III) fluoride with barium metal in high vacuum at 1100 °C.
Elevated levels of americium were also detected at the crash site of a US Boeing B-52 bomber aircraft, which carried four hydrogen bombs, in 1968 in Greenland. In other regions, the average radioactivity of surface soil due to residual americium is only about 0.01 picocuries/g (0.37 mBq/g). Atmospheric americium compounds are poorly soluble in common solvents and mostly adhere to soil particles. Soil analysis revealed about 1,900 times higher concentration of americium inside sandy soil particles than in the water present in the soil pores; an even higher ratio was measured in loam soils. Americium is produced mostly artificially in small quantities, for research purposes. A tonne of spent nuclear fuel contains about 100 grams of various americium isotopes, mostly 241Am and 243Am. Their prolonged radioactivity is undesirable for the disposal, and therefore americium, together with other long-lived actinides, must be neutralized. The associated procedure may involve several steps, where americium is first separated and then converted by neutron bombardment in special reactors to short-lived nuclides. This procedure is well known as nuclear transmutation, but it is still being developed for americium. The transuranic elements from americium to fermium occurred naturally in the natural nuclear fission reactor at Oklo, but no longer do so. Americium is also one of the elements that have been detected in Przybylski's Star. Synthesis and extraction Isotope nucleosynthesis Americium has been produced in small quantities in nuclear reactors for decades, and kilograms of its 241Am and 243Am isotopes have been accumulated by now. Nevertheless, since it was first offered for sale in 1962, its price, about US$1,500 per gram of 241Am, remains almost unchanged owing to the very complex separation procedure. The heavier isotope 243Am is produced in much smaller amounts; it is thus more difficult to separate, resulting in a higher cost of the order 100,000–160,000 USD/g. 
Americium is not synthesized directly from uranium – the most common reactor material – but from the plutonium isotope 239Pu. The latter needs to be produced first, according to the following nuclear process: ^{238}_{92}U ->[\ce{(n,\gamma)}] ^{239}_{92}U ->[\beta^-][23.5 \ \ce{min}] ^{239}_{93}Np ->[\beta^-][2.3565 \ \ce{d}] ^{239}_{94}Pu The capture of two neutrons by 239Pu (a so-called (n,γ) reaction), followed by a β-decay, results in 241Am: ^{239}_{94}Pu ->[\ce{2(n,\gamma)}] ^{241}_{94}Pu ->[\beta^-][14.35 \ \ce{yr}] ^{241}_{95}Am The plutonium present in spent nuclear fuel contains about 12% of 241Pu. Because it spontaneously converts to 241Am, 241Pu can be extracted and may be used to generate further 241Am. However, this process is rather slow: half of the original amount of 241Pu decays to 241Am after about 15 years, and the 241Am amount reaches a maximum after 70 years. The obtained 241Am can be used for generating heavier americium isotopes by further neutron capture inside a nuclear reactor. In a light water reactor (LWR), 79% of 241Am converts to 242Am and 10% to its nuclear isomer 242mAm: Americium-242 has a half-life of only 16 hours, which makes its further conversion to 243Am extremely inefficient. The latter isotope is produced instead in a process where 239Pu captures four neutrons under high neutron flux: ^{239}_{94}Pu ->[\ce{4(n,\gamma)}] \ ^{243}_{94}Pu ->[\beta^-][4.956 \ \ce{h}] ^{243}_{95}Am Metal generation Most synthesis routines yield a mixture of different actinide isotopes in oxide forms, from which isotopes of americium can be separated. In a typical procedure, the spent reactor fuel (e.g. MOX fuel) is dissolved in nitric acid, and the bulk of uranium and plutonium is removed using a PUREX-type extraction (Plutonium–URanium EXtraction) with tributyl phosphate in a hydrocarbon. 
The lanthanides and remaining actinides are then separated from the aqueous residue (raffinate) by a diamide-based extraction, to give, after stripping, a mixture of trivalent actinides and lanthanides. Americium compounds are then selectively extracted using multi-step chromatographic and centrifugation techniques with an appropriate reagent. A large amount of work has been done on the solvent extraction of americium. For example, a 2003 EU-funded project codenamed "EUROPART" studied triazines and other compounds as potential extraction agents. A bis-triazinyl bipyridine complex was proposed in 2009 as such a reagent is highly selective to americium (and curium). Separation of americium from the highly similar curium can be achieved by treating a slurry of their hydroxides in aqueous sodium bicarbonate with ozone, at elevated temperatures. Both Am and Cm are mostly present in solutions in the +3 valence state; whereas curium remains unchanged, americium oxidizes to soluble Am(IV) complexes which can be washed away. Metallic americium is obtained by reduction from its compounds. Americium(III) fluoride was first used for this purpose. The reaction was conducted using elemental barium as reducing agent in a water- and oxygen-free environment inside an apparatus made of tantalum and tungsten. An alternative is the reduction of americium dioxide by metallic lanthanum or thorium: Physical properties In the periodic table, americium is located to the right of plutonium, to the left of curium, and below the lanthanide europium, with which it shares many physical and chemical properties. Americium is a highly radioactive element. When freshly prepared, it has a silvery-white metallic lustre, but then slowly tarnishes in air. With a density of 12 g/cm3, americium is less dense than both curium (13.52 g/cm3) and plutonium (19.8 g/cm3); but has a higher density than europium (5.264 g/cm3)—mostly because of its higher atomic mass. 
Americium is relatively soft and easily deformable and has a significantly lower bulk modulus than the actinides before it: Th, Pa, U, Np and Pu. Its melting point of 1173 °C is significantly higher than that of plutonium (639 °C) and europium (826 °C), but lower than that of curium (1340 °C). At ambient conditions, americium is present in its most stable α form, which has hexagonal crystal symmetry, space group P63/mmc with cell parameters a = 346.8 pm and c = 1124 pm, and four atoms per unit cell. The crystal consists of a double-hexagonal close packing with the layer sequence ABAC and so is isotypic with α-lanthanum and several actinides such as α-curium. The crystal structure of americium changes with pressure and temperature. When compressed at room temperature to 5 GPa, α-Am transforms to the β modification, which has face-centered cubic (fcc) symmetry, space group Fm-3m and lattice constant a = 489 pm. This fcc structure is equivalent to the closest packing with the sequence ABC. Upon further compression to 23 GPa, americium transforms to an orthorhombic γ-Am structure similar to that of α-uranium. No further transitions are observed up to 52 GPa, except for the appearance of a monoclinic phase at pressures between 10 and 15 GPa. There is no consistency on the status of this phase in the literature, which also sometimes lists the α, β and γ phases as I, II and III. The β-γ transition is accompanied by a 6% decrease in the crystal volume; although theory also predicts a significant volume change for the α-β transition, it is not observed experimentally. The pressure of the α-β transition decreases with increasing temperature, and when α-americium is heated at ambient pressure, at 770 °C it changes into an fcc phase which is different from β-Am, and at 1075 °C it converts to a body-centered cubic structure. The pressure-temperature phase diagram of americium is thus rather similar to those of lanthanum, praseodymium and neodymium.
As with many other actinides, self-damage of the crystal structure due to alpha-particle irradiation is intrinsic to americium. It is especially noticeable at low temperatures, where the mobility of the produced structural defects is relatively low, and is observed as a broadening of X-ray diffraction peaks. This effect makes the temperature of americium samples, and hence some of their properties such as electrical resistivity, somewhat uncertain. For americium-241, for example, the resistivity at 4.2 K increases with time from about 2 µOhm·cm to 10 µOhm·cm after 40 hours, and saturates at about 16 µOhm·cm after 140 hours. This effect is less pronounced at room temperature, owing to annihilation of radiation defects; likewise, heating a sample that has been kept at low temperatures for hours back to room temperature restores its resistivity. In fresh samples, the resistivity gradually increases with temperature from about 2 µOhm·cm at liquid-helium temperature to 69 µOhm·cm at room temperature; this behavior is similar to that of neptunium, uranium, thorium and protactinium, but different from that of plutonium and curium, which show a rapid rise up to 60 K followed by saturation. The room-temperature value for americium is lower than that of neptunium, plutonium and curium, but higher than for uranium, thorium and protactinium. Americium is paramagnetic in a wide temperature range, from that of liquid helium to room temperature and above. This behavior is markedly different from that of its neighbor curium, which exhibits an antiferromagnetic transition at 52 K. The thermal expansion coefficient of americium is slightly anisotropic and amounts to along the shorter a axis and for the longer c hexagonal axis. The enthalpy of dissolution of americium metal in hydrochloric acid at standard conditions is , from which the standard enthalpy change of formation (ΔfH°) of the aqueous Am3+ ion is . The standard potential Am3+/Am0 is . Chemical properties Americium metal readily reacts with oxygen and dissolves in aqueous acids.
The most stable oxidation state for americium is +3. The chemistry of americium(III) has many similarities to the chemistry of lanthanide(III) compounds. For example, trivalent americium forms insoluble fluoride, oxalate, iodate, hydroxide, phosphate and other salts. Compounds of americium in the oxidation states 2, 4, 5, 6 and 7 have also been studied; this is the widest range that has been observed among actinide elements. The colors of americium compounds in aqueous solution are as follows: Am3+ (yellow-reddish), Am4+ (yellow-reddish), AmV (yellow), AmVI (brown) and AmVII (dark green). The absorption spectra have sharp peaks, due to f-f transitions, in the visible and near-infrared regions. Typically, Am(III) has absorption maxima at ca. 504 and 811 nm, Am(V) at ca. 514 and 715 nm, and Am(VI) at ca. 666 and 992 nm. Americium compounds with oxidation state +4 and higher are strong oxidizing agents, comparable in strength to the permanganate ion (MnO4−) in acidic solutions. Whereas Am4+ ions are unstable in solution and readily convert to Am3+, compounds such as americium dioxide (AmO2) and americium(IV) fluoride (AmF4) are stable in the solid state. The pentavalent oxidation state of americium was first observed in 1951. In acidic aqueous solution the ion is unstable with respect to disproportionation; the reaction 3[AmO2]+ + 4H+ → 2[AmO2]2+ + Am3+ + 2H2O is typical. The chemistry of Am(V) and Am(VI) is comparable to the chemistry of uranium in those oxidation states. In particular, compounds like Li3AmO4 and Li6AmO6 are comparable to uranates, and the ion AmO22+ is comparable to the uranyl ion, UO22+. Such compounds can be prepared by oxidation of Am(III) in dilute nitric acid with ammonium persulfate. Other oxidising agents that have been used include silver(I) oxide, ozone and sodium persulfate. Chemical compounds Oxygen compounds Three americium oxides are known, with the oxidation states +2 (AmO), +3 (Am2O3) and +4 (AmO2).
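The disproportionation reaction quoted above can be checked for atom and charge balance in a few lines; a quick sketch, with the element counts transcribed by hand from the formulas:

```python
# 3[AmO2]+ + 4H+ -> 2[AmO2]2+ + Am3+ + 2H2O
# Tally each element and the net charge on both sides of the reaction.
lhs = {"Am": 3, "O": 3 * 2, "H": 4, "charge": 3 * (+1) + 4 * (+1)}
rhs = {"Am": 2 + 1, "O": 2 * 2 + 2 * 1, "H": 2 * 2, "charge": 2 * (+2) + 3}
assert lhs == rhs, "reaction would be unbalanced"
print("balanced; net charge on each side:", lhs["charge"])
```

Both sides carry three americium, six oxygen, four hydrogen and a net charge of +7, so the reaction as written is consistent.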
Americium(II) oxide has been prepared only in minute amounts and has not been characterized in detail. Americium(III) oxide is a red-brown solid with a melting point of 2205 °C. Americium(IV) oxide is the main form of solid americium and is used in nearly all of its applications. Like most other actinide dioxides, it is a black solid with a cubic (fluorite) crystal structure. The oxalate of americium(III), vacuum-dried at room temperature, has the chemical formula Am2(C2O4)3·7H2O. Upon heating in vacuum, it loses water at 240 °C and starts decomposing into AmO2 at 300 °C; the decomposition is complete at about 470 °C. The initial oxalate dissolves in nitric acid with a maximum solubility of 0.25 g/L. Halides Halides of americium are known for the oxidation states +2, +3 and +4, of which +3 is the most stable, especially in solutions. Reduction of Am(III) compounds with sodium amalgam yields Am(II) salts – the black halides AmCl2, AmBr2 and AmI2. They are very sensitive to oxygen and oxidize in water, releasing hydrogen and converting back to the Am(III) state. Specific lattice constants are: orthorhombic AmCl2 (a = , b = , c = ) and tetragonal AmBr2 (a = , c = ). They can also be prepared by reacting metallic americium with an appropriate mercury halide HgX2, where X = Cl, Br or I, at 400–500 °C: Am + HgX2 → AmX2 + Hg. Americium(III) fluoride (AmF3) is poorly soluble and precipitates upon the reaction of Am3+ and fluoride ions in weakly acidic solutions: Am3+ + 3F− → AmF3↓. The tetravalent americium(IV) fluoride (AmF4) is obtained by reacting solid americium(III) fluoride with molecular fluorine: 2AmF3 + F2 → 2AmF4. Another known form of solid tetravalent americium fluoride is KAmF5. Tetravalent americium has also been observed in the aqueous phase. For this purpose, black Am(OH)4 was dissolved in 15-M NH4F at an americium concentration of 0.01 M.
The resulting reddish solution had a characteristic optical absorption spectrum which is similar to that of AmF4 but differs from those of other oxidation states of americium. Heating the Am(IV) solution to 90 °C did not result in its disproportionation or reduction; however, a slow reduction to Am(III) was observed and assigned to self-irradiation of americium by alpha particles. Most americium(III) halides form hexagonal crystals, with slight variation of the color and exact structure between the halogens. Thus, the chloride (AmCl3) is reddish and has a structure isotypic to uranium(III) chloride (space group P63/m) and a melting point of 715 °C. The fluoride is isotypic to LaF3 (space group P63/mmc) and the iodide to BiI3 (space group R-3). The bromide is an exception, with the orthorhombic PuBr3-type structure and space group Cmcm. Crystals of americium hexahydrate (AmCl3·6H2O) can be prepared by dissolving americium dioxide in hydrochloric acid and evaporating the liquid. These crystals are hygroscopic, have a yellow-reddish color, and a monoclinic crystal structure. Oxyhalides of americium in the form AmVIO2X2, AmVO2X, AmIVOX2 and AmIIIOX can be
History In 1869, when Dmitri Mendeleev published his periodic table, the space under iodine was empty; after Niels Bohr established the physical basis of the classification of chemical elements, it was suggested that the fifth halogen belonged there. Before its officially recognized discovery, it was called "eka-iodine" (from Sanskrit eka – "one") to imply it was one space under iodine (in the same manner as eka-silicon, eka-boron, and others). Scientists tried to find it in nature; given its extreme rarity, these attempts resulted in several false discoveries. The first claimed discovery of eka-iodine was made by Fred Allison and his associates at the Alabama Polytechnic Institute (now Auburn University) in 1931. The discoverers named element 85 "alabamine", and assigned it the symbol Ab, designations that were used for a few years. In 1934, H. G. MacPherson of the University of California, Berkeley disproved Allison's method and the validity of his discovery. There was another claim in 1937, by the chemist Rajendralal De. Working in Dacca in British India (now Dhaka in Bangladesh), he chose the name "dakin" for element 85, which he claimed to have isolated as the thorium series equivalent of radium F (polonium-210) in the radium series. The properties he reported for dakin do not correspond to those of astatine; moreover, astatine is not found in the thorium series, and the true identity of dakin is not known. In 1936, the team of Romanian physicist Horia Hulubei and French physicist Yvette Cauchois claimed to have discovered element 85 via X-ray analysis.
In 1939, they published another paper which supported and extended previous data. In 1944, Hulubei published a summary of data he had obtained up to that time, claiming it was supported by the work of other researchers. He chose the name "dor", presumably from the Romanian for "longing" [for peace], as World War II had started five years earlier. As Hulubei was writing in French, a language which does not accommodate the "ine" suffix, dor would likely have been rendered in English as "dorine", had it been adopted. In 1947, Hulubei's claim was effectively rejected by the Austrian chemist Friedrich Paneth, who would later chair the IUPAC committee responsible for recognition of new elements. Even though Hulubei's samples did contain astatine, his means to detect it were too weak, by current standards, to enable correct identification. He had also been involved in an earlier false claim as to the discovery of element 87 (francium) and this is thought to have caused other researchers to downplay his work. In 1940, the Swiss chemist Walter Minder announced the discovery of element 85 as the beta decay product of radium A (polonium-218), choosing the name "helvetium" (from , the Latin name of Switzerland). Berta Karlik and Traude Bernert were unsuccessful in reproducing his experiments, and subsequently attributed Minder's results to contamination of his radon stream (radon-222 is the parent isotope of polonium-218). In 1942, Minder, in collaboration with the English scientist Alice Leigh-Smith, announced the discovery of another isotope of element 85, presumed to be the product of thorium A (polonium-216) beta decay. They named this substance "anglo-helvetium", but Karlik and Bernert were again unable to reproduce these results. Later in 1940, Dale R. Corson, Kenneth Ross MacKenzie, and Emilio Segrè isolated the element at the University of California, Berkeley. 
Instead of searching for the element in nature, the scientists created it by bombarding bismuth-209 with alpha particles in a cyclotron (particle accelerator) to produce, after emission of two neutrons, astatine-211. The discoverers, however, did not immediately suggest a name for the element. The reason for this was that at the time, an element created synthetically in "invisible quantities" that had not yet been discovered in nature was not seen as a completely valid one; in addition, chemists were reluctant to recognize radioactive isotopes as being as legitimate as stable ones. In 1943, astatine was found as a product of two naturally occurring decay chains by Berta Karlik and Traude Bernert, first in the so-called uranium series, and then in the actinium series. (Since then, astatine has also been found in a third decay chain, the neptunium series.) In 1946, Friedrich Paneth called for synthetic elements to finally be recognized, citing, among other reasons, recent confirmation of their natural occurrence, and proposed that the discoverers of the newly found unnamed elements name them. In early 1947, Nature published the discoverers' suggestions; a letter from Corson, MacKenzie, and Segrè suggested the name "astatine", derived from the Greek astatos (αστατος) meaning "unstable", because of its propensity for radioactive decay, with the ending "-ine" found in the names of the four previously discovered halogens. The name was also chosen to continue the tradition of the four stable halogens, where the name referred to a property of the element. Corson and his colleagues classified astatine as a metal on the basis of its analytical chemistry. Subsequent investigators reported iodine-like, cationic, or amphoteric behavior. In a 2003 retrospective, Corson wrote that "some of the properties [of astatine] are similar to iodine … it also exhibits metallic properties, more like its metallic neighbors Po and Bi."
Isotopes There are 39 known isotopes of astatine, with atomic masses (mass numbers) of 191–229. Theoretical modeling suggests that 37 more isotopes could exist. No stable or long-lived astatine isotope has been observed, nor is one expected to exist. Astatine's alpha decay energies follow the same trend as for other heavy elements. Lighter astatine isotopes have quite high energies of alpha decay, which become lower as the nuclei become heavier. Astatine-211 has a significantly higher energy than the previous isotope, because it has a nucleus with 126 neutrons, and 126 is a magic number corresponding to a filled neutron shell. Despite having a similar half-life to the previous isotope (8.1 hours for astatine-210 and 7.2 hours for astatine-211), the alpha decay probability is much higher for the latter: 41.81% against only 0.18%. The two following isotopes release even more energy, with astatine-213 releasing the most energy. For this reason, it is the shortest-lived astatine isotope. Even though heavier astatine isotopes release less energy, no long-lived astatine isotope exists, because of the increasing role of beta decay (electron emission). This decay mode is especially important for astatine; as early as 1950 it was postulated that all isotopes of the element undergo beta decay, though nuclear mass measurements indicate that 215At is in fact beta-stable, as it has the lowest mass of all isobars with A = 215. A beta decay mode has been found for all other astatine isotopes except for astatine-213, astatine-214, and astatine-216m. Astatine-210 and lighter isotopes exhibit beta plus decay (positron emission), astatine-216 and heavier isotopes exhibit beta minus decay, and astatine-212 decays via both modes, while astatine-211 undergoes electron capture. The most stable isotope is astatine-210, which has a half-life of 8.1 hours. The primary decay mode is beta plus, to the relatively long-lived (in comparison to astatine isotopes) alpha emitter polonium-210. 
In total, only five isotopes have half-lives exceeding one hour (astatine-207 to -211). The least stable ground state isotope is astatine-213, with a half-life of 125 nanoseconds. It undergoes alpha decay to the extremely long-lived bismuth-209. Astatine has 24 known nuclear isomers, which are nuclei with one or more nucleons (protons or neutrons) in an excited state. A nuclear isomer may also be called a "meta-state", meaning the system has more internal energy than the "ground state" (the state with the lowest possible internal energy), making the former likely to decay into the latter. There may be more than one isomer for each isotope. The most stable of these nuclear isomers is astatine-202m1, which has a half-life of about 3 minutes, longer than those of all the ground states bar those of isotopes 203–211 and 220. The least stable is astatine-214m1; its half-life of 265 nanoseconds is shorter than those of all ground states except that of astatine-213. Natural occurrence Astatine is the rarest naturally occurring element. The total amount of astatine in the Earth's crust (quoted mass 2.36 × 10²⁵ grams) is estimated by some to be less than one gram at any given time. Other sources estimate the amount of ephemeral astatine present on Earth at any given moment to be up to one ounce (about 28 grams). Any astatine present at the formation of the Earth has long since disappeared; the four naturally occurring isotopes (astatine-215, -217, -218 and -219) are instead continuously produced as a result of the decay of radioactive thorium and uranium ores, and trace quantities of neptunium-237. The landmass of North and South America combined, to a depth of 16 kilometers (10 miles), contains only about one trillion astatine-215 atoms at any given time (around 3.5 × 10⁻¹⁰ grams). Astatine-217 is produced via the radioactive decay of neptunium-237.
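The parenthetical mass quoted for one trillion astatine-215 atoms follows directly from Avogadro's number; a one-line check, approximating the molar mass by the mass number 215:

```python
AVOGADRO = 6.022e23        # atoms per mole
molar_mass = 215           # g/mol, approximated by the mass number of astatine-215

atoms = 1e12               # "about one trillion" atoms
mass_g = atoms * molar_mass / AVOGADRO
print(f"{mass_g:.1e} g")   # on the order of the quoted 3.5e-10 g
```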
Primordial remnants of the latter isotope—due to its relatively short half-life of 2.14 million years—are no longer present on Earth. However, trace amounts occur naturally as a product of transmutation reactions in uranium ores. Astatine-218 was the first astatine isotope discovered in nature. Astatine-219, with a half-life of 56 seconds, is the longest
can replace a hydrogen atom in benzene to form astatobenzene, C6H5At; this may be oxidized to C6H5AtCl2 by chlorine. By treating this compound with an alkaline solution of hypochlorite, C6H5AtO2 can be produced. The dipyridine-astatine(I) cation, [At(C5H5N)2]+, forms ionic compounds with perchlorate (a non-coordinating anion) and with nitrate, [At(C5H5N)2]NO3. This cation exists as a coordination complex in which two dative covalent bonds separately link the astatine(I) centre with each of the pyridine rings via their nitrogen atoms. With oxygen, there is evidence of the species AtO− and AtO+ in aqueous solution, formed by the reaction of astatine with an oxidant such as elemental bromine or (in the latter case) by sodium persulfate in a solution of perchloric acid; the latter species might also be protonated astatous acid, . The species previously thought to be has since been determined to be , a hydrolysis product of AtO+ (another such hydrolysis product being AtOOH). The well characterized anion can be obtained by, for example, the oxidation of astatine with potassium hypochlorite in a solution of potassium hydroxide. Preparation of lanthanum triastatate, La(AtO3)3, following the oxidation of astatine by a hot Na2S2O8 solution, has been reported. Further oxidation of , such as by xenon difluoride (in a hot alkaline solution) or periodate (in a neutral or alkaline solution), yields the perastatate ion ; this is only stable in neutral or alkaline solutions. Astatine is also thought to be capable of forming cations in salts with oxyanions such as iodate or dichromate; this is based on the observation that, in acidic solutions, monovalent or intermediate positive states of astatine coprecipitate with the insoluble salts of metal cations such as silver(I) iodate or thallium(I) dichromate. Astatine may form bonds to the other chalcogens; these include S7At+ and with sulfur, a coordination selenourea compound with selenium, and an astatine–tellurium colloid with tellurium.
Astatine is known to react with its lighter homologs iodine, bromine, and chlorine in the vapor state; these reactions produce diatomic interhalogen compounds with formulas AtI, AtBr, and AtCl. The first two compounds may also be produced in water – astatine reacts with iodine/iodide solution to form AtI, whereas AtBr requires (aside from astatine) an iodine/iodine monobromide/bromide solution. The excess of iodides or bromides may lead to and ions, or in a chloride solution, they may produce species like or via equilibrium reactions with the chlorides. Oxidation of the element with dichromate (in nitric acid solution) showed that adding chloride turned the astatine into a molecule likely to be either AtCl or AtOCl. Similarly, or may be produced. The polyhalides PdAtI2, CsAtI2, TlAtI2, and PbAtI are known or presumed to have been precipitated. In a plasma ion source mass spectrometer, the ions [AtI]+, [AtBr]+, and [AtCl]+ have been formed by introducing lighter halogen vapors into a helium-filled cell containing astatine, supporting the existence of stable neutral molecules in the plasma ion state. No astatine fluorides have been discovered yet. Their absence has been speculatively attributed to the extreme reactivity of such compounds, including the reaction of an initially formed fluoride with the walls of the glass container to form a non-volatile product. Thus, although the synthesis of an astatine fluoride is thought to be possible, it may require a liquid halogen fluoride solvent, as has already been used for the characterization of radon fluoride.
of iron, there are two or three atoms of oxygen (Fe2O2 and Fe2O3). As a final example: nitrous oxide is 63.3% nitrogen and 36.7% oxygen, nitric oxide is 44.05% nitrogen and 55.95% oxygen, and nitrogen dioxide is 29.5% nitrogen and 70.5% oxygen. Adjusting these figures, in nitrous oxide there is 80 g of oxygen for every 140 g of nitrogen, in nitric oxide there is about 160 g of oxygen for every 140 g of nitrogen, and in nitrogen dioxide there is 320 g of oxygen for every 140 g of nitrogen. 80, 160, and 320 form a ratio of 1:2:4. The respective formulas for these oxides are N2O, NO, and NO2. Kinetic theory of gases In the late 18th century, a number of scientists found that they could better explain the behavior of gases by describing them as collections of sub-microscopic particles and modelling their behavior using statistics and probability. Unlike Dalton's atomic theory, the kinetic theory of gases describes not how gases react chemically with each other to form compounds, but how they behave physically: diffusion, viscosity, conductivity, pressure, etc. Brownian motion In 1827, botanist Robert Brown used a microscope to look at dust grains floating in water and discovered that they moved about erratically, a phenomenon that became known as "Brownian motion". This was thought to be caused by water molecules knocking the grains about. In 1905, Albert Einstein proved the reality of these molecules and their motions by producing the first statistical physics analysis of Brownian motion. French physicist Jean Perrin used Einstein's work to experimentally determine the mass and dimensions of molecules, thereby providing physical evidence for the particle nature of matter. Discovery of the electron In 1897, J. J. Thomson discovered that cathode rays are not electromagnetic waves but made of particles that are 1,800 times lighter than hydrogen (the lightest atom). 
Thomson concluded that these particles came from the atoms within the cathode — they were subatomic particles. He called these new particles corpuscles but they were later renamed electrons. Thomson also showed that electrons were identical to particles given off by photoelectric and radioactive materials. It was quickly recognized that electrons are the particles that carry electric currents in metal wires. Thomson concluded that these electrons emerged from the very atoms of the cathode in his instruments, which meant that atoms are not indivisible as the name atomos suggests. Discovery of the nucleus J. J. Thomson thought that the negatively-charged electrons were distributed throughout the atom in a sea of positive charge that was distributed across the whole volume of the atom. This model is sometimes known as the plum pudding model. Ernest Rutherford and his colleagues Hans Geiger and Ernest Marsden came to have doubts about the Thomson model after they encountered difficulties when they tried to build an instrument to measure the charge-to-mass ratio of alpha particles (these are positively-charged particles emitted by certain radioactive substances such as radium). The alpha particles were being scattered by the air in the detection chamber, which made the measurements unreliable. Thomson had encountered a similar problem in his work on cathode rays, which he solved by creating a near-perfect vacuum in his instruments. Rutherford didn't think he'd run into this same problem because alpha particles are much heavier than electrons. According to Thomson's model of the atom, the positive charge in the atom is not concentrated enough to produce an electric field strong enough to deflect an alpha particle, and the electrons are so lightweight they should be pushed aside effortlessly by the much heavier alpha particles. Yet there was scattering, so Rutherford and his colleagues decided to investigate this scattering carefully. 
Between 1908 and 1913, Rutherford and his colleagues performed a series of experiments in which they bombarded thin foils of metal with alpha particles. They spotted alpha particles being deflected by angles greater than 90°. To explain this, Rutherford proposed that the positive charge of the atom is not distributed throughout the atom's volume as Thomson believed, but is concentrated in a tiny nucleus at the center. Only such an intense concentration of charge could produce an electric field strong enough to deflect the alpha particles as observed. Discovery of isotopes While experimenting with the products of radioactive decay, in 1913 radiochemist Frederick Soddy discovered that there appeared to be more than one type of atom at each position on the periodic table. The term isotope was coined by Margaret Todd as a suitable name for different atoms that belong to the same element. J. J. Thomson created a technique for isotope separation through his work on ionized gases, which subsequently led to the discovery of stable isotopes. Bohr model In 1913, the physicist Niels Bohr proposed a model in which the electrons of an atom were assumed to orbit the nucleus but could only do so in a finite set of orbits, and could jump between these orbits only in discrete changes of energy corresponding to absorption or radiation of a photon. This quantization was used to explain why the electrons' orbits are stable (given that normally, charges in acceleration, including circular motion, lose kinetic energy which is emitted as electromagnetic radiation, see synchrotron radiation) and why elements absorb and emit electromagnetic radiation in discrete spectra. Later in the same year Henry Moseley provided additional experimental evidence in favor of Niels Bohr's theory. 
These results refined Ernest Rutherford's and Antonius van den Broek's model, which proposed that the atom contains in its nucleus a number of positive nuclear charges that is equal to its (atomic) number in the periodic table. Until these experiments, atomic number was not known to be a physical and experimental quantity. That it is equal to the atomic nuclear charge remains the accepted atomic model today. Chemical bonds between atoms were explained by Gilbert Newton Lewis in 1916, as the interactions between their constituent electrons. As the chemical properties of the elements were known to largely repeat themselves according to the periodic law, in 1919 the American chemist Irving Langmuir suggested that this could be explained if the electrons in an atom were connected or clustered in some manner. Groups of electrons were thought to occupy a set of electron shells about the nucleus. The Bohr model of the atom was the first complete physical model of the atom. It described the overall structure of the atom, how atoms bond to each other, and predicted the spectral lines of hydrogen. Bohr's model was not perfect and was soon superseded by the more accurate Schrödinger model, but it was sufficient to evaporate any remaining doubts that matter is composed of atoms. For chemists, the idea of the atom had been a useful heuristic tool, but physicists had doubts as to whether matter really is made up of atoms as nobody had yet developed a complete physical model of the atom. The Schrödinger model The Stern–Gerlach experiment of 1922 provided further evidence of the quantum nature of atomic properties. When a beam of silver atoms was passed through a specially shaped magnetic field, the beam was split in a way correlated with the direction of an atom's angular momentum, or spin. As this spin direction is initially random, the beam would be expected to deflect in a random direction. 
Instead, the beam was split into two directional components, corresponding to the atomic spin being oriented up or down with respect to the magnetic field. In 1925, Werner Heisenberg published the first consistent mathematical formulation of quantum mechanics (matrix mechanics). One year earlier, Louis de Broglie had proposed the de Broglie hypothesis: that all particles behave like waves to some extent, and in 1926 Erwin Schrödinger used this idea to develop the Schrödinger equation, a mathematical model of the atom (wave mechanics) that described the electrons as three-dimensional waveforms rather than point particles. A consequence of using waveforms to describe particles is that it is mathematically impossible to obtain precise values for both the position and momentum of a particle at a given point in time; this became known as the uncertainty principle, formulated by Werner Heisenberg in 1927. In this concept, for a given accuracy in measuring a position one could only obtain a range of probable values for momentum, and vice versa. This model was able to explain observations of atomic behavior that previous models could not, such as certain structural and spectral patterns of atoms larger than hydrogen. Thus, the planetary model of the atom was discarded in favor of one that described atomic orbital zones around the nucleus where a given electron is most likely to be observed. Discovery of the neutron The development of the mass spectrometer allowed the mass of atoms to be measured with increased accuracy. The device uses a magnet to bend the trajectory of a beam of ions, and the amount of deflection is determined by the ratio of an atom's mass to its charge. The chemist Francis William Aston used this instrument to show that isotopes had different masses. The atomic mass of these isotopes varied by integer amounts, called the whole number rule. 
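The statement that a mass spectrometer's deflection is set by the ratio of mass to charge can be made concrete with the standard expression for a charged particle in a magnetic field, r = mv/(qB). The formula, field strength, and ion speed below are illustrative assumptions, not values from the text; the two masses are Aston's neon isotopes, neon-20 and neon-22:

```python
# A charged ion moving through a magnetic field follows a circular arc
# of radius r = m*v / (q*B), so ions of different mass are bent by
# different amounts. Field and speed here are arbitrary illustrative values.
E_CHARGE = 1.602e-19  # elementary charge, coulombs
DALTON = 1.661e-27    # kg per dalton

def bend_radius_m(mass_da: float, speed: float = 1.0e5, field_t: float = 0.1) -> float:
    return mass_da * DALTON * speed / (E_CHARGE * field_t)

r20 = bend_radius_m(20.0)  # neon-20
r22 = bend_radius_m(22.0)  # neon-22
print(r22 / r20)  # ratio ~1.1: the heavier isotope is bent on a wider arc
```

Because everything except the mass cancels in the ratio, the two isotopes separate by exactly their mass ratio, 22/20, which is how Aston resolved them on his photographic plates.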
The explanation for these different isotopes awaited the discovery of the neutron, an uncharged particle with a mass similar to the proton, by the physicist James Chadwick in 1932. Isotopes were then explained as elements with the same number of protons, but different numbers of neutrons within the nucleus. Fission, high-energy physics and condensed matter In 1938, the German chemist Otto Hahn, a student of Rutherford, directed neutrons onto uranium atoms expecting to get transuranium elements. Instead, his chemical experiments showed barium as a product. A year later, Lise Meitner and her nephew Otto Frisch verified that Hahn's result was the first experimental nuclear fission. In 1944, Hahn received the Nobel Prize in Chemistry. Despite Hahn's efforts, the contributions of Meitner and Frisch were not recognized. In the 1950s, the development of improved particle accelerators and particle detectors allowed scientists to study the impacts of atoms moving at high energies. Neutrons and protons were found to be hadrons, or composites of smaller particles called quarks. The standard model of particle physics was developed that so far has successfully explained the properties of the nucleus in terms of these sub-atomic particles and the forces that govern their interactions. Structure Subatomic particles Though the word atom originally denoted a particle that cannot be cut into smaller particles, in modern scientific usage the atom is composed of various subatomic particles. The constituent particles of an atom are the electron, the proton and the neutron. The electron is by far the least massive of these particles at 9.11 × 10⁻³¹ kg, with a negative electrical charge and a size that is too small to be measured using available techniques. It was the lightest particle with a positive rest mass measured, until the discovery of neutrino mass. Under ordinary conditions, electrons are bound to the positively charged nucleus by the attraction created from opposite electric charges. 
If an atom has more or fewer electrons than its atomic number, then it becomes respectively negatively or positively charged as a whole; a charged atom is called an ion. Electrons have been known since the late 19th century, mostly thanks to J.J. Thomson; see history of subatomic physics for details. Protons have a positive charge and a mass 1,836 times that of the electron, at 1.673 × 10⁻²⁷ kg. The number of protons in an atom is called its atomic number. Ernest Rutherford (1919) observed that nitrogen under alpha-particle bombardment ejects what appeared to be hydrogen nuclei. By 1920 he had accepted that the hydrogen nucleus is a distinct particle within the atom and named it proton. Neutrons have no electrical charge and have a free mass of 1,839 times the mass of the electron, or 1.675 × 10⁻²⁷ kg. Neutrons are the heaviest of the three constituent particles, but their mass can be reduced by the nuclear binding energy. Neutrons and protons (collectively known as nucleons) have comparable dimensions—on the order of 2.5 × 10⁻¹⁵ m—although the 'surface' of these particles is not sharply defined. The neutron was discovered in 1932 by the English physicist James Chadwick. In the Standard Model of physics, electrons are truly elementary particles with no internal structure, whereas protons and neutrons are composite particles composed of elementary particles called quarks. There are two types of quarks in atoms, each having a fractional electric charge. Protons are composed of two up quarks (each with a charge of +2/3 e) and one down quark (with a charge of −1/3 e). Neutrons consist of one up quark and two down quarks. This distinction accounts for the difference in mass and charge between the two particles. The quarks are held together by the strong interaction (or strong force), which is mediated by gluons. 
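The quark composition above accounts for the nucleon charges by simple addition of the fractional quark charges (+2/3 e for the up quark, −1/3 e for the down quark), which can be checked exactly with rational arithmetic:

```python
from fractions import Fraction

# Quark charges in units of the elementary charge e
up, down = Fraction(2, 3), Fraction(-1, 3)

proton_charge = 2 * up + down    # two up quarks, one down quark
neutron_charge = up + 2 * down   # one up quark, two down quarks

print(proton_charge, neutron_charge)  # 1 0
```

Using exact fractions rather than floats avoids any rounding ambiguity: the proton comes out at exactly +1 e and the neutron at exactly 0.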
The protons and neutrons, in turn, are held to each other in the nucleus by the nuclear force, which is a residuum of the strong force that has somewhat different range-properties (see the article on the nuclear force for more). The gluon is a member of the family of gauge bosons, which are elementary particles that mediate physical forces. Nucleus All the bound protons and neutrons in an atom make up a tiny atomic nucleus, and are collectively called nucleons. The radius of a nucleus is approximately equal to 1.07 ∛A femtometres, where A is the total number of nucleons. This is much smaller than the radius of the atom, which is on the order of 10⁵ fm. The nucleons are bound together by a short-ranged attractive potential called the residual strong force. At distances smaller than 2.5 fm this force is much more powerful than the electrostatic force that causes positively charged protons to repel each other. Atoms of the same element have the same number of protons, called the atomic number. Within a single element, the number of neutrons may vary, determining the isotope of that element. The total number of protons and neutrons determines the nuclide. The number of neutrons relative to the protons determines the stability of the nucleus, with certain isotopes undergoing radioactive decay. The proton, the electron, and the neutron are classified as fermions. Fermions obey the Pauli exclusion principle which prohibits identical fermions, such as multiple protons, from occupying the same quantum state at the same time. Thus, every proton in the nucleus must occupy a quantum state different from all other protons, and the same applies to all neutrons of the nucleus and to all electrons of the electron cloud. A nucleus that has a different number of protons than neutrons can potentially drop to a lower energy state through a radioactive decay that causes the number of protons and neutrons to more closely match. 
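The nuclear-radius rule of thumb alluded to above is commonly given as r ≈ 1.07 ∛A fm, with A the mass number; a quick sketch (the 1.07 fm coefficient is the usual empirical constant, supplied here) shows how weakly the radius grows with nucleon count:

```python
# Rule-of-thumb nuclear radius: r = r0 * A**(1/3), with r0 ~ 1.07 fm
# and A the total number of nucleons (mass number).
R0_FM = 1.07

def nuclear_radius_fm(mass_number: int) -> float:
    return R0_FM * mass_number ** (1 / 3)

for name, a in [("hydrogen-1", 1), ("carbon-12", 12), ("lead-208", 208)]:
    print(f"{name}: {nuclear_radius_fm(a):.2f} fm")
```

Even lead-208, with 208 nucleons, comes out under 7 fm, four orders of magnitude smaller than the atomic radius on the order of 10⁵ fm quoted in the text.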
As a result, atoms with matching numbers of protons and neutrons are more stable against decay, but with increasing atomic number, the mutual repulsion of the protons requires an increasing proportion of neutrons to maintain the stability of the nucleus. The number of protons and neutrons in the atomic nucleus can be modified, although this can require very high energies because of the strong force. Nuclear fusion occurs when multiple atomic particles join to form a heavier nucleus, such as through the energetic collision of two nuclei. For example, at the core of the Sun protons require energies of 3 to 10 keV to overcome their mutual repulsion—the Coulomb barrier—and fuse together into a single nucleus. Nuclear fission is the opposite process, causing a nucleus to split into two smaller nuclei—usually through radioactive decay. The nucleus can also be modified through bombardment by high energy subatomic particles or photons. If this modifies the number of protons in a nucleus, the atom changes to a different chemical element. If the mass of the nucleus following a fusion reaction is less than the sum of the masses of the separate particles, then the difference between these two values can be emitted as a type of usable energy (such as a gamma ray, or the kinetic energy of a beta particle), as described by Albert Einstein's mass-energy equivalence formula, E = mc², where m is the mass loss and c is the speed of light. This deficit is part of the binding energy of the new nucleus, and it is the non-recoverable loss of the energy that causes the fused particles to remain together in a state that requires this energy to separate. The fusion of two nuclei that creates larger nuclei with lower atomic numbers than iron and nickel—a total nucleon number of about 60—is usually an exothermic process that releases more energy than is required to bring them together. It is this energy-releasing process that makes nuclear fusion in stars a self-sustaining reaction. 
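Einstein's mass-energy relation turns a measured mass defect directly into a binding energy. A minimal sketch for deuterium, using standard particle masses in daltons (the numerical values and the 931.494 MeV-per-dalton conversion are supplied here, not taken from the text):

```python
# Mass defect of deuterium, converted to binding energy via E = m c^2.
# One dalton of mass corresponds to about 931.494 MeV of energy.
U_TO_MEV = 931.494

m_proton = 1.007276    # Da
m_neutron = 1.008665   # Da
m_deuteron = 2.013553  # Da

mass_defect = m_proton + m_neutron - m_deuteron  # ~0.00239 Da
binding_energy = mass_defect * U_TO_MEV          # ~2.22 MeV
print(f"{binding_energy:.2f} MeV")
```

The result, about 2.22 MeV, agrees with the roughly 2.23 million eV quoted later in this article as the energy needed to split a deuterium nucleus.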
For heavier nuclei, the binding energy per nucleon in the nucleus begins to decrease. That means fusion processes producing nuclei that have atomic numbers higher than about 26, and atomic masses higher than about 60, is an endothermic process. These more massive nuclei can not undergo an energy-producing fusion reaction that can sustain the hydrostatic equilibrium of a star. Electron cloud The electrons in an atom are attracted to the protons in the nucleus by the electromagnetic force. This force binds the electrons inside an electrostatic potential well surrounding the smaller nucleus, which means that an external source of energy is needed for the electron to escape. The closer an electron is to the nucleus, the greater the attractive force. Hence electrons bound near the center of the potential well require more energy to escape than those at greater separations. Electrons, like other particles, have properties of both a particle and a wave. The electron cloud is a region inside the potential well where each electron forms a type of three-dimensional standing wave—a wave form that does not move relative to the nucleus. This behavior is defined by an atomic orbital, a mathematical function that characterises the probability that an electron appears to be at a particular location when its position is measured. Only a discrete (or quantized) set of these orbitals exist around the nucleus, as other possible wave patterns rapidly decay into a more stable form. Orbitals can have one or more ring or node structures, and differ from each other in size, shape and orientation. Each atomic orbital corresponds to a particular energy level of the electron. The electron can change its state to a higher energy level by absorbing a photon with sufficient energy to boost it into the new quantum state. Likewise, through spontaneous emission, an electron in a higher energy state can drop to a lower energy state while radiating the excess energy as a photon. 
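The discrete electron energy levels described above can be made concrete with the Bohr-model formula for hydrogen, E_n = −13.6 eV / n² (a standard result, not stated explicitly in the text); a transition between two levels emits a photon carrying exactly the energy difference:

```python
# Hydrogen energy levels in the Bohr picture: E_n = -13.6 eV / n^2.
# A photon emitted in a downward transition carries the energy difference.
RYDBERG_EV = 13.6
HC_EV_NM = 1239.84  # photon energy (eV) times wavelength (nm)

def level_ev(n: int) -> float:
    return -RYDBERG_EV / n ** 2

photon_ev = level_ev(3) - level_ev(2)  # n = 3 -> 2 transition, ~1.89 eV
wavelength_nm = HC_EV_NM / photon_ev   # ~656 nm, the red Balmer line
print(f"{photon_ev:.2f} eV, {wavelength_nm:.0f} nm")
```

Because only these discrete differences are possible, hydrogen emits a line spectrum rather than a continuum, which is exactly the observation Bohr's quantization was introduced to explain.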
These characteristic energy values, defined by the differences in the energies of the quantum states, are responsible for atomic spectral lines. The amount of energy needed to remove or add an electron—the electron binding energy—is far less than the binding energy of nucleons. For example, it requires only 13.6 eV to strip a ground-state electron from a hydrogen atom, compared to 2.23 million eV for splitting a deuterium nucleus. Atoms are electrically neutral if they have an equal number of protons and electrons. Atoms that have either a deficit or a surplus of electrons are called ions. Electrons that are farthest from the nucleus may be transferred to other nearby atoms or shared between atoms. By this mechanism, atoms are able to bond into molecules and other types of chemical compounds like ionic and covalent network crystals. Properties Nuclear properties By definition, any two atoms with an identical number of protons in their nuclei belong to the same chemical element. Atoms with equal numbers of protons but a different number of neutrons are different isotopes of the same element. For example, all hydrogen atoms contain exactly one proton, but isotopes exist with no neutrons (hydrogen-1, by far the most common form, also called protium), one neutron (deuterium), two neutrons (tritium) and more than two neutrons. The known elements form a set of atomic numbers, from the single-proton element hydrogen up to the 118-proton element oganesson. All known isotopes of elements with atomic numbers greater than 82 are radioactive, although the radioactivity of element 83 (bismuth) is so slight as to be practically negligible. About 339 nuclides occur naturally on Earth, of which 252 (about 74%) have not been observed to decay, and are referred to as "stable isotopes". Only 90 nuclides are stable theoretically, while another 162 (bringing the total to 252) have not been observed to decay, even though in theory it is energetically possible. 
These are also formally classified as "stable". An additional 34 radioactive nuclides have half-lives longer than 100 million years, and are long-lived enough to have been present since the birth of the Solar System. This collection of 286 nuclides is known as the primordial nuclides. Finally, an additional 53 short-lived nuclides are known to occur naturally, as daughter products of primordial nuclide decay (such as radium from uranium), or as products of natural energetic processes on Earth, such as cosmic ray bombardment (for example, carbon-14). For 80 of the chemical elements, at least one stable isotope exists. As a rule, there is only a handful of stable isotopes for each of these elements, the average being 3.2 stable isotopes per element. Twenty-six elements have only a single stable isotope, while the largest number of stable isotopes observed for any element is ten, for the element tin. Elements 43, 61, and all elements numbered 83 or higher have no stable isotopes. Stability of isotopes is affected by the ratio of protons to neutrons, and also by the presence of certain "magic numbers" of neutrons or protons that represent closed and filled quantum shells. These quantum shells correspond to a set of energy levels within the shell model of the nucleus; filled shells, such as the filled shell of 50 protons for tin, confer unusual stability on the nuclide. Of the 252 known stable nuclides, only four have both an odd number of protons and odd number of neutrons: hydrogen-2 (deuterium), lithium-6, boron-10 and nitrogen-14. Also, only four naturally occurring, radioactive odd-odd nuclides have a half-life over a billion years: potassium-40, vanadium-50, lanthanum-138 and tantalum-180m. Most odd-odd nuclei are highly unstable with respect to beta decay, because the decay products are even-even, and are therefore more strongly bound, due to nuclear pairing effects. Mass The large majority of an atom's mass comes from the protons and neutrons that make it up. 
The total number of these particles (called "nucleons") in a given atom is called the mass number. It is a positive integer and dimensionless (instead of having dimension of mass), because it expresses a count. An example of use of a mass number is "carbon-12," which has 12 nucleons (six protons and six neutrons). The actual mass of an atom at rest is often expressed in daltons (Da), also called the unified atomic mass unit (u). This unit is defined as a twelfth of the mass of a free neutral atom of carbon-12, which is approximately 1.66 × 10⁻²⁷ kg. Hydrogen-1 (the lightest isotope of hydrogen which is also the nuclide with the lowest mass) has an atomic weight of 1.007825 Da. The value of this number is called the atomic mass. A given atom has an atomic mass approximately equal (within 1%) to its mass number times the atomic mass unit (for example the mass of a nitrogen-14 is roughly 14 Da), but this number will not be exactly an integer except (by definition) in the case of carbon-12. The heaviest stable atom is lead-208, with a mass of about 207.98 Da. As even the most massive atoms are far too light to work with directly, chemists instead use the unit of moles. One mole of atoms of any element always has the same number of atoms (about 6.022 × 10²³). This number was chosen so that if an element has an atomic mass of 1 u, a mole of atoms of that element has a mass close to one gram. Because of the definition of the unified atomic mass unit, each carbon-12 atom has an atomic mass of exactly 12 Da, and so a mole of carbon-12 atoms weighs exactly 0.012 kg. Shape and size Atoms lack a well-defined outer boundary, so their dimensions are usually described in terms of an atomic radius.
from the very atoms of the cathode in his instruments, which meant that atoms are not indivisible as the name atomos suggests. Discovery of the nucleus J. J. Thomson thought that the negatively-charged electrons were distributed throughout the atom in a sea of positive charge that was distributed across the whole volume of the atom. This model is sometimes known as the plum pudding model. Ernest Rutherford and his colleagues Hans Geiger and Ernest Marsden came to have doubts about the Thomson model after they encountered difficulties when they tried to build an instrument to measure the charge-to-mass ratio of alpha particles (these are positively-charged particles emitted by certain radioactive substances such as radium). The alpha particles were being scattered by the air in the detection chamber, which made the measurements unreliable. Thomson had encountered a similar problem in his work on cathode rays, which he solved by creating a near-perfect vacuum in his instruments. Rutherford didn't think he'd run into this same problem because alpha particles are much heavier than electrons. According to Thomson's model of the atom, the positive charge in the atom is not concentrated enough to produce an electric field strong enough to deflect an alpha particle, and the electrons are so lightweight they should be pushed aside effortlessly by the much heavier alpha particles. Yet there was scattering, so Rutherford and his colleagues decided to investigate this scattering carefully. Between 1908 and 1913, Rutheford and his colleagues performed a series of experiments in which they bombarded thin foils of metal with alpha particles. They spotted alpha particles being deflected by angles greater than 90°. To explain this, Rutherford proposed that the positive charge of the atom is not distributed throughout the atom's volume as Thomson believed, but is concentrated in a tiny nucleus at the center. 
Only such an intense concentration of charge could produce an electric field strong enough to deflect the alpha particles as observed. Discovery of isotopes While experimenting with the products of radioactive decay, in 1913 radiochemist Frederick Soddy discovered that there appeared to be more than one type of atom at each position on the periodic table. The term isotope was coined by Margaret Todd as a suitable name for different atoms that belong to the same element. J. J. Thomson created a technique for isotope separation through his work on ionized gases, which subsequently led to the discovery of stable isotopes. Bohr model In 1913, the physicist Niels Bohr proposed a model in which the electrons of an atom were assumed to orbit the nucleus but could only do so in a finite set of orbits, and could jump between these orbits only in discrete changes of energy corresponding to absorption or radiation of a photon. This quantization was used to explain why the electrons' orbits are stable (given that normally, charges in acceleration, including circular motion, lose kinetic energy which is emitted as electromagnetic radiation, see synchrotron radiation) and why elements absorb and emit electromagnetic radiation in discrete spectra. Later in the same year Henry Moseley provided additional experimental evidence in favor of Niels Bohr's theory. These results refined Ernest Rutherford's and Antonius van den Broek's model, which proposed that the atom contains in its nucleus a number of positive nuclear charges that is equal to its (atomic) number in the periodic table. Until these experiments, atomic number was not known to be a physical and experimental quantity. That it is equal to the atomic nuclear charge remains the accepted atomic model today. Chemical bonds between atoms were explained by Gilbert Newton Lewis in 1916, as the interactions between their constituent electrons. 
As the chemical properties of the elements were known to largely repeat themselves according to the periodic law, in 1919 the American chemist Irving Langmuir suggested that this could be explained if the electrons in an atom were connected or clustered in some manner. Groups of electrons were thought to occupy a set of electron shells about the nucleus. The Bohr model of the atom was the first complete physical model of the atom. It described the overall structure of the atom, how atoms bond to each other, and predicted the spectral lines of hydrogen. Bohr's model was not perfect and was soon superseded by the more accurate Schrödinger model, but it was sufficient to evaporate any remaining doubts that matter is composed of atoms. For chemists, the idea of the atom had been a useful heuristic tool, but physicists had doubts as to whether matter really is made up of atoms as nobody had yet developed a complete physical model of the atom. The Schrödinger model The Stern–Gerlach experiment of 1922 provided further evidence of the quantum nature of atomic properties. When a beam of silver atoms was passed through a specially shaped magnetic field, the beam was split in a way correlated with the direction of an atom's angular momentum, or spin. As this spin direction is initially random, the beam would be expected to deflect in a random direction. Instead, the beam was split into two directional components, corresponding to the atomic spin being oriented up or down with respect to the magnetic field. In 1925, Werner Heisenberg published the first consistent mathematical formulation of quantum mechanics (matrix mechanics). One year earlier, Louis de Broglie had proposed the de Broglie hypothesis: that all particles behave like waves to some extent, and in 1926 Erwin Schrödinger used this idea to develop the Schrödinger equation, a mathematical model of the atom (wave mechanics) that described the electrons as three-dimensional waveforms rather than point particles. 
A consequence of using waveforms to describe particles is that it is mathematically impossible to obtain precise values for both the position and momentum of a particle at a given point in time; this became known as the uncertainty principle, formulated by Werner Heisenberg in 1927. In this concept, for a given accuracy in measuring a position one could only obtain a range of probable values for momentum, and vice versa. This model was able to explain observations of atomic behavior that previous models could not, such as certain structural and spectral patterns of atoms larger than hydrogen. Thus, the planetary model of the atom was discarded in favor of one that described atomic orbital zones around the nucleus where a given electron is most likely to be observed. Discovery of the neutron The development of the mass spectrometer allowed the mass of atoms to be measured with increased accuracy. The device uses a magnet to bend the trajectory of a beam of ions, and the amount of deflection is determined by the ratio of an atom's mass to its charge. The chemist Francis William Aston used this instrument to show that isotopes had different masses. The atomic mass of these isotopes varied by integer amounts, called the whole number rule. The explanation for these different isotopes awaited the discovery of the neutron, an uncharged particle with a mass similar to the proton, by the physicist James Chadwick in 1932. Isotopes were then explained as elements with the same number of protons, but different numbers of neutrons within the nucleus. Fission, high-energy physics and condensed matter In 1938, the German chemist Otto Hahn, a student of Rutherford, directed neutrons onto uranium atoms expecting to get transuranium elements. Instead, his chemical experiments showed barium as a product. A year later, Lise Meitner and her nephew Otto Frisch verified that Hahn's result were the first experimental nuclear fission. In 1944, Hahn received the Nobel Prize in Chemistry. 
Despite Hahn's efforts, the contributions of Meitner and Frisch were not recognized. In the 1950s, the development of improved particle accelerators and particle detectors allowed scientists to study the impacts of atoms moving at high energies. Neutrons and protons were found to be hadrons, or composites of smaller particles called quarks. The standard model of particle physics was developed that so far has successfully explained the properties of the nucleus in terms of these sub-atomic particles and the forces that govern their interactions. Structure Subatomic particles Though the word atom originally denoted a particle that cannot be cut into smaller particles, in modern scientific usage the atom is composed of various subatomic particles. The constituent particles of an atom are the electron, the proton and the neutron. The electron is by far the least massive of these particles at , with a negative electrical charge and a size that is too small to be measured using available techniques. It was the lightest particle with a positive rest mass measured, until the discovery of neutrino mass. Under ordinary conditions, electrons are bound to the positively charged nucleus by the attraction created from opposite electric charges. If an atom has more or fewer electrons than its atomic number, then it becomes respectively negatively or positively charged as a whole; a charged atom is called an ion. Electrons have been known since the late 19th century, mostly thanks to J.J. Thomson; see history of subatomic physics for details. Protons have a positive charge and a mass 1,836 times that of the electron, at . The number of protons in an atom is called its atomic number. Ernest Rutherford (1919) observed that nitrogen under alpha-particle bombardment ejects what appeared to be hydrogen nuclei. By 1920 he had accepted that the hydrogen nucleus is a distinct particle within the atom and named it proton. 
Neutrons have no electrical charge and have a free mass of 1,839 times the mass of the electron, or 1.6749×10^-27 kg. Neutrons are the heaviest of the three constituent particles, but their mass can be reduced by the nuclear binding energy. Neutrons and protons (collectively known as nucleons) have comparable dimensions—on the order of 2.5×10^-15 m—although the 'surface' of these particles is not sharply defined. The neutron was discovered in 1932 by the English physicist James Chadwick. In the Standard Model of physics, electrons are truly elementary particles with no internal structure, whereas protons and neutrons are composite particles composed of elementary particles called quarks. There are two types of quarks in atoms, each having a fractional electric charge. Protons are composed of two up quarks (each with charge +2/3) and one down quark (with a charge of −1/3). Neutrons consist of one up quark and two down quarks. This distinction accounts for the difference in mass and charge between the two particles. The quarks are held together by the strong interaction (or strong force), which is mediated by gluons. The protons and neutrons, in turn, are held to each other in the nucleus by the nuclear force, which is a residuum of the strong force that has somewhat different range-properties (see the article on the nuclear force for more). The gluon is a member of the family of gauge bosons, which are elementary particles that mediate physical forces. Nucleus All the bound protons and neutrons in an atom make up a tiny atomic nucleus, and are collectively called nucleons. The radius of a nucleus is approximately equal to 1.07 × A^(1/3) femtometres, where A is the total number of nucleons. This is much smaller than the radius of the atom, which is on the order of 10^5 fm. The nucleons are bound together by a short-ranged attractive potential called the residual strong force.
At distances smaller than 2.5 fm this force is much more powerful than the electrostatic force that causes positively charged protons to repel each other. Atoms of the same element have the same number of protons, called the atomic number. Within a single element, the number of neutrons may vary, determining the isotope of that element. The total number of protons and neutrons determines the nuclide. The number of neutrons relative to the protons determines the stability of the nucleus, with certain isotopes undergoing radioactive decay. The proton, the electron, and the neutron are classified as fermions. Fermions obey the Pauli exclusion principle which prohibits identical fermions, such as multiple protons, from occupying the same quantum state at the same time. Thus, every proton in the nucleus must occupy a quantum state different from all other protons, and the same applies to all neutrons of the nucleus and to all electrons of the electron cloud. A nucleus that has a different number of protons than neutrons can potentially drop to a lower energy state through a radioactive decay that causes the number of protons and neutrons to more closely match. As a result, atoms with matching numbers of protons and neutrons are more stable against decay, but with increasing atomic number, the mutual repulsion of the protons requires an increasing proportion of neutrons to maintain the stability of the nucleus. The number of protons and neutrons in the atomic nucleus can be modified, although this can require very high energies because of the strong force. Nuclear fusion occurs when multiple atomic particles join to form a heavier nucleus, such as through the energetic collision of two nuclei. For example, at the core of the Sun protons require energies of 3 to 10 keV to overcome their mutual repulsion—the Coulomb barrier—and fuse together into a single nucleus.
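The scale difference between nucleus and atom noted above can be made concrete with a short sketch. It assumes the standard empirical relation for the nuclear radius, r ≈ r0 × A^(1/3) fm with r0 ≈ 1.07 fm (a textbook coefficient, not stated numerically in this article), where A is the nucleon count:

```python
# Empirical nuclear radius: r ~ r0 * A^(1/3), with r0 ~ 1.07 fm
# (r0 is a standard textbook value, assumed here).
def nuclear_radius_fm(A: int, r0: float = 1.07) -> float:
    return r0 * A ** (1 / 3)

for name, A in [("hydrogen-1", 1), ("carbon-12", 12), ("uranium-238", 238)]:
    print(f"{name}: ~{nuclear_radius_fm(A):.2f} fm")

# The atom itself has a radius on the order of 1e5 fm, so even a
# heavy nucleus is roughly four orders of magnitude smaller.
ratio = 1e5 / nuclear_radius_fm(238)
print(f"atomic/nuclear radius ratio for uranium: ~{ratio:.0f}")
```

Note how slowly the radius grows: multiplying the nucleon count by 238 only increases the radius about sixfold, reflecting the roughly constant density of nuclear matter.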
Nuclear fission is the opposite process, causing a nucleus to split into two smaller nuclei—usually through radioactive decay. The nucleus can also be modified through bombardment by high energy subatomic particles or photons. If this modifies the number of protons in a nucleus, the atom changes to a different chemical element. If the mass of the nucleus following a fusion reaction is less than the sum of the masses of the separate particles, then the difference between these two values can be emitted as a type of usable energy (such as a gamma ray, or the kinetic energy of a beta particle), as described by Albert Einstein's mass-energy equivalence formula, E = mc^2, where m is the mass loss and c is the speed of light. This deficit is part of the binding energy of the new nucleus, and it is the non-recoverable loss of the energy that causes the fused particles to remain together in a state that requires this energy to separate. The fusion of two nuclei that creates a larger nucleus with a lower atomic number than iron and nickel—a total nucleon number of about 60—is usually an exothermic process that releases more energy than is required to bring them together. It is this energy-releasing process that makes nuclear fusion in stars a self-sustaining reaction. For heavier nuclei, the binding energy per nucleon in the nucleus begins to decrease. That means fusion processes producing nuclei that have atomic numbers higher than about 26, and atomic masses higher than about 60, are endothermic. These more massive nuclei cannot undergo an energy-producing fusion reaction that can sustain the hydrostatic equilibrium of a star. Electron cloud The electrons in an atom are attracted to the protons in the nucleus by the electromagnetic force. This force binds the electrons inside an electrostatic potential well surrounding the smaller nucleus, which means that an external source of energy is needed for the electron to escape.
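The mass-energy bookkeeping described above can be checked with a quick calculation for the simplest compound nucleus, the deuteron. The particle masses and the 931.494 MeV per atomic mass unit conversion factor are standard reference values assumed here, not taken from this article:

```python
# Mass defect of the deuteron, converted to binding energy via E = m*c^2.
# Masses in unified atomic mass units (standard reference values, assumed):
m_proton = 1.007276   # u
m_neutron = 1.008665  # u
m_deuteron = 2.013553 # u

delta_m = m_proton + m_neutron - m_deuteron  # mass lost on fusion
E_MeV = delta_m * 931.494                    # 1 u * c^2 ~ 931.494 MeV
print(f"mass defect: {delta_m:.6f} u -> binding energy ~{E_MeV:.2f} MeV")
```

The result, about 2.22 MeV, is the energy that must be supplied to split a deuteron back into a free proton and neutron.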
The closer an electron is to the nucleus, the greater the attractive force. Hence electrons bound near the center of the potential well require more energy to escape than those at greater separations. Electrons, like other particles, have properties of both a particle and a wave. The electron cloud is a region inside the potential well where each electron forms a type of three-dimensional standing wave—a wave form that does not move relative to the nucleus. This behavior is defined by an atomic orbital, a mathematical function that characterises the probability that an electron appears to be at a particular location when its position is measured. Only a discrete (or quantized) set of these orbitals exist around the nucleus, as other possible wave patterns rapidly decay into a more stable form. Orbitals can have one or more ring or node structures, and differ from each other in size, shape and orientation. Each atomic orbital corresponds to a particular energy level of the electron. The electron can change its state to a higher energy level by absorbing a photon with sufficient energy to boost it into the new quantum state. Likewise, through spontaneous emission, an electron in a higher energy state can drop to a lower energy state while radiating the excess energy as a photon. These characteristic energy values, defined by the differences in the energies of the quantum states, are responsible for atomic spectral lines. The amount of energy needed to remove or add an electron—the electron binding energy—is far less than the binding energy of nucleons. For example, it requires only 13.6 eV to strip a ground-state electron from a hydrogen atom, compared to 2.23 million eV for splitting a deuterium nucleus. Atoms are electrically neutral if they have an equal number of protons and electrons. Atoms that have either a deficit or a surplus of electrons are called ions. 
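The quantized levels and spectral lines just described can be illustrated with the Bohr-model formula for hydrogen, E_n = -13.6 eV / n^2 (a standard approximation, consistent with the 13.6 eV ionization energy quoted above but not spelled out in this article):

```python
# Hydrogen energy levels in the Bohr model: E_n = -13.6 eV / n^2.
# The n=1 level gives the 13.6 eV ground-state binding energy.
def level_eV(n: int) -> float:
    return -13.6 / n ** 2

# A spectral line's photon energy is the difference between two levels,
# e.g. the n=2 -> n=1 transition (Lyman-alpha):
photon = level_eV(2) - level_eV(1)
print(f"Lyman-alpha photon energy: ~{photon:.1f} eV")
```

Each such level difference corresponds to one characteristic line in hydrogen's emission or absorption spectrum.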
Electrons that are farthest from the nucleus may be transferred to other nearby atoms or shared between atoms. By this
can be found in many rangeland areas elsewhere. Land incapable of being cultivated for the production of crops can sometimes be converted to arable land. New arable land makes more food and can reduce starvation. This outcome also makes a country more self-sufficient and politically independent, because food importation is reduced. Making non-arable land arable often involves digging new irrigation canals and new wells, aqueducts, desalination plants, planting trees for shade in the desert, hydroponics, fertilizer, nitrogen fertilizer, pesticides, reverse osmosis water processors, PET film insulation or other insulation against heat and cold, digging ditches and hills for protection against the wind, and installing greenhouses with internal light and heat for protection against the cold outside and to provide light in cloudy areas. Such modifications are often prohibitively expensive. An alternative is the seawater greenhouse, which desalinates water through evaporation and condensation using solar energy as the only energy input. This technology is optimized to grow crops on desert land close to the sea. Such artifices do not, however, make the land truly arable: rock remains rock, and soil too shallow to be turned is still not tillable; cultivation of this kind amounts to open-air hydroponics without recycled water. These measures are stopgaps: they have limited duration and tend to accumulate trace materials in the soil that, there or elsewhere, cause deoxygenation. The use of vast amounts of fertilizer may have unintended consequences for the environment, devastating rivers, waterways, and river mouths through the accumulation of non-degradable toxins and nitrogen-bearing molecules that remove oxygen and cause anaerobic processes to form.
Examples of infertile non-arable land being turned into fertile arable land include: Aran Islands: These islands off the west coast of Ireland (not to be confused with the Isle of Arran in Scotland's Firth of Clyde) were unsuitable for arable farming because they were too rocky. The people covered the islands with a shallow layer of seaweed and sand from the ocean. Today, crops are grown there, even though the islands are still considered non-arable. Israel: The construction of desalination plants along Israel's coast allowed agriculture in some areas that were formerly desert. The desalination plants, which remove the salt from ocean water, have produced a new source of water for farming, drinking, and washing. Slash and burn agriculture uses nutrients in wood ash, but these expire within a few years. Terra preta, fertile tropical soils produced by adding charcoal. Examples of fertile arable land being turned into infertile land include: Droughts such as the "Dust Bowl" of the Great Depression in the US turned farmland into desert. Each year, arable land is lost due to desertification and human-induced erosion. Improper irrigation of farmland can wick the sodium, calcium, and magnesium from the soil and water to the surface. This process steadily concentrates salt in the root zone, decreasing
synthesized alumina by boiling clay in sulfuric acid and subsequently adding potash. Attempts to produce aluminium metal date back to 1760. The first successful attempt, however, was completed in 1824 by Danish physicist and chemist Hans Christian Ørsted. He reacted anhydrous aluminium chloride with potassium amalgam, yielding a lump of metal looking similar to tin. He presented his results and demonstrated a sample of the new metal in 1825. In 1827, German chemist Friedrich Wöhler repeated Ørsted's experiments but did not identify any aluminium. (The reason for this inconsistency was only discovered in 1921.) He conducted a similar experiment in the same year by mixing anhydrous aluminium chloride with potassium and produced a powder of aluminium. In 1845, he was able to produce small pieces of the metal and described some physical properties of this metal. For many years thereafter, Wöhler was credited as the discoverer of aluminium. As Wöhler's method could not yield great quantities of aluminium, the metal remained rare; its cost exceeded that of gold. The first industrial production of aluminium was established in 1856 by French chemist Henri Etienne Sainte-Claire Deville and companions. Deville had discovered that aluminium trichloride could be reduced by sodium, which was more convenient and less expensive than potassium, which Wöhler had used. Even then, aluminium was still not of great purity, and the aluminium produced differed in properties from sample to sample. The first industrial large-scale production method was independently developed in 1886 by French engineer Paul Héroult and American engineer Charles Martin Hall; it is now known as the Hall–Héroult process. The Hall–Héroult process converts alumina into metal. Austrian chemist Carl Joseph Bayer discovered a way of purifying bauxite to yield alumina, now known as the Bayer process, in 1889. Modern production of the aluminium metal is based on the Bayer and Hall–Héroult processes.
Prices of aluminium dropped and aluminium became widely used in jewelry, everyday items, eyeglass frames, optical instruments, tableware, and foil in the 1890s and early 20th century. Aluminium's ability to form hard yet light alloys with other metals provided the metal with many uses at the time. During World War I, major governments demanded large shipments of aluminium for light, strong airframes; during World War II, demand by major governments for aviation was even higher. By the mid-20th century, aluminium had become a part of everyday life and an essential component of housewares. In 1954, production of aluminium surpassed that of copper, historically second in production only to iron, making it the most produced non-ferrous metal. During the mid-20th century, aluminium emerged as a civil engineering material, with building applications in both basic construction and interior finish work, and increasingly being used in military engineering, for both airplanes and land armor vehicle engines. Earth's first artificial satellite, launched in 1957, consisted of two separate aluminium semi-spheres joined together, and all subsequent space vehicles have used aluminium to some extent. The aluminium can was invented in 1956 and employed as a container for drinks in 1958. Throughout the 20th century, the production of aluminium rose rapidly: while the world production of aluminium in 1900 was 6,800 metric tons, the annual production first exceeded 100,000 metric tons in 1916; 1,000,000 tons in 1941; 10,000,000 tons in 1971. In the 1970s, the increased demand for aluminium made it an exchange commodity; it entered the London Metal Exchange, the oldest industrial metal exchange in the world, in 1978. The output continued to grow: the annual production of aluminium exceeded 50,000,000 metric tons in 2013. The real price for aluminium declined from $14,000 per metric ton in 1900 to $2,340 in 1948 (in 1998 United States dollars).
Extraction and processing costs were lowered by technological progress and economies of scale. However, the need to exploit lower-grade, poorer quality deposits and fast-increasing input costs (above all, energy) increased the net cost of aluminium; the real price began to grow in the 1970s with the rise of energy cost. Production moved from the industrialized countries to countries where production was cheaper. Production costs in the late 20th century changed because of advances in technology, lower energy prices, exchange rates of the United States dollar, and alumina prices. The BRIC countries' combined share in primary production and primary consumption grew substantially in the first decade of the 21st century. China is accumulating an especially large share of the world's production thanks to an abundance of resources, cheap energy, and governmental stimuli; it also increased its consumption share from 2% in 1972 to 40% in 2010. In the United States, Western Europe, and Japan, most aluminium was consumed in transportation, engineering, construction, and packaging. In 2021, prices for industrial metals such as aluminium soared to near-record levels as energy shortages in China drove up costs for electricity. Etymology The names aluminium and aluminum are derived from the word alumine, an obsolete term for alumina, a naturally occurring oxide of aluminium. Alumine was borrowed from French, which in turn derived it from alumen, the classical Latin name for alum, the mineral from which it was collected. The Latin word alumen stems from the Proto-Indo-European root *alu- meaning "bitter" or "beer". Coinage British chemist Humphry Davy, who performed a number of experiments aimed at isolating the metal, is credited as the person who named the element.
The first name proposed for the metal to be isolated from alum was alumium, which Davy suggested in an 1808 article on his electrochemical research, published in Philosophical Transactions of the Royal Society. It appeared that the name was coined from the English word alum and the Latin suffix -ium; however, it was customary at the time that the elements should have names originating in the Latin language, and as such, this name was not adopted universally. This name was criticized by contemporary chemists from France, Germany, and Sweden, who insisted the metal should be named for the oxide, alumina, from which it would be isolated. The English word alum does not directly reference the Latin language, whereas alumine/alumina easily references the Latin word alumen (upon declension, alumen changes to alumin-). One example was a writing in French by Swedish chemist Jöns Jacob Berzelius titled Essai sur la Nomenclature chimique, published in July 1811; in this essay, among other things, Berzelius used the name aluminium for the element that would be synthesized from alum. (Another article in the same journal issue also refers to the metal whose oxide forms the basis of sapphire as aluminium.) A January 1811 summary of one of Davy's lectures at the Royal Society mentioned the name aluminium as a possibility. The following year, Davy published a chemistry textbook in which he used the spelling aluminum. Both spellings have coexisted since; however, their usage has split by region: aluminum is the primary spelling in the United States and Canada while aluminium is in the rest of the English-speaking world. Spelling In 1812, British scientist Thomas Young wrote an anonymous review of Davy's book, in which he proposed the name aluminium instead of aluminum, which he felt had a "less classical sound". This name did catch on: while the aluminum spelling was occasionally used in Britain, the American scientific language used aluminium from the start.
Most scientists throughout the world used aluminium in the 19th century, and it was entrenched in many other European languages, such as French, German, or Dutch. In 1828, American lexicographer Noah Webster used exclusively the aluminum spelling in his American Dictionary of the English Language. In the 1830s, the aluminum spelling started to gain usage in the United States; by the 1860s, it had become the more common spelling there outside science. In 1892, Hall used the aluminum spelling in his advertising handbill for his new electrolytic method of producing the metal, despite his constant use of the aluminium spelling in all the patents he filed between 1886 and 1903. It remains unknown whether this spelling was introduced by mistake or intentionally; however, Hall preferred aluminum since its introduction because it resembled platinum, the name of a prestigious metal. By 1890, both spellings had been common in the U.S. overall, the aluminium spelling being slightly more common; by 1895, the situation had reversed; by 1900, aluminum had become twice as common as aluminium; during the following decade, the aluminum spelling dominated American usage. In 1925, the American Chemical Society adopted this spelling. The International Union of Pure and Applied Chemistry (IUPAC) adopted aluminium as the standard international name for the element in 1990. In 1993, they recognized aluminum as an acceptable variant; the most recent 2005 edition of the IUPAC nomenclature of inorganic chemistry acknowledges this spelling as well. IUPAC official publications use the aluminium spelling as primary but list both where appropriate. Production and refinement The production of aluminium starts with the extraction of bauxite rock from the ground. The bauxite is processed and transformed using the Bayer process into alumina, which is then processed using the Hall–Héroult process, resulting in the final aluminium metal.
Aluminium production is highly energy-consuming, and so the producers tend to locate smelters in places where electric power is both plentiful and inexpensive. As of 2019, the world's largest smelters of aluminium are located in China, India, Russia, Canada, and the United Arab Emirates, while China is by far the top producer of aluminium with a world share of fifty-five percent. According to the International Resource Panel's Metal Stocks in Society report, the global per capita stock of aluminium in use in society (i.e. in cars, buildings, electronics, etc.) is . Much of this is in more-developed countries ( per capita) rather than less-developed countries ( per capita). Bayer process Bauxite is converted to alumina by the Bayer process. Bauxite is blended for uniform composition and then is ground. The resulting slurry is mixed with a hot solution of sodium hydroxide; the mixture is then treated in a digester vessel at a pressure well above atmospheric, dissolving the aluminium hydroxide in bauxite while converting impurities into relatively insoluble compounds: Al(OH)3 + Na+ + OH− → Na[Al(OH)4]. After this reaction, the slurry is at a temperature above its atmospheric boiling point. It is cooled by removing steam as pressure is reduced. The bauxite residue is separated from the solution and discarded. The solution, free of solids, is seeded with small crystals of aluminium hydroxide; this causes decomposition of the [Al(OH)4]− ions to aluminium hydroxide. After about half of the aluminium has precipitated, the mixture is sent to classifiers. Small crystals of aluminium hydroxide are collected to serve as seeding agents; coarse particles are converted to alumina by heating; the excess solution is removed by evaporation, (if needed) purified, and recycled. Hall–Héroult process The conversion of alumina to aluminium metal is achieved by the Hall–Héroult process.
In this energy-intensive process, a solution of alumina in a molten mixture of cryolite (Na3AlF6) with calcium fluoride is electrolyzed to produce metallic aluminium. The liquid aluminium metal sinks to the bottom of the solution and is tapped off, and usually cast into large blocks called aluminium billets for further processing. Anodes of the electrolysis cell are made of carbon—the most resistant material against fluoride corrosion—and are either baked in place during the process or prebaked. The former, also called Söderberg anodes, are less power-efficient and fumes released during baking are costly to collect, which is why they are being replaced by prebaked anodes even though the Söderberg design saves the power, energy, and labor needed for prebaking. Carbon for anodes should be preferably pure so that neither aluminium nor the electrolyte is contaminated with ash. Despite carbon's resistance to corrosion, it is still consumed at a rate of 0.4–0.5 kg per kilogram of produced aluminium. Cathodes are made of anthracite; high purity for them is not required because impurities leach only very slowly. The cathode is consumed at a rate of 0.02–0.04 kg per kilogram of produced aluminium. A cell is usually terminated after 2–6 years following a failure of the cathode. The Hall–Héroult process produces aluminium with a purity of above 99%. Further purification can be done by the Hoopes process. This process involves the electrolysis of molten aluminium with a sodium, barium, and aluminium fluoride electrolyte. The resulting aluminium has a purity of 99.99%. Electric power represents about 20 to 40% of the cost of producing aluminium, depending on the location of the smelter. Aluminium production consumes roughly 5% of electricity generated in the United States. Because of this, alternatives to the Hall–Héroult process have been researched, but none has turned out to be economically feasible.
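The anode and cathode consumption rates quoted above imply a substantial carbon cost per unit of metal. A rough bookkeeping sketch, using only the ranges given in the text:

```python
# Carbon consumed per metric ton of aluminium in a Hall-Heroult cell,
# from the rates quoted above: anodes 0.4-0.5 kg and cathodes
# 0.02-0.04 kg of carbon per kg of aluminium produced.
def carbon_per_tonne_al(anode_kg_per_kg=(0.4, 0.5), cathode_kg_per_kg=(0.02, 0.04)):
    lo = (anode_kg_per_kg[0] + cathode_kg_per_kg[0]) * 1000
    hi = (anode_kg_per_kg[1] + cathode_kg_per_kg[1]) * 1000
    return lo, hi  # kg of carbon per metric ton of aluminium

lo, hi = carbon_per_tonne_al()
print(f"carbon consumed per tonne of aluminium: ~{lo:.0f}-{hi:.0f} kg")
```

So each tonne of metal consumes very roughly half a tonne of carbon, almost all of it at the anode, which is one reason anode purity matters.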
Recycling Recovery of the metal through recycling has become an important task of the aluminium industry. Recycling was a low-profile activity until the late 1960s, when the growing use of aluminium beverage cans brought it to public awareness. Recycling involves melting the scrap, a process that requires only 5% of the energy used to produce aluminium from ore, though a significant part (up to 15% of the input material) is lost as dross (ash-like oxide). An aluminium stack melter produces significantly less dross, with values reported below 1%. White dross from primary aluminium production and from secondary recycling operations still contains useful quantities of aluminium that can be extracted industrially. The process produces aluminium billets, together with a highly complex waste material. This waste is difficult to manage. It reacts with water, releasing a mixture of gases (including, among others, hydrogen, acetylene, and ammonia), which spontaneously ignites on contact with air; contact with damp air results in the release of copious quantities of ammonia gas. Despite these difficulties, the waste is used as a filler in asphalt and concrete. Applications Metal The global production of aluminium in 2016 was 58.8 million metric tons. It exceeded that of any other metal except iron (1,231 million metric tons). Aluminium is almost always alloyed, which markedly improves its mechanical properties, especially when tempered. For example, the common aluminium foils and beverage cans are alloys of 92% to 99% aluminium. The main alloying agents are copper, zinc, magnesium, manganese, and silicon (e.g., duralumin) with the levels of other metals in a few percent by weight. Aluminium, both wrought and cast, has been alloyed with: manganese, silicon, magnesium, copper and zinc among others. For example, the Kynal family of alloys was developed by the British chemical manufacturer Imperial Chemical Industries. 
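The recycling figures quoted above (about 5% of the energy of primary production, with up to 15% of input mass lost as dross) can be turned into a quick estimate. The 15 kWh per kilogram figure for primary smelting energy is an assumed ballpark for illustration, not a value from this article:

```python
# Energy and yield bookkeeping for remelting aluminium scrap, using the
# figures quoted above: ~5% of primary-production energy, and up to
# ~15% of the input mass lost as dross.
def recycle(scrap_kg: float, primary_energy_kwh_per_kg: float = 15.0):
    # The 15 kWh/kg primary-smelting energy is an assumed ballpark.
    energy = scrap_kg * primary_energy_kwh_per_kg * 0.05
    recovered = scrap_kg * (1 - 0.15)  # worst-case dross loss
    return energy, recovered

e_kwh, out_kg = recycle(1000)
print(f"1000 kg of scrap: ~{e_kwh:.0f} kWh, at least {out_kg:.0f} kg recovered")
```

Even with worst-case dross losses, the energy saving over primary production is dramatic, which is why recycled metal became economically important once can collection made scrap abundant.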
The major uses for aluminium metal are in: Transportation (automobiles, aircraft, trucks, railway cars, marine vessels, bicycles, spacecraft, etc.). Aluminium is used because of its low density; Packaging (cans, foil, frame, etc.). Aluminium is used because it is non-toxic (see below), non-adsorptive, and splinter-proof; Building and construction (windows, doors, siding, building wire, sheathing, roofing, etc.). Since steel is cheaper, aluminium is used when lightness, corrosion resistance, or engineering features are important; Electricity-related uses (conductor alloys, motors, and generators, transformers, capacitors, etc.). Aluminium is used because it is relatively cheap, highly conductive, has adequate mechanical strength and low density, and resists corrosion; A wide range of household items, from cooking utensils to furniture. Low density, good appearance, ease of fabrication, and durability are the key factors of aluminium usage; Machinery and equipment (processing equipment, pipes, tools). Aluminium is used because of its corrosion resistance, non-pyrophoricity, and mechanical strength. Portable computer cases. Currently rarely used without alloying, but aluminium can be recycled and clean aluminium has residual market value: for example, the used beverage can (UBC) material was used to encase the electronic components of MacBook Air laptop, Pixel 5 smartphone or Summit Lite smartwatch. Compounds The great majority (about 90%) of aluminium oxide is converted to metallic aluminium. Being a very hard material (Mohs hardness 9), alumina is widely used as an abrasive; being extraordinarily chemically inert, it is useful in highly reactive environments such as high pressure sodium lamps. Aluminium oxide is commonly used as a catalyst for industrial processes; e.g. the Claus process to convert hydrogen sulfide to sulfur in refineries and to alkylate amines. 
Many industrial catalysts are supported by alumina, meaning that the expensive catalyst material is dispersed over a surface of the inert alumina. Another principal use is as a drying agent or absorbent. Several sulfates of aluminium have industrial and commercial application. Aluminium sulfate (in its hydrate form) is produced on the annual scale of several millions of metric tons. About two-thirds is consumed in water treatment. The next major application is in the manufacture of paper. It is also used as a mordant in dyeing, in pickling seeds, deodorizing of mineral oils, in leather tanning, and in production of other aluminium compounds. Two kinds of alum, ammonium alum and potassium alum, were formerly used as mordants and in leather tanning, but their use has significantly declined following availability of high-purity aluminium sulfate. Anhydrous aluminium chloride is used as a catalyst in chemical and petrochemical industries, the dyeing industry, and in synthesis of various inorganic and organic compounds. Aluminium hydroxychlorides are used in purifying water, in the paper industry, and as antiperspirants. Sodium aluminate is used in treating water and as an accelerator of solidification of cement. Many aluminium compounds have niche applications, for example: Aluminium acetate in solution is used as an astringent. Aluminium phosphate is used in the manufacture of glass, ceramic, pulp and paper products, cosmetics, paints, varnishes, and in dental cement. Aluminium hydroxide is used as an antacid, and mordant; it is used also in water purification, the manufacture of glass and ceramics, and in the waterproofing of fabrics. Lithium aluminium hydride is a powerful reducing agent used in organic chemistry. Organoaluminiums are used as Lewis acids and co-catalysts. Methylaluminoxane is a co-catalyst for Ziegler–Natta olefin polymerization to produce vinyl polymers such as polyethene. 
Aqueous aluminium ions (such as aqueous aluminium sulfate) are used to treat fish parasites such as Gyrodactylus salaris. In many vaccines, certain aluminium salts serve as an immune adjuvant (immune response booster) to allow the protein in the vaccine to achieve sufficient potency as an immune stimulant. Biology Despite its widespread occurrence in the Earth's crust, aluminium has no known function in biology. At pH 6–9 (relevant for most natural waters), aluminium precipitates out of water as the hydroxide and is hence not available; most elements behaving this way have no biological role or are toxic. Aluminium salts are remarkably nontoxic: aluminium sulfate has an LD50 of 6207 mg/kg (oral, mouse), which corresponds to 435 grams for a 70 kg person, though lethality and neurotoxicity differ in their implications. Andrási et al. found "significantly higher Aluminum" content in some brain regions when necropsies of subjects with Alzheimer's disease were compared to those of subjects without it. Aluminium chelates with glyphosate. Toxicity Aluminium is classified as a non-carcinogen by the United States Department of Health and Human Services. A review published in 1988 said that there was little evidence that normal exposure to aluminium presents a risk to healthy adults, and a 2014 multi-element toxicology review was unable to find deleterious effects of aluminium consumed in amounts not greater than 40 mg/day per kg of body mass. Most aluminium consumed leaves the body in feces; most of the small part that enters the bloodstream is excreted via urine; nevertheless, some aluminium does pass the blood–brain barrier and is lodged preferentially in the brains of Alzheimer's patients. Evidence published in 1989 indicates that, in Alzheimer's patients, aluminium may act by electrostatically crosslinking proteins, thus down-regulating genes in the superior temporal gyrus.
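The whole-body dose quoted above for aluminium sulfate is simple scaling of the per-kilogram LD50 by body mass. A minimal sketch (the 70 kg adult body mass is an illustrative assumption, chosen because it reproduces the figure quoted in the text):

```python
# Scale an oral LD50 (mg per kg of body mass) up to a whole-body dose.
ld50_mg_per_kg = 6207        # aluminium sulfate, oral, mouse
body_mass_kg = 70            # assumed adult body mass (illustrative)

dose_g = ld50_mg_per_kg * body_mass_kg / 1000  # mg -> g

print(f"{dose_g:.1f} g")  # prints "434.5 g", i.e. roughly the 435 g quoted
```

Note that this kind of linear cross-species scaling is only a rough heuristic; LD50 values measured in mice do not translate directly to humans.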
Effects In rare cases, aluminium can cause vitamin D-resistant osteomalacia, erythropoietin-resistant microcytic anemia, and central nervous system alterations. People with kidney insufficiency are especially at risk. Chronic ingestion of hydrated aluminium silicates (for control of excess gastric acidity) may result in aluminium binding to intestinal contents and increased elimination of other metals, such as iron or zinc; sufficiently high doses (>50 g/day) can cause anemia. During the 1988 Camelford water pollution incident, people in Camelford had their drinking water contaminated with aluminium sulfate for several weeks. A final report into the incident in 2013 concluded that it was unlikely to have caused long-term health problems. Aluminium has been suspected of being a possible cause of Alzheimer's disease, but more than 40 years of research into this question has found no good evidence of a causal effect. Aluminium increases estrogen-related gene expression in human breast cancer cells cultured in the laboratory. In very high doses, aluminium is associated with altered function of the blood–brain barrier. A small percentage of people have contact allergies to aluminium and experience itchy red rashes, headache, muscle pain, joint pain, poor memory, insomnia, depression, asthma, irritable bowel syndrome, or other symptoms upon contact with products containing aluminium. Exposure to powdered aluminium or aluminium welding fumes can cause pulmonary fibrosis. Fine aluminium powder can ignite or explode, posing another workplace hazard. Exposure routes Food is the main source of aluminium. Drinking water contains more aluminium than solid food; however, aluminium in food may be absorbed more readily than aluminium from water.
Major sources of human oral exposure to aluminium include food (due to its use in food additives, food and beverage packaging, and cooking utensils), drinking water (due to its use in municipal water treatment), and aluminium-containing medications (particularly antacid/antiulcer and buffered aspirin formulations). Dietary exposure in Europeans averages 0.2–1.5 mg/kg/week but can be as high as 2.3 mg/kg/week. Higher exposure levels are mostly limited to miners, aluminium production workers, and dialysis patients. Consumption of antacids, antiperspirants, vaccines, and cosmetics provides possible routes of exposure. Consumption of acidic foods or liquids with aluminium enhances aluminium absorption, and maltol has been shown to increase the accumulation of aluminium in nerve and bone tissues. Treatment In cases of suspected sudden intake of a large amount of aluminium, the only treatment is deferoxamine mesylate, which may be given to help eliminate aluminium from the body by chelation. However, this should be applied with caution, as it reduces not only aluminium body levels but also those of other metals such as copper or iron. Environmental effects High levels of aluminium occur near mining sites; small amounts of
and interior finish work, and increasingly in military engineering, for both airplane and land armor vehicle engines. Earth's first artificial satellite, launched in 1957, consisted of two separate aluminium semi-spheres joined together, and all subsequent space vehicles have used aluminium to some extent. The aluminium can was invented in 1956 and first employed for storing drinks in 1958. Throughout the 20th century, the production of aluminium rose rapidly: while world production in 1900 was 6,800 metric tons, annual production first exceeded 100,000 metric tons in 1916; 1,000,000 tons in 1941; and 10,000,000 tons in 1971. In the 1970s, increased demand made aluminium an exchange commodity; it entered the London Metal Exchange, the oldest industrial metal exchange in the world, in 1978. Output continued to grow: annual production of aluminium exceeded 50,000,000 metric tons in 2013. The real price of aluminium declined from $14,000 per metric ton in 1900 to $2,340 in 1948 (in 1998 United States dollars). Extraction and processing costs were lowered by technological progress and economies of scale. However, the need to exploit lower-grade, poorer-quality deposits and fast-increasing input costs (above all, energy) raised the net cost of aluminium; the real price began to grow in the 1970s with the rise in energy costs. Production moved from the industrialized countries to countries where production was cheaper. Production costs in the late 20th century changed because of advances in technology, lower energy prices, exchange rates of the United States dollar, and alumina prices. The BRIC countries' combined share in primary production and primary consumption grew substantially in the first decade of the 21st century.
China is accumulating an especially large share of the world's production thanks to an abundance of resources, cheap energy, and governmental stimuli; it also increased its consumption share from 2% in 1972 to 40% in 2010. In the United States, Western Europe, and Japan, most aluminium was consumed in transportation, engineering, construction, and packaging. In 2021, prices for industrial metals such as aluminium soared to near-record levels as energy shortages in China drove up the cost of electricity. Etymology The names aluminium and aluminum are derived from the word alumine, an obsolete term for alumina, a naturally occurring oxide of aluminium. Alumine was borrowed from French, which in turn derived it from alumen, the classical Latin name for alum, the mineral from which it was collected. The Latin word alumen stems from the Proto-Indo-European root *alu-, meaning "bitter" or "beer". Coinage British chemist Humphry Davy, who performed a number of experiments aimed at isolating the metal, is credited as the person who named the element. The first name proposed for the metal to be isolated from alum was alumium, which Davy suggested in an 1808 article on his electrochemical research, published in Philosophical Transactions of the Royal Society. The name appears to have been coined from the English word alum and the Latin suffix -ium; however, it was customary at the time for elements to have names originating in Latin, and as such this name was not universally adopted. It was criticized by contemporary chemists from France, Germany, and Sweden, who insisted the metal should be named for the oxide, alumina, from which it would be isolated. The English word alum does not directly reference the Latin language, whereas alumine/alumina easily references the Latin word alumen (upon declension, alumen changes to alumin-).
One example was an essay in French by Swedish chemist Jöns Jacob Berzelius titled Essai sur la Nomenclature chimique, published in July 1811; in this essay, among other things, Berzelius used the name aluminium for the element that would be synthesized from alum. (Another article in the same journal issue also refers to the metal whose oxide forms the basis of sapphire as aluminium.) A January 1811 summary of one of Davy's lectures at the Royal Society mentioned the name aluminium as a possibility. The following year, Davy published a chemistry textbook in which he used the spelling aluminum. Both spellings have coexisted since; however, their usage has split by region: aluminum is the primary spelling in the United States and Canada, while aluminium is primary in the rest of the English-speaking world. Spelling In 1812, British scientist Thomas Young wrote an anonymous review of Davy's book, in which he proposed the name aluminium instead of aluminum, which he felt had a "less classical sound". This name did catch on: while the aluminum spelling was occasionally used in Britain, the American scientific language used aluminium from the start. Most scientists throughout the world used aluminium in the 19th century, and it became entrenched in many other European languages, such as French, German, and Dutch. In 1828, American lexicographer Noah Webster used exclusively the aluminum spelling in his American Dictionary of the English Language. In the 1830s, the aluminum spelling started to gain usage in the United States; by the 1860s, it had become the more common spelling there outside science. In 1892, Hall used the aluminum spelling in his advertising handbill for his new electrolytic method of producing the metal, despite his constant use of the aluminium spelling in all the patents he filed between 1886 and 1903. It remains unknown whether this spelling was introduced by mistake or intentionally; however, Hall preferred aluminum since its introduction because it resembled platinum, the name of a prestigious metal.
By 1890, both spellings had been common in the U.S. overall, the aluminium spelling being slightly more common; by 1895, the situation had reversed; by 1900, aluminum had become twice as common as aluminium; during the following decade, the aluminum spelling dominated American usage. In 1925, the American Chemical Society adopted this spelling. The International Union of Pure and Applied Chemistry (IUPAC) adopted aluminium as the standard international name for the element in 1990. In 1993, it recognized aluminum as an acceptable variant; the most recent, 2005 edition of the IUPAC nomenclature of inorganic chemistry also acknowledges this spelling. IUPAC official publications use the aluminium spelling as primary but list both where appropriate. Production and refinement The production of aluminium starts with the extraction of bauxite rock from the ground. The bauxite is processed and transformed using the Bayer process into alumina, which is then processed using the Hall–Héroult process, resulting in the final aluminium metal. Aluminium production is highly energy-consuming, so producers tend to locate smelters where electric power is both plentiful and inexpensive. As of 2019, the world's largest smelters of aluminium are located in China, India, Russia, Canada, and the United Arab Emirates, and China is by far the top producer of aluminium, with a world share of fifty-five percent. According to the International Resource Panel's Metal Stocks in Society report, the global per capita stock of aluminium in use in society (i.e. in cars, buildings, electronics, etc.) is . Much of this is in more-developed countries ( per capita) rather than less-developed countries ( per capita). Bayer process Bauxite is converted to alumina by the Bayer process. Bauxite is blended for uniform composition and then ground.
The resulting slurry is mixed with a hot solution of sodium hydroxide; the mixture is then treated in a digester vessel at a pressure well above atmospheric, dissolving the aluminium hydroxide in bauxite while converting impurities into relatively insoluble compounds: Al(OH)3 + NaOH → Na[Al(OH)4]. After this reaction, the slurry is at a temperature above its atmospheric boiling point. It is cooled by removing steam as the pressure is reduced. The bauxite residue is separated from the solution and discarded. The solution, free of solids, is seeded with small crystals of aluminium hydroxide; this causes decomposition of the [Al(OH)4]− ions to aluminium hydroxide. After about half of the aluminium has precipitated, the mixture is sent to classifiers. Small crystals of aluminium hydroxide are collected to serve as seeding agents; coarse particles are converted to alumina by heating; the excess solution is removed by evaporation, purified (if needed), and recycled. Hall–Héroult process The conversion of alumina to aluminium metal is achieved by the Hall–Héroult process. In this energy-intensive process, a solution of alumina in a molten mixture of cryolite (Na3AlF6) with calcium fluoride is electrolyzed to produce metallic aluminium. The liquid aluminium metal sinks to the bottom of the solution and is tapped off, usually to be cast into large blocks called aluminium billets for further processing. Anodes of the electrolysis cell are made of carbon (the most resistant material against fluoride corrosion) and are either baked in the process or prebaked. The former, also called Söderberg anodes, are less power-efficient, and the fumes released during baking are costly to collect, which is why they are being replaced by prebaked anodes even though they save the power, energy, and labor needed to prebake the anodes. Carbon for anodes should preferably be pure so that neither the aluminium nor the electrolyte is contaminated with ash.
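The electrical demand of the cell can be sized with Faraday's law of electrolysis: depositing one aluminium atom (Al3+ + 3e− → Al) takes three electrons. A minimal sketch using standard physical constants (the figures below follow from those constants and are not taken from the text):

```python
# Charge needed to electrolyze 1 kg of aluminium (Faraday's law).
M_AL = 26.98   # molar mass of aluminium, g/mol
Z = 3          # electrons transferred per Al atom (Al3+ -> Al)
F = 96485      # Faraday constant, C/mol

moles_al = 1000 / M_AL        # moles of Al in 1 kg
charge_c = moles_al * Z * F   # coulombs per kg
charge_ah = charge_c / 3600   # ampere-hours per kg

print(f"{charge_ah:.0f} Ah per kg of aluminium")  # prints "2980 Ah per kg of aluminium"
```

This is a theoretical lower bound at 100% current efficiency; real cells run below that, so the actual charge per kilogram of metal is somewhat higher, which is one reason smelting is so electricity-intensive.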
Despite carbon's resistance to corrosion, it is still consumed at a rate of 0.4–0.5 kg per kilogram of produced aluminium. Cathodes are made of anthracite; high purity is not required for them because impurities leach only very slowly. The cathode is consumed at a rate of 0.02–0.04 kg per kilogram of produced aluminium. A cell is usually terminated after 2–6 years, following a failure of the cathode. The Hall–Héroult process produces aluminium with a purity of above 99%. Further purification can be done by the Hoopes process, which involves the electrolysis of molten aluminium with a sodium, barium, and aluminium fluoride electrolyte. The resulting aluminium has a purity of 99.99%. Electric power represents about 20 to 40% of the cost of producing aluminium, depending on the location of the smelter. Aluminium production consumes roughly 5% of the electricity generated in the United States. Because of this, alternatives to the Hall–Héroult process have been researched, but none has turned out to be economically feasible. Recycling Recovery of the metal through recycling has become an important task of the aluminium industry. Recycling was a low-profile activity until the late 1960s, when the growing use of aluminium beverage cans brought it to public awareness. Recycling involves melting the scrap, a process that requires only 5% of the energy used to produce aluminium from ore, though a significant part (up to 15% of the input material) is lost as dross (an ash-like oxide). An aluminium stack melter produces significantly less dross, with values reported below 1%. White dross from primary aluminium production and from secondary recycling operations still contains useful quantities of aluminium that can be extracted industrially. The process produces aluminium billets, together with a highly complex waste material. This waste is difficult to manage.
It reacts with water, releasing a mixture of gases (including, among others, hydrogen, acetylene, and ammonia), which spontaneously ignites on contact with air; contact with damp air results in the release of copious quantities of ammonia gas. Despite these difficulties, the waste is used as a filler in asphalt and concrete. Applications Metal The global production of aluminium in 2016 was 58.8 million metric tons. It exceeded that of any other metal except iron (1,231 million metric tons). Aluminium is almost always alloyed, which markedly improves its mechanical properties, especially when tempered. For example, the common aluminium foils and beverage cans are alloys of 92% to 99% aluminium. The main alloying agents are copper, zinc, magnesium, manganese, and silicon (e.g., duralumin), with the levels of other metals at a few percent by weight. Aluminium, both wrought and cast, has been alloyed with manganese, silicon, magnesium, copper, and zinc, among others. For example, the Kynal family of alloys was developed by the British chemical manufacturer Imperial Chemical Industries.
as one of the main pioneers in German hip hop. They were one of the first groups to rap in German (although their name is in English). Furthermore, their songs tackled controversial social and political issues, distinguishing them from the early German hip hop group Die Fantastischen Vier (The Fantastic Four), which had a more light-hearted, playful, party image.

Career Advanced Chemistry frequently rapped about their lives and experiences as children of immigrants, exposing the marginalization experienced by most ethnic minorities in Germany and the feelings of frustration and resentment that being denied a German identity can cause. The song "Fremd im eigenen Land" (Foreign in your own country) was released by Advanced Chemistry in November 1992. The single became a staple of the German hip hop scene. It made a strong statement about the status of immigrants throughout Germany, as the group was composed of multi-national and multi-racial members. The video shows several members brandishing their German passports as a demonstration of their German citizenship to skeptical and unaccepting 'ethnic' Germans. This idea of national identity is important, as many rap artists in Germany have been of foreign origin. These so-called Gastarbeiter (guest worker) children saw breakdance, graffiti, rap music, and hip hop culture as a means of expressing themselves. Since the release of "Fremd im eigenen Land", many other German-language rappers have also tried to confront anti-immigrant ideas and develop themes of citizenship. However, though many ethnic minority youth in Germany find these German identity themes appealing, others view the desire of immigrants to be seen as German negatively, and have actively sought to revive and recreate concepts of identity connected to traditional ethnic origins. Advanced Chemistry helped to found the German chapter of the Zulu Nation.

The rivalry between Advanced Chemistry and Die Fantastischen Vier has served to highlight a dichotomy in the routes hip hop has taken in becoming part of the German soundscape. While Die Fantastischen Vier may be said to view hip hop primarily as an aesthetic art form, Advanced Chemistry understand hip hop as inextricably linked to the social and political circumstances under which it is created. For Advanced Chemistry, hip hop is a "vehicle of general human emancipation". In their treatment of social and political issues, the band introduced the term "Afro-German" into the context of German hip hop, and the theme of race is highlighted in much of their music. With the release of the single "Fremd im eigenen Land", Advanced Chemistry separated itself from the rest of the rap being produced in Germany. The single was the first of its kind to go beyond simply imitating US rap and address the pressing issues of the time. "Fremd im eigenen Land", which translates to "foreign in my own country", dealt with the widespread racism that non-white German citizens faced. This shift from imitation to political commentary marked the start of a distinctly German identification with rap. The sound of "Fremd im eigenen Land" was influenced by the 'wall of noise' created by Public Enemy's producers, The Bomb Squad. After the reunification of Germany, anti-immigrant sentiment surged, as did attacks on the homes of refugees in the early 1990s. Advanced Chemistry came to prominence in the wake of these events because of the pro-multicultural stance of their music.

Advanced Chemistry's attitudes revolve around their attempts to create a distinct "Germanness" in hip hop, as opposed to imitating American hip hop as other groups had done. Torch has said, "What the Americans do is exotic for us because we don't live like they do. What they do seems to be more interesting and newer. But not for me. For me it's more exciting to experience my fellow Germans in new contexts... For me, it's interesting to see what the kids try to do that's different from what I know." Advanced Chemistry were the first to use the term "Afro-German" in a hip hop context. This was part of the pro-immigrant political message they sent via their music. While Advanced Chemistry's use of the German language in their rap allows them to make claims to authenticity and true German heritage, bolstering pro-immigration sentiment, their style can also be problematic for immigrant notions of any real ethnic roots.
believing"). Protracted conflict through the 17th century, with radical Protestants on the one hand and Roman Catholics who recognised the primacy of the Pope on the other, resulted in an association of churches that was deliberately vague about doctrinal principles yet bold in developing parameters of acceptable deviation. These parameters were most clearly articulated in the various rubrics of the successive prayer books, as well as the Thirty-Nine Articles of Religion (1563). These articles have historically shaped, and continue to direct, the ethos of the communion, an ethos reinforced by its interpretation and expansion by such influential early theologians as Richard Hooker, Lancelot Andrewes, and John Cosin. With the expansion of the British Empire and the growth of Anglicanism outside Great Britain and Ireland, the communion sought to establish new vehicles of unity. The first major expressions of this were the Lambeth Conferences of the communion's bishops, first convened in 1867 by Charles Longley, the Archbishop of Canterbury. From the beginning, these were not intended to displace the autonomy of the emerging provinces of the communion, but to "discuss matters of practical interest, and pronounce what we deem expedient in resolutions which may serve as safe guides to future action". Chicago-Lambeth Quadrilateral One of the enduringly influential early resolutions of the conference was the so-called Chicago-Lambeth Quadrilateral of 1888. Its intent was to provide the basis for discussions of reunion with the Roman Catholic and Orthodox churches, but it had the ancillary effect of establishing parameters of Anglican identity. It establishes four principles, concerning the Holy Scriptures, the creeds, the two sacraments of Baptism and the Eucharist, and the historic episcopate. Instruments of communion As mentioned above, the Anglican Communion has no international juridical organisation.
The Archbishop of Canterbury's role is strictly symbolic and unifying and the communion's three international bodies are consultative and collaborative, their resolutions having no legal effect on the autonomous provinces of the communion. Taken together, however, the four do function as "instruments of communion", since all churches of the communion participate in them. In order of antiquity, they are: The Archbishop of Canterbury functions as the spiritual head of the communion. The archbishop is the focus of unity, since no church claims membership in the Communion without being in communion with him. The present archbishop is Justin Welby. The Lambeth Conference (first held in 1867) is the oldest international consultation. It is a forum for bishops of the communion to reinforce unity and collegiality through manifesting the episcopate, to discuss matters of mutual concern, and to pass resolutions intended to act as guideposts. It is held roughly every 10 years and invitation is by the Archbishop of Canterbury. The Anglican Consultative Council (first met in 1971) was created by a 1968 Lambeth Conference resolution, and meets usually at three-yearly intervals. The council consists of representative bishops, other clergy and laity chosen by the 38 provinces. The body has a permanent secretariat, the Anglican Communion Office, of which the Archbishop of Canterbury is president. The Primates' Meeting (first met in 1979) is the most recent manifestation of international consultation and deliberation, having been first convened by Archbishop Donald Coggan as a forum for "leisurely thought, prayer and deep consultation". Since there is no binding authority in the Anglican Communion, these international bodies are a vehicle for consultation and persuasion. In recent times, persuasion has tipped over into debates over conformity in certain areas of doctrine, discipline, worship and ethics. 
The most notable example has been the objection of many provinces of the communion (particularly in Africa and Asia) to the changing acceptance of LGBTQ+ individuals in the North American churches (e.g., by blessing same-sex unions and by ordaining and consecrating people in same-sex relationships) and to the process by which the changes were undertaken. (See Anglican realignment.) Those who objected condemned these actions as unscriptural, unilateral, and taken without the agreement of the communion prior to these steps. In response, the American Episcopal Church and the Anglican Church of Canada answered that the actions had been undertaken after lengthy scriptural and theological reflection, legally in accordance with their own canons and constitutions, and after extensive consultation with the provinces of the communion. The Primates' Meeting voted to request that the two churches withdraw their delegates from the 2005 meeting of the Anglican Consultative Council. Canada and the United States decided to attend the meeting but without exercising their right to vote. They have not been expelled or suspended, since there is no mechanism in this voluntary association to suspend or expel an independent province of the communion. Since membership is based on a province's communion with Canterbury, expulsion would require the Archbishop of Canterbury's refusal to be in communion with the affected jurisdictions. In line with the suggestion of the Windsor Report, Rowan Williams (the then Archbishop of Canterbury) established a working group to examine the feasibility of an Anglican covenant which would articulate the conditions for communion in some fashion. Organisation Provinces The Anglican Communion consists of forty-one autonomous provinces, each with its own primate and governing structure. These provinces may take the form of national churches (such as in Canada, Uganda, or Japan) or a collection of nations (such as the West Indies, Central Africa, or Southeast Asia).
Extraprovincial churches In addition to the forty-one provinces, there are five extraprovincial churches under the metropolitical authority of the Archbishop of Canterbury. Former provinces New provinces in formation At its Autumn 2020 meeting the provincial standing committee of the Church of Southern Africa approved a plan to form the dioceses in Mozambique and Angola into a separate autonomous province of the Anglican Communion, to be named the Anglican Church of Mozambique and Angola (IAMA). The plans were also outlined to the Mozambique and Angola Anglican Association (MANNA) at its September 2020 annual general meeting. The new province is Portuguese-speaking, and consists of twelve dioceses (four in Angola, and eight in Mozambique). The twelve proposed new dioceses have been defined and named, and each has a "Task Force Committee" working towards its establishment as a diocese. The plan received the consent of the bishops and diocesan synods of all four existing dioceses in the two nations, and was submitted to the Anglican Consultative Council. In September 2020 the Archbishop of Canterbury announced that he had asked the bishops of the Church of Ceylon to begin planning for the formation of an autonomous province of Ceylon, so as to end his current position as Metropolitan of the two dioceses in that country. Churches in full communion In addition to other member churches, the churches of the Anglican Communion are in full communion with the Old Catholic churches of the Union of Utrecht and the Scandinavian Lutheran churches of the Porvoo Communion in Europe, the India-based Malankara Mar Thoma Syrian and Malabar Independent Syrian churches and the Philippine Independent Church, also known as the Aglipayan Church. 
History The Anglican Communion traces much of its growth to the older mission organisations of the Church of England such as the Society for Promoting Christian Knowledge (founded 1698), the Society for the Propagation of the Gospel in Foreign Parts (founded 1701) and the Church Missionary Society (founded 1799). The Church of England (which until the 20th century included the Church in Wales) initially separated from the Roman Catholic Church in 1534 in the reign of Henry VIII, reunited in 1555 under Mary I and then separated again in 1570 under Elizabeth I (the Roman Catholic Church excommunicated Elizabeth I in 1570 in response to the Act of Supremacy 1559). The Church of England has always thought of itself not as a new foundation but rather as a reformed continuation of the ancient "English Church" (Ecclesia Anglicana) and a reassertion of that church's rights. As such it was a distinctly national phenomenon. The Church of Scotland was formed as a separate church from the Roman Catholic Church as a result of the Scottish Reformation in 1560 and the later formation of the Scottish Episcopal Church began in 1582 in the reign of James VI over disagreements about the role of bishops. The oldest-surviving Anglican church building outside the British Isles (Britain and Ireland) is St Peter's Church in St. George's, Bermuda, established in 1612 (though the actual building had to be rebuilt several times over the following century). This is also the oldest surviving non-Roman Catholic church in the New World. It remained part of the Church of England until 1978 when the Anglican Church of Bermuda separated. The Church of England was the established church not only in England, but in its trans-Oceanic colonies. 
Thus the only member churches of the present Anglican Communion existing by the mid-18th century were the Church of England, its closely linked sister church the Church of Ireland (which also separated from Roman Catholicism under Henry VIII) and the Scottish Episcopal Church which for parts of the 17th and 18th centuries was partially underground (it was suspected of Jacobite sympathies). Global spread of Anglicanism The enormous expansion in the 18th and 19th centuries of the British Empire brought Anglicanism along with it. At first all these colonial churches were under the
of Anglican doctrine are summarised in the Thirty-nine Articles (1571). The Archbishop of Canterbury (currently Justin Welby) in England acts as a focus of unity, recognised as primus inter pares ("first among equals"), but does not exercise authority in Anglican provinces outside of the Church of England. Most, but not all, member churches of the communion are the historic national or regional Anglican churches. The Anglican Communion was officially and formally organised and recognised as such at the Lambeth Conference in 1867 in London under the leadership of Charles Longley, Archbishop of Canterbury. The churches of the Anglican Communion consider themselves to be part of the one, holy, catholic and apostolic church, and to be both catholic and reformed. As in the Church of England itself, the Anglican Communion includes the broad spectrum of beliefs and liturgical practices found in the Evangelical, Central and Anglo-Catholic traditions of Anglicanism. Each national or regional church is fully independent, retaining its own legislative process and episcopal polity under the leadership of local primates. For some adherents, Anglicanism represents a non-papal Catholicism, for others a form of Protestantism though without a guiding figure such as Luther, Knox, Calvin, Zwingli or Wesley, or for yet others a combination of the two. Most of its members live in the Anglosphere of former British territories. Full participation in the sacramental life of each church is available to all communicant members. Because of their historical link to England (ecclesia anglicana means "English church"), some of the member churches are known as "Anglican", such as the Anglican Church of Canada. Others, for example the Church of Ireland and the Scottish and American Episcopal churches, have official names that do not include "Anglican". Additionally, some churches which use the name "Anglican" are not part of the communion. 
Ecclesiology, polity and ethos The Anglican Communion has no official legal existence nor any governing structure which might exercise authority over the member churches. There is an Anglican Communion Office in London, under the aegis of the Archbishop of Canterbury, but it only serves in a supporting and organisational role. The communion is held together by a shared history, expressed in its ecclesiology, polity and ethos, and also by participation in international consultative bodies. Three elements have been important in holding the communion together: first, the shared ecclesial structure of the component churches, manifested in an episcopal polity maintained through the apostolic succession of bishops and synodical government; second, the principle of belief expressed in worship, investing importance in approved prayer books and their rubrics; and third, the historical documents and the writings of early Anglican divines that have influenced the ethos of the communion. Originally, the Church of England was self-contained and relied for its unity and identity on its own history, its traditional legal and episcopal structure, and its status as an established church of the state. As such, Anglicanism was from the outset a movement with an explicitly episcopal polity, a characteristic that has been vital in maintaining the unity of the communion by conveying the episcopate's role in manifesting visible catholicity and ecumenism. Early in its development following the English Reformation, Anglicanism developed a vernacular prayer book, called the Book of Common Prayer. Unlike other traditions, Anglicanism has never been governed by a magisterium nor by appeal to one founding theologian, nor by an extra-credal summary of doctrine (such as the Westminster Confession of the Presbyterian churches). Instead, Anglicans have typically appealed to the Book of Common Prayer (1662) and its offshoots as a guide to Anglican theology and practice. 
This has had the effect of inculcating in Anglican identity and confession the principle of lex orandi, lex credendi ("the law of praying [is] the law of believing"). Protracted conflict through the 17th century, with radical Protestants on the one hand and Roman Catholics who recognised the primacy of the Pope on the other, resulted in an association of churches that was both deliberately vague about doctrinal principles, yet bold in developing parameters of acceptable deviation. These parameters were most clearly articulated in the various rubrics of the successive prayer books, as well as the Thirty-Nine Articles of Religion (1563). These articles have historically shaped and continue to direct the ethos of the communion, an ethos reinforced by its interpretation and expansion by such influential early theologians as Richard Hooker, Lancelot Andrewes and John Cosin. With the expansion of the British Empire and the growth of Anglicanism outside Great Britain and Ireland, the communion sought to establish new vehicles of unity. The first major expressions of this were the Lambeth Conferences of the communion's bishops, first convened in 1867 by Charles Longley, the Archbishop of Canterbury. From the beginning, these were not intended to displace the autonomy of the emerging provinces of the communion, but to "discuss matters of practical interest, and pronounce what we deem expedient in resolutions which may serve as safe guides to future action". Chicago Lambeth Quadrilateral One of the enduringly influential early resolutions of the conference was the so-called Chicago-Lambeth Quadrilateral of 1888. Its intent was to provide the basis for discussions of reunion with the Roman Catholic and Orthodox churches, but it had the ancillary effect of establishing parameters of Anglican identity. It establishes four principles with these words: Instruments of communion As mentioned above, the Anglican Communion has no international juridical organisation. 
The Archbishop of Canterbury's role is strictly symbolic and unifying and the communion's three international bodies are consultative and collaborative, their resolutions having no legal effect on the autonomous provinces of the communion. Taken together, however, the four do function as "instruments of communion", since all churches of the communion participate in them. In order of antiquity, they are: The Archbishop of Canterbury functions as the spiritual head of the communion. The archbishop is the focus of unity, since no church claims membership in the Communion without being in communion with him. The present archbishop is Justin Welby. The Lambeth Conference (first held in 1867) is the oldest international consultation. It is a forum for bishops of the communion to reinforce unity and collegiality through manifesting the episcopate, to discuss matters of mutual concern, and to pass resolutions intended to act as guideposts. It is held roughly every 10 years and invitation is by the Archbishop of Canterbury. The Anglican Consultative Council (first met in 1971) was created by a 1968 Lambeth Conference resolution, and meets usually at three-yearly intervals. The council consists of representative bishops, other clergy and laity chosen by the 38 provinces. The body has a permanent secretariat, the Anglican Communion Office, of which the Archbishop of Canterbury is president. The Primates' Meeting (first met in 1979) is the most recent manifestation of international consultation and deliberation, having been first convened by Archbishop Donald Coggan as a forum for "leisurely thought, prayer and deep consultation". Since there is no binding authority in the Anglican Communion, these international bodies are a vehicle for consultation and persuasion. In recent times, persuasion has tipped over into debates over conformity in certain areas of doctrine, discipline, worship and ethics. 
Institute of Technology in Stockholm, and a former president of the Society for the History of Technology. Kaijser has published two books in Swedish: Stadens ljus. Etableringen av de första svenska gasverken and I fädrens spår. Den svenska infrastrukturens historiska utveckling och framtida utmaningar, and has co-edited several anthologies. Kaijser has been a member of the Royal Swedish Academy of Engineering Sciences since 2007 and is also a member of the editorial board of two scientific journals: Journal of Urban Technology and Centaurus. Lately, he has been occupied with the history of Large Technical Systems.
is a chain, cluster, or collection of islands, or sometimes a sea containing a small number of scattered islands. Examples of archipelagos include: the Indonesian Archipelago, the Andaman and Nicobar Islands, the Lakshadweep Islands, the Galápagos Islands, the Japanese Archipelago, the Philippine Archipelago, the Maldives, the Balearic Isles, the Bahamas, the Aegean Islands, the Hawaiian Islands, the Canary Islands, Malta, the Azores, the Canadian Arctic Archipelago, the British Isles, the islands of the Archipelago Sea, and Shetland. They are sometimes defined by political boundaries. The Gulf archipelago off the north-eastern Pacific coast forms part of a larger archipelago that geographically includes Washington state's San Juan Islands. While the Gulf archipelago and San Juan Islands are geographically related, they are not technically included in the same archipelago due to man-made geopolitical borders. Etymology The word archipelago is derived from the Ancient Greek ἄρχι- (arkhi-, "chief") and πέλαγος (pélagos, "sea") through the Italian arcipelago. In antiquity, "Archipelago" (from medieval Greek *ἀρχιπέλαγος) was the proper name for the Aegean Sea. Later, usage shifted to refer to the Aegean Islands (since the sea has a large number of islands). Geographic types Archipelagos may be found isolated in large amounts of water or neighbouring a large land mass. For example, Scotland has more than 700 islands surrounding its mainland which form an archipelago. Archipelagos are often volcanic, forming along island arcs generated by subduction zones or hotspots, but may also be the
to land masses that have separated from a continental mass due to tectonic displacement. The Farallon Islands off the coast of California are an example. Continental archipelagos Sets of islands formed close to the coast of a continent are considered continental archipelagos when they form part of the same continental shelf, when those islands are above-water extensions of the shelf. The islands of the Inside Passage off the coast of British Columbia and the Canadian Arctic Archipelago are examples. Artificial archipelagos Artificial archipelagos have been created in various countries for different purposes. Palm Islands and the World Islands off Dubai were or are being created for leisure and tourism purposes. Marker Wadden in the Netherlands is being built as a conservation area for birds and other wildlife. Further examples The largest archipelagic state in the world by area, and by population, is Indonesia. See also Island arc List of landforms List of archipelagos by number of islands List of archipelagos Archipelagic state List of
as a book or play, and is also considered a writer or poet. More broadly defined, an author is "the person who originated or gave existence to anything" and whose authorship determines responsibility for what was created. Legal significance of authorship Typically, the first owner of a copyright is the person who created the work, i.e. the author. If more than one person created the work, then a case of joint authorship can be made provided some criteria are met. In the copyright laws of various jurisdictions, there is a necessity for little flexibility regarding what constitutes authorship. The United States Copyright Office, for example, defines copyright as "a form of protection provided by the laws of the United States (title 17, U.S. Code) to authors of 'original works of authorship.'" Holding the title of "author" over any "literary, dramatic, musical, artistic, [or] certain other intellectual works" gives rights to this person, the owner of the copyright, especially the exclusive right to engage in or authorize any production or distribution of their work. Any person or entity wishing to use intellectual property held under copyright must receive permission from the copyright holder to use this work, and often will be asked to pay for the use of copyrighted material. After a fixed amount of time, the copyright expires on intellectual work and it enters the public domain, where it can be used without limit. Copyright laws in many jurisdictions – mostly following the lead of the United States, in which the entertainment and publishing industries have very strong lobbying power – have been amended repeatedly since their inception, to extend the length of this fixed period where the work is exclusively controlled by the copyright holder. However, copyright is merely the legal reassurance that one owns their work. Technically, someone owns their work from the time it's created. 
A notable aspect of authorship emerges with copyright in that, in many jurisdictions, it can be passed down to another upon one's death. The person who inherits the copyright is not the author, but enjoys the same legal benefits. Questions arise as to the application of copyright law. How does it, for example, apply to the complex issue of fan fiction? If the media agency responsible for the authorized production allows material from fans, what is the limit before legal constraints from actors, music, and other considerations come into play? Additionally, how does copyright apply to fan-generated stories for books? What powers do the original authors, as well as the publishers, have in regulating or even stopping the fan fiction? This particular sort of case also illustrates how complex intellectual property law can be, since such fiction may also involve trademark law (e.g. for names of characters in media franchises), likeness rights (such as for actors, or even entirely fictional entities), fair use rights held by the public (including the right to parody or satirize), and many other interacting complications. Authors may portion out different rights they hold to different parties, at different times, and for different purposes or uses, such as the right to adapt a plot into a film, but only with different character names, because the characters have already been optioned by another company for a television series or a video game. An author may also not have rights when working under contract that they would otherwise have, such as when creating a work for hire (e.g., hired to write a city tour guide by a municipal government that totally owns the copyright to the finished work), or when writing material using intellectual property owned by others (such as when writing a novel or screenplay that is a new installment in an already established media franchise). 
Philosophical views of the nature of authorship In literary theory, critics find complications in the term author beyond what constitutes authorship in a legal setting. In the wake of postmodern literature, critics such as Roland Barthes and Michel Foucault have examined the role and relevance of authorship to the meaning or interpretation of a text. Barthes challenges the idea that a text can be attributed to any single author. He writes, in his essay "Death of the Author" (1968), that "it is language which speaks, not the author." The words and language of a text itself determine and expose meaning for Barthes, and not someone possessing legal responsibility for the process of its production. Every line of written text is a mere reflection of references from any of a multitude of traditions, or, as Barthes puts it, "the text is a tissue of quotations drawn from the innumerable centres of culture"; it is never original. With this, the perspective of the author is removed from the text, and the limits formerly imposed by the idea of one authorial voice, one ultimate and universal meaning, are destroyed. The explanation and meaning of a work does not have to be sought in the one who produced it, "as if it were always in the end, through the more or less transparent allegory of the fiction, the voice of a single person, the author 'confiding' in us." The psyche, culture, fanaticism of an author can be disregarded when interpreting a text, because the words are rich enough themselves with all of the traditions of language. To expose meanings in a written work without appealing to the celebrity of an author, their tastes, passions, vices, is, to Barthes, to allow language to speak, rather than author. Michel Foucault argues in his essay "What is an author?" (1969) that all authors are writers, but not all writers are authors. He states that "a private letter may have a signatory—it does not have an author." For a reader to assign the title of author upon any written work is to attribute certain standards upon the text which, for Foucault, are working in conjunction with the idea of "the author function." Foucault's author function is the idea that an author exists only as a function of a written work, a part of its structure, but not necessarily part of the interpretive process. 
The author's name "indicates the status of the discourse within a society and culture," and at one time was used as an anchor for interpreting a text, a practice which Barthes would argue is not a particularly relevant or valid endeavour. Expanding upon Foucault's position, Alexander Nehamas writes that Foucault suggests "an author [...] is whoever can be understood to have produced a particular text as we interpret it," not necessarily who penned the text. It is this distinction between producing a written work and producing the interpretation or meaning in a written work that both Barthes and Foucault are interested in. Foucault warns of the risks of keeping the author's name in mind during interpretation, because it could affect the value and meaning with which one handles an interpretation. Literary critics Barthes and Foucault suggest that readers should not rely on or look for the notion of one overarching voice when interpreting a written work, because of the complications inherent with a writer's title of "author." They warn of the dangers interpretations could suffer from when associating the subject of inherently meaningful words and language with the personality of one authorial voice. Instead, readers should allow a text to be interpreted in terms of the language as "author." Relationship with publisher Self-publishing Self-publishing, independent publishing, or artisanal publishing is the "publication of any book, album or other media by its author without the involvement of a traditional publisher. It is the modern equivalent to traditional publishing." Types Unless a book is to be sold directly from the author to the public, an ISBN is required to uniquely identify the title. ISBN is a global standard used for all titles worldwide. 
Most self-publishing companies either provide their own ISBN to a title or can provide direction; it may be in the best interest of the self-published author to retain ownership of ISBN and copyright instead of using a number owned by a vanity press. A separate ISBN is needed
Markov (1903–1979), was also a notable mathematician, making contributions to constructive mathematics and recursive function theory. Biography Andrey Markov was born on 14 June 1856 in Russia. He attended the St. Petersburg Grammar School, where some teachers saw him as a rebellious student. In his academics he performed poorly in most subjects other than mathematics. Later in life he attended Saint Petersburg Imperial University (now Saint Petersburg State University). Among his teachers were Yulian Sokhotski (differential calculus, higher algebra), Konstantin Posse (analytic geometry), Yegor Zolotarev (integral calculus), Pafnuty Chebyshev (number theory and probability theory), Aleksandr Korkin (ordinary and partial differential equations), Mikhail Okatov (mechanism theory), Osip Somov (mechanics), and Nikolai Budajev (descriptive and higher geometry). He completed his studies at the university and was later asked if he would like to stay and have a career as a mathematician. He later taught at high schools and continued his own mathematical studies. In this time he found a practical use for his mathematical skills. He figured out that he could use chains to model the alliteration of vowels and consonants in Russian literature. He also contributed to many other mathematical aspects in his time. He died at age 66 on 20 July 1922. Timeline In 1877, Markov was awarded a gold medal for his outstanding solution of the problem About Integration of Differential Equations by Continued Fractions with an Application to the Equation . During the following year, he passed the candidate's examinations, and he remained at the university to prepare for a lecturer's position. In April 1880, Markov defended his master's thesis "On the Binary Square Forms with Positive Determinant", which was directed by Aleksandr Korkin and Yegor Zolotarev. Four years later in 1884, he defended his doctoral thesis titled "On Certain Applications of the Algebraic Continuous Fractions". 
His pedagogical work began after the defense of his master's thesis in autumn 1880. As a privatdozent he lectured on differential and integral calculus. Later he lectured alternately on "introduction to analysis", probability theory (succeeding Chebyshev, who had left the university in 1882) and the calculus of differences. From 1895 through 1905 he also lectured in differential calculus. One year after the defense of his doctoral thesis, Markov was appointed extraordinary professor (1886) and in the same year he was elected adjunct to the Academy of Sciences. In 1890, after the death of Viktor Bunyakovsky, Markov became
an extraordinary member of the academy. His promotion to an ordinary professor of St. Petersburg University
beings enjoy a freedom of choice that we find both appealing and terrifying. It is the anxiety of understanding of being free when considering undefined possibilities of one's life and the immense responsibility of having the power of choice over them. Kierkegaard's concept of angst reappeared in the works of existentialist philosophers who followed, such as Friedrich Nietzsche, Jean-Paul Sartre, and Martin Heidegger, each of whom developed the idea further in individual ways. While Kierkegaard's angst referred mainly to ambiguous feelings about moral freedom within a religious personal belief system, later existentialists discussed conflicts of personal principles, cultural norms, and existential despair. Music Existential angst makes its appearance in classical musical composition in the early twentieth century as a result of both philosophical developments and as a reflection of the war-torn times. Notable composers whose works are often linked with the concept include Gustav Mahler, Richard Strauss (operas Elektra and Salome), Claude-Achille Debussy (opera Pelleas et Melisande, ballet Jeux, other works), Jean Sibelius (especially the Fourth Symphony), Arnold Schoenberg (A Survivor
from Warsaw, other works), Alban Berg, Francis Poulenc (opera Dialogues
apprehension, or nervousness felt by students who have a fear of failing an exam. Students who have test anxiety may experience any of the following: the association of grades with personal worth; fear of embarrassment by a teacher; fear of alienation from parents or friends; time pressures; or feeling a loss of control. Sweating, dizziness, headaches, racing heartbeats, nausea, fidgeting, uncontrollable crying or laughing and drumming on a desk are all common. Because test anxiety hinges on fear of negative evaluation, debate exists as to whether test anxiety is itself a unique anxiety disorder or whether it is a specific type of social phobia. The DSM-IV classifies test anxiety as a type of social phobia. While the term "test anxiety" refers specifically to students, many workers share the same experience with regard to their career or profession. The fear of failing at a task and being negatively evaluated for failure can have a similarly negative effect on the adult. Management of test anxiety focuses on achieving relaxation and developing mechanisms to manage anxiety. Stranger, social, and intergroup anxiety Humans generally require social acceptance and thus sometimes dread the disapproval of others. Apprehension of being judged by others may cause anxiety in social environments. Anxiety during social interactions, particularly between strangers, is common among young people. It may persist into adulthood and become social anxiety or social phobia. "Stranger anxiety" in small children is not considered a phobia. In adults, an excessive fear of other people is not a developmentally common stage; it is called social anxiety. According to Cutting, social phobics do not fear the crowd but the fact that they may be judged negatively. Social anxiety varies in degree and severity. For some people, it is characterized by experiencing discomfort or awkwardness during physical social contact (e.g. 
embracing, shaking hands, etc.), while in other cases it can lead to a fear of interacting with unfamiliar people altogether. Those suffering from this condition may restrict their lifestyles to accommodate the anxiety, minimizing social interaction whenever possible. Social anxiety also forms a core aspect of certain personality disorders, including avoidant personality disorder. Beyond a general fear of social encounters with unfamiliar others, some people experience anxiety particularly during interactions with outgroup members, that is, people who hold different group memberships (by race, ethnicity, class, gender, etc.). Depending on the nature of the antecedent relations, cognitions, and situational factors, intergroup contact may be stressful and lead to feelings of anxiety. This apprehension or fear of contact with outgroup members is often called interracial or intergroup anxiety. As is the case with the more generalized forms of social anxiety, intergroup anxiety has behavioral, cognitive, and affective effects. For instance, increases in schematic processing and simplified information processing can occur when anxiety is high. This is consistent with related work on attentional bias in implicit memory. Additionally, recent research has found that implicit racial evaluations (i.e. automatic prejudiced attitudes) can be amplified during intergroup interaction. Negative experiences have been shown to produce not only negative expectations but also avoidant, or antagonistic, behavior such as hostility. Furthermore, compared with intragroup contexts, anxiety levels and the depletion of cognitive resources (e.g., for impression management and self-presentation) may be exacerbated in intergroup situations. Trait Anxiety can be either a short-term "state" or a long-term personality "trait." 
Trait anxiety reflects a stable tendency across the lifespan to respond with acute, state anxiety in the anticipation of threatening situations (whether or not they are actually deemed threatening). A meta-analysis showed that a high level of neuroticism is a risk factor for the development of anxiety symptoms and disorders. Such anxiety may be conscious or unconscious. Personality can also be a trait leading to anxiety and depression: through experience, many people find it difficult to collect themselves because of their own nature. Choice or decision Anxiety induced by the need to choose between similar options is increasingly being recognized as a problem for individuals and for organizations. In 2004, Capgemini wrote: "Today we're all faced with greater choice, more competition and less time to consider our options or seek out the right advice." In a decision context, unpredictability or uncertainty may trigger emotional responses in anxious individuals that systematically alter decision-making. There are primarily two forms of this type of anxiety. The first refers to a choice in which there are multiple potential outcomes with known or calculable probabilities. The second refers to the uncertainty and ambiguity of a decision context in which there are multiple possible outcomes with unknown probabilities. Panic disorder Panic disorder may share symptoms of stress and anxiety, but it is actually very different. Panic disorder is an anxiety disorder that occurs without any triggers. According to the U.S. Department of Health and Human Services, this disorder can be distinguished by unexpected and repeated episodes of intense fear. Someone who suffers from panic disorder will eventually develop a constant fear of another attack; as this progresses, it will begin to affect daily functioning and an individual's general quality of life. 
It is reported by the Cleveland Clinic that panic disorder affects 2 to 3 percent of adult Americans and can begin around the teenage and early adult years. Symptoms include difficulty breathing, chest pain, dizziness, trembling or shaking, feeling faint, nausea, and fear that you are losing control or are about to die. Even though sufferers experience these symptoms during an attack, the main symptom is the persistent fear of having future panic attacks. Anxiety disorders Anxiety disorders are a group of mental disorders characterized by exaggerated feelings of anxiety and fear responses. Anxiety is a worry about future events and fear is a reaction to current events. These feelings may cause physical symptoms, such as a fast heart rate and shakiness. There are a number of anxiety disorders, including generalized anxiety disorder, specific phobia, social anxiety disorder, separation anxiety disorder, agoraphobia, panic disorder, and selective mutism. The disorders differ in what triggers the symptoms. People often have more than one anxiety disorder. Anxiety disorders are caused by a complex combination of genetic and environmental factors. To be diagnosed, symptoms typically need to be present for at least six months, be more than would be expected for the situation, and decrease a person's ability to function in daily life. Other problems that may result in similar symptoms include hyperthyroidism; heart disease; caffeine, alcohol, or cannabis use; and withdrawal from certain drugs, among others. Without treatment, anxiety disorders tend to persist. Treatment may include lifestyle changes, counselling, and medications. Counselling is typically a form of cognitive behavioural therapy. Medications, such as antidepressants or beta blockers, may improve symptoms. About 12% of people are affected by an anxiety disorder in a given year, and between 5% and 30% are affected at some point in their life. 
They occur about twice as often in women as in men, and generally begin before the age of 25. The most common are specific phobia, which affects nearly 12% of people, and social anxiety disorder, which affects 10% at some point in their life. They affect those between the ages of 15 and 35 the most and become less common after the age of 55. Rates appear to be higher in the United States and Europe. Short- and long-term anxiety Anxiety can be either a short-term "state" or a long-term "trait." Whereas trait anxiety represents worrying about future events, anxiety disorders are a group of mental disorders characterized by feelings of anxiety and fears. Four Ways to Be Anxious In his book Anxious: the modern mind
in the age of anxiety, Joseph LeDoux examines four experiences of anxiety through a brain-based lens: In the presence of an existing or imminent external threat, you worry about the event and its implications for your physical and/or psychological well-being. When a threat signal occurs, it signifies either that danger is present or near in space and time or that it might be coming in the future. Nonconscious threat processing by the brain activates defensive survival circuits, resulting in changes in information processing in the brain, controlled in part by increases in arousal and by behavioral and physiological responses in the body that then produce signals that feed back to the brain and complement the physiological changes there, intensifying them and extending their duration. When you notice body sensations, you worry about what they might mean for your physical and/or psychological well-being. The trigger stimulus does not have to be an external stimulus but can be an internal one, as some people are particularly sensitive to body signals. Thoughts and memories may lead you to worry about your physical and/or psychological well-being. We do not need to be in the presence of an external or internal stimulus to be anxious. 
An episodic memory of a past trauma or of a panic attack in the past is sufficient to activate the defence circuits. Thoughts and memories may result in existential dread, such as worry about leading a meaningful life or the eventuality of death. Examples are contemplations of whether one's life has been meaningful, the inevitability of death, or the difficulty of making decisions that have a moral value. These do not necessarily activate defensive systems; they are more or less pure forms of cognitive anxiety. Co-morbidity Anxiety disorders often occur with other mental health disorders, particularly major depressive disorder, bipolar disorder, eating disorders, or certain personality disorders. It also commonly occurs with personality traits such as neuroticism. This observed co-occurrence is partly due to genetic and environmental influences shared between these traits and anxiety. It is common for those with obsessive-compulsive disorder to experience anxiety. Anxiety is also commonly found in those who experience panic disorders, phobic anxiety disorders, severe stress, dissociative disorders, somatoform disorders, and some neurotic disorders. Risk factors Anxiety disorders are partly genetic, with twin studies suggesting 30-40% genetic influence on individual differences in anxiety. Environmental factors are also important. Twin studies show that individual-specific environments have a large influence on anxiety, whereas shared environmental influences (environments that affect twins in the same way) operate during childhood but decline through adolescence. Specific measured ‘environments’ that have been associated with anxiety include child abuse, family history of mental health disorders, and poverty. Anxiety is also associated with drug use, including alcohol, caffeine, and benzodiazepines (which are often prescribed to treat anxiety). 
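The twin-study heritability estimates mentioned above come from comparing how strongly identical (MZ) and fraternal (DZ) twins correlate on a trait. Under simplified classical assumptions, Falconer's formula approximates heritability as twice the difference between the two correlations. A minimal sketch; the correlations below are illustrative values, not measured data, chosen so the estimate lands in the 30-40% range reported for anxiety:

```python
def falconer_heritability(r_mz, r_dz):
    # Falconer's formula: heritability is approximated as twice the
    # difference between monozygotic and dizygotic twin correlations.
    return 2 * (r_mz - r_dz)

def shared_environment(r_mz, r_dz):
    # Shared-environment estimate under the same simplified ACE assumptions.
    return 2 * r_dz - r_mz

# Illustrative (hypothetical) twin correlations for an anxiety measure.
r_mz, r_dz = 0.45, 0.27
h2 = falconer_heritability(r_mz, r_dz)  # ≈ 0.36, i.e. ~36% genetic influence
c2 = shared_environment(r_mz, r_dz)     # ≈ 0.09 shared environment
```

Modern studies fit full ACE models rather than this back-of-the-envelope formula, but the underlying logic, inferring genetic influence from the MZ-DZ gap, is the same.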
Neuroanatomy Neural circuitry involving the amygdala (which regulates emotions like anxiety and fear, stimulating the HPA axis and sympathetic nervous system) and the hippocampus (which, along with the amygdala, is implicated in emotional memory) is thought to underlie anxiety. People who have anxiety tend to show high activity in response to emotional stimuli in the amygdala. Some writers believe that excessive anxiety can lead to an overpotentiation of the limbic system (which includes the amygdala and nucleus accumbens), giving increased future anxiety, but this does not appear to have been proven. Research on adolescents who as infants had been highly apprehensive, vigilant, and fearful finds that their nucleus accumbens is more sensitive than that of other people when deciding to take an action that determines whether they receive a reward. This suggests a link between the circuits responsible for fear and for reward in anxious people. As researchers note, "a sense of 'responsibility', or self-agency, in a context of uncertainty (probabilistic outcomes) drives the neural system underlying appetitive motivation (i.e., nucleus accumbens) more strongly in temperamentally inhibited than noninhibited adolescents". The gut-brain axis The microbes of the gut can communicate with the brain to affect anxiety. There are various pathways along which this communication can take place. One is through the major neurotransmitters. Gut microbes such as Bifidobacterium and Bacillus produce the neurotransmitters GABA and dopamine, respectively. The neurotransmitters signal to the nervous system of the gastrointestinal tract, and those signals are carried to the brain through the vagus nerve or the spinal system. This is demonstrated by the finding that altering the microbiome has anxiety- and depression-reducing effects in mice, but not in animals without intact vagus nerves. Another key pathway is the HPA axis, as mentioned above. 
The microbes can control the levels of cytokines in the body, and altering cytokine levels has direct effects on areas of the brain such as the hypothalamus, the area that triggers HPA axis activity. The HPA axis regulates production of cortisol, a hormone that takes part in the body's stress response. When HPA activity spikes, cortisol levels increase, helping to process and reduce anxiety in stressful situations. These pathways, as well as the specific effects of individual taxa of microbes, are not yet completely clear, but the communication between the gut microbiome and the brain is undeniable, as is the ability of these pathways to alter anxiety levels. With this communication comes the potential to treat anxiety. Prebiotics and probiotics have been shown to reduce anxiety. For example, experiments in which mice were given fructo- and galacto-oligosaccharide prebiotics or Lactobacillus probiotics have demonstrated a capability to reduce anxiety. In humans, results are not as concrete, but they are promising. Genetics Genetics and family history (e.g. parental anxiety) may put an individual at increased risk of an anxiety disorder, but generally external stimuli will trigger its onset or exacerbation. Estimates of genetic influence on anxiety, based on studies of twins, range from 25 to 40% depending on the specific type and age group under study. For example, genetic differences account for about 43% of variance in panic disorder and 28% in generalized anxiety disorder. Longitudinal twin studies have shown that the moderate stability of anxiety from childhood through to adulthood is mainly influenced by stability in genetic influence. When investigating how anxiety is passed on from parents to children, it is important to account for the sharing of genes as well as environments, for example using the intergenerational children-of-twins design. Many studies in the past used a candidate gene approach to test whether single genes were associated with anxiety. 
These investigations were based on hypotheses about how certain known genes influence neurotransmitters (such as serotonin and norepinephrine) and hormones (such as cortisol) that are implicated in anxiety. None of these findings are well replicated, with the possible exceptions of TMEM132D, COMT and MAO-A. The epigenetic signature of BDNF, a gene that codes for a protein called brain-derived neurotrophic factor that is found in the brain, has also been associated with anxiety and specific patterns of neural activity, and a receptor gene for BDNF called NTRK2 was associated with anxiety in a large genome-wide investigation. The reason that most candidate gene findings have not replicated is that anxiety is a complex trait influenced by many genomic variants, each of which has a small effect on its own. Increasingly, studies of anxiety are using a hypothesis-free approach to look for parts of the genome implicated in anxiety, using samples big enough to find associations with variants that have small effects. The largest explorations of the common genetic architecture of anxiety have been facilitated by the UK Biobank, the ANGST consortium and the CRC Fear, Anxiety and Anxiety Disorders. Medical conditions Many medical conditions can cause anxiety. These include conditions that affect the ability to breathe, like COPD and asthma, and the difficulty in breathing that often occurs near death. Conditions that cause abdominal pain or chest pain can cause anxiety and may in some cases be a somatization of anxiety; the same is true for some sexual dysfunctions. Conditions that affect the face or the skin can cause social anxiety, especially among adolescents, and developmental disabilities often lead to social anxiety for children as well. Life-threatening conditions like cancer also cause anxiety. Furthermore, certain organic diseases may present with anxiety or symptoms that mimic anxiety. 
These disorders include certain endocrine diseases (hypo- and hyperthyroidism, hyperprolactinemia), metabolic disorders (diabetes), deficiency states (low levels of vitamin D, B2, B12, folic acid), gastrointestinal diseases (celiac disease, non-celiac gluten sensitivity, inflammatory bowel disease), heart diseases, blood diseases (anemia), cerebral vascular accidents (transient ischemic attack, stroke), and brain degenerative diseases (Parkinson's disease, dementia, multiple sclerosis, Huntington's disease), among others. Substance-induced Several drugs can cause or worsen anxiety, whether in intoxication, in withdrawal, or as a side effect. These include alcohol, tobacco, sedatives (including prescription benzodiazepines), opioids (including prescription painkillers and illicit drugs like heroin), stimulants (such as caffeine, cocaine and amphetamines), hallucinogens, and inhalants. While many people report self-medicating anxiety with these substances, improvements in anxiety from drugs are usually short-lived (with worsening of anxiety in the long term, sometimes with acute anxiety as soon as the drug effects wear off) and tend to be exaggerated. Acute exposure to toxic levels of benzene may cause euphoria, anxiety, and irritability lasting up to two weeks after the exposure. Psychological Poor coping skills (e.g., rigidity/inflexible problem solving, denial, avoidance, impulsivity, extreme self-expectation, negative thoughts, affective instability, and inability to focus on problems) are associated with anxiety. Anxiety is also linked to and perpetuated by a person's own pessimistic outcome expectancy and how they cope with feedback negativity. Temperament (e.g., neuroticism) and attitudes (e.g. pessimism) have been found to be risk factors for anxiety. Cognitive distortions such as overgeneralizing, catastrophizing, mind reading, emotional reasoning, the binocular trick, and mental filter can result in anxiety. 
For example, an overgeneralized belief that something bad "always" happens may lead someone to have excessive fears of even minimally risky situations and to avoid benign social situations due to anticipatory anxiety of embarrassment. In addition, those who have high anxiety can also create future stressful life events. Together, these findings suggest that anxious thoughts can lead to anticipatory anxiety as well as stressful events, which in turn cause more anxiety. Such unhealthy thoughts can be targets for successful treatment with cognitive therapy. Psychodynamic theory posits that anxiety is often the result of opposing unconscious wishes or fears that manifest via maladaptive defense mechanisms (such as suppression, repression, anticipation, regression, somatization, passive aggression, dissociation) that develop to adapt to problems with early objects (e.g., caregivers) and empathic failures in childhood. For example, persistent parental discouragement of anger may result in repression/suppression of angry feelings which manifests as gastrointestinal distress (somatization) when provoked by another while the anger remains unconscious and outside the individual's awareness. Such conflicts can be targets for successful treatment with psychodynamic therapy. While psychodynamic therapy tends to explore the underlying roots of anxiety, cognitive behavioral therapy has also been shown to be a successful treatment for anxiety by altering irrational thoughts
was being told that writing a detective story would be in the worst of taste given the demand for children's books. He concluded that "the only excuse which I have yet discovered for writing anything is that I want to write it; and I should be as proud to be delivered of a Telephone Directory con amore as I should be ashamed to create a Blank Verse Tragedy at the bidding of others." 1926 to 1928 Milne is most famous for his two Pooh books about a boy named Christopher Robin after his son, Christopher Robin Milne (1920–1996), and various characters inspired by his son's stuffed animals, most notably the bear named Winnie-the-Pooh. Christopher Robin Milne's stuffed bear, originally named Edward, was renamed Winnie after a Canadian black bear named Winnie (after Winnipeg), which was used as a military mascot in World War I, and left to London Zoo during the war. "The Pooh" comes from a swan the young Milne named "Pooh". E. H. Shepard illustrated the original Pooh books, using his own son's teddy Growler ("a magnificent bear") as the model. The rest of Christopher Robin Milne's toys, Piglet, Eeyore, Kanga, Roo and Tigger, were incorporated into A. A. Milne's stories, and two more characters – Rabbit and Owl – were created by Milne's imagination. Christopher Robin Milne's own toys are now on display in New York where 750,000 people visit them every year. The fictional Hundred Acre Wood of the Pooh stories derives from Five Hundred Acre Wood in Ashdown Forest in East Sussex, South East England, where the Pooh stories were set. Milne lived on the northern edge of the forest at Cotchford Farm, , and took his son walking there. E. H. Shepard drew on the landscapes of Ashdown Forest as inspiration for many of the illustrations he provided for the Pooh books. The adult Christopher Robin commented: "Pooh's Forest and Ashdown Forest are identical." 
Popular tourist locations at Ashdown Forest include: Galleon's Lap, The Enchanted Place, the Heffalump Trap and Lone Pine, Eeyore’s Sad and Gloomy Place, and the wooden Pooh Bridge where Pooh and Piglet invented Poohsticks. Not yet known as Pooh, he made his first appearance in a poem, "Teddy Bear", published in Punch magazine in February 1924 and republished in When We Were Very Young. Pooh first appeared in the London Evening News on Christmas Eve, 1925, in a story called "The Wrong Sort of Bees". Winnie-the-Pooh was published in 1926, followed by The House at Pooh Corner in 1928. A second collection of nursery rhymes, Now We Are Six, was published in 1927. All four books were illustrated by E. H. Shepard. Milne also published four plays in this period. He also "gallantly stepped forward" to contribute a quarter of the costs of dramatising P. G. Wodehouse's A Damsel in Distress. The World of Pooh won the Lewis Carroll Shelf Award in 1958. 1929 onwards The success of his children's books was to become a source of considerable annoyance to Milne, whose self-avowed aim was to write whatever he pleased and who had, until then, found a ready audience for each change of direction: he had freed pre-war Punch from its ponderous facetiousness; he had made a considerable reputation as a playwright (like his idol J. M. Barrie) on both sides of the Atlantic; he had produced a witty piece of detective writing in The Red House Mystery (although this was severely criticised by Raymond Chandler for the implausibility of its plot in his essay The Simple Art of Murder in the eponymous collection that appeared in 1950). But once Milne had, in his own words, "said goodbye to all that in 70,000 words" (the approximate length of his four principal children's books), he had no intention of producing any reworkings lacking in originality, given that one of the sources of inspiration, his son, was growing older. 
Another reason Milne stopped writing children's books, and especially about Winnie-the-Pooh, was that he felt "amazement and disgust" over the fame his son was exposed to, and said that "I feel that the legal Christopher Robin has already had more publicity than I want for him. I do not want CR Milne to ever wish that his name were Charles Robert." In his literary home, Punch, where the When We Were Very Young verses had first appeared, Methuen continued to publish whatever Milne wrote, including the long poem "The Norman Church" and an assembly of articles entitled Year In, Year Out (which Milne likened to a benefit night for the author). In 1930, Milne adapted Kenneth Grahame's novel The Wind in the Willows for the stage as Toad of Toad Hall. The title was an implicit admission that such chapters as Chapter 7, "The Piper at the Gates of Dawn," could not survive translation to the theatre. A special introduction written by Milne is included in some editions of Grahame's novel. Milne and his wife became estranged from their son, who came to resent what he saw as his father's exploitation of his childhood and came to hate the books that had thrust him into the public eye. Christopher's marriage to his first cousin, Lesley de Sélincourt, distanced him still further from his parents – Lesley's father and Christopher's mother had not spoken to each other for 30 years. Death and legacy Commemoration A. A. Milne died at his home in Hartfield, Sussex, on 31 January 1956, nearly two weeks after his 74th birthday. After a memorial service in London, his ashes were scattered in a crematorium's memorial garden in Brighton. The rights to A. A. Milne's Pooh books were left to four beneficiaries: his family, the Royal Literary Fund, Westminster School and the Garrick Club. 
After Milne's death, his widow sold her rights to the Pooh characters to Stephen Slesinger, whose widow sold the rights after Slesinger's death to the Walt Disney Company, which has made many Pooh cartoon movies, a Disney Channel television show, and Pooh-related merchandise. In 2001, the other beneficiaries sold their interest in the estate to the Disney Corporation for $350m. Previously, Disney had been paying twice-yearly royalties to these beneficiaries. The estate of E. H. Shepard also received a sum in the deal. The UK copyright on the text of the original Winnie the Pooh books expires on 1 January 2027, at the beginning of the year after the 70th anniversary of the author's death (PMA-70); it has already expired in countries with a PMA-50 rule. This applies to all of Milne's works except those first published posthumously. The illustrations in the Pooh books will remain under copyright until the same period has passed after the illustrator's death; in the UK, this will be on 1 January 2047. In the United States, copyright will not expire until 95 years after publication for each of Milne's books first published before 1978, and this includes the illustrations. In 2008, a collection of original illustrations featuring Winnie-the-Pooh and his animal friends sold for more than £1.2 million at auction at Sotheby's, London. Forbes magazine ranked Winnie the Pooh the most valuable fictional character in 2002; Winnie the Pooh merchandising products alone had annual sales of more than $5.9 billion. In 2005, Winnie the Pooh generated $6 billion, a figure surpassed only by Mickey Mouse. A memorial plaque in Ashdown Forest, unveiled by Christopher Robin in 1979, commemorates the work of A. A. Milne and Shepard in creating the world of Pooh. Milne once wrote of Ashdown Forest: "In that enchanted place on the top of the forest a little boy and his bear will always be playing." 
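The copyright terms described above reduce to simple year arithmetic. A minimal sketch, assuming the standard reading of the rules (term runs to the end of the qualifying year, so expiry falls on the following 1 January; Shepard's 1976 death year is inferred from the 1 January 2047 date given above):

```python
def pma_expiry_year(death_year, pma=70):
    # "Post mortem auctoris" term: copyright runs to the end of the
    # calendar year `pma` years after the author's death, expiring on
    # 1 January of the following year.
    return death_year + pma + 1

def us_pre1978_expiry_year(publication_year, term=95):
    # For works first published before 1978, US copyright runs 95 years
    # from publication, likewise expiring at the start of the next year.
    return publication_year + term + 1

print(pma_expiry_year(1956))         # Milne's text in the UK: 2027
print(pma_expiry_year(1976))         # Shepard's illustrations in the UK: 2047
print(us_pre1978_expiry_year(1926))  # Winnie-the-Pooh in the US: 2022
```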
In 2003, Winnie the Pooh was listed at number 7 on the BBC's poll The Big Read which determined the UK's "best-loved novels" of all time. In 2006, Winnie the Pooh received a star on the Hollywood Walk of Fame, marking the 80th birthday of Milne's creation. That same year a UK poll saw Winnie the Pooh voted onto the list of icons of England. Marking the 90th anniversary of Milne's creation of the character, and the 90th birthday of Elizabeth II, in 2016 a new story sees Winnie the Pooh meet the Queen at Buckingham Palace. The illustrated and audio adventure is titled Winnie-the-Pooh Meets the Queen, and has been narrated by actor Jim Broadbent. Also in 2016, a new character, a Penguin, was unveiled in The Best Bear in All the World, which was inspired by a long-lost photograph of Milne and his son Christopher with a toy penguin. Several of Milne's children's poems were set to music by the composer Harold Fraser-Simson. His poems have been parodied many times, including with the books When We Were Rather Older and Now We Are Sixty. The 1963 film The King's Breakfast was based on Milne's poem of the same name. The Pooh books were used as the basis for two academic satires by Frederick C Crews: 'The Pooh Perplex'(1963–4) and 'Postmodern Pooh'(2002). An exhibition entitled "Winnie-the-Pooh: Exploring a Classic" appeared at the V & A from 9 December 2017 to 8 April 2018. An elementary school in Houston, Texas, United States, operated by the Houston Independent School
in East Sussex, South East England, where the Pooh stories were set. Milne lived on the northern edge of the forest at Cotchford Farm and took his son walking there. E. H. Shepard drew on the landscapes of Ashdown Forest as inspiration for many of the illustrations he provided for the Pooh books. The adult Christopher Robin commented: "Pooh's Forest and Ashdown Forest are identical." Popular tourist locations at Ashdown Forest include Galleon's Lap, The Enchanted Place, the Heffalump Trap and Lone Pine, Eeyore's Sad and Gloomy Place, and the wooden Pooh Bridge where Pooh and Piglet invented Poohsticks. Not yet known as Pooh, the bear made his first appearance in a poem, "Teddy Bear", published in Punch magazine in February 1924 and republished in When We Were Very Young. Pooh first appeared in the London Evening News on Christmas Eve, 1925, in a story called "The Wrong Sort of Bees". Winnie-the-Pooh was published in 1926, followed by The House at Pooh Corner in 1928. A second collection of nursery rhymes, Now We Are Six, was published in 1927. All four books were illustrated by E. H. Shepard. Milne also published four plays in this period, and "gallantly stepped forward" to contribute a quarter of the costs of dramatising P. G. Wodehouse's A Damsel in Distress. The World of Pooh won the Lewis Carroll Shelf Award in 1958. 1929 onwards The success of his children's books was to become a source of considerable annoyance to Milne, whose self-avowed aim was to write whatever he pleased and who had, until then, found a ready audience for each change of direction: he had freed pre-war Punch from its ponderous facetiousness; he had made a considerable reputation as a playwright (like his idol J. M.
Barrie) on both sides of the Atlantic; he had produced a witty piece of detective writing in The Red House Mystery (although this was severely criticised by Raymond Chandler for the implausibility of its plot in his essay The Simple Art of Murder in the eponymous collection that appeared in 1950). But once Milne had, in his own words, "said goodbye to all that in 70,000 words" (the approximate length of his four principal children's books), he had no intention of producing any reworkings lacking in originality, given that one of the sources of inspiration, his son, was growing older. Another reason Milne stopped writing children's books, and especially about Winnie-the-Pooh, was that he felt "amazement and disgust" over the fame his son was exposed to, and said that "I feel that the legal Christopher Robin has already had more publicity than I want for him. I do not want CR Milne to ever wish that his name were Charles Robert." In his literary home, Punch, where the When We Were Very Young verses had first appeared, Methuen continued to publish whatever Milne wrote, including the long poem "The Norman Church" and an assembly of articles entitled Year In, Year Out (which Milne likened to a benefit night for the author). In 1930, Milne adapted Kenneth Grahame's novel The Wind in the Willows for the stage as Toad of Toad Hall. The title was an implicit admission that such chapters as Chapter 7, "The Piper at the Gates of Dawn," could not survive translation to the theatre. A special introduction written by Milne is included in some editions of Grahame's novel. Milne and his wife became estranged from their son, who came to resent what he saw as his father's exploitation of his childhood and came to hate the books that had thrust him into the public eye. Christopher's marriage to his first cousin, Lesley de Sélincourt, distanced him still further from his parents – Lesley's father and Christopher's mother had not spoken to each other for 30 years. 
Death and legacy Commemoration A. A. Milne died at his home in Hartfield, Sussex, on 31 January 1956, thirteen days after his 74th birthday. After a memorial service in London, his ashes were scattered in a crematorium's memorial garden in Brighton. The rights to A. A. Milne's Pooh books were left to four beneficiaries: his family, the Royal Literary Fund, Westminster School and the Garrick Club. After Milne's death, his widow sold her rights to the Pooh characters to Stephen Slesinger, whose widow in turn sold the rights after Slesinger's death to the Walt Disney Company, which has made many Pooh cartoon films, a Disney Channel television show, and Pooh-related merchandise. In 2001, the other beneficiaries sold their interest in the estate to Disney for $350 million; previously Disney had been paying twice-yearly royalties to these beneficiaries. The estate of E. H. Shepard also received a sum in the deal. The UK copyright on the text of the original Winnie the Pooh books expires on 1 January 2027, at the beginning of the year after the 70th anniversary of the author's death (PMA-70); it has already expired in those countries with a PMA-50 rule. This applies to all of Milne's works except those first published posthumously. The illustrations in the Pooh books will remain under copyright until the same period has passed after the illustrator's death; in the UK, this will be on 1 January 2047. In the United States, copyright will not expire until 95 years after publication for each of Milne's books first published before 1978, and this term also covers the illustrations. In 2008, a collection of original illustrations featuring Winnie-the-Pooh and his animal friends sold for more than £1.2 million at auction at Sotheby's in London. Forbes magazine ranked Winnie the Pooh the most valuable fictional character in 2002; Winnie the Pooh merchandising products alone had annual sales of more than $5.9 billion.
In 2005, Winnie the Pooh generated $6 billion, a figure surpassed only by Mickey Mouse. A memorial plaque in Ashdown Forest, unveiled by Christopher Robin in 1979, commemorates the work of A. A. Milne and Shepard in creating the world of Pooh. Milne once wrote of Ashdown Forest: "In that enchanted place on the top of the forest a little boy and his bear will always be playing." In 2003, Winnie the Pooh was listed at number 7 on the BBC's poll The Big Read, which determined the UK's "best-loved novels" of all time. In 2006, Winnie the Pooh received a star on the Hollywood Walk of Fame, marking the 80th birthday of Milne's creation. That same year, a UK poll saw Winnie the Pooh voted onto the list of icons of England. In 2016, marking the 90th anniversary of Milne's creation of the character and the 90th birthday of Elizabeth II, a new story saw Winnie the Pooh meet the Queen at Buckingham Palace. The illustrated and audio adventure is titled Winnie-the-Pooh Meets the Queen and is narrated by the actor Jim Broadbent. Also in 2016, a new character, a Penguin, was unveiled in The Best Bear in All the World, inspired by a long-lost photograph of Milne and his son Christopher with a toy penguin. Several of Milne's children's poems were set to music by the composer Harold Fraser-Simson. His poems have been parodied many times, including in the books When We Were Rather Older and Now We Are Sixty. The 1963 film The King's Breakfast was based on Milne's poem of the same name. The Pooh books were used as the basis for two academic satires by Frederick C. Crews: The Pooh Perplex (1963–64) and Postmodern Pooh (2002). An exhibition entitled "Winnie-the-Pooh: Exploring a Classic" appeared at the V&A from 9 December 2017 to 8 April 2018. An elementary school in Houston, Texas, United States, operated by the Houston Independent School District (HISD), is named after Milne. The school, A. A. Milne Elementary School in Brays Oaks, opened in 1991.
Archive The bulk of A. A. Milne's papers are housed at the Harry Ransom Center at the University of Texas at Austin. The collection, established at the center in 1964, consists of manuscript drafts and fragments for over 150 of Milne's works, as well as correspondence, legal documents, genealogical records, and some personal effects. The library division holds several books formerly belonging to Milne
both were established by Buenos Aires English High School students. History Background The first club with the name "Alumni" played association football, having been founded in 1898 by students of Buenos Aires English High School (BAEHS) along with director Alexander Watson Hutton. Originally named "English High School A.C.", the team was later obliged by the Association to change its name, so "Alumni" was chosen, following a proposal by Carlos Bowers, a former student of the school. Alumni was the most successful team during the first years of Argentine football, winning 10 of the 14 league championships contested. Alumni is still considered the first great football team in the country. Alumni was reorganised in 1908, "in order to encourage people to practise all kind of sports, specially football". This was the last attempt to develop the club into a sports institution rather than just a football team, as Lomas, Belgrano and Quilmes had successfully done in the past, but the efforts were not
enough. Alumni played its last game in 1911 and was definitively dissolved on April 24, 1913. Rebirth through rugby In 1951, two guards of the BAEHS, Daniel Ginhson (also a former player of Buenos Aires F.C.) and Guillermo Cubelli, supported by the school's alumni and the students' fathers, decided to establish a club focused exclusively on rugby union. Surviving former players of the Alumni football club, and descendants of players who had since died, gave their permission to use the name "Alumni". On December 13, at a meeting presided over by Carlos Bowers himself (who had proposed the name "Alumni" for the original football team 50 years before), the club was officially established under the name "Asociación Juvenil Alumni", also adopting the same colors as its predecessor. The team achieved good
makes mathematical knowledge more general, capable of multiple different meanings, and therefore useful in multiple contexts. Alessandro Padoa, Mario Pieri, and Giuseppe Peano were pioneers in this movement. Structuralist mathematics goes further, and develops theories and axioms (e.g. field theory, group theory, topology, vector spaces) without any particular application in mind. The distinction between an "axiom" and a "postulate" disappears. The postulates of Euclid are profitably motivated by saying that they lead to a great wealth of geometric facts. The truth of these complicated facts rests on the acceptance of the basic hypotheses. However, by throwing out Euclid's fifth postulate, one can get theories that have meaning in wider contexts (e.g., hyperbolic geometry). As such, one must simply be prepared to use labels such as "line" and "parallel" with greater flexibility. The development of hyperbolic geometry taught mathematicians that it is useful to regard postulates as purely formal statements, and not as facts based on experience. When mathematicians employ the field axioms, the intentions are even more abstract. The propositions of field theory do not concern any one particular application; the mathematician now works in complete abstraction. There are many examples of fields; field theory gives correct knowledge about them all. It is not correct to say that the axioms of field theory are "propositions that are regarded as true without proof." Rather, the field axioms are a set of constraints. If any given system of addition and multiplication satisfies these constraints, then one is in a position to instantly know a great deal of extra information about this system. Modern mathematics formalizes its foundations to such an extent that mathematical theories can be regarded as mathematical objects, and mathematics itself can be regarded as a branch of logic. Frege, Russell, Poincaré, Hilbert, and Gödel are some of the key figures in this development. 
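The view of the field axioms as a set of constraints can be made concrete. The sketch below (illustrative only, not from the article; the function name is_field and the choice of modular arithmetic as the candidate system are this example's assumptions) brute-forces the field axioms over the integers mod n:

```python
from itertools import product

def is_field(n):
    """Brute-force check of the field axioms for arithmetic mod n.

    The axioms act as constraints: any (add, mul) system satisfying
    them inherits every theorem of field theory. Here the candidate
    system is the integers mod n with the usual modular operations.
    """
    if n < 2:  # the identities 0 and 1 must be distinct
        return False
    elems = range(n)
    add = lambda a, b: (a + b) % n
    mul = lambda a, b: (a * b) % n
    # Associativity of both operations, and distributivity.
    for a, b, c in product(elems, repeat=3):
        if add(add(a, b), c) != add(a, add(b, c)):
            return False
        if mul(mul(a, b), c) != mul(a, mul(b, c)):
            return False
        if mul(a, add(b, c)) != add(mul(a, b), mul(a, c)):
            return False
    # Commutativity of both operations.
    for a, b in product(elems, repeat=2):
        if add(a, b) != add(b, a) or mul(a, b) != mul(b, a):
            return False
    # Identities and inverses.
    for a in elems:
        if add(a, 0) != a or mul(a, 1) != a:
            return False
        if not any(add(a, b) == 0 for b in elems):
            return False
        if a != 0 and not any(mul(a, b) == 1 for b in elems):
            return False
    return True

print(is_field(5), is_field(6))  # True False
```

The check passes exactly when n is prime (mod 6, for instance, the element 2 has no multiplicative inverse), illustrating how satisfying the constraints, rather than any proof about a particular system, is what licenses the "extra information" of field theory.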
Another lesson learned in modern mathematics is to examine purported proofs carefully for hidden assumptions. In the modern understanding, a set of axioms is any collection of formally stated assertions from which other formally stated assertions follow by the application of certain well-defined rules. In this view, logic becomes just another formal system. A set of axioms should be consistent; it should be impossible to derive a contradiction from the axioms. A set of axioms should also be non-redundant; an assertion that can be deduced from other axioms need not be regarded as an axiom. It was the early hope of modern logicians that various branches of mathematics, perhaps all of mathematics, could be derived from a consistent collection of basic axioms. An early success of the formalist program was Hilbert's formalization of Euclidean geometry, and the related demonstration of the consistency of those axioms. In a wider context, there was an attempt to base all of mathematics on Cantor's set theory. Here, the emergence of Russell's paradox and similar antinomies of naïve set theory raised the possibility that any such system could turn out to be inconsistent. The formalist project suffered a decisive setback when, in 1931, Gödel showed that it is possible, for any sufficiently large set of axioms (Peano's axioms, for example), to construct a statement whose truth is independent of that set of axioms. As a corollary, Gödel proved that the consistency of a theory like Peano arithmetic is an unprovable assertion within the scope of that theory. It is reasonable to believe in the consistency of Peano arithmetic because it is satisfied by the system of natural numbers, an infinite but intuitively accessible formal system. However, at present, there is no known way of demonstrating the consistency of the modern Zermelo–Fraenkel axioms for set theory.
Furthermore, using techniques of forcing (Cohen) one can show that the continuum hypothesis (Cantor) is independent of the Zermelo–Fraenkel axioms. Thus, even this very general set of axioms cannot be regarded as the definitive foundation for mathematics. Other sciences Experimental sciences, as opposed to mathematics and logic, also have general founding assertions from which a deductive reasoning can be built so as to express propositions that predict properties, either still general or much more specialized to a specific experimental context. For instance, Newton's laws in classical mechanics, Maxwell's equations in classical electromagnetism, Einstein's equation in general relativity, Mendel's laws of genetics, Darwin's law of natural selection, etc. These founding assertions are usually called principles or postulates so as to distinguish them from mathematical axioms. In fact, the role of axioms in mathematics and of postulates in experimental sciences is different. In mathematics one neither "proves" nor "disproves" an axiom. A set of mathematical axioms gives a set of rules that fix a conceptual realm, in which the theorems logically follow. In contrast, in experimental sciences, a set of postulates shall allow deducing results that match or do not match experimental results. If postulates do not allow deducing experimental predictions, they do not set a scientific conceptual framework and have to be completed or made more accurate. If the postulates allow deducing predictions of experimental results, the comparison with experiments allows falsifying the theory that the postulates support. A theory is considered valid as long as it has not been falsified. The transition between mathematical axioms and scientific postulates is always slightly blurred, especially in physics. This is due to the heavy use of mathematical tools to support the physical theories.
For instance, the introduction of Newton's laws rarely presents as a prerequisite either the Euclidean geometry or the differential calculus that they imply. The distinction became more apparent when Albert Einstein first introduced special relativity, where the invariant quantity is no longer the Euclidean length l (defined as l² = x² + y² + z²) but the Minkowski spacetime interval s (defined as s² = c²t² − x² − y² − z²), and then general relativity, where flat Minkowskian geometry is replaced with pseudo-Riemannian geometry on curved manifolds. In quantum physics, two sets of postulates have coexisted for some time, which provide a very nice example of falsification. The 'Copenhagen school' (Niels Bohr, Werner Heisenberg, Max Born) developed an operational approach with a complete mathematical formalism that involves the description of a quantum system by vectors ('states') in a separable Hilbert space, and physical quantities as linear operators that act on this Hilbert space. This approach is fully falsifiable and has so far produced the most accurate predictions in physics. But it has the unsatisfactory aspect of not allowing answers to questions one would naturally ask. For this reason, another 'hidden variables' approach was developed for some time by Albert Einstein, Erwin Schrödinger and David Bohm. It attempted to give a deterministic explanation to phenomena such as entanglement. This approach assumed that the Copenhagen school description was
that are not tautologies in the strict sense. Examples Propositional logic In propositional logic it is common to take as logical axioms all formulae of the following forms, where φ, ψ, and χ can be any formulae of the language and where the included primitive connectives are only "¬" for negation of the immediately following proposition and "→" for implication from antecedent to consequent propositions: 1. φ → (ψ → φ); 2. (φ → (ψ → χ)) → ((φ → ψ) → (φ → χ)); 3. (¬φ → ¬ψ) → (ψ → φ). Each of these patterns is an axiom schema, a rule for generating an infinite number of axioms. For example, if φ, ψ, and χ are propositional variables, then φ → (ψ → φ) and (φ → ¬ψ) → (χ → (φ → ¬ψ)) are both instances of axiom schema 1, and hence are axioms. It can be shown that with only these three axiom schemata and modus ponens, one can prove all tautologies of the propositional calculus. It can also be shown that no pair of these schemata is sufficient for proving all tautologies with modus ponens. Other axiom schemata involving the same or different sets of primitive connectives can be alternatively constructed. These axiom schemata are also used in the predicate calculus, but additional logical axioms are needed to include a quantifier in the calculus. First-order logic Axiom of Equality. Let L be a first-order language. For each variable x, the formula x = x is universally valid. This means that, for any variable symbol x, the formula x = x can be regarded as an axiom. Also, in this example, for this not to fall into vagueness and a never-ending series of "primitive notions", either a precise notion of what we mean by x = x (or, for that matter, "to be equal") has to be well established first, or a purely formal and syntactical usage of the symbol = has to be enforced, only regarding it as a string and only a string of symbols, and mathematical logic does indeed do that. Another, more interesting, example axiom scheme is the one that provides us with what is known as Universal Instantiation: Axiom scheme for Universal Instantiation.
Given a formula φ in a first-order language L, a variable x and a term t that is substitutable for x in φ, the formula ∀x φ → φ(t/x) is universally valid, where the symbol φ(t/x) stands for the formula φ with the term t substituted for x. (See Substitution of variables.) In informal terms, this example allows us to state that, if we know that a certain property P holds for every x and that t stands for a particular object in our structure, then we should be able to claim P(t). Again, we are claiming that the formula ∀x φ → φ(t/x) is valid, that is, we must be able to give a "proof" of this fact, or more properly speaking, a metaproof. These examples are metatheorems of our theory of mathematical logic since we are dealing with the very concept of proof itself. Aside from this, we can also have Existential Generalization: Axiom scheme for Existential Generalization. Given a formula φ in a first-order language L, a variable x and a term t that is substitutable for x in φ, the formula φ(t/x) → ∃x φ is universally valid. Non-logical axioms Non-logical axioms are formulas that play the role of theory-specific assumptions. Reasoning about two different structures, for example, the natural numbers and the integers, may involve the same logical axioms; the non-logical axioms aim to capture what is special about a particular structure (or set of structures, such as groups). Thus non-logical axioms, unlike logical axioms, are not tautologies. Another name for a non-logical axiom is postulate. Almost every modern mathematical theory starts from a given set of non-logical axioms, and it was thought that in principle every theory could be axiomatized in this way and formalized down to the bare language of logical formulas. Non-logical axioms are often simply referred to as axioms in mathematical discourse. This does not mean that it is claimed that they are true in some absolute sense.
For example, in some groups, the group operation is commutative, and this can be asserted with the introduction of an additional axiom, but without this axiom, we can do quite well developing (the more general) group theory, and we can even take its negation as an axiom for the study of non-commutative groups. Thus, an axiom is an elementary basis for a formal logic system that together with the rules of inference defines a deductive system. Examples This section gives examples of mathematical theories that are developed entirely from a set of non-logical axioms (axioms, henceforth). A rigorous treatment of any of these topics begins with a specification of these axioms. Basic theories, such as arithmetic, real analysis and complex analysis, are often introduced non-axiomatically, but implicitly or explicitly there is generally an assumption that the axioms being used are the axioms of Zermelo–Fraenkel set theory with choice, abbreviated ZFC, or some very similar system of axiomatic set theory like Von Neumann–Bernays–Gödel set theory, a conservative extension of ZFC. Sometimes slightly stronger theories, such as Morse–Kelley set theory, or set theory with a strongly inaccessible cardinal allowing the use of a Grothendieck universe, are used, but in fact most mathematicians can actually prove all they need in systems weaker than ZFC, such as second-order arithmetic. The study of topology in mathematics extends from point-set topology through algebraic topology and differential topology to all the related apparatus, such as homology theory and homotopy theory. The development of abstract algebra brought with it group theory, rings, fields, and Galois theory. This list could be expanded to include most fields of mathematics, including measure theory, ergodic theory, probability, representation theory, and differential geometry. Arithmetic The Peano axioms are the most widely used axiomatization of first-order arithmetic.
They are a set of axioms strong enough to prove many important facts about number theory and they allowed Gödel to establish his famous second incompleteness theorem. We have a language L = {0, S} where 0 is a constant symbol and S is a unary function, and the following axioms: 1. ∀x. ¬(Sx = 0); 2. ∀x ∀y. (Sx = Sy → x = y); 3. (φ(0) ∧ ∀x (φ(x) → φ(Sx))) → ∀x φ(x) for any formula φ with one free variable. The standard structure is N = ⟨N, S, 0⟩, where N is the set of natural numbers, S is the successor function and 0 is naturally interpreted as the number 0. Euclidean geometry Probably the oldest, and most famous, list of axioms is the 4 + 1 postulates of Euclid's plane geometry. The axioms are referred to as "4 + 1" because for nearly two millennia the fifth (parallel) postulate ("through a point outside a line there is exactly one parallel") was suspected of being derivable from the first four. Ultimately, the fifth postulate was found to be independent of the first four. One can assume that exactly one parallel through a point outside a line exists, or that infinitely many exist. This choice gives us two alternative forms of geometry in which the interior angles of a triangle add up to exactly 180 degrees or less, respectively, and which are known as Euclidean and hyperbolic geometries. If one also removes the second postulate ("a line can be extended indefinitely") then elliptic geometry arises, where there is no parallel through a point outside a line, and in which the interior angles of a triangle add up to more than 180 degrees. Real analysis The objects of study are within the domain of real numbers. The real numbers are uniquely picked out (up to isomorphism) by the properties of a Dedekind complete ordered field, meaning that any nonempty set of real numbers with an upper bound has a least upper bound. However, expressing these properties as axioms requires the use of second-order logic.
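The Peano-style signature of a constant zero and a unary successor can be given a small executable sketch (illustrative only, not from the article; the class names Zero and Succ are this example's choices, and addition is not part of the minimal language but is defined here by the usual recursion n + 0 = n, n + S(m) = S(n + m) to give the numerals something to compute):

```python
class Zero:
    """The constant symbol 0."""
    def __repr__(self):
        return "0"

class Succ:
    """The unary successor function S."""
    def __init__(self, pred):
        self.pred = pred
    def __repr__(self):
        return f"S({self.pred!r})"

def add(n, m):
    # Recursion equations for addition: n + 0 = n, n + S(m) = S(n + m).
    if isinstance(m, Zero):
        return n
    return Succ(add(n, m.pred))

def to_int(n):
    # Interpret a numeral in the standard structure, the natural numbers.
    k = 0
    while isinstance(n, Succ):
        k += 1
        n = n.pred
    return k

two = Succ(Succ(Zero()))
print(to_int(add(two, two)))  # 4
```

The sketch mirrors the distinction drawn above: the numerals and equations are purely syntactic, while to_int is the interpretation in the standard structure.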
The Löwenheim–Skolem theorems tell us that if we restrict ourselves to first-order logic, any axiom system for the reals admits other models, including both models that are smaller than the reals and models that are larger. Some of the latter are studied in non-standard analysis. Role in mathematical logic Deductive systems and completeness A deductive system consists of a set of logical axioms, a set of non-logical axioms, and a set of rules of inference. A desirable property of a deductive system is that it be complete. A system is said to be complete if, for all formulas φ, whenever φ is a logical consequence of the set of axioms Σ there actually exists a deduction of φ from Σ. This is sometimes expressed as "everything that is true is provable", but it must be understood that "true" here means "made true by the set of axioms", and not, for example, "true in the intended interpretation". Gödel's completeness theorem establishes the completeness of a certain commonly used type of deductive system. Note that "completeness" has a different meaning here than it does in the context of Gödel's first incompleteness theorem, which states that no recursive, consistent set of non-logical axioms of the Theory of Arithmetic is complete, in the sense that there will always exist an arithmetic statement φ such that neither φ nor ¬φ can be proved from the given set of axioms. There is thus, on the one hand, the notion of completeness of a deductive system and on the other hand that of completeness of a set of non-logical axioms. The completeness theorem and the incompleteness theorem, despite their names, do not contradict one another. Further discussion Early mathematicians regarded axiomatic geometry as a model of physical space, and obviously, there could only be one such model.
The idea that alternative mathematical systems might exist was very troubling to mathematicians of the 19th century and the developers of systems such as Boolean algebra made elaborate efforts to derive them from traditional arithmetic. Galois showed just before his untimely death that these efforts were largely wasted. Ultimately, the abstract parallels between algebraic systems were seen to be more important than the details, and modern algebra was born. In the modern view, axioms may be any set of formulas, as long as they are not known to be inconsistent. See also Axiomatic system Dogma First principle, axiom in science and philosophy List of axioms Model theory Regulæ Juris Theorem Presupposition Physical law Principle Notes References Further reading Mendelson, Elliot (1987). Introduction to Mathematical Logic. Belmont, California: Wadsworth & Brooks. External links Metamath axioms page
particles, alpha carbon and strength of electromagnetic interaction (as the fine-structure constant). Alpha also stands for the thermal expansion coefficient of a compound in physical chemistry. It is also commonly used in mathematics in algebraic solutions representing quantities such as angles. Furthermore, in statistics, the letter alpha denotes the significance level, the area underneath the tail of a normal curve, when testing null and alternative hypotheses. In ethology, it is used to name the dominant individual in a group of animals. In aerodynamics, the letter is used as a symbol for the angle of attack of an aircraft and the word "alpha" is used as a synonym for this property. In mathematical logic, α is sometimes used as a placeholder for ordinal numbers. The proportionality operator "∝" (in Unicode: U+221D) is sometimes mistaken for alpha. The uppercase letter alpha is not generally used as a symbol because it tends to be rendered identically to the uppercase Latin A. International Phonetic Alphabet In the International Phonetic Alphabet, the letter ɑ, which looks similar to the lower-case alpha, represents the open back unrounded vowel. History and symbolism Origin The Phoenician alphabet was adopted for Greek in the early 8th century BC, perhaps in Euboea. The majority of the letters of the Phoenician alphabet were adopted into Greek with much the same sounds as they had had in Phoenician, but ʼāleph, the Phoenician letter representing the glottal stop [ʔ], was adopted as representing the vowel [a]; similarly, hē and ʽayin are Phoenician consonants that became Greek vowels, epsilon and omicron, respectively. Plutarch Plutarch, in Moralia, presents a discussion on why the letter alpha stands first in the alphabet.
Ammonius asks Plutarch what he, being a Boeotian, has to say for Cadmus, the Phoenician who reputedly settled in Thebes and introduced the alphabet to Greece, placing alpha first because it is the Phoenician name for ox—which, unlike Hesiod, the Phoenicians considered not the second or third, but the first of all necessities. "Nothing at all," Plutarch replied. He then added that he would rather be assisted by Lamprias, his own grandfather,
long ([aː]) or short ([a]). Where there is ambiguity, long and short alpha are sometimes written with a macron and breve today: Ᾱᾱ, Ᾰᾰ. ὥρα = ὥρᾱ hōrā "a time" γλῶσσα = γλῶσσᾰ glôssa "tongue" In Modern Greek, vowel length has been lost, and all instances of alpha simply represent [a]. In the polytonic orthography of Greek, alpha, like other vowel letters, can occur with several diacritic marks: any of three accent symbols (ά, ὰ, ᾶ), and either of two breathing marks (ἁ, ἀ), as well as combinations of these. It can also combine with the iota subscript (ᾳ). Greek grammar In the Attic–Ionic dialect of Ancient Greek, long alpha fronted to [ɛː] (eta). In Ionic, the shift took place in all positions. In Attic, the shift did not take place after epsilon, iota, and rho (ε, ι, ρ; e, i, r). In Doric and Aeolic, long alpha is preserved in all positions. Doric, Aeolic, Attic chṓrā – Ionic chṓrē, "country" Doric, Aeolic phā́mā – Attic, Ionic phḗmē, "report" Privative a is the Ancient Greek prefix ἀ- or ἀν- a-, an-, added to words to negate them. It originates from the Proto-Indo-European * (syllabic nasal) and is cognate with English un-. Copulative a is the Greek prefix ἁ- or ἀ- ha-, a-. It comes from Proto-Indo-European *.
Arts & Sciences: U.S. House Speaker Newt Gingrich publicly lauded his ideas about the future, and urged members of Congress to read Toffler's book, Creating a New Civilization (1995). Others, such as AOL founder Steve Case, cited Toffler's The Third Wave as a formative influence on his thinking, which inspired him to write The Third Wave: An Entrepreneur's Vision of the Future in 2016. Case said that Toffler was a "real pioneer in helping people, companies and even countries lean into the future." In 1980, Ted Turner founded CNN, which he said was inspired by Toffler's forecasting of the end of the dominance of the three main television networks. Turner's company, Turner Broadcasting, published Toffler's Creating a New Civilization in 1995. Shortly after the book was released, the former Soviet president Mikhail Gorbachev hosted the Global Governance Conference in San Francisco with the theme Toward a New Civilization, which was attended by dozens of world figures, including the Tofflers, George H. W. Bush, Margaret Thatcher, Carl Sagan, Abba Eban and Turner with his then-wife, actress Jane Fonda. Mexican billionaire Carlos Slim was influenced by his works and became a friend of the writer. Global marketer J.D. Power also said he was inspired by Toffler's works. Since the 1960s, people had tried to make sense of the effects of new technologies and social change, a problem that made Toffler's writings influential well beyond scientific, economic, and public-policy circles. His works and ideas have been subject to various criticisms, usually with the same argumentation used against futurology: that foreseeing the future is nigh impossible. Techno music pioneer Juan Atkins cites Toffler's phrase "techno rebels" in The Third Wave as inspiring him to use the word "techno" to describe the musical style he helped to create. Musician Curtis Mayfield released a disco song called "Future Shock," later covered in an electro version by Herbie Hancock.
Science fiction author John Brunner wrote "The Shockwave Rider," drawing on the concept of "future shock." The nightclub Toffler, in Rotterdam, is named after him. In the song "Victoria" by The Exponents, the protagonist's daily routine and cultural interests are described: "She's up in time to watch the soap operas, reads Cosmopolitan and Alvin Toffler". Critical assessment Accenture, the management consultancy firm, identified Toffler in 2002 as being among the most influential voices among business leaders, along with Bill Gates and Peter Drucker. Toffler has also been described in a Financial Times interview as the "world's most famous futurologist". In 2006, the People's Daily classed him among the 50 foreigners who shaped modern China, which one U.S. newspaper notes made him a "guru of sorts to world statesmen." Chinese Premier and General Secretary Zhao Ziyang was greatly influenced by Toffler. He convened conferences to discuss The Third Wave in the early 1980s, and in 1985 the book was the No. 2 best seller in China. Author Mark Satin characterizes Toffler as an important early influence on radical centrist political thought. Newt Gingrich became close to the Tofflers in the 1970s and said The Third Wave had immensely influenced his own thinking and was "one of the great seminal works of our time." Selected awards Toffler received several prestigious prizes and awards, including the McKinsey Foundation Book Award for Contributions to Management Literature and Officier de L'Ordre des Arts et Lettres, as well as appointments, including Fellow of the American Association for the Advancement of Science and the International Institute for Strategic Studies. In 2006, Alvin and Heidi Toffler were recipients of Brown University's Independent Award. Personal life Toffler was married to Heidi Toffler, also a writer and futurist. They lived in the Bel Air section of Los Angeles, California, and previously lived in Redding, Connecticut.
The couple's only child, Karen Toffler (1954–2000), died at age 46 after more than a decade suffering from Guillain–Barré syndrome. Alvin Toffler died in his sleep on June 27, 2016, at his home in Los Angeles. No cause of death was given. He is buried at Westwood Memorial Park.

Bibliography
Alvin Toffler co-wrote his books with his wife Heidi.
The Culture Consumers (1964) St. Martin's Press
The Schoolhouse in the City (1968) Praeger (editors)
Future Shock (1970) Bantam Books
The Futurists (1972) Random House (editors)
Learning for Tomorrow (1974) Random House (editors)
The Eco-Spasm Report (1975) Bantam Books
The Third Wave (1980) Bantam Books
Previews & Premises (1983) William Morrow & Co
The Adaptive Corporation (1985) McGraw-Hill
Powershift: Knowledge, Wealth and Violence at the Edge of the 21st Century (1990) Bantam Books
War and Anti-War (1993) Warner Books
Creating a New Civilization (1995) Turner Pub
Revolutionary Wealth (2006) Knopf

See also
Daniel Bell
Norman Swan
Human nature
John Naisbitt

References

External links
Official Alvin Toffler site
Toffler Associates
Interview with Alvin Toffler by the World Affairs Council
Discuss Alvin Toffler's Future Shock with other readers, BookTalk.org
Alvin Toffler at Find a Grave
Future Shock Forum 2018
Finding aid to the Alvin and Heidi Toffler papers at Columbia University. Rare
for Playboy in 1964 was Ayn Rand." Toffler was hired by IBM to conduct research and write a paper on the social and organizational impact of computers, which led to his contact with the earliest computer "gurus" and artificial intelligence researchers and proponents. Xerox invited him to write about its research laboratory, and AT&T consulted him for strategic advice. This AT&T work led to a study of telecommunications in which he advised the company's top management to break itself up, more than a decade before the government forced AT&T to do so. In the mid-1960s, the Tofflers began five years of research on what would become Future Shock, published in 1970. It has sold over 6 million copies worldwide, according to the New York Times, or over 15 million copies according to the Tofflers' own website. Toffler coined the term "future shock" to refer to what happens to a society when change happens too fast: social confusion and a breakdown of normal decision-making processes. The book has never been out of print and has been translated into dozens of languages. He continued the theme in The Third Wave in 1980. While he describes the first and second waves as the agricultural and industrial revolutions, the "third wave," a phrase he coined, represents the current information, computer-based revolution. He forecast the spread of the Internet and email, interactive media, cable television, cloning, and other digital advancements. He claimed that one of the side effects of the digital age has been "information overload," another term he coined. In 1990, he wrote Powershift, also with the help of his wife, Heidi. In 1996, with American business consultant Tom Johnson, they co-founded Toffler Associates, an advisory firm designed to implement many of the ideas the Tofflers had written about. The firm worked with businesses, NGOs, and governments in the United States, South Korea, Mexico, Brazil, Singapore, Australia, and other countries.
During this period in his career, Toffler lectured worldwide, taught at several schools and met world leaders, such as Mikhail Gorbachev, along with key executives and military officials. Ideas and opinions Toffler stated many of his ideas during an interview with the Australian Broadcasting Corporation in 1998. "Society needs people who take care of the elderly and who know how to be compassionate and honest," he said. "Society needs people who work in hospitals. Society needs all kinds of skills that are not just cognitive; they're emotional, they're affectional. You can't run the society on data and computers alone." His opinions about the future of education, many of which were in Future Shock, have often been quoted. One quotation often misattributed to him, however, is that of psychologist Herbert Gerjuoy: "Tomorrow's illiterate will not be the man who can't read; he will be the man who has not learned how to learn." Early in his career, after traveling to other countries, he became aware of the new and myriad inputs that visitors received from these other cultures. He explained during an interview that some visitors would become "truly disoriented and upset" by the strange environment, which he described as a reaction to culture shock. From that issue, he foresaw another problem for the future, when a culturally "new environment comes to you ... and comes to you rapidly." That kind of sudden cultural change within one's own country, which he felt many would not understand, would lead to a similar reaction, one of "future shock", which he wrote about in his book by that title. In The Third Wave, Toffler describes three types of societies, based on the concept of "waves"—each wave pushes the older societies and cultures aside. He describes the "First Wave" as the society that followed the agrarian revolution and replaced the first hunter-gatherer cultures. The "Second Wave," he labels society during the Industrial Revolution (ca.
late 17th century through the mid-20th century). That period saw the increase of urban industrial populations, which undermined the traditional nuclear family, initiated a factory-like education system, and fostered the growth of the corporation. The "Third Wave" was a term he coined to describe the post-industrial society, which began in the late 1950s. His description of this period dovetails with those of other futurist writers, who also wrote about the Information Age, Space Age, Electronic Era and Global Village, terms which highlighted a scientific-technological revolution. The Tofflers claimed to have predicted a number of geopolitical events, such as the collapse of the Soviet Union, the fall of the Berlin Wall and the future economic growth in the Asia-Pacific region. Influences and popular culture Toffler often visited with dignitaries in Asia, including China's Zhao Ziyang, Singapore's Lee Kuan Yew and South Korea's Kim Dae Jung, all of whom were influenced by his views as Asia's emerging markets increased in global significance during the 1980s and 1990s. Although they had originally censored some of his books and ideas, China's government later cited him, along with Franklin Roosevelt and Bill Gates, as being among the Westerners who had most influenced their country. The Third Wave, along with a video documentary based on it, became best-sellers in China and were widely distributed to schools. The video's success inspired the marketing of videos on related themes in the late 1990s by Infowars, whose name is derived from the term coined by Toffler in the book. Toffler's influence on Asian thinkers was summed up in an article in Daedalus, published by the American Academy of Arts & Sciences.
The exact reasons for the Ditko-Lee split have never been fully explained. Spider-Man successor artist John Romita Sr., in a 2010 deposition, recalled that Lee and Ditko "ended up not being able to work together because they disagreed on almost everything, cultural, social, historically, everything, they disagreed on characters..." In Romita's first issue, No. 39 (Aug. 1966), nemesis the Green Goblin discovers Spider-Man's secret identity and reveals his own to the captive hero. Romita's Spider-Man – more polished and heroic-looking than Ditko's – became the model for two decades. The Lee-Romita era saw the introduction of such characters as Daily Bugle managing editor Robbie Robertson in No. 52 (Sept. 1967) and NYPD Captain George Stacy, father of Parker's girlfriend Gwen Stacy, in No. 56 (Jan. 1968). The most important supporting character to be introduced during the Romita era was Mary Jane Watson, who made her first full appearance in No. 42 (Nov. 1966), although she first appeared in No. 25 (June 1965) with her face obscured and had been mentioned since No. 15 (Aug. 1964). Peter David wrote in 2010 that Romita "made the definitive statement of his arrival by pulling Mary Jane out from behind the oversized potted plant [that blocked the readers' view of her face in issue #25] and placing her on panel in what would instantly become an iconic moment." Romita has stated that in designing Mary Jane, he "used Ann-Margret from the movie Bye Bye Birdie as a guide, using her coloring, the shape of her face, her red hair and her form-fitting short skirts." Lee and Romita toned down the prevalent sense of antagonism in Parker's world by improving Parker's relationship with the supporting characters and having stories focus as much on the social and college lives of the characters as on Spider-Man's adventures.
The stories became more topical, addressing issues such as civil rights, racism, prisoners' rights, the Vietnam War, and political elections. Issue No. 50 (June 1967) introduced the highly enduring criminal mastermind the Kingpin, who would become a major force as well in the superhero series Daredevil. Other notable first appearances in the Lee-Romita era include the Rhino in No. 41 (Oct. 1966), the Shocker in No. 46 (March 1967), the Prowler in No. 78 (Nov. 1969), and the Kingpin's son, Richard Fisk, in No. 83 (April 1970). 1970s Several spin-off series debuted in the 1970s: Marvel Team-Up in 1972, and The Spectacular Spider-Man in 1976. A short-lived series titled Giant-Size Spider-Man began in July 1974 and ran six issues through 1975. Spidey Super Stories, a series aimed at children ages 6–10, ran for 57 issues from October 1974 through 1982. The flagship title's second decade took a grim turn with a story in #89–90 (Oct.–Nov. 1970) featuring the death of Captain George Stacy. This was the first Spider-Man story to be penciled by Gil Kane, who would alternate drawing duties with Romita for the next year and a half and would draw several landmark issues. One such story took place in the controversial issues #96–98 (May–July 1971). Writer-editor Lee defied the Comics Code Authority with this story, in which Parker's friend Harry Osborn was hospitalized after overdosing on pills. Lee wrote this story upon a request from the U.S. Department of Health, Education, and Welfare for a story about the dangers of drugs. Citing its dictum against depicting drug use, even in an anti-drug context, the CCA refused to put its seal on these issues. With the approval of Marvel publisher Martin Goodman, Lee had the comics published without the seal. The comics sold well and Marvel won praise for its socially conscious efforts. The CCA subsequently loosened the Code to permit negative depictions of drugs, among other new freedoms. "The Six Arms Saga" of #100–102 (Sept.–Nov.
1971) introduced Morbius, the Living Vampire. The second installment was the first Amazing Spider-Man story not written by co-creator Lee, with Roy Thomas taking over writing the book for several months before Lee returned to write #105–110 (Feb.–July 1972). Lee, who was going on to become Marvel Comics' publisher, with Thomas becoming editor-in-chief, then turned writing duties over to 19-year-old Gerry Conway, who scripted the series through 1975. Romita penciled Conway's first half-dozen issues, which introduced the gangster Hammerhead in No. 113 (Oct. 1972). Kane then succeeded Romita as penciler, although Romita would continue inking Kane for a time. Issues #121–122 (June–July 1973, by Conway, Kane, and Romita) featured the death of Gwen Stacy at the hands of the Green Goblin in "The Night Gwen Stacy Died". Her demise and the Goblin's apparent death one issue later formed a story arc widely considered the most defining in the history of Spider-Man. The aftermath of the story deepened both the characterization of Mary Jane Watson and her relationship with Parker. In 1973, Gil Kane was succeeded by Ross Andru, whose run lasted from issue No. 125 (October 1973) to No. 185 (October 1978). Issue No. 129 (Feb. 1974) introduced the Punisher, who would become one of Marvel Comics' most popular characters. The Conway-Andru era featured the first appearances of the Man-Wolf in #124–125 (Sept.–Oct. 1973); the near-marriage of Doctor Octopus and Aunt May in No. 131 (April 1974); Harry Osborn stepping into his father's role as the Green Goblin in #135–137 (Aug.–Oct. 1974); and the original "Clone Saga", containing the introduction of Spider-Man's clone, in #147–149 (Aug.–Oct. 1975). Archie Goodwin and Gil Kane produced the title's 150th issue (Nov. 1975) before Len Wein became writer with issue No. 151. During Wein's tenure, Harry Osborn and Liz Allen dated and became engaged; J.
Jonah Jameson was introduced to his eventual second wife, Marla Madison; and Aunt May suffered a heart attack. Wein's last story on Amazing was a five-issue arc in #176–180 (Jan.–May 1978) featuring a third Green Goblin (Harry Osborn's psychiatrist, Bart Hamilton). Marv Wolfman, Marvel's editor-in-chief from 1975 to 1976, succeeded Wein as writer, and in his first issue, No. 182 (July 1978), had Parker propose marriage to Watson, who refused him in the following issue. Keith Pollard succeeded Ross Andru as artist shortly afterward, and with Wolfman introduced the likable rogue the Black Cat (Felicia Hardy) in No. 194 (July 1979). As a love interest for Spider-Man, the Black Cat would go on to be an important supporting character for the better part of the next decade, and remain a friend and occasional lover into the 2010s. 1980s The Amazing Spider-Man No. 200 (Jan. 1980) featured the return and death of the burglar who killed Spider-Man's Uncle Ben. Writer Marv Wolfman and penciler Keith Pollard both left the title by mid-year, succeeded by Dennis O'Neil, a writer known for groundbreaking 1970s work at rival DC Comics, and penciler John Romita Jr. O'Neil wrote two issues of The Amazing Spider-Man Annual, both drawn by Frank Miller. The 1980 Annual featured a team-up with Doctor Strange, while the 1981 Annual showcased a meeting with the Punisher. Roger Stern, who had written nearly 20 issues of sister title The Spectacular Spider-Man, took over Amazing with issue No. 224 (January 1982). During his two years on the title, Stern augmented the backgrounds of long-established Spider-Man villains, and with Romita Jr. created the mysterious supervillain the Hobgoblin in #238–239 (March–April 1983). Fans engaged with the mystery of the Hobgoblin's secret identity, which continued throughout #244–245 and #249–251 (Sept.–Oct. 1983 and Feb.–April 1984).
One lasting change was the reintroduction of Mary Jane Watson as a more serious, mature woman who becomes Peter's confidante after she reveals that she knows his secret identity. Stern also wrote "The Kid Who Collects Spider-Man" in The Amazing Spider-Man No. 248 (January 1984), a story which ranks among his most popular. By mid-1984, Tom DeFalco and Ron Frenz had taken over scripting and penciling. DeFalco helped establish Parker and Watson's mature relationship, laying the foundation for the characters' wedding in 1987. Notably, in No. 257 (Oct. 1984), Watson tells Parker that she knows he is Spider-Man, and in No. 259 (Dec. 1984), she reveals to Parker the extent of her troubled childhood. Other notable issues of the DeFalco-Frenz era include No. 252 (May 1984), with the first appearance of Spider-Man's black costume, which the hero would wear almost exclusively for the next four years' worth of comics; the debut of criminal mastermind the Rose in No. 253 (June 1984); the revelation in No. 258 (Nov. 1984) that the black costume is a living being, a symbiote; and the introduction of the female mercenary Silver Sable in No. 265 (June 1985). Tom DeFalco and Ron Frenz were both removed from The Amazing Spider-Man in 1986 by editor Jim Owsley under acrimonious circumstances. A succession of artists including Alan Kupperberg, John Romita Jr., and Alex Saviuk penciled the series from 1987 to 1988; Owsley wrote the book for the first half of 1987, scripting the five-part "Gang War" story (#284–288) that DeFalco plotted. Former Spectacular Spider-Man writer Peter David scripted No. 289 (June 1987), which revealed Ned Leeds as the Hobgoblin, although Roger Stern retconned this in 1996, establishing that Leeds had not been the original Hobgoblin after all. David Michelinie took over as writer in the next issue, for a story arc in #290–292 (July–Sept. 1987) that led to the marriage of Peter Parker and Mary Jane Watson in Amazing Spider-Man Annual No. 21.
The "Kraven's Last Hunt" storyline by writer J.M. DeMatteis and artists Mike Zeck and Bob McLeod crossed over into The Amazing Spider-Man Nos. 293 and 294. Issue No. 298 (March 1988) was the first Spider-Man comic to be drawn by future industry star Todd McFarlane, the first regular artist on The Amazing Spider-Man since Frenz's departure. McFarlane revolutionized Spider-Man's look. His depiction – "Ditko-esque" poses, large eyes, wiry, contorted limbs, and messy, knotted, convoluted webbing – influenced the way virtually all subsequent artists would draw the character. McFarlane's other significant contribution to the Spider-Man canon was the design for what would become one of Spider-Man's most wildly popular antagonists, the supervillain Venom. Issue No. 299 (April 1988) featured Venom's first appearance (a last-page cameo) before his first full appearance in No. 300 (May 1988). The latter issue featured Spider-Man reverting to his original red-and-blue costume. Other notable issues of the Michelinie-McFarlane era include No. 312 (Feb. 1989), featuring the Green Goblin vs. the Hobgoblin, and #315–317 (May–July 1989), with the return of Venom. In July 2012, Todd McFarlane's original cover art for The Amazing Spider-Man No. 328 sold for $657,250, making it the most expensive piece of American comic book art ever sold at auction. 1990s With a civilian life as a married man, the Spider-Man of the 1990s was different from the superhero of the previous three decades. McFarlane left the title in 1990 to write and draw a new series titled simply Spider-Man. His successor, Erik Larsen, penciled the book from early 1990 to mid-1991. After issue No. 350, Larsen was succeeded by Mark Bagley, who had won the 1986 Marvel Tryout Contest and was assigned a number of low-profile penciling jobs followed by a run on New Warriors in 1990. Bagley penciled the flagship Spider-Man title from 1991 to 1996.
During that time, Bagley's rendition of Spider-Man was used extensively for licensed material and merchandise. Issues #361–363 (April–June 1992) introduced Carnage, a second symbiote nemesis for Spider-Man. The series' 30th-anniversary issue, No. 365 (Aug. 1992), was a double-sized, hologram-cover issue with a cliffhanger ending: Peter Parker's parents, long thought dead, reappear alive. It would be close to two years before they were revealed to be impostors, who are killed in No. 388 (April 1994), scripter Michelinie's last issue. His 1987–1994 stint gave him the second-longest run as writer on the title, behind Stan Lee. Issue No. 375 was released with a gold foil cover; a printing error left some copies missing the majority of the foil. With No. 389, writer J. M. DeMatteis, whose Spider-Man credits included the 1987 "Kraven's Last Hunt" story arc and a 1991–1993 run on The Spectacular Spider-Man, took over the title. From October 1994 to June 1996, Amazing stopped running stories exclusive to it, and instead ran installments of multi-part stories that crossed over into all the Spider-Man books. One of the few self-contained stories during this period was in No. 400 (April 1995), which featured the death of Aunt May – later revealed to have been faked (although the death still stands in the MC2 continuity). The "Clone Saga" culminated with the revelation that the Spider-Man who had appeared in the previous 20 years of comics was a clone of the real Spider-Man. This plot twist was massively unpopular with many readers, and was later reversed in the "Revelations" story arc that crossed over the Spider-Man books in late 1996. The Clone Saga tied into a publishing gap after No. 406 (Oct. 1995), when the title was temporarily replaced by The Amazing Scarlet Spider #1–2 (Nov.–Dec. 1995), featuring Ben Reilly. The series picked up again with No. 407 (Jan. 1996), with Tom DeFalco returning as writer.
Bagley completed his 5½-year run by September 1996. A succession of artists, including Ron Garney, Steve Skroce, Joe Bennett, Rafael Kayanan and John Byrne, penciled the book until the final issue, No. 441 (Nov. 1998), after which Marvel rebooted the title with vol. 2, No. 1 (Jan. 1999). Relaunch and the 2000s Marvel relaunched the comic book series as The Amazing Spider-Man (vol. 2) #1 (Jan. 1999). Howard Mackie wrote the first 29 issues. The relaunch included the Sandman being regressed to his criminal ways and the "death" of Mary Jane, which was ultimately reversed. Other elements included the introduction of a new Spider-Woman (who was spun off into her own short-lived series) and references to John Byrne's miniseries Spider-Man: Chapter One, which was launched at the same time as the reboot. Byrne also penciled issues #1–18 (from 1999 to 2000) and wrote #13–14; John Romita Jr. took his place soon after, in October 2000. Mackie's run ended with The Amazing Spider-Man Annual 2001, which saw the return of Mary Jane, who then left Parker upon reuniting with him. With issue #30 (June 2001), J. Michael Straczynski took over as writer and oversaw additional storylines – most notably his lengthy "Spider-Totem" arc, which raised the question of whether Spider-Man's powers were magic-based rather than the result of a radioactive spider's bite. Additionally, Straczynski resurrected the plot point of Aunt May discovering her nephew was Spider-Man, and returned Mary Jane, with the couple reuniting in The Amazing Spider-Man (vol. 2) #50. Straczynski gave Spider-Man a new profession, having Parker teach at his former high school. Issue #30 began a dual numbering system, with the original series numbering (#471) restored and placed alongside the volume-two number on the cover. Other longtime, rebooted Marvel Comics titles, including Fantastic Four, likewise were given dual numbering around this time. After (vol. 2) #58 (Nov.
2003), the title reverted completely to its original numbering with issue #500 (Dec. 2003). Mike Deodato, Jr. penciled the series from mid-2004 until 2006. That year, Peter Parker revealed his Spider-Man identity on live television in the company-crossover storyline "Civil War", in which the superhero community is split over whether to conform to the federal government's new Superhuman Registration Act. This knowledge was erased from the world in the four-part crossover story arc "One More Day", written partially by J. Michael Straczynski and illustrated by Joe Quesada, running through The Amazing Spider-Man #544–545 (Nov.–Dec. 2007), Friendly Neighborhood Spider-Man No. 24 (Nov. 2007) and The Sensational Spider-Man No. 41 (Dec. 2007), the final issues of those two titles. Here, the demon Mephisto makes a Faustian bargain with Parker and Mary Jane, offering to save Parker's dying Aunt May if the couple will allow their marriage to have never existed, rewriting that portion of their pasts. This story arc marked the end of Straczynski's work on the title. Following this, Marvel made The Amazing Spider-Man the company's sole Spider-Man title, increasing its frequency of publication to three issues monthly, and inaugurating the series with a sequence of "back to basics" story arcs under the banner of "Brand New Day". Parker now exists in a changed world where he and Mary Jane had never married, and Parker has no memory of being married to her, with domino-effect differences in their immediate world. The most notable of these revisions to Spider-Man continuity are the return of Harry Osborn, whose death in The Spectacular Spider-Man No. 200 (May 1993) is erased, and the reestablishment of Spider-Man's secret identity, with no one except Mary Jane able to recall that Parker is Spider-Man (although he soon reveals his secret identity to the New Avengers and the Fantastic Four).
Under the banner of Brand New Day, Marvel tried to use only newly created villains instead of relying on older ones. Characters such as Mister Negative and Overdrive (both in Free Comic Book Day 2007 Spider-Man, July 2007), Menace (No. 549, March 2008), and Ana and Sasha Kravinoff (No. 565, September 2008, and No. 567, October 2008, respectively), among several others, were introduced. The alternating regular writers were initially Dan Slott, Bob Gale, Marc Guggenheim, and Zeb Wells, joined by a rotation of artists that included Steve McNiven, Salvador Larroca, Phil Jimenez, Barry Kitson, Chris Bachalo, Mike McKone, Marcos Martín, and John Romita Jr. Joe Kelly, Mark Waid, Fred Van Lente and Roger Stern later joined the writing team, and Paolo Rivera, Lee Weeks and Marco Checchetto the artist roster. Waid's work on the series included a meeting between Spider-Man and Stephen Colbert in The Amazing Spider-Man No. 573 (Dec. 2008). Issue No. 583 (March 2009) included a back-up story in which Spider-Man meets President Barack Obama. 2010s and temporary end of publication Mark Waid scripted the opening of "The Gauntlet" storyline in issue No. 612 (Jan. 2010). The Gauntlet story concluded with "Grim Hunt" (#634–637), which saw the resurrection of the long-dead Spider-Man villain Kraven the Hunter. The series became a twice-monthly title with Dan Slott as sole writer at issue No. 648 (Jan. 2011), launching the "Big Time" storyline, and eight additional pages were added per issue. Big Time saw major changes in Spider-Man/Peter Parker's life: Peter started working at Horizon Labs and began a relationship with Carlie Cooper (his first serious relationship since his marriage to Mary Jane); Mac Gargan returned as the Scorpion after spending the past few years as Venom; Phil Urich took up the mantle of the Hobgoblin; and J. Jonah Jameson's wife, Marla Jameson, died.
Issues #654 and #654.1 saw the birth of Agent Venom, as Flash Thompson bonded with the Venom symbiote, which would lead to Venom getting his own series, Venom (volume 2). Starting in No. 659, the series built up to the Spider-Island event, which officially started in No. 666 and ended in No. 673. "Ends of the Earth" was the next event, running from No. 682 through No. 687. This publishing format lasted until issue No. 700, which concluded the "Dying Wish" storyline, in which Parker and Doctor Octopus swapped bodies, with the latter taking on the mantle of Spider-Man when Parker apparently died in Doctor Octopus's body. The Amazing Spider-Man ended with this issue, with the story continuing in the new series The Superior Spider-Man. Despite The Superior
order in 1999. In 2003, the series reverted to the numbering order of the first volume. The title has occasionally been published biweekly, and was published three times a month from 2008 to 2010. After DC Comics' relaunch of Action Comics and Detective Comics with new No. 1 issues in 2011, it was the highest-numbered American comic still in circulation until it was cancelled. The title ended its 50-year run as a continuously published comic with the landmark issue No. 700 in December 2012. It was replaced by The Superior Spider-Man as part of the Marvel NOW! relaunch of Marvel's comic lines. Volume 3 of The Amazing Spider-Man was published in April 2014, following the conclusion of The Superior Spider-Man story arc. In late 2015, the series was relaunched with a fourth volume, following the 2015 Secret Wars event. The fifth and current volume began in 2018, as part of Marvel's Fresh Start series of comic relaunches. Publication history Writer-editor Stan Lee and artist and co-plotter Steve Ditko created the character of Spider-Man, and the pair produced 38 issues from March 1963 to July 1966. Ditko left after the 38th issue, while Lee remained as writer until issue 100. Since then, many writers and artists have taken over the monthly comic through the years, chronicling the adventures of Marvel's most identifiable hero. The Amazing Spider-Man was the character's flagship series for his first fifty years in publication, and was the only monthly series to star Spider-Man until Peter Parker, The Spectacular Spider-Man, in 1976, although 1972 saw the debut of Marvel Team-Up, with the vast majority of issues featuring Spider-Man along with a rotating cast of other Marvel characters. Most of the major characters and villains of the Spider-Man saga have been introduced in Amazing, and with few exceptions, it is where most key events in the character's history have occurred. The title was published continuously until No. 441 (Nov.
1998) when Marvel Comics relaunched it as vol. 2 No. 1 (Jan. 1999), but on Spider-Man's 40th anniversary, this new title reverted to using the numbering of the original series, beginning again with issue No. 500 (Dec. 2003) and lasting until the final issue, No. 700 (Feb. 2013). 1960s Due to strong sales on the character's first appearance in Amazing Fantasy No. 15, Spider-Man was given his own ongoing series in March 1963. The initial years of the series, under Lee and Ditko, chronicled Spider-Man's nascent career as a masked superhuman vigilante alongside his civilian life as the hard-luck yet perpetually good-humored and well-meaning teenager Peter Parker. Peter balanced his career as Spider-Man with his job as a freelance photographer for The Daily Bugle under the bombastic editor-publisher J. Jonah Jameson to support himself and his frail Aunt May. At the same time, Peter dealt with public hostility towards Spider-Man and the antagonism of his classmates Flash Thompson and Liz Allan at Midtown High School, while embarking on a tentative, ill-fated romance with Jameson's secretary, Betty Brant. By focusing on Parker's everyday problems, Lee and Ditko created a groundbreakingly flawed, self-doubting superhero, and the first major teenaged superhero to be a protagonist and not a sidekick. Ditko's quirky art provided a stark contrast to the more cleanly dynamic stylings of Marvel's most prominent artist, Jack Kirby, and combined with the humor and pathos of Lee's writing to lay the foundation for what became an enduring mythos. Most of Spider-Man's key villains and supporting characters were introduced during this time. Issue No. 1 (March 1963) featured the first appearances of J. Jonah Jameson and his astronaut son John Jameson, and the supervillain the Chameleon. It included the hero's first encounter with the superhero team the Fantastic Four. Issue No.
2 (May 1963) featured the first appearance of the Vulture and the Tinkerer as well as the beginning of Parker's freelance photography career at the newspaper The Daily Bugle. The Lee-Ditko era continued to usher in a significant number of villains and supporting characters, including Doctor Octopus in No. 3 (July 1963); the Sandman and Betty Brant in No. 4 (Sept. 1963); the Lizard in No. 6 (Nov. 1963); the Living Brain in No. 8 (Jan. 1964); Electro in No. 9 (March 1964); Mysterio in No. 13 (June 1964); the Green Goblin in No. 14 (July 1964); Kraven the Hunter in No. 15 (Aug. 1964); reporter Ned Leeds in No. 18 (Nov. 1964); and the Scorpion in No. 20 (Jan. 1965). The Molten Man was introduced in No. 28 (Sept. 1965), which also featured Parker's graduation from high school. Peter began attending Empire State University in No. 31 (Dec. 1965), the issue which featured the first appearances of friends and classmates Gwen Stacy and Harry Osborn. Harry's father, Norman Osborn, first appeared in No. 23 (April 1965) as a member of Jameson's country club but was neither named nor revealed as Harry's father until No. 37 (June 1966). One of the most celebrated issues of the Lee-Ditko run is No. 33 (Feb. 1966), the third part of the story arc "If This Be My Destiny...!", which features the dramatic scene of Spider-Man, through force of will and thoughts of family, escaping from being pinned by heavy machinery. Comics historian Les Daniels noted that "Steve Ditko squeezes every ounce of anguish out of Spider-Man's predicament, complete with visions of the uncle he failed and the aunt he has sworn to save." Peter David observed that "After his origin, this two-page sequence from Amazing Spider-Man No. 33 is perhaps the best-loved sequence from the Stan Lee/Steve Ditko era." Steve Saffel stated that the "full page Ditko image from The Amazing Spider-Man No. 33 is one of the most powerful ever to appear in the series and influenced writers and artists for many years to come", and Matthew K.
Manning wrote that "Ditko's illustrations for the first few pages of this Lee story included what would become one of the most iconic scenes in Spider-Man's history." The story was chosen as No. 15 in the 100 Greatest Marvels of All Time poll of Marvel's readers in 2001. Editor Robert Greenberger wrote in his introduction to the story that "These first five pages are a modern-day equivalent to Shakespeare as Parker's soliloquy sets the stage for his next action. And with dramatic pacing and storytelling, Ditko delivers one of the great sequences in all comics." Although credited only as artist for most of his run, Ditko would eventually plot the stories as well as draw them, leaving Lee to script the dialogue. A rift between Ditko and Lee developed, and the two men were not on speaking terms long before Ditko completed his last issue, The Amazing Spider-Man No. 38 (July 1966). The exact reasons for the Ditko-Lee split have never been fully explained. Spider-Man successor artist John Romita Sr., in a 2010 deposition, recalled that Lee and Ditko "ended up not being able to work together because they disagreed on almost everything, cultural, social, historically, everything, they disagreed on characters..." In Romita Sr.'s first issue as penciler, No. 39 (Aug. 1966), nemesis the Green Goblin discovers Spider-Man's secret identity and reveals his own to the captive hero. Romita's Spider-Man – more polished and heroic-looking than Ditko's – became the model for two decades. The Lee-Romita era saw the introduction of such characters as Daily Bugle managing editor Robbie Robertson in No. 52 (Sept. 1967) and NYPD Captain George Stacy, father of Parker's girlfriend Gwen Stacy, in No. 56 (Jan. 1968). The most important supporting character to be introduced during the Romita era was Mary Jane Watson, who made her first full appearance in No. 42 (Nov. 1966), although she first appeared in No. 25 (June 1965) with her face obscured and had been mentioned since No.
15 (Aug. 1964). Peter David wrote in 2010 that Romita "made the definitive statement of his arrival by pulling Mary Jane out from behind the oversized potted plant [that blocked the readers' view of her face in issue #25] and placing her on panel in what would instantly become an iconic moment." Romita has stated that in designing Mary Jane, he "used Ann-Margret from the movie Bye Bye Birdie as a guide, using her coloring, the shape of her face, her red hair and her form-fitting short skirts." Lee and Romita toned down the prevalent sense of antagonism in Parker's world by improving Parker's relationship with the supporting characters and having stories focused as much on the social and college lives of the characters as they did on Spider-Man's adventures. The stories became more topical, addressing issues such as civil rights, racism, prisoners' rights, the Vietnam War, and political elections. Issue No. 50 (June 1967) introduced the highly enduring criminal mastermind the Kingpin, who would become a major force as well in the superhero series Daredevil. Other notable first appearances in the Lee-Romita era include the Rhino in No. 41 (Oct. 1966), the Shocker in No. 46 (March 1967), the Prowler in No. 78 (Nov. 1969), and the Kingpin's son, Richard Fisk, in No. 83 (April 1970). 1970s Several spin-off series debuted in the 1970s: Marvel Team-Up in 1972, and The Spectacular Spider-Man in 1976. A short-lived series titled Giant-Size Spider-Man began in July 1974 and ran six issues through 1975. Spidey Super Stories, a series aimed at children ages 6–10, ran for 57 issues from October 1974 through 1982. The flagship title's second decade took a grim turn with a story in #89-90 (Oct.-Nov. 1970) featuring the death of Captain George Stacy. This was the first Spider-Man story to be penciled by Gil Kane, who would alternate drawing duties with Romita for the next year-and-a-half and would draw several landmark issues. 
One such story took place in the controversial issues #96–98 (May–July 1971). Writer-editor Lee defied the Comics Code Authority with this story, in which Parker's friend Harry Osborn was hospitalized after overdosing on pills. Lee wrote this story upon a request from the U.S. Department of Health, Education, and Welfare for a story about the dangers of drugs. Citing its dictum against depicting drug use, even in an anti-drug context, the CCA refused to put its seal on these issues. With the approval of Marvel publisher Martin Goodman, Lee had the comics published without the seal. The comics sold well and Marvel won praise for its socially conscious efforts. The CCA subsequently loosened the Code to permit negative depictions of drugs, among other new freedoms. "The Six Arms Saga" of #100–102 (Sept.–Nov. 1971) introduced Morbius, the Living Vampire. The second installment was the first Amazing Spider-Man story not written by co-creator Lee, with Roy Thomas taking over writing the book for several months before Lee returned to write #105–110 (Feb.-July 1972). Lee, who was going on to become Marvel Comics' publisher, with Thomas becoming editor-in-chief, then turned writing duties over to 19-year-old Gerry Conway, who scripted the series through 1975. Romita penciled Conway's first half-dozen issues, which introduced the gangster Hammerhead in No. 113 (Oct. 1972). Kane then succeeded Romita as penciler, although Romita would continue inking Kane for a time. Issues #121–122 (June–July 1973, by Conway, Kane, and Romita) featured the death of Gwen Stacy at the hands of the Green Goblin in "The Night Gwen Stacy Died" (No. 121). Her demise and the Goblin's apparent death one issue later formed a story arc widely considered the most defining in the history of Spider-Man. The aftermath of the story deepened both the characterization of Mary Jane Watson and her relationship with Parker.
In 1973, Gil Kane was succeeded by Ross Andru, whose run lasted from issue No. 125 (October 1973) to No. 185 (October 1978). Issue No. 129 (Feb. 1974) introduced the Punisher, who would become one of Marvel Comics' most popular characters. The Conway-Andru era featured the first appearances of the Man-Wolf in #124–125 (Sept.-Oct. 1973); the near-marriage of Doctor Octopus and Aunt May in No. 131 (April 1974); Harry Osborn stepping into his father's role as the Green Goblin in #135–137 (Aug.–Oct. 1974); and the original "Clone Saga", containing the introduction of Spider-Man's clone, in #147–149 (Aug.-Oct. 1975). Archie Goodwin and Gil Kane produced the title's 150th issue (Nov. 1975) before Len Wein became writer with issue No. 151. During Wein's tenure, Harry Osborn and Liz Allan dated and became engaged; J. Jonah Jameson was introduced to his eventual second wife, Marla Madison; and Aunt May suffered a heart attack. Wein's last story on Amazing was a five-issue arc in #176–180 (Jan.-May 1978) featuring a third Green Goblin (Harry Osborn's psychiatrist, Bart Hamilton). Marv Wolfman, Marvel's editor-in-chief from 1975 to 1976, succeeded Wein as writer, and in his first issue, No. 182 (July 1978), had Parker propose marriage to Watson, who refused him in the following issue. Keith Pollard succeeded Ross Andru as artist shortly afterward, and with Wolfman introduced the likable rogue the Black Cat (Felicia Hardy) in No. 194 (July 1979). As a love interest for Spider-Man, the Black Cat would go on to be an important supporting character for the better part of the next decade, and remain a friend and occasional lover into the 2010s. 1980s The Amazing Spider-Man No. 200 (Jan. 1980) featured the return and death of the burglar who killed Spider-Man's Uncle Ben. Writer Marv Wolfman and penciler Keith Pollard both left the title by mid-year, succeeded by Dennis O'Neil, a writer known for groundbreaking 1970s work at rival DC Comics, and penciler John Romita Jr.
O'Neil wrote two issues of The Amazing Spider-Man Annual which were both drawn by Frank Miller. The 1980 Annual featured a team-up with Doctor Strange while the 1981 Annual showcased a meeting with the Punisher. Roger Stern, who had written nearly 20 issues of sister title The Spectacular Spider-Man, took over Amazing with issue No. 224 (January 1982). During his two years on the title, Stern augmented the backgrounds of long-established Spider-Man villains, and with Romita Jr. created the mysterious supervillain the Hobgoblin in #238–239 (March–April 1983). Fans engaged with the mystery of the Hobgoblin's secret identity, which continued throughout #244–245 and 249–251 (Sept.-Oct. 1983 and Feb.-April 1984). One lasting change was the reintroduction of Mary Jane Watson as a more serious, mature woman who becomes Peter's confidante after she reveals that she knows his secret identity. Stern also wrote "The Kid Who Collects Spider-Man" in The Amazing Spider-Man No. 248 (January 1984), a story which ranks among his most popular. By mid-1984, Tom DeFalco and Ron Frenz took over scripting and penciling. DeFalco helped establish Parker and Watson's mature relationship, laying the foundation for the characters' wedding in 1987. Notably, in No. 257 (Oct. 1984), Watson tells Parker that she knows he is Spider-Man, and in No. 259 (Dec. 1984), she reveals to Parker the extent of her troubled childhood. Other notable issues of the DeFalco-Frenz era include No. 252 (May 1984), with the first appearance of Spider-Man's black costume, which the hero would wear almost exclusively for the next four years' worth of comics; the debut of criminal mastermind the Rose, in No. 253 (June 1984); the revelation in No. 258 (Nov. 1984) that the black costume is a living being, a symbiote; and the introduction of the female mercenary Silver Sable in No. 265 (June 1985). 
Tom DeFalco and Ron Frenz were both removed from The Amazing Spider-Man in 1986 by editor Jim Owsley under acrimonious circumstances. A succession of artists including Alan Kupperberg, John Romita Jr., and Alex Saviuk penciled the series from 1987 to 1988; Owsley wrote the book for the first half of 1987, scripting the five-part "Gang War" story (#284–288) that DeFalco plotted. Former Spectacular Spider-Man writer Peter David scripted No. 289 (June 1987), which revealed Ned Leeds as the Hobgoblin, although Roger Stern retconned this in 1996, establishing that Leeds had not been the original Hobgoblin after all. David Michelinie took over as writer in the next issue, for a story arc in #290–292 (July–Sept. 1987) that led to the marriage of Peter Parker and Mary Jane Watson in Amazing Spider-Man Annual No. 21. The "Kraven's Last Hunt" storyline by writer J.M. DeMatteis and artists Mike Zeck and Bob McLeod crossed over into The Amazing Spider-Man No. 293 and 294. Issue No. 298 (March 1988) was the first Spider-Man comic to be drawn by future industry star Todd McFarlane, the first regular artist on The Amazing Spider-Man since Frenz's departure. McFarlane revolutionized Spider-Man's look. His depiction – "Ditko-esque" poses, large-eyed, with wiry, contorted limbs, and messy, knotted, convoluted webbing – influenced the way virtually all subsequent artists would draw the character. McFarlane's other significant contribution to the Spider-Man canon was the design for what would become one of Spider-Man's most wildly popular antagonists, the supervillain Venom. Issue No. 299 (April 1988) featured Venom's first appearance (a last-page cameo) before his first full appearance in No. 300 (May 1988). The latter issue featured Spider-Man reverting to his original red-and-blue costume. Other notable issues of the Michelinie-McFarlane era include No. 312 (Feb. 1989), featuring the Green Goblin vs. the Hobgoblin; and #315–317 (May–July 1989), with the return of Venom.
In July 2012, Todd McFarlane's original cover art for The Amazing Spider-Man No. 328 sold for a bid of $657,250, making it the most expensive American comic book art ever sold at auction. 1990s With a civilian life as a married man, the Spider-Man of the 1990s was different from the superhero of the previous three decades. McFarlane left the title in 1990 to write and draw a new series titled simply Spider-Man. His successor, Erik Larsen, penciled the book from early 1990 to mid-1991. After issue No. 350, Larsen was succeeded by Mark Bagley, who had won the 1986 Marvel Tryout Contest and was assigned a number of low-profile penciling jobs followed by a run on New Warriors in 1990. Bagley penciled the flagship Spider-Man title from 1991 to 1996. During that time, Bagley's rendition of Spider-Man was used extensively for licensed material and merchandise. Issues #361–363 (April–June 1992) introduced Carnage, a second symbiote nemesis for Spider-Man. The series' 30th-anniversary issue, No. 365 (Aug. 1992), was a double-sized, hologram-cover issue with the cliffhanger ending of Peter Parker's
Automake, software
Agile modeling, a software engineering methodology for modeling and documenting software systems
Amplitude modulation, an electronic communication technique
Additive manufacturing, a process of making a three-dimensional solid object of virtually any shape from a digital model
AM broadcasting, radio broadcasting using amplitude modulation
Automated Mathematician, an artificial intelligence program
Timekeeping
ante meridiem, Latin for "before midday"
Anno Mundi, a calendar era based on the biblical creation of the world
Anno Martyrum, a method of numbering years in the Coptic calendar
Transportation
A.M. (automobile), a 1906 French car
Aeroméxico (IATA airline code AM)
Arkansas and Missouri Railroad
All-mountain, a discipline of mountain biking
Military
AM, the United States Navy hull classification symbol for "minesweeper"
Air marshal, a senior air officer rank used in Commonwealth countries
Anti-materiel rifle
Aviation Structural Mechanic, a U.S. Navy occupational rating
Other uses
Am (cuneiform), a
was evacuated to Antigua. Amidst the subsequent rebuilding efforts on Barbuda, estimated to cost at least $100 million, the government announced plans to revoke a century-old law of communal land ownership by allowing residents to buy land, a move that has been criticised as promoting "disaster capitalism". Geography Antigua and Barbuda are both generally low-lying islands whose terrain has been influenced more by limestone formations than volcanic activity. The highest point on Antigua and Barbuda is Boggy Peak, located in southwestern Antigua, which is the remnant of a volcanic crater rising . The shorelines of both islands are greatly indented with beaches, lagoons, and natural harbours. The islands are rimmed by reefs and shoals. There are few streams, as rainfall is slight. Both islands lack adequate amounts of fresh groundwater. About south-west of Antigua lies the small, rocky island of Redonda, which is uninhabited. Cities and villages The most populous cities in Antigua and Barbuda are mostly on Antigua: Saint John's, All Saints, Piggotts, and Liberta. The most populous city on Barbuda is Codrington. An estimated 25% of the population lives in an urban area, which is much lower than the international average of 55%. Islands Antigua and Barbuda consists mostly of its two namesake islands, Antigua and Barbuda. Its other largest islands are Guiana Island and Long Island off the coast of Antigua, and Redonda, which lies far from both of the main islands. Climate Rainfall averages per year, with the amount varying widely from season to season. In general the wettest period is between September and November. The islands generally experience low humidity and recurrent droughts. Temperatures average , ranging from to in the winter and from to in the summer and autumn. The coolest period is between December and February.
Hurricanes strike on an average of once a year, including the powerful Category 5 Hurricane Irma, on 6 September 2017, which damaged 95% of the structures on Barbuda. Some 1,800 people were evacuated to Antigua. An estimate published by Time indicated that over $100 million would be required to rebuild homes and infrastructure. Philmore Mullin, Director of Barbuda's National Office of Disaster Services, said that "all critical infrastructure and utilities are non-existent – food supply, medicine, shelter, electricity, water, communications, waste management". He summarised the situation as follows: "Public utilities need to be rebuilt in their entirety... It is optimistic to think anything can be rebuilt in six months ... In my 25 years in disaster management, I have never seen something like this." Environmental issues Demographics Ethnic groups Antigua has a population of , mostly made up of people of West African, British, and Madeiran descent. The ethnic distribution consists of 91% Black, 4.4% mixed race, 1.7% White, and 2.9% other (primarily East Indian). Most Whites are of British descent. Christian Levantine Arabs and a small number of East Asians and Sephardic Jews make up the remainder of the population. An increasingly large percentage of the population lives abroad, most notably in the United Kingdom (Antiguan Britons), the United States and Canada. A minority of Antiguan residents are immigrants from other countries, particularly from Dominica, Guyana and Jamaica, and, increasingly, from the Dominican Republic, St. Vincent and the Grenadines and Nigeria. An estimated 4,500 American citizens also make their home in Antigua and Barbuda, making them one of the largest American populations in the English-speaking Eastern Caribbean. Languages English is the official language. The Barbudan accent is slightly different from the Antiguan.
In the years before Antigua and Barbuda's independence, Standard English was widely spoken in preference to Antiguan Creole. Generally, the upper and middle classes shun Antiguan Creole. The educational system dissuades the use of Antiguan Creole and instruction is done in Standard (British) English. Many of the words used in the Antiguan dialect are derived from British as well as African languages. This can be easily seen in phrases such as: "Ent it?" meaning "Ain't it?" which is itself dialectal and means "Isn't it?". Common island proverbs can often be traced to Africa. Spanish is spoken by around 10,000 inhabitants. Religion A majority (77%) of Antiguans are Christians, with the Anglicans (17.6%) being the largest single denomination. Other Christian denominations present are Seventh-day Adventist Church (12.4%), Pentecostalism (12.2%), Moravian Church (8.3%), Roman Catholics (8.2%), Methodist Church (5.6%), Wesleyan Holiness Church (4.5%), Church of God (4.1%), Baptists (3.6%), Mormonism (<1.0%), as well as Jehovah's Witnesses. Non-Christian religions practiced in the islands include the Rastafari, Islam, and Baháʼí Faith. Governance Political system The politics of Antigua and Barbuda take place within a framework of a unitary, parliamentary, representative democratic monarchy, in which the head of State is the monarch who appoints the Governor-General as vice-regal representative. Elizabeth II is the present Queen of Antigua and Barbuda, having served in that position since the islands' independence from the United Kingdom in 1981. The Queen is currently represented by Governor-General Sir Rodney Williams. A council of ministers is appointed by the governor-general on the advice of the prime minister, currently Gaston Browne (2014–). The prime minister is the head of government. Executive power is exercised by the government while legislative power is vested in both the government and the two Chambers of Parliament. 
The bicameral Parliament consists of the Senate (17 members appointed by members of the government and the opposition party, and approved by the Governor-General) and the House of Representatives (17 members elected by first past the post) to serve five-year terms. The current Leader of Her Majesty's Loyal Opposition is the United Progressive Party Member of Parliament (MP), the Honourable Baldwin Spencer. Elections The last election was held on 21 March 2018. The Antigua Barbuda Labour Party (ABLP), led by Prime Minister Gaston Browne, won 15 of the 17 seats in the House of Representatives. The previous election was on 12 June 2014, during which the Antigua Labour Party won 14 seats, and the United Progressive Party 3 seats. Since 1951, elections have been won by the populist Antigua Labour Party. However, the 2004 Antigua and Barbuda legislative election saw the defeat of the longest-serving elected government in the Caribbean. Vere Bird was Prime Minister from 1981 to 1994 and Chief Minister of Antigua from 1960 to 1981, except for the 1971–1976 period when the Progressive Labour Movement (PLM) defeated his party. Bird, the nation's first Prime Minister, is credited with having brought Antigua and Barbuda and the Caribbean into a new era of independence. Prime Minister Lester Bryant Bird succeeded the elder Bird in 1994. Party elections Gaston Browne defeated his predecessor Lester Bryant Bird at the Antigua Labour Party's biennial convention in November 2012, held to elect a political leader and other officers. The party then altered its name from the Antigua Labour Party (ALP) to the Antigua and Barbuda Labour Party (ABLP). This was done to officially include the party's presence on the sister island of Barbuda in its organisation, the only political party on the mainland to have a physical branch in Barbuda.
Judiciary The Judicial branch is the Eastern Caribbean Supreme Court (based in Saint Lucia; one judge of the Supreme Court is a resident of the islands and presides over the High Court of Justice). Antigua is also a member of the Caribbean Court of Justice. The Judicial Committee of the Privy Council serves as its Supreme Court of Appeal. Foreign relations Antigua and Barbuda is a member of the United Nations, the Bolivarian Alliance for the Americas, the Commonwealth of Nations, the Caribbean Community, the Organization of Eastern Caribbean States, the Organization of American States, the World Trade Organization and the Eastern Caribbean's Regional Security System. Antigua and Barbuda is also a member of the International Criminal Court (with a Bilateral Immunity Agreement of Protection for the US military as covered under Article 98 of the Rome Statute). In 2013, Antigua and Barbuda called for reparations for slavery at the United Nations.
Prime Minister Baldwin Spencer said "We have recently seen a number of leaders apologising", and that they should now "match their words with concrete and material benefits." Military The Royal Antigua and Barbuda Defence Force has around 260 members dispersed between the line infantry regiment, service and support unit, and coast guard. There is also the Antigua and Barbuda Cadet Corps, made up of 200 teenagers between the ages of 12 and 18. In 2018, Antigua and Barbuda signed the UN Treaty on the Prohibition of Nuclear Weapons. Administrative divisions Antigua and Barbuda is divided into six parishes and two dependencies: Note: Though Barbuda and Redonda are called dependencies, they are integral parts of the state, making them essentially administrative divisions; dependency is simply a title. Human rights Antigua and Barbuda prohibits discrimination in employment, child labour, and human trafficking, and there are laws against domestic abuse and child abuse. As on other Caribbean islands, same-sex sexual activity is illegal in Antigua and Barbuda and punishable by prison time, although the law has not been enforced and no case has been brought to trial in many years. There are several current movements under way to repeal the buggery laws. Economy Tourism dominates the economy, accounting for more than half of the gross domestic product (GDP). Antigua is famous for its many luxury resorts as an ultra-high-end travel destination. Weakened tourist activity in the lower and middle market segments since early 2000 has slowed the economy, however, and squeezed the government into a tight fiscal corner. Antigua and Barbuda has enacted policies to attract high-net-worth citizens and residents, such as a 0% personal income tax rate enacted in 2019. Investment banking and financial services also make up an important part of the economy. Major world banks with offices in Antigua include the Royal Bank of Canada (RBC) and Scotiabank.
Financial-services corporations with offices in Antigua include PriceWaterhouseCoopers. The US Securities and Exchange Commission has accused the Antigua-based Stanford International Bank, owned by Texas billionaire Allen Stanford, of orchestrating a huge fraud which may have bilked investors of some $8 billion. The twin-island nation's agricultural production is focused on its domestic market and constrained by a limited water supply and a labour shortage stemming from the lure of higher wages in tourism and construction work. Manufacturing is made up of enclave-type assembly for export, the major products being bedding, handicrafts and electronic components. Prospects for economic growth in the medium term will continue to depend on income growth in the industrialised world, especially in the United States, from which about one-third of all tourists come. Access to biocapacity is lower than the world average. In 2016, Antigua and Barbuda had 0.8 global hectares of biocapacity per person within its territory, much less than the world average of 1.6 global hectares per person. In 2016, Antigua and Barbuda used 4.3 global hectares of biocapacity per person – its ecological footprint of consumption. This means it uses more biocapacity than its territory contains. As a result, Antigua and Barbuda is running a biocapacity deficit. Following the opening of the American University of Antigua College of Medicine by investor and attorney Neil Simon in 2003, a new source of revenue was established. The university employs many local Antiguans, and its approximately 1,000 students consume a large amount of goods and services. Antigua and Barbuda also uses an economic citizenship program to spur investment into the country. Transport Education Culture The culture is predominantly a mixture of West African and British cultural influences. Cricket is the national sport. Other popular sports include football, boat racing and surfing.
(Antigua Sailing Week attracts locals and visitors from all over the world). Music Festivals The national Carnival, held each August, commemorates the abolition of slavery in the British West Indies, although on some islands Carnival may celebrate the coming of Lent. Its festive pageants, shows, contests and other activities are a major tourist attraction. Cuisine Media There are three newspapers: the Antigua Daily Observer, Antigua News Room and The Antiguan Times. The Antigua Observer is the only daily printed newspaper. The local television channel ABS TV 10 is available (it is the only station that shows exclusively local programs). There are also several local and regional radio stations, such as V2C-AM 620, ZDK-AM 1100, VYBZ-FM 92.9, ZDK-FM 97.1, Observer Radio 91.1 FM, DNECA Radio 90.1 FM, Second Advent Radio 101.5 FM, Abundant Life Radio 103.9 FM, Crusader Radio 107.3 FM, and Nice FM 104.3. Literature Antiguan author Jamaica Kincaid has published over 20 works of literature. Sports The Antigua and Barbuda national cricket team represented the country at the 1998 Commonwealth Games, but Antiguan cricketers otherwise play for the Leeward Islands cricket team in domestic matches and the West Indies cricket team internationally. The 2007 Cricket World Cup was hosted in the West Indies from 11 March to 28 April 2007. Antigua hosted eight matches at the Sir Vivian Richards Stadium, which was completed on 11 February 2007 and can hold up to 20,000 people. Antigua also hosted the Stanford Twenty20, a version of Twenty20 cricket started by Allen Stanford in 2006 as a regional tournament with almost all Caribbean islands taking part. The Sir Vivian Richards Stadium is set to host the 2022 ICC Under-19 Cricket World Cup. Rugby and netball are popular as well. Association football, or soccer, is also a very popular sport. Antigua
which is derived separately from another Germanic male name *Ingin-. History Azincourt is famous as being near the site of the battle fought on 25 October 1415 in which the army led by King Henry V of England defeated the forces led by Charles d'Albret on behalf of Charles VI of France, which has gone down in history as the Battle of Agincourt. According to M. Forrest, the French knights were so encumbered by their armour that they were exhausted even before the start of the battle. Later on, when he became king in 1509, Henry VIII is supposed to have commissioned an English translation of a Life of Henry V so that he could emulate him, on the grounds that he thought that launching a campaign against France would help him to impose himself on the European stage. In 1513, Henry VIII crossed the English Channel, stopping by at Azincourt. The battle, as was the tradition, was named after a nearby castle called Azincourt. The castle has since disappeared and the settlement now known as Azincourt adopted the name in the seventeenth century. John Cassell wrote in 1857 that "the village of Azincourt itself is now a group of dirty farmhouses and wretched cottages, but where the hottest of the battle raged, between that village and the commune of Tramecourt, there still remains a wood precisely corresponding with the one in which Henry placed his ambush; and there are yet existing
1924, when the crisis had abated, he transferred to the "much more reputable" Technical University of Munich. In 1925, he transferred again, this time to the Technical University of Berlin where he studied under Heinrich Tessenow, whom Speer greatly admired. After passing his exams in 1927, Speer became Tessenow's assistant, a high honor for a man of 22. As such, Speer taught some of his classes while continuing his own postgraduate studies. In Munich Speer began a close friendship, ultimately spanning over 50 years, with Rudolf Wolters, who also studied under Tessenow. In mid-1922, Speer began courting Margarete (Margret) Weber (1905–1987), the daughter of a successful craftsman who employed 50 workers. The relationship was frowned upon by Speer's class-conscious mother, who felt the Webers were socially inferior. Despite this opposition, the two married in Berlin on 28 August 1928; seven years elapsed before Margarete was invited to stay at her in-laws' home. The couple would have six children together, but Albert Speer grew increasingly distant from his family after 1933. He remained so even after his release from imprisonment in 1966, despite their efforts to forge closer bonds. Party architect and government functionary Joining the Nazis (1931–1934) In January 1931, Speer applied for Nazi Party membership, and on 1 March 1931, he became member number 474,481. The same year, with stipends shrinking amid the Depression, Speer surrendered his position as Tessenow's assistant and moved to Mannheim, hoping to make a living as an architect. After he failed to do so, his father gave him a part-time job as manager of his properties. In July 1932, the Speers visited Berlin to help out the Party before the Reichstag elections. While they were there his friend, Nazi Party official Karl Hanke recommended the young architect to Joseph Goebbels to help renovate the Party's Berlin headquarters. 
When the commission was completed, Speer returned to Mannheim and remained there as Hitler took office in January 1933. The organizers of the 1933 Nuremberg Rally asked Speer to submit designs for the rally, bringing him into contact with Hitler for the first time. Neither the organizers nor Rudolf Hess was willing to decide whether to approve the plans, and Hess sent Speer to Hitler's Munich apartment to seek his approval. This work won Speer his first national post, as Nazi Party "Commissioner for the Artistic and Technical Presentation of Party Rallies and Demonstrations". Shortly after Hitler came to power, he began to make plans to rebuild the chancellery. At the end of 1933, he contracted Paul Troost to renovate the entire building. Hitler appointed Speer, whose work for Goebbels had impressed him, to manage the building site for Troost. As Chancellor, Hitler had a residence in the building and came by every day to be briefed by Speer and the building supervisor on the progress of the renovations. After one of these briefings, Hitler invited Speer to lunch, to the architect's great excitement. Speer quickly became part of Hitler's inner circle; he was expected to call on him in the morning for a walk or chat, to provide consultation on architectural matters, and to discuss Hitler's ideas. Most days he was invited to dinner. In the English version of his memoirs, Speer says that his political commitment merely consisted of paying his "monthly dues". He assumed his German readers would not be so gullible and told them the Nazi Party offered a "new mission". He was more forthright in an interview with William Hamsher, in which he said he joined the party in order to save "Germany from Communism". After the war, he claimed to have had little interest in politics at all and to have joined almost by chance. Like many of those in power in the Third Reich, he was not an ideologue, "nor was he anything more than an instinctive anti-Semite."
The historian Magnus Brechtken, discussing Speer, said he did not give anti-Jewish public speeches and that his anti-Semitism can best be understood through his actions—which were anti-Semitic. Brechtken added that, throughout Speer's life, his central motives were to gain power, rule, and acquire wealth. Nazi architect (1934–1937) When Troost died on 21 January 1934, Speer effectively replaced him as the Party's chief architect. Hitler appointed Speer as head of the Chief Office for Construction, which placed him nominally on Hess's staff. One of Speer's first commissions after Troost's death was the Zeppelinfeld stadium in Nuremberg. It was used for Nazi propaganda rallies and can be seen in Leni Riefenstahl's propaganda film Triumph of the Will. The building was able to hold 340,000 people. Speer insisted that as many events as possible be held at night, both to give greater prominence to his lighting effects and to hide the overweight Nazis. Nuremberg was the site of many official Nazi buildings. Many more buildings were planned. If built, the German Stadium would have accommodated 400,000 spectators. Speer modified Werner March's design for the Olympic Stadium being built for the 1936 Summer Olympics. He added a stone exterior that pleased Hitler. Speer designed the German Pavilion for the 1937 international exposition in Paris. Berlin's General Building Inspector (1937–1942) On 30 January 1937, Hitler appointed Speer as General Building Inspector for the Reich Capital. This carried with it the rank of State Secretary in the Reich government and gave him extraordinary powers over the Berlin city government. He was to report directly to Hitler, and was independent of both the mayor and the Gauleiter of Berlin. Hitler ordered Speer to develop plans to rebuild Berlin. These centered on a three-mile-long grand boulevard running from north to south, which Speer called the Prachtstrasse, or Street of Magnificence; he also referred to it as the "North–South Axis". 
At the northern end of the boulevard, Speer planned to build the Volkshalle, a huge domed assembly hall over high, with floor space for 180,000 people. At the southern end of the avenue, a great triumphal arch, almost high and able to fit the Arc de Triomphe inside its opening, was planned. The existing Berlin railroad termini were to be dismantled, and two large new stations built. Speer hired Wolters as part of his design team, with special responsibility for the Prachtstrasse. The outbreak of World War II in 1939 led to the postponement, and later the abandonment, of these plans. Plans to build a new Reich chancellery had been underway since 1934. Land had been purchased by the end of 1934, and starting in March 1936 the first buildings were demolished to create space at Voßstraße. Speer was involved virtually from the beginning. In the aftermath of the Night of the Long Knives, he had been commissioned to renovate the Borsig Palace on the corner of Voßstraße and Wilhelmstraße as headquarters of the Sturmabteilung (SA). He completed the preliminary work for the new chancellery by May 1936. In June 1936 he charged a personal honorarium of 30,000 Reichsmarks and estimated the chancellery would be completed within three to four years. Detailed plans were completed in July 1937, and the first shell of the new chancellery was complete on 1 January 1938. On 27 January 1938, Speer received plenipotentiary powers from Hitler to finish the new chancellery by 1 January 1939. For propaganda purposes, Hitler claimed during the topping-out ceremony on 2 August 1938 that he had ordered Speer to complete the new chancellery that year. Shortages of labor meant the construction workers had to work in ten-to-twelve-hour shifts. The Schutzstaffel (SS) built two concentration camps in 1938 and used the inmates to quarry stone for its construction.
A brick factory was built near the Oranienburg concentration camp at Speer's behest; when someone commented on the poor conditions there, Speer stated, "The Yids got used to making bricks while in Egyptian captivity". The chancellery was completed in early January 1939. The building was hailed by Hitler as the "crowning glory of the greater German political empire". During the Chancellery project, the pogrom of Kristallnacht took place. Speer made no mention of it in the first draft of Inside the Third Reich; it was only on the urgent advice of his publisher that he added a mention of seeing the ruins of the Central Synagogue in Berlin from his car. Kristallnacht accelerated Speer's ongoing efforts to dispossess Berlin's Jews of their homes. From 1939 on, Speer's department used the Nuremberg Laws to evict Jewish tenants of non-Jewish landlords in Berlin, to make way for non-Jewish tenants displaced by redevelopment or bombing. Eventually, 75,000 Jews were displaced by these measures. Speer denied knowing that they were being put on Holocaust trains and claimed that those displaced were "completely free and their families were still in their apartments". He also said: " ... en route to my ministry on the city highway, I could see ... crowds of people on the platform of nearby Nikolassee Railroad Station. I knew that these must be Berlin Jews who were being evacuated. I am sure that an oppressive feeling struck me as I drove past. I presumably had a sense of somber events." Matthias Schmidt said Speer had personally inspected concentration camps and described his comments as an "outright farce". Martin Kitchen described Speer's often-repeated line that he knew nothing of the "dreadful things" as hollow: not only was he fully aware of the fate of the Jews, he was actively participating in their persecution.
As Germany started World War II in Europe, Speer instituted quick-reaction squads to construct roads or clear away debris; before long, these units would be used to clear bomb sites. Speer used forced Jewish labor on these projects, in addition to regular German workers. Construction stopped on the Berlin and Nuremberg plans at the outbreak of war. Though stockpiling of materials and other work continued, this slowed to a halt as more resources were needed for the armament industry. Speer's offices undertook building work for each branch of the military, and for the SS, using slave labor. Speer's building work made him among the wealthiest of the Nazi elite. Minister of Armaments Appointment and increasing power In 1941, Speer was elected to the Reichstag from electoral constituency 2 (Berlin-West). On 8 February 1942, Reich Minister of Armaments and Munitions Fritz Todt died in a plane crash shortly after taking off from Hitler's eastern headquarters at Rastenburg. Speer had arrived there the previous evening and had accepted Todt's offer to fly with him to Berlin, but he cancelled some hours before take-off because he had been up late the previous night in a meeting with Hitler. Hitler appointed Speer in Todt's place. Martin Kitchen, a British historian, says that the choice was not surprising: Speer was loyal to Hitler, and his experience building prisoner-of-war camps and other structures for the military qualified him for the job. Speer succeeded Todt not only as Reich Minister but in all his other powerful positions, including Inspector General of German Roadways, Inspector General for Water and Energy, and Head of the Nazi Party's Office of Technology. At the same time, Hitler appointed Speer head of the Organisation Todt, a massive, government-controlled construction company. Characteristically, Hitler did not give Speer any clear remit; he was left to fight his contemporaries in the regime for power and control.
As an example, he wanted to be given power over all armaments issues under Hermann Göring's Four Year Plan. Göring was reluctant to grant this, but Speer secured Hitler's support, and on 1 March 1942, Göring signed a decree naming Speer "General Plenipotentiary for Armament Tasks" in the Four Year Plan. Speer proved to be ambitious, unrelenting and ruthless. He set out to gain control of armaments production not just in the army, but across the whole armed forces. It did not immediately dawn on his political rivals that his calls for rationalization and reorganization were hiding his desire to sideline them and take control. By April 1942, Speer had persuaded Göring to create a three-member Central Planning Board within the Four Year Plan, which he used to obtain supreme authority over the procurement and allocation of raw materials and the scheduling of production, consolidating German war production in a single agency. Speer was fêted at the time, and in the post-war era, for performing an "armaments miracle" in which German war production dramatically increased. This "miracle" was brought to a halt in the summer of 1943 by, among other factors, the first sustained Allied bombing. Other factors probably contributed to the increase more than Speer himself. Germany's armaments production had already begun to increase under his predecessor, Todt. Naval armaments were not under Speer's supervision until October 1943, nor the Luftwaffe's armaments until June of the following year, yet each showed comparable increases in production despite not being under Speer's control. Another factor behind the boom in ammunition production was the policy of allocating more coal to the steel industry. Production of every type of weapon peaked in June and July 1944, but by then there was a severe shortage of fuel. After August 1944, oil from the Romanian fields was no longer available.
Oil production fell so low that offensive action became impossible and weaponry lay idle. As Minister of Armaments, Speer was responsible for supplying weapons to the army. With Hitler's full agreement, he decided to prioritize tank production, and he was given unrivaled power to ensure success. Hitler was closely involved with the design of the tanks but kept changing his mind about the specifications. This delayed the program, and Speer was unable to remedy the situation. In consequence, despite tank production having the highest priority, relatively little of the armaments budget was spent on it. This led to a significant German Army failure at the Battle of Prokhorovka, a major turning point on the Eastern Front against the Soviet Red Army. As head of Organisation Todt, Speer was directly involved in the construction and alteration of concentration camps. He agreed to expand Auschwitz and some other camps, allocating 13.7 million Reichsmarks for the work to be carried out. This allowed an extra 300 huts to be built at Auschwitz, increasing the camp's total capacity to 132,000 people. Included in the building works was material to build gas chambers, crematoria and morgues. The SS called this "Professor Speer's Special Programme". Speer realized that with six million workers drafted into the armed forces, there was a labor shortage in the war economy and not enough workers for his factories. In response, Hitler appointed Fritz Sauckel as a "manpower dictator" to obtain new workers. Speer and Sauckel cooperated closely to meet Speer's labor demands. Hitler gave Sauckel a free hand to obtain labor, something that delighted Speer, who had requested 1,000,000 "voluntary" laborers to meet the need for armament workers. Sauckel, often using the most brutal methods, had whole villages in France, Holland and Belgium forcibly rounded up and shipped to Speer's factories.
In occupied areas of the Soviet Union that had been subject to partisan activity, civilian men and women were rounded up en masse and sent to work forcibly in Germany. By April 1943, Sauckel had supplied 1,568,801 "voluntary" laborers, forced laborers, prisoners of war and concentration camp prisoners to Speer for use in his armaments factories. It was for the maltreatment of these people that Speer was principally convicted at the Nuremberg Trials. Consolidation of arms production Following his appointment as Minister of Armaments, Speer was in control of armaments production solely for the Army. He coveted control of the production of armaments for the Luftwaffe and Kriegsmarine as well, and he set about extending his power and influence with unexpected ambition. His close relationship with Hitler provided him with political protection, and he was able to outwit and outmaneuver his rivals in the regime. Hitler's cabinet was dismayed at his tactics but, regardless, he was able to accumulate new responsibilities and more power. By July 1943, he had gained control of armaments production for the Luftwaffe and Kriegsmarine. In August 1943, he took control of most of the Ministry of Economics, to become, in Admiral Dönitz's words, "Europe's economic dictator". His formal title was changed on 2 September 1943 to "Reich Minister for Armaments and War Production". He had become one of the most powerful people in Nazi Germany. Speer and his hand-picked director of submarine construction, Otto Merker, believed that the shipbuilding industry was being held back by outdated methods and that revolutionary new approaches imposed by outsiders would dramatically improve output. This belief proved incorrect, and Speer and Merker's attempt to build the Kriegsmarine's new generation of submarines, the Type XXI and Type XXIII, as prefabricated sections at different facilities rather than at single dockyards contributed to the failure of this strategically important program.
The designs were rushed into production, and the completed submarines were crippled by flaws which resulted from the way they had been constructed. While dozens of submarines were built, few ever entered service. In December 1943, Speer visited Organisation Todt workers in Lapland; while there, he seriously damaged his knee and was incapacitated for several months. He was under the dubious care of Professor Karl Gebhardt at a medical clinic called Hohenlychen, where patients "mysteriously failed to survive". In mid-January 1944, Speer suffered a lung embolism and fell seriously ill. Concerned about retaining power, he did not appoint a deputy and continued to direct the work of the Armaments Ministry from his bedside. Speer's illness coincided with the Allied "Big Week", a series of bombing raids on the German aircraft factories that were a devastating blow to aircraft production. His political rivals used the opportunity to undermine his authority and damage his reputation with Hitler. He lost Hitler's unconditional support and began to lose power. In response to the Allied Big Week, Adolf Hitler authorized the creation of a Fighter Staff committee, whose aim was to ensure the preservation and growth of fighter aircraft production. The task force was established by 1 March 1944 on the orders of Speer, with support from Erhard Milch of the Reich Aviation Ministry. Production of German fighter aircraft more than doubled between 1943 and 1944. The growth, however, consisted in large part of models that were becoming obsolescent and proved easy prey for Allied aircraft. On 1 August 1944, Speer merged the Fighter Staff into a newly formed Armament Staff committee. The Fighter Staff committee was instrumental in bringing about the increased exploitation of slave labor in the war economy. The SS provided 64,000 prisoners for 20 separate projects from various concentration camps, including Mittelbau-Dora. Prisoners worked for Junkers, Messerschmitt, Henschel and BMW, among others.
To increase production, Speer introduced a system of punishments for his workforce. Those who feigned illness, slacked off, sabotaged production or tried to escape were denied food or sent to concentration camps. In 1944, this became endemic; over half a million workers were arrested. By this time, 140,000 people were working in Speer's underground factories. These factories were death-traps; discipline was brutal, with regular executions. There were so many corpses at the Dora underground factory, for example, that the crematorium was overwhelmed. Speer's own staff described the conditions there as "hell". The largest technological advance under Speer's command came through the rocket program. It began in 1932 but had not yet supplied any weaponry. Speer enthusiastically supported the program and in March 1942 ordered production of the A4 rocket, later known as the V-2, the world's first ballistic missile. The rockets were researched at a facility in Peenemünde along with the V-1 flying bomb. The V-2's first target was Paris on 8 September 1944. The program, while advanced, proved to be an impediment to the war economy; the large capital investment was not repaid in military effectiveness. The rockets were built at an underground factory at Mittelwerk. Labor to build the A4 rockets came from the Mittelbau-Dora concentration camp. Of the 60,000 people who ended up at the camp, 20,000 died due to the appalling conditions. On 14 April 1944, Speer lost control of Organisation Todt to his deputy, Franz Xaver Dorsch. He opposed the assassination attempt against Hitler on 20 July 1944. He was not involved in the plot, and played a minor role in the regime's efforts to regain control over Berlin after Hitler survived. After the plot Speer's rivals attacked some
anti-Semitism can best be understood through his actions—which were anti-Semitic. Brechtken added that, throughout Speer's life, his central motives were to gain power, rule, and acquire wealth. Nazi architect (1934–1937) When Troost died on 21 January 1934, Speer effectively replaced him as the Party's chief architect. Hitler appointed Speer as head of the Chief Office for Construction, which placed him nominally on Hess's staff. One of Speer's first commissions after Troost's death was the Zeppelinfeld stadium in Nuremberg. It was used for Nazi propaganda rallies and can be seen in Leni Riefenstahl's propaganda film Triumph of the Will. The building was able to hold 340,000 people. Speer insisted that as many events as possible be held at night, both to give greater prominence to his lighting effects and to hide the overweight Nazis. Nuremberg was the site of many official Nazi buildings. Many more buildings were planned. If built, the German Stadium would have accommodated 400,000 spectators. Speer modified Werner March's design for the Olympic Stadium being built for the 1936 Summer Olympics. He added a stone exterior that pleased Hitler. Speer designed the German Pavilion for the 1937 international exposition in Paris. Berlin's General Building Inspector (1937–1942) On 30 January 1937, Hitler appointed Speer as General Building Inspector for the Reich Capital. This carried with it the rank of State Secretary in the Reich government and gave him extraordinary powers over the Berlin city government. He was to report directly to Hitler, and was independent of both the mayor and the Gauleiter of Berlin. Hitler ordered Speer to develop plans to rebuild Berlin. These centered on a three-mile-long grand boulevard running from north to south, which Speer called the Prachtstrasse, or Street of Magnificence; he also referred to it as the "North–South Axis". 
At the northern end of the boulevard, Speer planned to build the Volkshalle, a huge domed assembly hall over high, with floor space for 180,000 people. At the southern end of the avenue, a great triumphal arch, almost high and able to fit the Arc de Triomphe inside its opening, was planned. The existing Berlin railroad termini were to be dismantled, and two large new stations built. Speer hired Wolters as part of his design team, with special responsibility for the Prachtstrasse. The outbreak of World War II in 1939 led to the postponement, and later the abandonment, of these plans. Plans to build a new Reich chancellery had been underway since 1934. Land had been purchased by the end of 1934 and starting in March 1936 the first buildings were demolished to create space at Voßstraße. Speer was involved virtually from the beginning. In the aftermath of the Night of the Long Knives, he had been commissioned to renovate the Borsig Palace on the corner of Voßstraße and Wilhelmstraße as headquarters of the Sturmabteilung (SA). He completed the preliminary work for the new chancellery by May 1936. In June 1936 he charged a personal honorarium of 30,000 Reichsmark and estimated the chancellery would be completed within three to four years. Detailed plans were completed in July 1937 and the first shell of the new chancellery was complete on 1 January 1938. On 27 January 1938, Speer received plenipotentiary powers from Hitler to finish the new chancellery by 1 January 1939. For propaganda Hitler claimed during the topping-out ceremony on 2 August 1938, that he had ordered Speer to complete the new chancellery that year. Shortages of labor meant the construction workers had to work in ten-to-twelve-hour shifts. The Schutzstaffel (SS) built two concentration camps in 1938 and used the inmates to quarry stone for its construction. 
A brick factory was built near the Oranienburg concentration camp at Speer's behest; when someone commented on the poor conditions there, Speer stated, "The Yids got used to making bricks while in Egyptian captivity". The chancellery was completed in early January 1939. The building itself was hailed by Hitler as the "crowning glory of the greater German political empire". During the Chancellery project, the pogrom of Kristallnacht took place. Speer made no mention of it in the first draft of Inside the Third Reich. It was only on the urgent advice of his publisher that he added a mention of seeing the ruins of the Central Synagogue in Berlin from his car. Kristallnacht accelerated Speer's ongoing efforts to dispossess Berlin's Jews from their homes. From 1939 on, Speer's Department used the Nuremberg Laws to evict Jewish tenants of non-Jewish landlords in Berlin, to make way for non-Jewish tenants displaced by redevelopment or bombing. Eventually, 75,000 Jews were displaced by these measures. Speer denied he knew they were being put on Holocaust trains and claimed that those displaced were, "Completely free and their families were still in their apartments". He also said: " ... en route to my ministry on the city highway, I could see ... crowds of people on the platform of nearby Nikolassee Railroad Station. I knew that these must be Berlin Jews who were being evacuated. I am sure that an oppressive feeling struck me as I drove past. I presumably had a sense of somber events." Matthias Schmidt said Speer had personally inspected concentration camps and described his comments as an "outright farce". Martin Kitchen described Speer's often repeated line that he knew nothing of the "dreadful things" as hollow—because not only was he fully aware of the fate of the Jews he was actively participating in their persecution. 
As Germany started World War II in Europe, Speer instituted quick-reaction squads to construct roads or clear away debris; before long, these units would be used to clear bomb sites. Speer used forced Jewish labor on these projects, in addition to regular German workers. Construction stopped on the Berlin and Nüremberg plans at the outbreak of war. Though stockpiling of materials and other work continued, this slowed to a halt as more resources were needed for the armament industry. Speer's offices undertook building work for each branch of the military, and for the SS, using slave labor. Speer's building work made him among the wealthiest of the Nazi elite. Minister of Armaments Appointment and increasing power In 1941, Speer was elected to the Reichstag from electoral constituency 2 (Berlin-West). On 8 February 1942, Reich Minister of Armaments and Munitions Fritz Todt died in a plane crash shortly after taking off from Hitler's eastern headquarters at Rastenburg. Speer arrived there the previous evening and accepted Todt's offer to fly with him to Berlin. Speer cancelled some hours before take-off because the previous night he had been up late in a meeting with Hitler. Hitler appointed Speer in Todt's place. Martin Kitchen, a British historian, says that the choice was not surprising. Speer was loyal to Hitler, and his experience building prisoner of war camps and other structures for the military qualified him for the job. Speer succeeded Todt not only as Reich Minister but in all his other powerful positions, including Inspector General of German Roadways, Inspector General for Water and Energy and Head of the Nazi Party's Office of Technology. At the same time, Hitler also appointed Speer as head of the Organisation Todt, a massive, government-controlled construction company. Characteristically Hitler did not give Speer any clear remit; he was left to fight his contemporaries in the regime for power and control. 
As an example, he wanted to be given power over all armaments issues under Hermann Göring's Four Year Plan. Göring was reluctant to grant this. However, Speer secured Hitler's support, and on 1 March 1942, Göring signed a decree naming Speer "General Plenipotentiary for Armament Tasks" in the Four Year Plan. Speer proved to be ambitious, unrelenting and ruthless. Speer set out to gain control not just of armaments production for the army, but for the whole armed forces. It did not immediately dawn on his political rivals that his calls for rationalization and reorganization were hiding his desire to sideline them and take control. By April 1942, Speer had persuaded Göring to create a three-member Central Planning Board within the Four Year Plan, which he used to obtain supreme authority over procurement and allocation of raw materials and scheduling of production in order to consolidate German war production in a single agency. Speer was fêted at the time, and in the post-war era, for performing an "armaments miracle" in which German war production dramatically increased. This "miracle" was brought to a halt in the summer of 1943 by, among other factors, the first sustained Allied bombing. Other factors probably contributed to the increase more than Speer himself. Germany's armaments production had already begun to increase under his predecessor, Todt. Naval armaments were not under Speer's supervision until October 1943, nor the Luftwaffe's armaments until June of the following year. Yet each showed comparable increases in production despite not being under Speer's control. Another factor that produced the boom in ammunition was the policy of allocating more coal to the steel industry. Production of every type of weapon peaked in June and July 1944, but there was now a severe shortage of fuel. After August 1944, oil from the Romanian fields was no longer available.
Oil production fell so low that offensive action became impossible and weaponry lay idle. As Minister of Armaments, Speer was responsible for supplying weapons to the army. With Hitler's full agreement, he decided to prioritize tank production, and he was given unrivaled power to ensure success. Hitler was closely involved with the design of the tanks, but kept changing his mind about the specifications. This delayed the program, and Speer was unable to remedy the situation. In consequence, despite tank production having the highest priority, relatively little of the armaments budget was spent on it. This led to a significant German Army failure at the Battle of Prokhorovka, a major turning point on the Eastern Front against the Soviet Red Army. As head of Organisation Todt, Speer was directly involved in the construction and alteration of concentration camps. He agreed to expand Auschwitz and some other camps, allocating 13.7 million Reichsmarks for the work to be carried out. This allowed an extra 300 huts to be built at Auschwitz, increasing the total human capacity to 132,000. Included in the building works was material to build gas chambers, crematoria and morgues. The SS called this "Professor Speer's Special Programme". Speer realized that with six million workers drafted into the armed forces, there was a labor shortage in the war economy, leaving him without enough workers for his factories. In response, Hitler appointed Fritz Sauckel as a "manpower dictator" to obtain new workers. Speer and Sauckel cooperated closely to meet Speer's labor demands. Hitler gave Sauckel a free hand to obtain labor, something that delighted Speer, who had requested 1,000,000 "voluntary" laborers to meet the need for armament workers. Sauckel had whole villages in France, Holland and Belgium forcibly rounded up and shipped to Speer's factories. Sauckel often used the most brutal methods to obtain these workers.
In occupied areas of the Soviet Union that had been subject to partisan activity, civilian men and women were rounded up en masse and sent to work forcibly in Germany. By April 1943, Sauckel had supplied 1,568,801 "voluntary" laborers, forced laborers, prisoners of war and concentration camp prisoners to Speer for use in his armaments factories. It was principally for the maltreatment of these people that Speer was convicted at the Nuremberg Trials. Consolidation of arms production Following his appointment as Minister of Armaments, Speer was in control of armaments production solely for the Army. He coveted control of the production of armaments for the Luftwaffe and Kriegsmarine as well. He set about extending his power and influence with unexpected ambition. His close relationship with Hitler provided him with political protection, and he was able to outwit and outmaneuver his rivals in the regime. Hitler's cabinet was dismayed at his tactics, but regardless, he was able to accumulate new responsibilities and more power. By July 1943, he had gained control of armaments production for the Luftwaffe and Kriegsmarine. In August 1943, he took control of most of the Ministry of Economics, to become, in Admiral Dönitz's words, "Europe's economic dictator". His formal title was changed on 2 September 1943 to "Reich Minister for Armaments and War Production". He had become one of the most powerful people in Nazi Germany. Speer and his hand-picked director of submarine construction Otto Merker believed that the shipbuilding industry was
of the ligule is often divided into teeth, each one representing a petal. Some marginal florets may have no petals at all (filiform floret). The calyx of the florets may be absent, but when present is always modified into a pappus of two or more teeth, scales or bristles, and this is often involved in the dispersion of the seeds. As with the bracts, the nature of the pappus is an important diagnostic feature. There are usually five stamens. The filaments are fused to the corolla, while the anthers are generally connate (syngenesious anthers), thus forming a sort of tube around the style (theca). They commonly have basal and/or apical appendages. Pollen is released inside the tube and is collected around the growing style, and then, as the style elongates, is pushed out of the tube (nudelspritze). The pistil consists of two connate carpels. The style has two lobes. Stigmatic tissue may be located in the interior surface or form two lateral lines. The ovary is inferior and has only one ovule, with basal placentation. Fruits and seeds In members of the Asteraceae the fruit is achene-like, and is called a cypsela (plural cypselae). Although there are two fused carpels, there is only one locule, and only one seed per fruit is formed. It may sometimes be winged or spiny because the pappus, which is derived from calyx tissue, often remains on the fruit (for example in dandelion). In some species, however, the pappus falls off (for example in Helianthus). Cypsela morphology is often used to help determine plant relationships at the genus and species level. The mature seeds usually have little endosperm or none. Pollen The pollen of composites is typically echinolophate, a morphological term meaning "with elaborate systems of ridges and spines dispersed around and between the apertures." Metabolites In Asteraceae, the energy store is generally in the form of inulin rather than starch.
They produce iso/chlorogenic acid, sesquiterpene lactones, pentacyclic triterpene alcohols, various alkaloids, acetylenes (cyclic, aromatic, with vinyl end groups), and tannins. They have terpenoid essential oils which never contain iridoids. Asteraceae produce secondary metabolites, such as flavonoids and terpenoids. Some of these molecules can inhibit protozoan parasites such as Plasmodium, Trypanosoma, Leishmania and parasitic intestinal worms, and thus have potential in medicine. Taxonomy History Compositae, the original name for Asteraceae, were first described in 1740 by Dutch botanist Adriaan van Royen. Traditionally, two subfamilies were recognised: Asteroideae (or Tubuliflorae) and Cichorioideae (or Liguliflorae). The latter has been shown to be extensively paraphyletic, and has now been divided into 12 subfamilies, but the former still stands. The study of this family is known as synantherology. Phylogeny The phylogenetic tree presented below is based on Panero & Funk (2002) updated in 2014, and now also includes the monotypic Famatinanthoideae. The diamond (♦) denotes a very poorly supported node (<50% bootstrap support), the dot (•) a poorly supported node (<80%). The family includes over 32,000 currently accepted species, in over 1,900 genera (list) in 13 subfamilies. The number of species in the family Asteraceae is rivaled only by Orchidaceae. Which is the larger family is unclear, because of the uncertainty about how many extant species each family includes. The four subfamilies Asteroideae, Cichorioideae, Carduoideae and Mutisioideae contain 99% of the species diversity of the whole family (approximately 70%, 14%, 11% and 3% respectively). Because of the morphological complexity exhibited by this family, agreeing on generic circumscriptions has often been difficult for taxonomists. As a result, several of these genera have required multiple revisions.
Paleontology and evolutionary processes The oldest known fossils of members of Asteraceae are pollen grains from the Late Cretaceous of Antarctica, dated to ∼76–66 myr (Campanian to Maastrichtian) and assigned to the extant genus Dasyphyllum. Barreda et al. (2015) estimated that the crown group of Asteraceae evolved at least 85.9 myr ago (Late Cretaceous, Santonian), with a stem node age of 88–89 myr (Late Cretaceous, Coniacian). It is not known whether the precise cause of their great success was the development of the highly specialised capitulum, their ability to store energy as fructans (mainly inulin), which is an advantage in relatively dry zones, or some combination of these and possibly other factors. Heterocarpy, or the ability to produce different fruit morphs, has evolved and is common in Asteraceae. It allows seeds to be dispersed over varying distances and each is adapted to different environments, increasing chances of survival. Etymology and pronunciation The name Asteraceae comes to international scientific vocabulary from New Latin, from Aster, the type genus, + -aceae, a standardized suffix for plant family names in modern taxonomy. The genus name comes from the Classical Latin word aster, "star", which came from Ancient Greek ἀστήρ (astḗr), "star". It refers to the star-like form of the inflorescence. The original name Compositae is still valid under the International Code of Nomenclature for algae, fungi, and plants. It refers to the "composite" nature of the capitula, which consist of a few or many individual flowers. The vernacular name daisy, widely applied to members of this family, is derived from the Old English name of the daisy (Bellis perennis): dæges ēage, meaning "day's eye". This is because the petals open at dawn and close at dusk. Distribution and habitat Asteraceae species have a widespread distribution, from subpolar to tropical regions in a wide variety of habitats. Most occur in hot desert and cold or hot semi-desert climates, and they are
to animal fur or be lifted by wind, aiding in seed dispersal. The whitish fluffy head of a dandelion, commonly blown on by children, is made of pappi with tiny seeds attached at the ends. The pappi provide a parachute-like structure to help the seed be carried away in the wind. A ray flower is a 3-tipped (3-lobed), strap-shaped, individual flower in the head of some members of the family Asteraceae. Sometimes a ray flower is 2-tipped (2-lobed). The corolla of the ray flower may have 2 tiny teeth opposite the 3-lobed strap, or tongue, indicating evolution by fusion from an originally 5-part corolla. Sometimes, the 3:2 arrangement is reversed, with 2 tips on the tongue, and 0 or 3 tiny teeth opposite the tongue. A ligulate flower is a 5-tipped, strap-shaped, individual flower in the heads of other members. A ligule is the strap-shaped tongue of the corolla of either a ray flower or of a ligulate flower. A disk flower (or disc flower) is a radially symmetric (i.e., with identically shaped petals arranged in a circle around the center) individual flower in the head, which is ringed by ray flowers when both are present. Sometimes ray flowers may be slightly off from radial symmetry, or weakly bilaterally symmetric, as in the desert pincushion Chaenactis fremontii. A radiate head has disc flowers surrounded by ray flowers. A ligulate head has all ligulate flowers. When a sunflower family flower head has only disc flowers that are sterile, male, or have both male and female parts, it is a discoid head. Disciform heads have only disc flowers, but may have two kinds (male flowers and female flowers) in one head, or may have different heads of two kinds (all male, or all female). Pistillate heads have all female flowers. Staminate heads have all male flowers. Sometimes, but rarely, the head contains only a single flower, or a species may have single-flowered pistillate (female) heads and multi-flowered staminate (male) heads.
Floral structures The distinguishing characteristic of Asteraceae is their inflorescence, a type of specialised, composite flower head or pseudanthium, technically called a calathium or capitulum, that may look superficially like a single flower. The capitulum is a contracted raceme composed of numerous individual sessile flowers, called florets, all sharing the same receptacle. A set of bracts forms an involucre surrounding the base of the capitulum. These are called "phyllaries", or "involucral bracts". They may simulate the sepals of the pseudanthium. These are mostly herbaceous but can also be brightly coloured (e.g. Helichrysum) or have a scarious (dry and membranous) texture. The phyllaries can be free or fused, and arranged in one to many rows, overlapping like the tiles of a roof (imbricate) or not (this variation is important in identification of tribes and genera). Each floret may be subtended by a bract, called a "palea" or "receptacular bract". These bracts are often called "chaff". The presence or absence of these bracts, their distribution on the receptacle, and their size and shape are all important diagnostic characteristics for genera and tribes. The florets have five petals fused at the base to form a corolla tube and they may be either actinomorphic or zygomorphic. Disc florets are usually actinomorphic, with five petal lips on the rim of the corolla tube. The petal lips may be either very short, or long, in which case they form deeply lobed petals. The latter is the only kind of floret in the Carduoideae, while the first kind is more widespread. Ray florets are always highly zygomorphic and are characterised by the presence of a ligule, a strap-shaped structure on the edge of the corolla tube consisting of fused petals. 
In the Asteroideae and other minor subfamilies these are usually borne only on florets at the circumference of the capitulum and have a 3+2 scheme – above the fused corolla tube, three very long fused petals form the ligule, with the other two petals being inconspicuously small. The Cichorioideae has only ray florets, with a 5+0 scheme – all five petals form the ligule. A 4+1 scheme is found in the Barnadesioideae.
are annual, biennial or perennial herbs (frequently with the leaves aggregated toward the base), though a minority are woody shrubs or small trees such as Bupleurum fruticosum. Their leaves are of variable size and alternately arranged, or with the upper leaves becoming nearly opposite. The leaves may be petiolate or sessile. There are no stipules but the petioles are frequently sheathing and the leaves may be perfoliate. The leaf blade is usually dissected, ternate, or pinnatifid, but simple and entire in some genera, e.g. Bupleurum. Commonly, their leaves emit a marked smell when crushed, ranging from aromatic to foetid, though the smell is absent in some species. The defining characteristic of this family is the inflorescence, the flowers nearly always aggregated in terminal umbels that may be simple or, more commonly, compound, often umbelliform cymes. The flowers are usually perfect (hermaphroditic) and actinomorphic, but there may be zygomorphic flowers at the edge of the umbel, as in carrot (Daucus carota) and coriander, with petals of unequal size, the ones pointing outward from the umbel larger than the ones pointing inward. Some are andromonoecious, polygamomonoecious, or even dioecious (as in Acronema), with a distinct calyx and corolla, but the calyx is often highly reduced, to the point of being undetectable in many species, while the corolla can be white, yellow, pink or purple. The flowers are nearly perfectly pentamerous, with five petals and five stamens. There is often variation in the functionality of the stamens even within a single inflorescence. Some flowers are functionally staminate (where a pistil may be present but has no ovules capable of being fertilized) while others are functionally pistillate (where stamens are present but their anthers do not produce viable pollen). Pollination of one flower by the pollen of a different flower of the same plant (geitonogamy) is common.
The gynoecium consists of two carpels fused into a single, bicarpellate pistil with an inferior ovary. Stylopodia support two styles and secrete nectar, attracting pollinators like flies, mosquitoes, gnats, beetles, moths, and bees. The fruit is a schizocarp consisting of two fused carpels that separate at maturity into two mericarps, each containing a single seed. The fruits of many species are dispersed by wind, but others, such as those of Daucus spp., are covered in bristles, which may be hooked, as in sanicle (Sanicula europaea), and thus catch in the fur of animals. The seeds have an oily endosperm and often contain essential oils with aromatic compounds that are responsible for the flavour of commercially important umbelliferous seeds such as anise, cumin and coriander. The shape and details of the ornamentation of the ripe fruits are
order in the APG III system. It is closely related to Araliaceae and the boundaries between these families remain unclear. Traditionally, groups within the family have been delimited largely based on fruit morphology, and the results from this have not been congruent with the more recent molecular phylogenetic analyses. The subfamilial and tribal classification for the family is currently in a state of flux, with many of the groups being found to be grossly paraphyletic or polyphyletic. General According to the Angiosperm Phylogeny Website, 434 genera are in the family Apiaceae. Ecology The black swallowtail butterfly, Papilio polyxenes, uses the family Apiaceae for food and host plants for oviposition. The 22-spot ladybird is also commonly found eating mildew on these shrubs. Uses Many members of this family are cultivated for various purposes. Parsnip (Pastinaca sativa), carrot (Daucus carota) and Hamburg parsley (Petroselinum crispum) produce tap roots that are large enough to be useful as food. Many species produce essential oils in their leaves or fruits and as a result are flavourful aromatic herbs. Examples are parsley (Petroselinum crispum), coriander (Coriandrum sativum), culantro, and dill (Anethum graveolens). The seeds may be used in cuisine, as with coriander (Coriandrum sativum), fennel (Foeniculum vulgare), cumin (Cuminum cyminum), and caraway (Carum carvi). Other notable cultivated Apiaceae include chervil (Anthriscus cerefolium), angelica (Angelica spp.), celery (Apium graveolens), arracacha (Arracacia xanthorrhiza), sea holly (Eryngium spp.), asafoetida (Ferula asafoetida), galbanum (Ferula gummosa), cicely (Myrrhis odorata), anise (Pimpinella anisum), lovage (Levisticum officinale), and hacquetia (Hacquetia epipactis). Cultivation Generally, all members of this family are best cultivated in the cool-season garden; indeed, they may not grow at all if the soils are too warm.
Almost every widely cultivated plant of this group is considered useful as a companion plant. One reason is that the tiny flowers, clustered into umbels, are well suited for ladybugs, parasitic wasps, and predatory flies, which drink nectar when not reproducing. They then prey upon insect pests on nearby plants. Some of the members of this family considered "herbs" produce scents that are believed to mask the odours of nearby plants, thus making them harder for insect pests to find. Other uses The poisonous members of the Apiaceae have been used for a variety of purposes globally. The poisonous Oenanthe crocata has been used to stupefy fish, Cicuta douglasii has been used as an aid in suicides, and arrow poisons have been made from various other family species. Daucus carota has been used as coloring for butter. Dorema ammoniacum, Ferula galbaniflua, and Ferula moschata (sumbul) are sources of incense. The woody Azorella compacta Phil. has been used in South America for fuel. Toxicity Many species in the family Apiaceae produce phototoxic substances (called furanocoumarins) that sensitize human skin to sunlight. Contact with plant parts that contain furanocoumarins, followed by exposure to sunlight, may cause phytophotodermatitis, a serious skin inflammation. Phototoxic species include Ammi majus, Notobubon galbanum, the parsnip (Pastinaca sativa) and numerous species of the genus Heracleum, especially the giant hogweed (Heracleum mantegazzianum). Of all the plant species that have been reported to induce phytophotodermatitis, approximately half belong to the family Apiaceae. The family Apiaceae also includes a smaller number of poisonous species, including poison hemlock, water hemlock, spotted cowbane, fool's parsley, and various species of water dropwort. Some members of the family Apiaceae, including carrot, celery, fennel, parsley and parsnip, contain polyynes, an unusual class of organic compounds that exhibit cytotoxic effects.
system myelin membranes (found in an axon). It has a crucial role in restricting axonal regeneration in the adult mammalian central nervous system. Recent studies have shown that blocking and neutralizing Nogo-A can induce long-distance axonal regeneration, leading to enhanced functional recovery in rat and mouse spinal cords. This has yet to be done in humans. A recent study has also found that macrophages activated through a specific inflammatory pathway activated by the Dectin-1 receptor are capable of promoting axon recovery, though they also cause neurotoxicity in the neuron. Length regulation Axons vary greatly in length, from a few micrometers up to meters in some animals. This emphasizes that there must be a cellular length-regulation mechanism allowing the neurons both to sense the length of their axons and to control their growth accordingly. It was discovered that motor proteins play an important role in regulating the length of axons. Based on this observation, researchers developed an explicit model for axonal growth describing how motor proteins could affect the axon length on the molecular level. These studies suggest that motor proteins carry signaling molecules from the soma to the growth cone and vice versa, whose concentration oscillates in time with a length-dependent frequency. Classification The axons of neurons in the human peripheral nervous system can be classified based on their physical features and signal conduction properties. Axons were known to have different thicknesses (from 0.1 to 20 µm) and these differences were thought to relate to the speed at which an action potential could travel along the axon – its conduction velocity. Erlanger and Gasser proved this hypothesis, and identified several types of nerve fiber, establishing a relationship between the diameter of an axon and its nerve conduction velocity. They published their findings in 1941, giving the first classification of axons. Axons are classified in two systems.
The first, introduced by Erlanger and Gasser, grouped the fibers into three main groups using the letters A, B, and C. These groups, group A, group B, and group C, include both the sensory fibers (afferents) and the motor fibers (efferents). The first group A, was subdivided into alpha, beta, gamma, and delta fibers — Aα, Aβ, Aγ, and Aδ. The motor neurons of the different motor fibers were the lower motor neurons – alpha motor neuron, beta motor neuron, and gamma motor neuron having the Aα, Aβ, and Aγ nerve fibers respectively. Later findings by other researchers identified two groups of Aα fibers that were sensory fibers. These were then introduced into a system that only included sensory fibers (though some of these were mixed nerves and were also motor fibers). This system refers to the sensory groups as Types and uses Roman numerals: Type Ia, Type Ib, Type II, Type III, and Type IV. Lower motor neurons have two kinds of fibers: Different sensory receptors innervate different types of nerve fibers. Proprioceptors are innervated by type Ia, Ib and II sensory fibers, mechanoreceptors by type II and III sensory fibers and nociceptors and thermoreceptors by type III and IV sensory fibers. Autonomic The autonomic nervous system has two kinds of peripheral fibers: Clinical significance In order of degree of severity, injury to a nerve can be described as neurapraxia, axonotmesis, or neurotmesis. Concussion is considered a mild form of diffuse axonal injury. Axonal injury can also cause central chromatolysis. The dysfunction of axons in the nervous system is one of the major causes of many inherited neurological disorders that affect both peripheral and central neurons. When an axon is crushed, an active process of axonal degeneration takes place at the part of the axon furthest from the cell body. This degeneration takes place quickly following the injury, with the part of the axon being sealed off at the membranes and broken down by macrophages.
This is known as Wallerian degeneration. Dying back of an axon can also take place in many neurodegenerative diseases, particularly when axonal transport is impaired; this is known as Wallerian-like degeneration. Studies suggest that the degeneration happens as a result of the axonal protein NMNAT2 being prevented from reaching all of the axon. Demyelination of axons causes the multitude of neurological symptoms found in the disease multiple sclerosis. Dysmyelination is the abnormal formation of the myelin sheath. This is implicated in several leukodystrophies, and also in schizophrenia. A severe traumatic brain injury can result in widespread lesions to nerve tracts damaging the axons in a condition known as diffuse axonal injury. This can lead to a persistent vegetative state. It has been shown in studies on the rat that axonal damage from a single mild traumatic brain injury can leave a susceptibility to further damage after repeated mild traumatic brain injuries. A nerve guidance conduit is an artificial means of guiding axon growth to enable neuroregeneration, and is one of the many treatments used for different kinds of nerve injury. History German anatomist Otto Friedrich Karl Deiters is generally credited with the discovery of the axon by distinguishing it from the dendrites. Swiss Rudolf Albert von Kölliker and German Robert Remak were the first to identify and characterize the axon initial segment. Kölliker named the axon in 1896. Louis-Antoine Ranvier was the first to describe the gaps or nodes found on axons, and for this contribution these axonal features are now commonly referred to as the nodes of Ranvier. Santiago Ramón y Cajal, a Spanish anatomist, proposed that axons were the output components of neurons, describing their functionality. Joseph Erlanger and Herbert Gasser earlier developed the classification system for peripheral nerve fibers, based on axonal conduction velocity, myelination, fiber size etc.
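The diameter-to-velocity relationship underlying the Erlanger–Gasser classification can be sketched in a few lines. Note that the rule of thumb of roughly 6 m/s of conduction velocity per micrometer of diameter for large myelinated fibers (often attributed to Hursh), the square-root scaling for unmyelinated fibers, and the diameter ranges below are approximate textbook values assumed for illustration, not figures from this article:

```python
# Toy illustration of the diameter/conduction-velocity relationship for
# peripheral nerve fibers. The conversion factors and diameter ranges are
# approximate textbook values (assumptions, not taken from this article).

# Erlanger-Gasser groups with approximate diameter ranges in micrometers
FIBER_DIAMETERS_UM = {
    "A-alpha": (13.0, 20.0),  # e.g. alpha motor neurons, proprioception
    "A-beta": (6.0, 12.0),    # touch mechanoreceptors
    "A-delta": (1.0, 5.0),    # fast pain, temperature
    "C": (0.2, 1.5),          # slow pain; unmyelinated
}

def conduction_velocity_m_s(diameter_um: float, myelinated: bool = True) -> float:
    """Rough conduction-velocity estimate from axon diameter."""
    if myelinated:
        # linear rule of thumb: ~6 m/s per micrometer of diameter
        return 6.0 * diameter_um
    # unmyelinated fibers conduct far more slowly; square-root scaling
    # is a common rough approximation
    return 1.7 * diameter_um ** 0.5

if __name__ == "__main__":
    for group, (lo, hi) in FIBER_DIAMETERS_UM.items():
        myelinated = group != "C"
        v_lo = conduction_velocity_m_s(lo, myelinated)
        v_hi = conduction_velocity_m_s(hi, myelinated)
        print(f"{group}: {lo}-{hi} um -> ~{v_lo:.1f}-{v_hi:.1f} m/s")
```

Under these assumptions a 20 µm Aα fiber comes out at about 120 m/s, matching the upper end of the velocities usually quoted for the fastest myelinated vertebrate axons.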
Alan Hodgkin and Andrew Huxley also employed the squid giant axon (1939), and by 1952 they had obtained a full quantitative description of the ionic basis of the action potential, leading to the formulation of the Hodgkin–Huxley model. Hodgkin and Huxley were jointly awarded the Nobel Prize for this work in 1963. The formulae detailing axonal conductance were extended to vertebrates in the Frankenhaeuser–Huxley equations. The understanding of the biochemical basis for action potential propagation has advanced further, and now includes many details about individual ion channels. Other animals The axons of invertebrates have been extensively studied. The longfin inshore squid, often used as a model organism, has the longest known axon. The giant squid has the largest known axon. Its size ranges from a half (typically) to one millimetre in diameter and is used in the control of its jet propulsion system. The fastest recorded conduction speed, 210 m/s, is found in the ensheathed axons of some pelagic penaeid shrimps; the usual range is between 90 and 200 m/s (cf. 100–120 m/s for the fastest myelinated vertebrate axon). In many cases, an axon originates at an axon hillock on the soma; such axons are said to have "somatic origin". Some axons with somatic origin have a "proximal" initial segment adjacent to the axon hillock, while others have a "distal" initial segment, separated from the soma by an extended axon hillock. In other cases, as seen in rat studies, an axon originates from a dendrite; such axons are said to have "dendritic origin". Some axons with dendritic origin similarly have a "proximal" initial segment that starts directly at the axon origin, while others have a "distal" initial segment, discernibly separated from the axon origin. In many species, some of the neurons have axons that emanate from the dendrite and not from the cell body; these are known as axon-carrying dendrites. See also Electrophysiology
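The Hodgkin–Huxley model mentioned in the history above can be summarized by its central membrane equation, given here in standard textbook form for orientation (the notation below is the conventional one, not drawn from this article):

$$C_m \frac{dV}{dt} = I_{\text{ext}} - \bar{g}_{\text{Na}}\, m^3 h \,(V - E_{\text{Na}}) - \bar{g}_{\text{K}}\, n^4 \,(V - E_{\text{K}}) - \bar{g}_{L}\,(V - E_{L})$$

where $V$ is the membrane potential, $C_m$ the membrane capacitance, $\bar{g}_{\text{Na}}$, $\bar{g}_{\text{K}}$ and $\bar{g}_{L}$ the maximal sodium, potassium and leak conductances, $E$ the corresponding reversal potentials, and $m$, $h$, $n$ voltage-dependent gating variables governed by their own first-order kinetics.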
major myelin protein is proteolipid protein, and in the PNS it is myelin basic protein. Nodes of Ranvier Nodes of Ranvier (also known as myelin sheath gaps) are short unmyelinated segments of a myelinated axon, which are found periodically interspersed between segments of the myelin sheath. Therefore, at the point of the node of Ranvier, the axon is reduced in diameter. These nodes are areas where action potentials can be generated. In saltatory conduction, electrical currents produced at each node of Ranvier are conducted with little attenuation to the next node in line, where they remain strong enough to generate another action potential. Thus in a myelinated axon, action potentials effectively "jump" from node to node, bypassing the myelinated stretches in between, resulting in a propagation speed much faster than even the fastest unmyelinated axon can sustain. Axon terminals An axon can divide into many branches called telodendria (Greek: "end of tree"). At the end of each telodendron is an axon terminal (also called a synaptic bouton, or terminal bouton). Axon terminals contain synaptic vesicles that store the neurotransmitter for release at the synapse. This makes multiple synaptic connections with other neurons possible. Sometimes the axon of a neuron may synapse onto dendrites of the same neuron, in which case it is known as an autapse. Action potentials Most axons carry signals in the form of action potentials, which are discrete electrochemical impulses that travel rapidly along an axon, starting at the cell body and terminating at points where the axon makes synaptic contact with target cells. The defining characteristic of an action potential is that it is "all-or-nothing": every action potential that an axon generates has essentially the same size and shape. This all-or-nothing characteristic allows action potentials to be transmitted from one end of a long axon to the other without any reduction in size. 
There are, however, some types of neurons with short axons that carry graded electrochemical signals of variable amplitude. When an action potential reaches a presynaptic terminal, it activates the synaptic transmission process. The first step is the rapid opening of calcium ion channels in the membrane of the axon, allowing calcium ions to flow inward across the membrane. The resulting increase in intracellular calcium concentration causes synaptic vesicles (tiny containers enclosed by a lipid membrane) filled with a neurotransmitter chemical to fuse with the axon's membrane and empty their contents into the extracellular space. The neurotransmitter is released from the presynaptic nerve through exocytosis. The neurotransmitter chemical then diffuses across to receptors located on the membrane of the target cell. The neurotransmitter binds to these receptors and activates them. Depending on the type of receptors that are activated, the effect on the target cell can be to excite it, inhibit it, or alter its metabolism in some way. This entire sequence of events often takes place in less than a thousandth of a second. Afterward, inside the presynaptic terminal, a new set of vesicles is moved into position next to the membrane, ready to be released when the next action potential arrives. The action potential is the final electrical step in the integration of synaptic messages at the scale of the neuron. Extracellular recordings of action potential propagation in axons have been demonstrated in freely moving animals. While extracellular somatic action potentials have been used to study cellular activity in freely moving animals such as place cells, axonal activity in both white and gray matter can also be recorded. Extracellular recordings of axonal action potential propagation are distinct from somatic action potentials in three ways: 1. The signal has a shorter peak–trough duration (~150 μs) than that of pyramidal cells (~500 μs) or interneurons (~250 μs). 2. 
The voltage change is triphasic. 3. Activity recorded on a tetrode is seen on only one of the four recording wires. In recordings from freely moving rats, axonal signals have been isolated in white matter tracts, including the alveus and the corpus callosum, as well as in hippocampal gray matter. In fact, the generation of action potentials in vivo is sequential in nature, and these sequential spikes constitute the digital codes in the neurons. Although previous studies indicate an axonal origin of a single spike evoked by short-term pulses, physiological signals in vivo trigger the initiation of sequential spikes at the cell bodies of the neurons. In addition to propagating action potentials to axonal terminals, the axon is able to amplify the action potentials, which ensures the secure propagation of sequential action potentials toward the axonal terminal. In terms of molecular mechanisms, voltage-gated sodium channels in the axons possess a lower threshold and a shorter refractory period in response to short-term pulses. Development and growth Development The development of the axon to its target is one of the six major stages in the overall development of the nervous system. Studies done on cultured hippocampal neurons suggest that neurons initially produce multiple neurites that are equivalent, yet only one of these neurites is destined to become the axon. It is unclear whether axon specification precedes axon elongation or vice versa, although recent evidence points to the latter. If an axon that is not fully developed is cut, the polarity can change and other neurites can potentially become the axon. This alteration of polarity only occurs when the axon is cut at least 10 μm shorter than the other neurites. After the incision is made, the longest neurite will become the future axon and all the other neurites, including the original axon, will turn into dendrites. Imposing an external force on a neurite, causing it to elongate, will make it become an axon. 
Nonetheless, axonal development is achieved through a complex interplay between extracellular signaling, intracellular signaling and cytoskeletal dynamics. Extracellular signaling The extracellular signals that propagate through the extracellular matrix surrounding neurons play a prominent role in axonal development. These signaling molecules include proteins, neurotrophic factors, and extracellular matrix and adhesion molecules. Netrin (also known as UNC-6), a secreted protein, functions in axon formation. When the UNC-5 netrin receptor is mutated, several neurites are irregularly projected out of neurons, and finally a single axon is extended anteriorly. The neurotrophic factors nerve growth factor (NGF), brain-derived neurotrophic factor (BDNF) and neurotrophin-3 (NTF3) are also involved in axon development and bind to Trk receptors. The ganglioside-converting enzyme plasma membrane ganglioside sialidase (PMGS), which is involved in the activation of TrkA at the tip of neurites, is required for the elongation of axons. PMGS asymmetrically distributes to the tip of the neurite that is destined to become the future axon. Intracellular signaling During axonal development, the activity of PI3K is increased at the tip of the destined axon. Disrupting the activity of PI3K inhibits axonal development. Activation of PI3K results in the production of phosphatidylinositol (3,4,5)-trisphosphate (PtdIns(3,4,5)P3), which can cause significant elongation of a neurite, converting it into an axon. As such, the overexpression of phosphatases that dephosphorylate PtdIns(3,4,5)P3 leads to a failure of polarization. Cytoskeletal dynamics The neurite with the lowest actin filament content will become the axon. PMGS concentration and F-actin content are inversely correlated; when PMGS becomes enriched at the tip of a neurite, its F-actin content is substantially decreased. 
In addition, exposure to actin-depolymerizing drugs and toxin B (which inactivates Rho signaling) causes the formation of multiple axons. Consequently, the interruption of the actin network in a growth cone will promote its neurite to become the axon. Growth Growing axons move through their environment via the growth cone, which is at the tip of the axon. The growth cone has a broad sheet-like extension called a lamellipodium, which contains protrusions called filopodia. The filopodia are the mechanism by which the entire process adheres to surfaces and explores the surrounding environment. Actin plays a major role in the mobility of this system. Environments with high levels of cell adhesion molecules (CAMs) create an ideal environment for axonal growth, seemingly providing a "sticky" surface for axons to grow along. Examples of CAMs specific to neural systems include N-CAM, TAG-1 (an axonal glycoprotein) and MAG, all of which are part of the immunoglobulin superfamily. Another set of molecules, called extracellular matrix adhesion molecules, also provides a sticky substrate for axons to grow along. Examples of these molecules include laminin, fibronectin, tenascin, and perlecan. Some of these are surface-bound to cells and thus act as short-range attractants or repellents. Others are diffusible ligands and thus can have long-range effects. Cells called guidepost cells assist in the guidance of neuronal axon growth. These cells that help axon guidance are typically other neurons, sometimes immature ones. When the axon has completed its growth at its connection to the target, the diameter of the axon can increase by up to five times, depending on the speed of conduction required. It has also been discovered through research that if the axons of a neuron were damaged, as long as the soma (the cell body of a neuron) is not damaged, the axons would regenerate and remake the synaptic connections with neurons with the help of guidepost cells. 
This is also referred to as neuroregeneration. Nogo-A is a type of neurite outgrowth inhibitory component that is present in the central nervous system myelin membranes (found in an axon). It has a crucial role in restricting axonal regeneration in the adult mammalian central nervous system. In recent studies, when Nogo-A is blocked and neutralized, it is possible to induce long-distance axonal regeneration, which leads to enhancement of functional recovery in the rat and mouse spinal cord. This has yet to be done on humans. A recent study has also found that macrophages activated through a specific inflammatory pathway activated by the Dectin-1 receptor are capable of promoting axon recovery, though also causing neurotoxicity in the neuron. Length regulation Axons vary greatly in length, from a few micrometers up to meters in some animals. This emphasizes that there must be a cellular length-regulation mechanism allowing the neurons both to sense the length of their axons and to control their growth accordingly. It was discovered that motor proteins play an important role in regulating the length of axons. Based on this observation, researchers developed an explicit model for axonal growth describing how motor proteins could affect the axon length on the molecular level. These studies suggest that motor proteins carry signaling molecules from the soma to the growth cone and vice versa, whose concentration oscillates in time with a length-dependent frequency. Classification The axons of neurons in the human peripheral nervous system can be classified based on their physical features and signal conduction properties. Axons were known to have different thicknesses (from 0.1 to 20 µm), and these differences were thought to relate to the speed at which an action potential could travel along the axon, its conduction velocity. 
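The hypothesized relationship between axon diameter and conduction velocity can be illustrated with an empirical rule of thumb often attributed to Hursh for large myelinated fibers: velocity in m/s is roughly 6 times the diameter in micrometres. The sketch below is only an illustration under that assumption; the function name is ours and the rule does not hold for thin or unmyelinated fibers.

```python
# Rough sketch (not from this article): Hursh's empirical factor of ~6
# relating conduction velocity to diameter for large myelinated fibers.

def conduction_velocity_m_per_s(diameter_um: float) -> float:
    """Approximate conduction velocity (m/s) of a large myelinated axon,
    assuming v ≈ 6 × diameter (µm)."""
    return 6.0 * diameter_um

# An Aα-range fiber of ~15 µm would conduct at roughly 90 m/s, within the
# 80–120 m/s commonly quoted for the fastest myelinated vertebrate axons.
print(conduction_velocity_m_per_s(15.0))  # prints 90.0
```

This crude linear scaling is one way to see why the fiber groups identified by diameter also sort cleanly by conduction velocity.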
Erlanger and Gasser proved this hypothesis, and identified several types of nerve fiber, establishing a relationship between the diameter of an axon and its nerve conduction velocity. They published their findings in 1941, giving the first classification of axons. 
success of the Achaemenid Persians in holding their far-flung empire together for as long as they did." Imperial Aramaic was highly standardised; its orthography was based more on historical roots than on any spoken dialect, and was inevitably influenced by Old Persian. The Aramaic glyph forms of the period are often divided into two main styles: the "lapidary" form, usually inscribed on hard surfaces like stone monuments, and a cursive form. The lapidary form tended to be more conservative, remaining more visually similar to Phoenician and early Aramaic. Both were in use through the Achaemenid Persian period, but the cursive form steadily gained ground over the lapidary, which had largely disappeared by the 3rd century BC. For centuries after the fall of the Achaemenid Empire in 331 BC, Imperial Aramaic, or something near enough to it to be recognisable, would remain an influence on the various native Iranian languages. The Aramaic script would survive as the essential characteristics of the Iranian Pahlavi writing system. Thirty Aramaic documents from Bactria have recently been discovered, and an analysis of them was published in November 2006. The texts, which were rendered on leather, reflect the use of Aramaic in the 4th century BC in the Persian Achaemenid administration of Bactria and Sogdiana. The widespread usage of Achaemenid Aramaic in the Middle East led to the gradual adoption of the Aramaic alphabet for writing Hebrew. Formerly, Hebrew had been written using an alphabet closer in form to that of Phoenician, the Paleo-Hebrew alphabet. Aramaic-derived scripts Since the evolution of the Aramaic alphabet out of the Phoenician one was a gradual process, the division of the world's alphabets into those derived from the Phoenician one directly and those derived from Phoenician via Aramaic is somewhat artificial. 
In general, the alphabets of the Mediterranean region (Anatolia, Greece, Italy) are classified as Phoenician-derived, adapted from around the 8th century BC, and those of the East (the Levant, Persia, Central Asia and India) are considered Aramaic-derived, adapted from around the 6th century BC from the Imperial Aramaic script of the Achaemenid Empire. After the fall of the Achaemenid Empire, the unity of the Imperial Aramaic script was lost, and it diversified into a number of descendant cursives. The Hebrew and Nabataean alphabets, as they stood by the Roman era, were little changed in style from the Imperial Aramaic alphabet. Ibn Khaldun (1332–1406) alleges that not only was the old Nabataean writing influenced by the "Syrian script" (i.e. Aramaic), but so was the old Chaldean script. A cursive Hebrew variant developed from the early centuries AD, but it remained restricted to the status of a variant used alongside the noncursive. By contrast, the cursive that developed out of the Nabataean alphabet in the same period soon became the standard for writing Arabic, evolving into the Arabic alphabet as it stood by the time of the early spread of Islam. The development of cursive versions of Aramaic also led to the creation of the Syriac, Palmyrene and Mandaic alphabets, which formed the basis of the historical scripts of Central Asia, such as the Sogdian and Mongolian alphabets. The Old Turkic script is generally considered to have its ultimate origins in Aramaic, in particular via the Pahlavi or Sogdian alphabets, as suggested by V. Thomsen, or possibly via Kharosthi (cf. Issyk inscription). The Brahmi script was also possibly derived from, or inspired by, Aramaic; the Brahmic family of scripts includes Devanagari. Languages using the alphabet Today, Biblical Aramaic, Jewish Neo-Aramaic dialects and the Aramaic language of the Talmud are written in the modern Hebrew alphabet (distinguished from the Old Hebrew script). 
In classical Jewish literature, the name given to the modern Hebrew script was "Ashurit" (the ancient Assyrian script), a script now known widely as the Aramaic script. It is believed that it was during the period of Assyrian dominion that the Aramaic script and language received official status. Syriac
and Christian Neo-Aramaic dialects are today written in the Syriac alphabet, which has superseded the more ancient Assyrian script and now bears its name. Mandaic is written in the Mandaic alphabet. The near-identical nature of the Aramaic and the classical Hebrew alphabets has caused Aramaic text to be typeset mostly in the standard Hebrew script in scholarly literature. Maaloula In Maaloula, one of the few surviving communities in which a Western Aramaic dialect is still spoken, an Aramaic institute was established in 2007 by Damascus University that teaches courses to keep the language alive. 
The institute's activities were suspended in 2010 amidst fears that the square Aramaic alphabet used in the program too closely resembled the square script of the Hebrew alphabet, and all the signs in the square Aramaic script were taken down. The program stated that it would instead use the more distinct Syriac alphabet, although use of the Aramaic alphabet has continued to some degree. Al Jazeera Arabic has also broadcast a program about Western Neo-Aramaic and the villages in which it is spoken, with the square script still in use. Letters Matres lectionis In Aramaic writing, Waw and Yodh serve a double function. Originally, they represented only the consonants w and y, but they were later adopted to indicate the long vowels ū and ī respectively as well (often also ō and ē respectively). In the latter role, they are known as matres lectionis, or "mothers of reading". Ālap, likewise, has some of the characteristics of a mater lectionis because, in initial positions, it indicates a glottal stop (followed by a vowel), but otherwise, it often also stands for the long vowels ā or
film shot of a group of characters, who are arranged so that all are visible to the camera. The usual arrangement is for the actors to stand in an irregular line from one side of the screen to the other, with the actors at the end coming forward a little and standing more in profile than the others. The purpose of the composition is to allow complex dialogue scenes to be played out without changes in camera position. In some literature, this
disease, is a hyperacute and frequently fatal form of ADEM. AHL is relatively rare (fewer than 100 cases have been reported in the medical literature); it is seen in about 2% of ADEM cases, and is characterized by necrotizing vasculitis of venules, hemorrhage, and edema. Death is common in the first week and overall mortality is about 70%, but increasing evidence points to favorable outcomes after aggressive treatment with corticosteroids, immunoglobulins, cyclophosphamide, and plasma exchange. About 70% of survivors show residual neurological deficits, but some survivors have shown surprisingly little deficit considering the magnitude of the white matter affected. This disease has been occasionally associated with ulcerative colitis and Crohn's disease, malaria, sepsis associated with immune complex deposition, methanol poisoning, and other underlying conditions. An anecdotal association with MS has also been reported. Laboratory studies that support a diagnosis of AHL are: peripheral leukocytosis, and cerebrospinal fluid (CSF) pleocytosis associated with normal glucose and increased protein. On magnetic resonance imaging (MRI), lesions of AHL typically show extensive T2-weighted and fluid-attenuated inversion recovery (FLAIR) white matter hyperintensities with areas of hemorrhage, significant edema, and mass effect. Treatment No controlled clinical trials have been conducted on ADEM treatment, but aggressive treatment aimed at rapidly reducing inflammation of the CNS is standard. The widely accepted first-line treatment is high doses of intravenous corticosteroids, such as methylprednisolone or dexamethasone, followed by 3–6 weeks of gradually decreasing oral doses of prednisolone. Patients treated with methylprednisolone have shown better outcomes than those treated with dexamethasone. Oral tapers of less than three weeks' duration show a higher chance of relapse, and tend to show poorer outcomes. 
Other anti-inflammatory and immunosuppressive therapies have been reported to show beneficial effects, such as plasmapheresis, high doses of intravenous immunoglobulin (IVIg), mitoxantrone and cyclophosphamide. These are considered alternative therapies, used when corticosteroids cannot be used or fail to show an effect. There is some evidence to suggest that patients may respond to a combination of methylprednisolone and immunoglobulins if they fail to respond to either separately. In a study of 16 children with ADEM, 10 recovered completely after high-dose methylprednisolone; one severe case that failed to respond to steroids recovered completely after IVIg; the five most severe cases, with ADEM and severe peripheral neuropathy, were treated with combined high-dose methylprednisolone and immunoglobulin: two remained paraplegic, one had motor and cognitive handicaps, and two recovered. A recent review of IVIg treatment of ADEM (of which the previous study formed the bulk of the cases) found that 70% of children showed complete recovery after treatment with IVIg, or IVIg plus corticosteroids. A study of IVIg treatment in adults with ADEM showed that IVIg seems more effective in treating sensory and motor disturbances, while steroids seem more effective in treating impairments of cognition, consciousness and rigor. This same study found one subject, a 71-year-old man who had not responded to steroids, who responded to an IVIg treatment 58 days after disease onset. Prognosis Full recovery is seen in 50 to 70% of cases, ranging to 70 to 90% recovery with some minor residual disability (typically assessed using measures such as mRS or EDSS); the average time to recovery is one to six months. The mortality rate may be as high as 5–10%. Poorer outcomes are associated with unresponsiveness to steroid therapy, unusually severe neurological symptoms, or sudden onset. 
Children tend to have more favorable outcomes than adults, and cases presenting without fever tend to have poorer outcomes. The latter effect may be due either to protective effects of fever, or to diagnosis and treatment being sought more rapidly when fever is present. ADEM can progress to MS. It will be considered MS if lesions appear at different times and in different brain areas. Motor deficits Residual motor deficits are estimated to remain in about 8 to 30% of cases, ranging in severity from mild clumsiness to ataxia and hemiparesis. Neurocognitive Patients with demyelinating illnesses, such as MS, have shown cognitive deficits even when there is minimal physical disability. Research suggests that similar effects are seen after ADEM, but that the deficits are less severe than those seen in MS. In one study, six children with ADEM (mean age at presentation 7.7 years) were tested on a range of neurocognitive tests after an average of 3.5 years of recovery. All six children performed in the normal range on most tests, including verbal IQ and performance IQ, but performed at least one standard deviation below age norms in at least one cognitive domain, such as complex attention (one child), short-term memory (one child) and internalizing behaviour/affect (two children). Group means for each cognitive domain were all within one standard deviation of age norms, demonstrating that, as a group, they were normal. These deficits were less severe than those seen in similarly aged children with a diagnosis of MS. Another study compared nineteen children with a history of ADEM, of which 10 were five years of age or younger at the time (average age 3.8 years old, tested an average of 3.9 years later) and nine were older (mean age 7.7y at time of ADEM, tested an
the possible clinical causes of anti-MOG associated encephalomyelitis. There are several theories about how the anti-MOG antibodies appear in a patient's serum: A preceding antigenic challenge can be identified in approximately two-thirds of people. Some viral infections thought to induce ADEM include influenza virus, dengue, enterovirus, measles, mumps, rubella, varicella zoster, Epstein–Barr virus, cytomegalovirus, herpes simplex virus, hepatitis A, coxsackievirus and COVID-19. Bacterial infections include Mycoplasma pneumoniae, Borrelia burgdorferi, Leptospira, and beta-hemolytic streptococci. Exposure to vaccines: the only vaccine proven to be related to ADEM is the Semple form of the rabies vaccine, but hepatitis B, pertussis, diphtheria, measles, mumps, rubella, pneumococcus, varicella, influenza, Japanese encephalitis, and polio vaccines have all been implicated. The majority of the studies that correlate vaccination with ADEM onset use small samples or case studies. Large-scale epidemiological studies (e.g., of the MMR vaccine or smallpox vaccine) do not show an increased risk of ADEM following vaccination. An upper bound for the risk of ADEM from measles vaccination, if it exists, can be estimated at 10 per million, which is far lower than the risk of developing ADEM from an actual measles infection, which is about 1 per 1,000 cases. For a rubella infection, the risk is 1 per 5,000 cases. Some early vaccines, later shown to have been contaminated with host animal CNS tissue, had ADEM incident rates as high as 1 in 600. In rare cases, ADEM seems to follow from organ transplantation. Diagnosis The term ADEM has been used inconsistently at different times. Currently, the commonly accepted international standard for the clinical case definition is the one published by the International Pediatric MS Study Group, revision 2007. 
Given that the definition is clinical, it is currently unknown whether all cases of ADEM are positive for the anti-MOG autoantibody, but in any case it seems strongly related to the ADEM diagnosis.

Differential diagnosis
Multiple sclerosis
While ADEM and MS both involve autoimmune demyelination, they differ in many clinical, genetic, imaging, and histopathological aspects. Some authors consider MS and its borderline forms to constitute a spectrum, differing only in chronicity, severity, and clinical course, while others consider them discretely different diseases. Typically, ADEM appears in children following an antigenic challenge and remains monophasic. Nevertheless, ADEM does occur in adults, and can also be clinically multiphasic. Problems for differential diagnosis increase due to the lack of agreement on a definition of multiple sclerosis. If MS were defined only by the separation in time and space of the demyelinating lesions, as McDonald did, that would not be enough to make a distinction, as some cases of ADEM satisfy these conditions. Therefore, some authors propose to draw the dividing line by the shape of the lesions around the veins: "perivenous vs. confluent demyelination". The pathology of ADEM is very similar to that of MS, with some differences. The pathological hallmark of ADEM is perivenular inflammation with limited "sleeves of demyelination". Nevertheless, MS-like plaques (confluent demyelination) can appear. Plaques in the white matter in MS are sharply delineated, while the glial scar in ADEM is smooth. Axons are better preserved in ADEM lesions. Inflammation in ADEM is widely disseminated and ill-defined, and finally, lesions in ADEM are strictly perivenous, while in MS they are disposed around veins, but not so sharply.
Nevertheless, the co-occurrence of perivenous and confluent demyelination in some individuals suggests pathogenic overlap between acute disseminated encephalomyelitis and multiple sclerosis, and the possibility of misclassification even with biopsy or postmortem examination. ADEM in adults can progress to MS.

Multiphasic disseminated encephalomyelitis
When the person has more than one demyelinating episode of ADEM, the disease is then called recurrent disseminated encephalomyelitis or multiphasic disseminated encephalomyelitis (MDEM). Anti-MOG auto-antibodies have been found to be related to this kind of ADEM. Another variant of ADEM in adults, also related to anti-MOG auto-antibodies, has been named fulminant disseminated encephalomyelitis; it has been reported to be clinically ADEM, but to show MS-like lesions on autopsy. It has been classified among the anti-MOG associated inflammatory demyelinating diseases.

Acute hemorrhagic leukoencephalitis
Acute hemorrhagic leukoencephalitis (AHL, or AHLE), acute hemorrhagic encephalomyelitis (AHEM), acute necrotizing hemorrhagic leukoencephalitis (ANHLE), Weston-Hurst syndrome, or Hurst's disease, is a hyperacute and frequently fatal form of ADEM. AHL is relatively rare (fewer than 100 cases have been reported in the medical literature); it is seen in about 2% of ADEM cases, and is characterized by necrotizing vasculitis of venules, hemorrhage, and edema. Death is common in the first week and overall mortality is about 70%, but increasing evidence points to favorable outcomes after aggressive treatment with corticosteroids, immunoglobulins, cyclophosphamide, and plasma exchange. About 70% of survivors show residual neurological deficits, but some survivors have shown surprisingly little deficit considering the magnitude of the white matter affected.
This disease has been occasionally associated with ulcerative colitis and Crohn's disease, malaria, sepsis associated with immune complex deposition, methanol poisoning, and other underlying conditions. An anecdotal association with MS has also been reported. Laboratory findings that support a diagnosis of AHL are peripheral leukocytosis and cerebrospinal fluid (CSF) pleocytosis associated with normal glucose and increased protein. On magnetic resonance imaging (MRI), lesions of AHL typically show extensive T2-weighted and fluid-attenuated inversion recovery (FLAIR) white matter hyperintensities with areas of hemorrhage, significant edema, and mass effect.

Treatment
No controlled clinical trials have been conducted on ADEM treatment, but aggressive treatment aimed at rapidly reducing inflammation of the CNS is standard. The widely accepted first-line treatment is high doses of intravenous corticosteroids, such as methylprednisolone or dexamethasone, followed by 3–6 weeks of gradually lower oral doses of prednisolone. Patients treated with methylprednisolone have shown better outcomes than those treated with dexamethasone. Oral tapers of less than three weeks' duration show a higher chance of relapsing, and tend to show poorer outcomes. Other anti-inflammatory and immunosuppressive therapies have been reported to show beneficial effect, such as plasmapheresis, high doses of intravenous
stepping, as well as staggering and lurching from side to side. Turning is also problematic and can result in falls. As cerebellar ataxia becomes severe, great assistance and effort are needed to stand and walk. Dysarthria, an impairment with articulation, may also be present and is characterized by "scanning" speech that consists of a slower rate, irregular rhythm, and variable volume. Also, slurring of speech, tremor of the voice, and ataxic respiration may occur. Cerebellar ataxia can result in incoordination of movement, particularly in the extremities. Overshooting (or hypermetria) occurs with finger-to-nose testing and heel-to-shin testing; thus, dysmetria is evident. Impairments with alternating movements (dysdiadochokinesia), as well as dysrhythmia, may also be displayed. Tremor of the head and trunk (titubation) may be seen in individuals with cerebellar ataxia. Dysmetria is thought to be caused by a deficit in the control of interaction torques in multijoint motion. Interaction torques are created at an associated joint when the primary joint is moved. For example, if a movement required reaching to touch a target in front of the body, flexion at the shoulder would create a torque at the elbow, while extension of the elbow would create a torque at the wrist. These torques increase as the speed of movement increases and must be compensated and adjusted for to create coordinated movement. This may, therefore, explain decreased coordination at higher movement velocities and accelerations. Dysfunction of the vestibulocerebellum (flocculonodular lobe) impairs balance and the control of eye movements. This presents itself with postural instability, in which the person tends to separate his/her feet upon standing, to gain a wider base and to avoid titubation (bodily oscillations tending to be forward-backward ones). The instability is, therefore, worsened when standing with the feet together, regardless of whether the eyes are open or closed.
This is a negative Romberg's test, or more accurately, it denotes the individual's inability to carry out the test, because the individual feels unstable even with open eyes. Dysfunction of the spinocerebellum (vermis and associated areas near the midline) presents itself with a wide-based "drunken sailor" gait (called truncal ataxia), characterised by uncertain starts and stops, lateral deviations, and unequal steps. As a result of this gait impairment, falling is a concern in patients with ataxia. Studies examining falls in this population show that 74–93% of patients have fallen at least once in the past year and up to 60% admit to fear of falling. Dysfunction of the cerebrocerebellum (lateral hemispheres) presents as disturbances in carrying out voluntary, planned movements by the extremities (called appendicular ataxia). These include: Intention tremor (coarse trembling, accentuated over the execution of voluntary movements, possibly involving the head and eyes, as well as the limbs and torso) Peculiar writing abnormalities (large, unequal letters, irregular underlining) A peculiar pattern of dysarthria (slurred speech, sometimes characterised by explosive variations in voice intensity despite a regular rhythm) Inability to perform rapidly alternating movements, known as dysdiadochokinesia, which could involve rapidly switching from pronation to supination of the forearm; movements become more irregular with increases of speed. Inability to judge distances or ranges of movement. This dysmetria is often seen as undershooting (hypometria) or overshooting (hypermetria) the required distance or range to reach a target, sometimes observed when a patient is asked to reach out and touch someone's finger or touch his or her own nose. The rebound phenomenon, also known as the loss of the check reflex, is also sometimes seen in patients with cerebellar ataxia, for example, when patients are flexing their elbows isometrically against a resistance.
When the resistance is suddenly removed without warning, the patients' arms may swing up and even strike themselves. With an intact check reflex, the patients check and activate the opposing triceps to slow and stop the movement. Patients may exhibit a constellation of subtle to overt cognitive symptoms, which are gathered under the terminology of Schmahmann's syndrome. Sensory The term sensory ataxia is used to indicate ataxia due to loss of proprioception, the loss of sensitivity to the positions of joint and body parts. This is generally caused by dysfunction of the dorsal columns of the spinal cord, because they carry proprioceptive information up to the brain. In some cases, the cause of sensory ataxia may instead be dysfunction of the various parts of the brain that receive positional information, including the cerebellum, thalamus, and parietal lobes. Sensory ataxia presents itself with an unsteady "stomping" gait with heavy heel strikes, as well as a postural instability that is usually worsened when the lack of proprioceptive input cannot be compensated for by visual input, such as in poorly lit environments. Physicians can find evidence of sensory ataxia during physical examination by having patients stand with their feet together and eyes shut. In affected patients, this will cause the instability to worsen markedly, producing wide oscillations and possibly a fall; this is called a positive Romberg's test. Worsening of the finger-pointing test with the eyes closed is another feature of sensory ataxia. Also, when patients are standing with arms and hands extended toward the physician, if the eyes are closed, the patients' fingers tend to "fall down" and then be restored to the horizontal extended position by sudden muscular contractions (the "ataxic hand"). 
Vestibular
The term vestibular ataxia is used to indicate ataxia due to dysfunction of the vestibular system, which in acute and unilateral cases is associated with prominent vertigo, nausea, and vomiting. In slow-onset, chronic bilateral cases of vestibular dysfunction, these characteristic manifestations may be absent, and dysequilibrium may be the sole presentation.

Causes
The three types of ataxia have overlapping causes, and so can either coexist or occur in isolation. Cerebellar ataxia can have many causes despite normal neuroimaging.

Focal lesions
Any type of focal lesion of the central nervous system (such as stroke, brain tumor, multiple sclerosis, inflammatory lesions [such as sarcoidosis], and "chronic lymphocytic inflammation with pontine perivascular enhancement responsive to steroids syndrome" [CLIPPERS]) will cause the type of ataxia corresponding to the site of the lesion: cerebellar if in the cerebellum; sensory if in the dorsal spinal cord, including cord compression by a thickened ligamentum flavum or stenosis of the bony spinal canal (and rarely in the thalamus or parietal lobe); or vestibular if in the vestibular system (including the vestibular areas of the cerebral cortex).

Exogenous substances (metabolic ataxia)
Exogenous substances that cause ataxia mainly do so because they have a depressant effect on central nervous system function. The most common example is ethanol (alcohol), which is capable of causing reversible cerebellar and vestibular ataxia. Chronic intake of ethanol causes atrophy of the cerebellum through oxidative and endoplasmic reticulum stresses induced by thiamine deficiency. Other examples include various prescription drugs (e.g. most antiepileptic drugs have cerebellar ataxia as a possible adverse effect), lithium levels over 1.5 mEq/L, ingestion of the synthetic cannabinoid HU-211, and various other medical and recreational drugs (e.g.
ketamine, PCP or dextromethorphan, all of which are NMDA receptor antagonists that produce a dissociative state at high doses). A further class of pharmaceuticals which can cause short-term ataxia, especially in high doses, are the benzodiazepines. Exposure to high levels of methylmercury, through consumption of fish with high mercury concentrations, is also a known cause of ataxia and other neurological disorders.

Radiation poisoning
Ataxia can be induced as a result of severe acute radiation poisoning with an absorbed dose of more than 30 grays.

Vitamin B12 deficiency
Vitamin B12 deficiency may cause, among several neurological abnormalities, overlapping cerebellar and sensory ataxia.

Hypothyroidism
Symptoms of neurological dysfunction may be the presenting feature in some patients with hypothyroidism. These include reversible cerebellar ataxia, dementia, peripheral neuropathy, psychosis and coma. Most of the neurological complications improve completely after thyroid hormone replacement therapy.

Causes of isolated sensory ataxia
Peripheral neuropathies may cause generalised or localised sensory ataxia (e.g. a limb only) depending on the extent of the neuropathic involvement. Spinal disorders of various types may cause sensory ataxia below the lesioned level when they involve the dorsal columns.

Non-hereditary cerebellar degeneration
Non-hereditary causes of cerebellar degeneration include chronic alcohol use disorder, head injury, paraneoplastic and non-paraneoplastic autoimmune ataxia, high altitude cerebral oedema, coeliac disease, normal pressure hydrocephalus and infectious or post-infectious cerebellitis.

Hereditary ataxias
Ataxia may depend on hereditary disorders consisting of degeneration of the cerebellum or of the spine; most cases feature both to some extent, and therefore present with overlapping cerebellar and sensory ataxia, even though one is often more evident than the other.
Hereditary disorders causing ataxia include autosomal dominant ones such as spinocerebellar ataxia, episodic ataxia, and dentatorubropallidoluysian atrophy, as well as autosomal recessive disorders such as Friedreich's ataxia (sensory and cerebellar, with the former predominating) and Niemann–Pick disease, ataxia-telangiectasia (sensory and cerebellar, with
of her writing. The notes are around three times longer than the article itself and include (in Note G), in complete detail, a method for calculating a sequence of Bernoulli numbers using the Analytical Engine, which might have run correctly had it ever been built (only Babbage's Difference Engine has been built, completed in London in 2002). Based on this work, Lovelace is now considered by many to be the first computer programmer and her method has been called the world's first computer program. Others dispute this because some of Charles Babbage's earlier writings could be considered computer programs. Note G also contains Lovelace's dismissal of artificial intelligence. She wrote that "The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform. It can follow analysis; but it has no power of anticipating any analytical relations or truths." This objection has been the subject of much debate and rebuttal, for example by Alan Turing in his paper "Computing Machinery and Intelligence". Lovelace and Babbage had a minor falling out when the papers were published, when he tried to leave his own statement (criticising the government's treatment of his Engine) as an unsigned preface, which could have been mistakenly interpreted as a joint declaration. When Taylor's Scientific Memoirs ruled that the statement should be signed, Babbage wrote to Lovelace asking her to withdraw the paper. This was the first that she knew he was leaving it unsigned, and she wrote back refusing to withdraw the paper. The historian Benjamin Woolley theorised that "His actions suggested he had so enthusiastically sought Ada's involvement, and so happily indulged her ... because of her 'celebrated name'." Their friendship recovered, and they continued to correspond. 
On 12 August 1851, when she was dying of cancer, Lovelace wrote to him asking him to be her executor, though this letter did not give him the necessary legal authority. Part of the terrace at Worthy Manor was known as Philosopher's Walk, as it was there that Lovelace and Babbage were reputed to have walked while discussing mathematical principles. First computer program In 1840, Babbage was invited to give a seminar at the University of Turin about his Analytical Engine. Luigi Menabrea, a young Italian engineer and the future Prime Minister of Italy, transcribed Babbage's lecture into French, and this transcript was subsequently published in the Bibliothèque universelle de Genève in October 1842. Babbage's friend Charles Wheatstone commissioned Ada Lovelace to translate Menabrea's paper into English. She then augmented the paper with notes, which were added to the translation. Ada Lovelace spent the better part of a year doing this, assisted with input from Babbage. These notes, which are more extensive than Menabrea's paper, were then published in the September 1843 edition of Taylor's Scientific Memoirs under the initialism AAL. Ada Lovelace's notes were labelled alphabetically from A to G. In note G, she describes an algorithm for the Analytical Engine to compute Bernoulli numbers. It is considered to be the first published algorithm ever specifically tailored for implementation on a computer, and Ada Lovelace has often been cited as the first computer programmer for this reason. The engine was never completed so her program was never tested. In 1953, more than a century after her death, Ada Lovelace's notes on Babbage's Analytical Engine were republished as an appendix to B. V. Bowden's Faster than Thought: A Symposium on Digital Computing Machines. The engine has now been recognised as an early model for a computer and her notes as a description of a computer and software. 
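Note G presented the Bernoulli-number computation as a table of operations for the Analytical Engine, and the note's own indexing of the numbers differs from the modern one. Purely as a modern illustration (not a transcription of Lovelace's table), the same quantities can be generated from the standard recurrence, in which the Bernoulli numbers B_0..B_m satisfy the linear relation sum over j of C(m+1, j)·B_j = 0, so each B_m is determined by its predecessors. A minimal Python sketch, using exact rational arithmetic and the B_1 = -1/2 convention (the function name is our own):

```python
from fractions import Fraction
from math import comb

def bernoulli(n):
    """Return the Bernoulli numbers B_0..B_n (convention B_1 = -1/2),
    computed exactly from the recurrence
        sum_{j=0}^{m} C(m+1, j) * B_j = 0.
    """
    B = [Fraction(1)]                                   # B_0 = 1
    for m in range(1, n + 1):
        acc = sum(comb(m + 1, j) * B[j] for j in range(m))
        B.append(-acc / (m + 1))                        # solve for B_m
    return B

# The nonzero values up to B_8: B_2 = 1/6, B_4 = -1/30, B_6 = 1/42, B_8 = -1/30.
print(bernoulli(8))
```

Using Fraction keeps every intermediate result exact, which mirrors the fact that the Analytical Engine's program manipulated the numbers symbolically through a fixed sequence of arithmetic operations rather than floating-point approximations.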
Insight into potential of computing devices In her notes, Ada Lovelace emphasised the difference between the Analytical Engine and previous calculating machines, particularly its ability to be programmed to solve problems of any complexity. She realised the potential of the device extended far beyond mere number crunching. In her notes, she wrote: This analysis was an important development from previous ideas about the capabilities of computing devices and anticipated the implications of modern computing one hundred years before they were realised. Walter Isaacson ascribes Ada's insight regarding the application of computing to any process based on logical symbols to an observation about textiles: "When she saw some mechanical looms that used punchcards to direct the weaving of beautiful patterns, it reminded her of how Babbage's engine used punched cards to make calculations." This insight is seen as significant by writers such as Betty Toole and Benjamin Woolley, as well as the programmer John Graham-Cumming, whose project Plan 28 has the aim of constructing the first complete Analytical Engine. According to the historian of computing and Babbage specialist Doron Swade: Ada saw something that Babbage in some sense failed to see. In Babbage's world his engines were bound by number...What Lovelace saw...was that number could represent entities other than quantity. So once you had a machine for manipulating numbers, if those numbers represented other things, letters, musical notes, then the machine could manipulate symbols of which number was one instance, according to rules. 
It is this fundamental transition from a machine which is a number cruncher to a machine for manipulating symbols according to rules that is the fundamental transition from calculation to computation—to general-purpose computation—and looking back from the present high ground of modern computing, if we are looking and sifting history for that transition, then that transition was made explicitly by Ada in that 1843 paper. Controversy over contribution Though Lovelace is often referred to as the first computer programmer, some biographers, computer scientists and historians of computing claim otherwise. Allan G. Bromley, in the 1990 article Difference and Analytical Engines: Bruce Collier, who later wrote a biography of Babbage, wrote in his 1970 Harvard University PhD thesis that Lovelace "made a considerable contribution to publicizing the Analytical Engine, but there is no evidence that she advanced the design or theory of it in any way". Eugene Eric Kim and Betty Alexandra Toole consider it "incorrect" to regard Lovelace as the first computer programmer, as Babbage wrote the initial programs for his Analytical Engine, although the majority were never published. Bromley notes several dozen sample programs prepared by Babbage between 1837 and 1840, all substantially predating Lovelace's notes. Dorothy K. Stein regards Lovelace's notes as "more a reflection of the mathematical uncertainty of the author, the political purposes of the inventor, and, above all, of the social and cultural context in which it was written, than a blueprint for a scientific development." Doron Swade, a specialist on history of computing known for his work on Babbage, discussed Lovelace during a lecture on Babbage's analytical engine. 
He explained that Ada was only a "promising beginner" rather than a genius in mathematics, that she began studying basic concepts of mathematics five years after Babbage conceived the analytical engine, so she could not have made important contributions to it, and that she only published the first computer program rather than actually writing it. But he agrees that Ada was the only person to see the potential of the analytical engine as a machine capable of expressing entities other than quantities. In his self-published book, Idea Makers, Stephen Wolfram defends Lovelace's contributions. While acknowledging that Babbage wrote several unpublished algorithms for the Analytical Engine prior to Lovelace's notes, Wolfram argues that "there's nothing as sophisticated—or as clean—as Ada's computation of the Bernoulli numbers. Babbage certainly helped and commented on Ada's work, but she was definitely the driver of it." Wolfram then suggests that Lovelace's main achievement was to distill from Babbage's correspondence "a clear exposition of the abstract operation of the machine—something which Babbage never did."

In popular culture
1810s
Lord Byron wrote the poem "Fare Thee Well" to his wife Lady Byron in 1816, following their separation after the birth of Ada Lovelace. In the poem he writes:

And when thou would'st solace gather—
When our child's first accents flow—
Wilt thou teach her to say "Father!"
Though his care she must forego?
When her little hands shall press thee—
When her lip to thine is pressed—
Think of him whose prayer shall bless thee—
Think of him thy love had blessed!
Should her lineaments resemble
Those thou never more may'st see,
Then thy heart will softly tremble
With a pulse yet true to me.

1970s
Lovelace is portrayed in Romulus Linney's 1977 play Childe Byron.
his paper "Computing Machinery and Intelligence". Lovelace and Babbage had a minor falling out when the papers were published, when he tried to leave his own statement (criticising the government's treatment of his Engine) as an unsigned preface, which could have been mistakenly interpreted as a joint declaration. When Taylor's Scientific Memoirs ruled that the statement should be signed, Babbage wrote to Lovelace asking her to withdraw the paper. This was the first that she knew he was leaving it unsigned, and she wrote back refusing to withdraw the paper. The historian Benjamin Woolley theorised that "His actions suggested he had so enthusiastically sought Ada's involvement, and so happily indulged her ... because of her 'celebrated name'." Their friendship recovered, and they continued to correspond. On 12 August 1851, when she was dying of cancer, Lovelace wrote to him asking him to be her executor, though this letter did not give him the necessary legal authority. Part of the terrace at Worthy Manor was known as Philosopher's Walk, as it was there that Lovelace and Babbage were reputed to have walked while discussing mathematical principles. First computer program In 1840, Babbage was invited to give a seminar at the University of Turin about his Analytical Engine. Luigi Menabrea, a young Italian engineer and the future Prime Minister of Italy, transcribed Babbage's lecture into French, and this transcript was subsequently published in the Bibliothèque universelle de Genève in October 1842. Babbage's friend Charles Wheatstone commissioned Ada Lovelace to translate Menabrea's paper into English. She then augmented the paper with notes, which were added to the translation. Ada Lovelace spent the better part of a year doing this, assisted with input from Babbage. These notes, which are more extensive than Menabrea's paper, were then published in the September 1843 edition of Taylor's Scientific Memoirs under the initialism AAL. 
Ada Lovelace's notes were labelled alphabetically from A to G. In note G, she describes an algorithm for the Analytical Engine to compute Bernoulli numbers. It is considered to be the first published algorithm ever specifically tailored for implementation on a computer, and Ada Lovelace has often been cited as the first computer programmer for this reason. The engine was never completed, so her program was never tested. In 1953, more than a century after her death, Ada Lovelace's notes on Babbage's Analytical Engine were republished as an appendix to B. V. Bowden's Faster than Thought: A Symposium on Digital Computing Machines. The engine has now been recognised as an early model for a computer and her notes as a description of a computer and software. Insight into potential of computing devices In her notes, Ada Lovelace emphasised the difference between the Analytical Engine and previous calculating machines, particularly its ability to be programmed to solve problems of any complexity. She realised the potential of the device extended far beyond mere number crunching: she observed that the engine might act upon things other than number, provided their mutual relations could be expressed symbolically. This analysis was an important development from previous ideas about the capabilities of computing devices and anticipated the implications of modern computing one hundred years before they were realised. Walter Isaacson ascribes Ada's insight regarding the application of computing to any process based on logical symbols to an observation about textiles: "When she saw some mechanical looms that used punchcards to direct the weaving of beautiful patterns, it reminded her of how Babbage's engine used punched cards to make calculations." This insight is seen as significant by writers such as Betty Toole and Benjamin Woolley, as well as the programmer John Graham-Cumming, whose project Plan 28 has the aim of constructing the first complete Analytical Engine.
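For context, the quantities Note G was designed to produce can be generated by a short modern routine. The sketch below is hedged: it is not a transcription of Lovelace's table of Engine operations, and it uses today's indexing and sign conventions (B_1 = -1/2) together with the standard recursive identity on binomial coefficients, rather than the recurrence she derived for the Engine.

```python
from fractions import Fraction
from math import comb

def bernoulli(n):
    """Return exact Bernoulli numbers B_0..B_n (convention B_1 = -1/2),
    via the identity sum_{j=0}^{m} C(m+1, j) * B_j = 0 for m >= 1."""
    B = [Fraction(1)]  # B_0 = 1
    for m in range(1, n + 1):
        acc = sum(comb(m + 1, j) * B[j] for j in range(m))
        B.append(-acc / (m + 1))
    return B

# B_2 = 1/6, B_4 = -1/30, B_6 = 1/42; odd-indexed values beyond B_1 are zero.
print(bernoulli(8))
```

Lovelace's own scheme instead computed each number from the previously obtained ones through a fixed sequence of Engine operations, including repeated "cycles" of operations, which is part of why Note G is often described as the first published computer program.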
According to the historian of computing and Babbage specialist Doron Swade: Ada saw something that Babbage in some sense failed to see. In Babbage's world his engines were bound by number...What Lovelace saw...was that number could represent entities other than quantity. So once you had a machine for manipulating numbers, if those numbers represented other things, letters, musical notes, then the machine could manipulate symbols of which number was one instance, according to rules. It is this fundamental transition from a machine which is a number cruncher to a machine for manipulating symbols according to rules that is the fundamental transition from calculation to computation—to general-purpose computation—and looking back from the present high ground of modern computing, if we are looking and sifting history for that transition, then that transition was made explicitly by Ada in that 1843 paper. Controversy over contribution Though Lovelace is often referred to as the first computer programmer, some biographers, computer scientists and historians of computing claim otherwise. Allan G. Bromley disputed her contribution in the 1990 article "Difference and Analytical Engines". Bruce Collier, who later wrote a biography of Babbage, wrote in his 1970 Harvard University PhD thesis that Lovelace "made a considerable contribution to publicizing the Analytical Engine, but there is no evidence that she advanced the design or theory of it in any way". Eugene Eric Kim and Betty Alexandra Toole consider it "incorrect" to regard Lovelace as the first computer programmer, as Babbage wrote the initial programs for his Analytical Engine, although the majority were never published. Bromley notes several dozen sample programs prepared by Babbage between 1837 and 1840, all substantially predating Lovelace's notes. Dorothy K.
Stein regards Lovelace's notes as "more a reflection of the mathematical uncertainty of the author, the political purposes of the inventor, and, above all, of the social and cultural context in which it was written, than a blueprint for a scientific development." Doron Swade, a specialist in the history of computing known for his work on Babbage, discussed Lovelace during a lecture on Babbage's analytical engine. He explained that Ada was only a "promising beginner" rather than a genius in mathematics, that she began studying basic concepts of mathematics five years after Babbage conceived the analytical engine, so she could not have made important contributions to it, and that she only published the first computer program rather than actually writing it. But he agrees that Ada was the only person to see the potential of the analytical engine as a machine capable of expressing entities other than quantities. In his self-published book, Idea Makers, Stephen Wolfram defends Lovelace's contributions. While acknowledging that Babbage wrote several unpublished algorithms for the Analytical Engine prior to Lovelace's notes, Wolfram argues that "there's nothing as sophisticated—or as clean—as Ada's computation of the Bernoulli numbers. Babbage certainly helped and commented on Ada's work, but she was definitely the driver of it." Wolfram then suggests that Lovelace's main achievement was to distill from Babbage's correspondence "a clear exposition of the abstract operation of the machine—something which Babbage never did." In popular culture 1810s Lord Byron wrote the poem "Fare Thee Well" to his wife Lady Byron in 1816, following their separation after the birth of Ada Lovelace. In the poem he writes:
And when thou would'st solace gather—
When our child's first accents flow—
Wilt thou teach her to say "Father!"
Though his care she must forego?
When her little hands shall press thee—
When her lip to thine is pressed—
Think of him whose prayer shall bless thee—
Think of him thy love had blessed!
Should her lineaments resemble
Those thou never more may'st see,
Then thy heart will softly tremble
With a pulse yet true to me.
1970s Lovelace is portrayed in Romulus Linney's 1977 play Childe Byron. 1990s In the 1990 steampunk novel The Difference Engine by William Gibson and Bruce Sterling, Lovelace delivers a lecture on the "punched cards" programme which proves Gödel's incompleteness theorems decades before their actual discovery. In the 1997 film Conceiving Ada, a computer scientist obsessed with Ada finds a way of communicating with her in the past by means of "undying information waves". In Tom Stoppard's 1993 play Arcadia, the precocious teenage genius Thomasina Coverly—a character "apparently based" on Ada Lovelace (the play also involves Lord Byron)—comes to understand chaos theory, and theorises the second law of thermodynamics, before either is officially recognised. 2000s Lovelace features in John Crowley's 2005 novel, Lord Byron's Novel: The Evening Land, as an unseen character whose personality is forcefully depicted in her annotations and anti-heroic efforts to archive her father's lost novel. 2010s The 2015 play Ada and the Engine by Lauren Gunderson portrays Lovelace and Charles Babbage in unrequited love, and it imagines a post-death meeting between Lovelace and her father. Lovelace and Babbage are the main characters in Sydney Padua's webcomic and graphic novel The Thrilling Adventures of Lovelace and Babbage. The comic features extensive footnotes on the history of Ada Lovelace, and many lines of dialogue are drawn from actual correspondence. Lovelace and Mary Shelley as teenagers are the central characters in Jordan Stratford's steampunk series, The Wollstonecraft Detective Agency.
Lovelace, identified as Ada Augusta Byron, is portrayed by Lily Lesser in the second season of The Frankenstein Chronicles. She is employed as an "analyst" to provide the workings of a life-sized humanoid automaton. The brass workings of the machine are reminiscent of Babbage's analytical engine. Her employment is described as keeping her occupied until she returns to her studies in advanced mathematics. Lovelace and Babbage appear as characters in the second season of the ITV series Victoria (2017). Emerald Fennell portrays Lovelace in the episode, "The Green-Eyed Monster." The Cardano cryptocurrency platform, which was launched in 2017, uses Ada as the name for their cryptocurrency and Lovelace as the smallest sub-unit of an Ada. "Lovelace" is the name given to the operating system designed by the character Cameron Howe in Halt and Catch Fire. Lovelace is a primary character in the 2019 Big Finish Doctor Who audio play The Enchantress of Numbers, starring Tom Baker as the Fourth Doctor and Jane Slavin as his current companion, WPC Ann Kelso. Lovelace is played by Finty Williams. In 2019, Lovelace is a featured character in the play STEM FEMMES by Philadelphia theater company Applied Mechanics. 2020s Lovelace features as a character in "Spyfall, Part 2", the second episode of Doctor Who, series 12, which first aired on BBC One on 5 January 2020. The character was portrayed by Sylvie Briggs, alongside characterisations of Charles Babbage and Noor Inayat Khan. In 2021, Nvidia named their upcoming GPU architecture (to be released in 2022), "Ada Lovelace", after her. Commemoration The computer language Ada, created on behalf of the United States Department of Defense, was named after Lovelace. The reference manual for the language was approved on 10 December 1980 and the Department of Defense Military Standard for the language, MIL-STD-1815, was given the number of the year of her birth. 
In 1981, the Association for Women in Computing inaugurated its Ada Lovelace Award. Since 1998, the British Computer Society (BCS) has awarded the Lovelace Medal, and in 2008 initiated an annual competition for women students. BCSWomen sponsors the Lovelace Colloquium, an annual conference for women undergraduates. Ada College is a further-education college in Tottenham Hale, London, focused on digital skills. Ada Lovelace Day is an annual event celebrated on the second Tuesday of October, which began in 2009. Its goal is to "... raise the profile of women in science, technology, engineering, and maths," and to "create new role models for girls and women" in these fields. Events have included Wikipedia edit-a-thons with the aim of improving the representation of women on Wikipedia in terms of articles and editors to reduce unintended gender bias on Wikipedia. The Ada Initiative was a non-profit organisation dedicated to increasing the involvement of women in the free culture and open source movements. The Engineering in Computer Science and Telecommunications College building in Zaragoza University is called the Ada Byron Building. The computer centre in the village of Porlock, near where Lovelace lived, is named after her. Ada Lovelace House is a council-owned building in Kirkby-in-Ashfield, Nottinghamshire, near where Lovelace spent her infancy. In 2012, a Google Doodle and blog post honoured her on her birthday. In 2013, Ada Developers Academy was founded and named after her. The mission of Ada Developers Academy is to diversify tech by providing women and gender diverse people the skills, experience, and community support to become professional software developers to change the face of tech. On 17 September 2013, an episode of Great Lives about Ada Lovelace aired. As of November 2015, all new British passports have included an illustration of Lovelace and Babbage. In 2017, a Google Doodle honoured her with other women on International Women's Day. 
On 2 February 2018, Satellogic, a high-resolution Earth observation imaging and analytics company, launched a ÑuSat type micro-satellite named in honour of Ada Lovelace. In March 2018, The New York Times published a belated obituary for Ada Lovelace. On 27 July 2018, Senator Ron Wyden submitted, in the United States Senate, the designation of 9 October 2018 as National Ada Lovelace Day: "To honor the life and contributions of Ada Lovelace as a leading woman in science and mathematics". The resolution (S.Res.592) was considered, and agreed to without amendment and with a preamble by unanimous consent. In November 2020, it was announced that Trinity College Dublin, whose library had previously held forty busts, all of them of men, was commissioning four new busts of women, one of whom was to be Lovelace. Bicentenary The bicentenary of Ada Lovelace's birth was celebrated with a number of events, including: The Ada Lovelace Bicentenary Lectures on Computability, Israel Institute for Advanced Studies, 20 December 2015 – 31 January 2016. Ada Lovelace Symposium, University of Oxford, 13–14 October 2015. Ada.Ada.Ada, a one-woman show about the life and work of Ada Lovelace (using an LED dress), premiered at Edinburgh International Science Festival on 11 April 2015, and continues to tour internationally to promote diversity in STEM at technology conferences, businesses, government and educational organisations. Special exhibitions were displayed by the Science Museum in London, England and the Weston Library (part of the Bodleian Library) in Oxford, England. Publications Lovelace, Ada King. Ada, the Enchantress of Numbers: A Selection from the Letters of Lord Byron's Daughter and her Description of the First Computer. Mill Valley, CA: Strawberry Press, 1992. Publication history Six copies of the 1843 first edition of Sketch of the Analytical Engine with Ada Lovelace's "Notes" have been located.
Three are held at Harvard University, one at the University of Oklahoma, and one at the United States Air Force Academy. On 20 July 2018, the sixth copy was sold at auction to an anonymous buyer for £95,000. A digital facsimile of one of the copies in the Harvard University Library is available online. In December 2016, a letter written by Ada Lovelace was forfeited by Martin Shkreli to the New York State Department of Taxation and Finance for unpaid taxes owed by Shkreli. See also Ai-Da (robot) Code: Debugging the Gender Gap List of pioneers in computer science Timeline of women in science Women in computing Women in STEM fields Further reading Miranda Seymour, In Byron's Wake: The Turbulent Lives of Byron's Wife and Daughter: Annabella Milbanke and Ada Lovelace, Pegasus, 2018, 547 pp. Christopher Hollings, Ursula Martin, and Adrian Rice, Ada Lovelace: The Making of a Computer Scientist, Bodleian Library, 2018, 114 pp. Jenny Uglow, "Stepping Out of Byron's Shadow", The New York Review of Books, vol. LXV, no. 18 (22 November 2018), pp. 30–32. Jennifer Chiaverini, Enchantress of Numbers, Dutton, 2017, 426 pp. External links "Ada's Army gets set to rewrite history at Inspirefest 2018" by Luke Maxwell, 4 August 2018 "Untangling the Tale of Ada Lovelace" by Stephen Wolfram, December 2015
told "with tenderness and charm", while the Chicago Tribune concluded: "It's as though he turned back the pages of an old diary and told, with rekindled emotion, of the pangs of pain and the sharp, clear sweetness of a boy's first love." Helen Constance White wrote in The Capital Times that it was "...the best articulated, the most fully disciplined of his stories." This was followed in 1943 by Shadow of Night, a Scribners' novel of which The Chicago Sun wrote: "Structurally it has the perfection of a carved jewel...A psychological novel of the first order, and an adventure tale that is unique and inspiriting." In November 1945, however, Derleth's work was attacked by his one-time admirer and mentor, Sinclair Lewis. Writing in Esquire, Lewis observed, "It is a proof of Mr. Derleth's merit that he makes one want to make the journey and see his particular Avalon: The Wisconsin River shining among its islands, and the castles of Baron Pierneau and Hercules Dousman. He is a champion and a justification of regionalism. Yet he is also a burly, bounding, bustling, self-confident, opinionated, and highly-sweatered young man with faults so grievous that a melancholy perusal of them may be of more value to apprentices than a study of his serious virtues. If he could ever be persuaded that he isn't half as good as he thinks he is, if he would learn the art of sitting still and using a blue pencil, he might become twice as good as he thinks he is – which would about rank him with Homer." Derleth good-humoredly reprinted the criticism along with a photograph of himself sans sweater, on the back cover of his 1948 country journal: Village Daybook. A lighter side to the Sac Prairie Saga is a series of quasi-autobiographical short stories known as the "Gus Elker Stories", amusing tales of country life that Peter Ruber, Derleth's last editor, said were "...models of construction and...fused with some of the most memorable characters in American literature."
Most were written between 1934 and the late 1940s, though the last, "Tail of the Dog", was published in 1959 and won the Scholastic Magazine short story award for the year. The series was collected and republished in Country Matters in 1996. Walden West, published in 1961, is considered by many Derleth's finest work. This prose meditation is built out of the same fundamental material as the series of Sac Prairie journals, but is organized around three themes: "the persistence of memory...the sounds and odors of the country...and Thoreau's observation that the 'mass of men lead lives of quiet desperation.'" It is a blend of nature writing, philosophic musings, and careful observation of the people and place of "Sac Prairie." Of this work, George Vukelich, author of "North Country Notebook", writes: "Derleth's Walden West is...the equal of Sherwood Anderson's Winesburg, Ohio, Thornton Wilder's Our Town, and Edgar Lee Masters' Spoon River Anthology." This was followed eight years later by Return to Walden West, a work of similar quality, but with a more noticeable environmentalist edge to the writing, notes critic Norbert Blei. A close literary relative of the Sac Prairie Saga was Derleth's Wisconsin Saga, which comprises several historical novels. Detective and mystery fiction Detective fiction represented another substantial body of Derleth's work. Most notable in this body of work was a series of 70 stories in affectionate pastiche of Sherlock Holmes, whose creator, Sir Arthur Conan Doyle, he admired greatly. These included one published novel as well (Mr. Fairlie's Final Journey). The series features a (Sherlock Holmes-styled) British detective named Solar Pons, of 7B Praed Street in London. The series was greatly admired by such notable writers and critics of mystery and detective fiction as Ellery Queen (Frederic Dannay), Anthony Boucher, Vincent Starrett and Howard Haycraft.
In his 1944 volume The Misadventures of Sherlock Holmes, Ellery Queen wrote of Derleth's The Norcross Riddle, an early Pons story: "How many budding authors, not even old enough to vote, could have captured the spirit and atmosphere with as much fidelity?" Queen adds, "...and his choice of the euphonic Solar Pons is an appealing addition to the fascinating lore of Sherlockian nomenclature." Vincent Starrett, in his foreword to the 1964 edition of The Casebook of Solar Pons, wrote that the series is "...as sparkling a galaxy of Sherlockian pastiches as we have had since the canonical entertainments came to an end." Despite close similarities to Doyle's creation, Pons lived in the post-World War I era, in the decades of the 1920s and 1930s. Though Derleth never wrote a Pons novel to equal The Hound of the Baskervilles, editor Peter Ruber wrote: "...Derleth produced more than a few Solar Pons stories almost as good as Sir Arthur's, and many that had better plot construction." Although these stories were a form of diversion for Derleth, Ruber, who edited The Original Text Solar Pons Omnibus Edition (2000), argued: "Because the stories were generally of such high quality, they ought to be assessed on their own merits as a unique contribution in the annals of mystery fiction, rather than suffering comparison as one of the endless imitators of Sherlock Holmes." Some of the stories were self-published, through a new imprint called "Mycroft & Moran", an appellation of humorous significance to Holmesian scholars. For approximately a decade, an active supporting group was the Praed Street Irregulars, patterned after the Baker Street Irregulars. In 1946, Conan Doyle's two sons made some attempts to force Derleth to cease publishing the Solar Pons series, but the efforts were unsuccessful and eventually withdrawn. Derleth's mystery and detective fiction also included a series of works set in Sac Prairie and featuring Judge Peck as the central character. 
Youth and children's fiction Derleth wrote many and varied children's works, including biographies meant to introduce younger readers to explorer Jacques Marquette, as well as Ralph Waldo Emerson and Henry David Thoreau. Arguably most important among his works for younger readers, however, is the Steve and Sim Mystery Series, also known as the Mill Creek Irregulars series. The ten-volume series, published between 1958 and 1970, is set in Sac Prairie of the 1920s and can thus be considered in its own right a part of the Sac Prairie Saga, as well as an extension of Derleth's body of mystery fiction. Robert Hood, writing in The New York Times, said: "Steve and Sim, the major characters, are twentieth-century cousins of Huck Finn and Tom Sawyer; Derleth's minor characters, little gems of comic drawing." The first novel in the series, The Moon Tenders, does, in fact, involve a rafting adventure down the Wisconsin River, which led regional writer Jesse Stuart to suggest the novel was one that "older people might read to recapture the spirit and dream of youth." The connection to the Sac Prairie Saga was noted by the Chicago Tribune: "Once again a small midwest community in 1920s is depicted with perception, skill, and dry humor." Arkham House and the "Cthulhu Mythos" Derleth was a correspondent and friend of H. P. Lovecraft – when Lovecraft wrote about "le Comte d'Erlette" in his fiction, it was in homage to Derleth. Derleth invented the term "Cthulhu Mythos" to describe the fictional universe depicted in the series of stories shared by Lovecraft and other writers in his circle. When Lovecraft died in 1937, Derleth and Donald Wandrei assembled a collection of Lovecraft's stories and tried to get them published. Existing publishers showed little interest, so Derleth and Wandrei founded Arkham House in 1939 for that purpose. The name of the company derived from Lovecraft's fictional town of Arkham, Massachusetts, which features in many of his stories.
In 1939, Arkham House published The Outsider and Others, a huge collection that contained most of Lovecraft's known short stories. Derleth and Wandrei soon expanded Arkham House and began a regular publishing schedule after its second book, Someone in the Dark, a collection of some of Derleth's own horror stories, was published in 1941. Following Lovecraft's death, Derleth wrote a number of stories based on fragments and notes left by Lovecraft. These were published in Weird Tales and later in book form, under the byline "H. P. Lovecraft and August Derleth", with Derleth calling himself a "posthumous collaborator." This practice has raised objections in some quarters that Derleth simply used Lovecraft's name to market what was essentially his own fiction; S. T. Joshi refers to the "posthumous collaborations" as marking the beginning of "perhaps the most disreputable phase of Derleth's activities". Dirk W. Mosig, S. T. Joshi, and Richard L. Tierney were dissatisfied with Derleth's invention of the term Cthulhu Mythos (Lovecraft himself used Yog-Sothothery) and his presentation of Lovecraft's fiction as having an overall pattern reflecting Derleth's own Christian world view, which they contrast with Lovecraft's depiction of an amoral universe. However, Robert M. Price points out that while Derleth's tales are distinct from Lovecraft's in their use of hope and his depiction of a struggle between good and evil, nevertheless the basis of Derleth's systemization is found in Lovecraft. He also suggests that the differences can be overstated: Derleth was more optimistic than Lovecraft in his conception of the Mythos, but we are dealing with a difference more of degree than kind. There are indeed tales wherein Derleth's protagonists get off scot-free (like "The Shadow in the
he sold his first story, "Bat's Belfry", to Weird Tales magazine. Derleth wrote throughout his four years at the University of Wisconsin, where he received a B.A. in 1930. During this time he also served briefly as associate editor of Minneapolis-based Fawcett Publications' Mystic Magazine. Returning to Sauk City in the summer of 1931, Derleth worked in a local canning factory and collaborated with childhood friend Mark Schorer (later Chairman of the University of California, Berkeley English Department). They rented a cabin, writing Gothic and other horror stories and selling them to Weird Tales magazine. Derleth won a place on the O'Brien Roll of Honor for Five Alone, collected in Place of Hawks but first published in Pagany magazine. As a result of his early work on the Sac Prairie Saga, Derleth was awarded the prestigious Guggenheim Fellowship; his sponsors were Helen C. White, Nobel Prize-winning novelist Sinclair Lewis and poet Edgar Lee Masters of Spoon River Anthology fame. In the mid-1930s, Derleth organized a Ranger's Club for young people, served as clerk and president of the local school board, served as a parole officer, organized a local men's club and a parent-teacher association. He also lectured in American regional literature at the University of Wisconsin and was a contributing editor of Outdoors Magazine. With longtime friend Donald Wandrei, Derleth in 1939 founded Arkham House. Its initial objective was to publish the works of H. P. Lovecraft, with whom Derleth had corresponded since his teenage years. At the same time, he began teaching a course in American Regional Literature at the University of Wisconsin. In 1941, he became literary editor of The Capital Times newspaper in Madison, a post he held until his resignation in 1960.
His hobbies included fencing, swimming, chess, philately and comic strips (Derleth reportedly used the funding from his Guggenheim Fellowship to bind his comic book collection, most recently valued in the millions of dollars, rather than to travel abroad as the award intended). Derleth's true avocation, however, was hiking the terrain of his native Wisconsin lands, and observing and recording nature with an expert eye. Derleth once wrote of his writing methods, "I write very swiftly, from 750,000 to a million words yearly, very little of it pulp material." In 1948, he was elected president of the Associated Fantasy Publishers at the 6th World Science Fiction Convention in Toronto. He was married April 6, 1953, to Sandra Evelyn Winters. They divorced six years later. Derleth retained custody of the couple's two children, April Rose and Walden William. April earned a Bachelor of Arts degree in English from the University of Wisconsin-Madison in 1977. She became majority stockholder, President, and CEO of Arkham House in 1994. She remained in that capacity until her death. She was known in the community as a naturalist and humanitarian. April died on March 21, 2011. In 1960, Derleth began editing and publishing a magazine called Hawk and Whippoorwill, dedicated to poems of man and nature. Derleth died of a heart attack on July 4, 1971, and is buried in St. Aloysius Cemetery in Sauk City. The U.S. 12 bridge over the Wisconsin River is named in his honor. Derleth was Roman Catholic. Career Derleth wrote more than 150 short stories and more than 100 books during his lifetime. The Sac Prairie Saga Derleth wrote an expansive series of novels, short stories, journals, poems, and other works about Sac Prairie (whose prototype is Sauk City). Derleth intended this series to comprise up to 50 novels telling the projected life-story of the region from the 19th century onwards, with analogies to Balzac's Human Comedy and Proust's Remembrance of Things Past.
This, and other early work by Derleth, made him a well-known figure among the regional literary figures of his time: early Pulitzer Prize winners Hamlin Garland and Zona Gale, as well as Sinclair Lewis, the last both an admirer and critic of Derleth. As Edward Wagenknecht wrote in Cavalcade of the American Novel, "What Mr. Derleth has that is lacking...in modern novelists generally, is a country. He belongs. He writes of a land and a people that are bone of his bone and flesh of his flesh. In his fictional world, there is a unity much deeper and more fundamental than anything that can be conferred by an ideology. It is clear, too, that he did not get the best, and most fictionally useful, part of his background material from research in the library; like Scott, in his Border novels, he gives, rather, the impression of having drunk it in with his mother's milk." Jim Stephens, editor of An August Derleth Reader, (1992), argues: "what Derleth accomplished....was to gather a Wisconsin mythos which gave respect to the ancient fundament of our contemporary life." The author inaugurated the Sac Prairie Saga with four novellas comprising Place of Hawks, published by Loring & Mussey in 1935. At publication, The Detroit News wrote: "Certainly with this book Mr. Derleth may be added to the American writers of distinction." Derleth's first novel, Still is the Summer Night, was published two years later by the famous Charles Scribners' editor Maxwell Perkins, and was the second in his Sac Prairie Saga. Village Year, the first in a series of journals – meditations on nature, Midwestern village American life, and more – was published in 1941 to praise from The New York Times Book Review: "A book of instant sensitive responsiveness...recreates its scene with acuteness and beauty, and makes an unusual contribution to the Americana of the present day." 
The New York Herald Tribune observed that "Derleth...deepens the value of his village setting by presenting in full the enduring natural background; with the people projected against this, the writing comes to have the quality of an old Flemish picture, humanity lively and amusing and loveable in the foreground and nature magnificent beyond." James Grey, writing in the St. Louis Dispatch, concluded, "Derleth has achieved a kind of prose equivalent of the Spoon River Anthology." In the same year, Evening in Spring was published by Charles Scribner's Sons. This work Derleth considered among his finest. What The Milwaukee Journal called "this beautiful little love story" is an autobiographical novel of first love beset by small-town religious bigotry. The work received critical praise: The New Yorker considered it a story told "with tenderness and charm", while the Chicago Tribune concluded: "It's as though he turned back the pages of an old diary and told, with rekindled emotion, of the pangs of pain and the sharp, clear sweetness of a boy's first love." Helen Constance White wrote in The Capital Times that it was "...the best articulated, the most fully disciplined of his stories." These were followed in 1943 by Shadow of Night, a Scribner's novel of which The Chicago Sun wrote: "Structurally it has the perfection of a carved jewel...A psychological novel of the first order, and an adventure tale that is unique and inspiriting." In November 1945, however, Derleth's work was attacked by his one-time admirer and mentor, Sinclair Lewis. Writing in Esquire, Lewis observed, "It is a proof of Mr. Derleth's merit that he makes one want to make the journey and see his particular Avalon: The Wisconsin River shining among its islands, and the castles of Baron Pierneau and Hercules Dousman. He is a champion and a justification of regionalism.
Yet he is also a burly, bounding, bustling, self-confident, opinionated, and highly-sweatered young man with faults so grievous that a melancholy perusal of them may be of more value to apprentices than a study of his serious virtues. If he could ever be persuaded that he isn't half as good as he thinks he is, if he would learn the art of sitting still and using a blue pencil, he might become twice as good as he thinks he is – which would about rank him with Homer." Derleth good-humoredly reprinted the criticism along with a
goddess Alphito, whose name is related to alphita, the "white flour"; alphos, a dull white leprosy; and finally the Proto-Indo-European word *albʰós. Similarly, the river god Alpheus is also supposed to derive from the Greek alphos and means whitish. In his commentary on the Aeneid of Vergil, the late fourth-century grammarian Maurus Servius Honoratus says that all high mountains are called Alpes by Celts. According to the Oxford English Dictionary, the Latin Alpes might possibly derive from a pre-Indo-European word *alb "hill"; "Albania" is a related derivation. Albania, a name not native to the region known as the country of Albania, has been used as a name for a number of mountainous areas across Europe. In Roman times, "Albania" was a name for the eastern Caucasus, while in the English language "Albania" (or "Albany") was occasionally used as a name for Scotland, although it is more likely derived from the Latin word albus, the color white. In modern languages the term alp, alm, albe or alpe refers to grazing pastures in the alpine regions below the glaciers, not the peaks. An alp refers to a high mountain pasture, typically near or above the tree line, where cows and other livestock are taken to be grazed during the summer months and where huts and hay barns can be found, sometimes constituting tiny hamlets. Therefore, the term "the Alps", as a reference to the mountains, is a misnomer. The term for the mountain peaks varies by nation and language: words such as Horn, Kogel, Kopf, Gipfel, Spitze, Stock, and Berg are used in German-speaking regions; Mont, Pic, Tête, Pointe, Dent, Roche, and Aiguille in French-speaking regions; and Monte, Picco, Corno, Punta, Pizzo, or Cima in Italian-speaking regions. Geography The Alps are a crescent-shaped geographic feature of central Europe that ranges in an arc from east to west and is in width. The mean height of the mountain peaks is .
The range stretches from the Mediterranean Sea north above the Po basin, extending through France from Grenoble, and stretching eastward through mid and southern Switzerland. The range continues onward toward Vienna, Austria, and east to the Adriatic Sea and Slovenia. To the south it dips into northern Italy and to the north extends to the southern border of Bavaria in Germany. In areas like Chiasso, Switzerland, and Allgäu, Bavaria, the demarcation between the mountain range and the flatlands is clear; in other places such as Geneva, the demarcation is less clear. The countries with the greatest alpine territory are Austria (28.7% of the total area), Italy (27.2%), France (21.4%) and Switzerland (13.2%). The highest portion of the range is divided by the glacial trough of the Rhône valley, from Mont Blanc to the Matterhorn and Monte Rosa on the southern side, and the Bernese Alps on the northern. The peaks in the easterly portion of the range, in Austria and Slovenia, are smaller than those in the central and western portions. The variances in nomenclature in the region spanned by the Alps make classification of the mountains and subregions difficult, but a general classification is that of the Eastern Alps and Western Alps, with the divide between the two occurring in eastern Switzerland, according to geologist Stefan Schmid, near the Splügen Pass. The highest peaks of the Western Alps and Eastern Alps, respectively, are Mont Blanc at and Piz Bernina at . The second-highest major peaks are Monte Rosa at and Ortler at , respectively. A series of lower mountain ranges runs parallel to the main chain of the Alps, including the French Prealps in France and the Jura Mountains in Switzerland and France. The secondary chain of the Alps follows the watershed from the Mediterranean Sea to the Wienerwald, passing over many of the highest and most well-known peaks in the Alps.
From the Colle di Cadibona to Col de Tende it runs westwards, before turning to the northwest and then, near the Colle della Maddalena, to the north. Upon reaching the Swiss border, the line of the main chain heads approximately east-northeast, a heading it follows until its end near Vienna. The northeast end of the Alpine arc directly on the Danube, which flows into the Black Sea, is the Leopoldsberg near Vienna. In contrast, the southeastern part of the Alps ends on the Adriatic Sea in the area around Trieste towards Duino and Barcola. Passes The Alps have been crossed for war and commerce, and by pilgrims, students and tourists. Crossing routes by road, train or foot are known as passes, and usually consist of depressions in the mountains in which a valley leads from the plains and hilly pre-mountainous zones. In the medieval period hospices were established by religious orders at the summits of many of the main passes. The most important passes are the Col de l'Iseran (the highest), the Col Agnel, the Brenner Pass, the Mont-Cenis, the Great St. Bernard Pass, the Col de Tende, the Gotthard Pass, the Semmering Pass, the Simplon Pass, and the Stelvio Pass. Crossing the Italian-Austrian border, the Brenner Pass separates the Ötztal Alps and Zillertal Alps and has been in use as a trading route since the 14th century. The lowest of the Alpine passes at , the Semmering crosses from Lower Austria to Styria; since the 12th century when a hospice was built there, it has seen continuous use. A railroad with a tunnel long was built along the route of the pass in the mid-19th century. With a summit of , the Great St. Bernard Pass is one of the highest in the Alps, crossing the Italian-Swiss border east of the Pennine Alps along the flanks of Mont Blanc. The pass was used by Napoleon Bonaparte to cross 40,000 troops in 1800. The Mont Cenis pass has been a major commercial and military road between Western Europe and Italy. 
The pass was crossed by many troops on their way to the Italian peninsula, from Constantine I, Pepin the Short, and Charlemagne to Henry IV, Napoléon and, more recently, the German Gebirgsjägers during World War II. Now the pass has been supplanted by the Fréjus Highway Tunnel (opened 1980) and Rail Tunnel (opened 1871). The Saint Gotthard Pass crosses from Central Switzerland to Ticino; in 1882 the Saint Gotthard Railway Tunnel was opened, connecting Lucerne in Switzerland with Milan in Italy. Ninety-eight years later followed the Gotthard Road Tunnel ( long), connecting the A2 motorway from Göschenen on the north side with Airolo on the south side, along the same route as the railway tunnel. On 1 June 2016 the world's longest railway tunnel, the Gotthard Base Tunnel, was opened; it connects Erstfeld in the canton of Uri with Bodio in the canton of Ticino by two single tubes of . It is the first tunnel that traverses the Alps on a flat route. Since 11 December 2016 it has been part of the regular railway timetable, used hourly for standard service between Basel/Lucerne/Zurich and Bellinzona/Lugano/Milan. The highest pass in the Alps is the Col de l'Iseran in Savoy (France) at , followed by the Stelvio Pass in northern Italy at ; the road was built in the 1820s. Highest mountains The Union Internationale des Associations d'Alpinisme (UIAA) has defined a list of 82 "official" Alpine summits that reach at least . The list includes not only mountains, but also subpeaks with little prominence that are considered important mountaineering objectives. Below are listed the 29 "four-thousanders" with at least of prominence. While Mont Blanc was first climbed in 1786 and the Jungfrau in 1811, most of the Alpine four-thousanders were climbed during the second half of the 19th century, notably Piz Bernina (1850), the Dom (1858), the Grand Combin (1859), the Weisshorn (1861) and the Barre des Écrins (1864); the ascent of the Matterhorn in 1865 marked the end of the golden age of alpinism.
Karl Blodig (1859–1956) was among the first to successfully climb all the major 4,000 m peaks. He completed his series of ascents in 1911. Many of the big Alpine three-thousanders were climbed in the early 19th century, notably the Grossglockner (1800) and the Ortler (1804), although some of them were climbed only much later, such as Mont Pelvoux (1848), Monte Viso (1861) and La Meije (1877). The first British Mont Blanc ascent was in 1788; the first female ascent in 1819. By the mid-1850s Swiss mountaineers had ascended most of the peaks and were eagerly sought as mountain guides. Edward Whymper reached the top of the Matterhorn in 1865 (after seven attempts), and in 1938 the last of the six great north faces of the Alps was climbed with the first ascent of the Eiger Nordwand (north face of the Eiger). Geology and orogeny Important geological concepts were established as naturalists began studying the rock formations of the Alps in the 18th century. In the mid-19th century the now-defunct theory of geosynclines was used to explain the presence of "folded" mountain chains, but by the mid-20th century the theory of plate tectonics became widely accepted. The formation of the Alps (the Alpine orogeny) was an episodic process that began about 300 million years ago. In the Paleozoic Era the Pangaean supercontinent consisted of a single tectonic plate; it broke into separate plates during the Mesozoic Era and the Tethys sea developed between Laurasia and Gondwana during the Jurassic Period. The Tethys was later squeezed between colliding plates causing the formation of mountain ranges called the Alpide belt, from Gibraltar through the Himalayas to Indonesia—a process that began at the end of the Mesozoic and continues into the present. The formation of the Alps was a segment of this orogenic process, caused by the collision between the African and the Eurasian plates that began in the late Cretaceous Period.
Under extreme compressive stresses and pressure, marine sedimentary rocks were uplifted, creating characteristic recumbent folds, or nappes, and thrust faults. As the rising peaks underwent erosion, a layer of marine flysch sediments was deposited in the foreland basin, and the sediments became involved in younger nappes (folds) as the orogeny progressed. Coarse sediments from the continual uplift and erosion were later deposited in foreland areas as molasse. The molasse regions in Switzerland and Bavaria were well-developed and saw further upthrusting of flysch. The Alpine orogeny occurred in ongoing cycles through to the Paleogene causing differences in nappe structures, with a late-stage orogeny causing the development of the Jura Mountains. A series of tectonic events in the Triassic, Jurassic and Cretaceous periods caused different paleogeographic regions. The Alps are subdivided by different lithology (rock composition) and nappe structure according to the orogenic events that affected them. The geological subdivision differentiates the Western, Eastern Alps and Southern Alps: the Helveticum in the north, the Penninicum and Austroalpine system in the centre and, south of the Periadriatic Seam, the Southern Alpine system. According to geologist Stefan Schmid, because the Western Alps underwent a metamorphic event in the Cenozoic Era while the Austroalpine peaks underwent an event in the Cretaceous Period, the two areas show distinct differences in nappe formations. Flysch deposits in the Southern Alps of Lombardy probably occurred in the Cretaceous or later. Peaks in France, Italy and Switzerland lie in the "Houillière zone", which consists of basement with sediments from the Mesozoic Era. High "massifs" with external sedimentary cover are more common in the Western Alps and were affected by Neogene Period thin-skinned thrusting whereas the Eastern Alps have comparatively few high peaked massifs. 
Similarly the peaks in eastern Switzerland extending to western Austria (Helvetic nappes) consist of thin-skinned sedimentary folding that detached from former basement rock. In simple terms, the structure of the Alps consists of layers of rock of European, African and oceanic (Tethyan) origin. The bottom nappe structure is of continental European origin, above which are stacked marine sediment nappes, topped off by nappes derived from the African plate. The Matterhorn is an example of the ongoing orogeny and shows evidence of great folding. The tip of the mountain consists of gneisses from the African plate; the base of the peak, below the glaciated area, consists of European basement rock. The sequence of Tethyan marine sediments and their oceanic basement is sandwiched between rock derived from the African and European plates. The core regions of the Alpine orogenic belt have been folded and fractured in such a manner that erosion created the characteristic steep vertical peaks of the Swiss Alps that rise seemingly straight out of the foreland areas. Peaks such as Mont Blanc, the Matterhorn, and high peaks in the Pennine Alps, the Briançonnais, and Hohe Tauern consist of layers of rock from the various orogenies including exposures of basement rock. Due to the ever-present geologic instability, earthquakes continue in the Alps to this day. Typically, the largest earthquakes in the Alps have been between magnitude 6 and 7 on the Richter scale. Minerals The Alps are a source of minerals that have been mined for thousands of years. In the 8th to 6th centuries BC during the Hallstatt culture, Celtic tribes mined copper; later the Romans mined gold for coins in the Bad Gastein area. Erzberg in Styria furnishes high-quality iron ore for the steel industry. Crystals, such as cinnabar, amethyst, and quartz, are found throughout much of the Alpine region. The cinnabar deposits in Slovenia are a notable source of cinnabar pigments.
Alpine crystals have been studied and collected for hundreds of years, and began to be classified in the 18th century. Leonhard Euler studied the shapes of crystals, and by the 19th century crystal hunting was common in Alpine regions. David Friedrich Wiser amassed a collection of 8000 crystals that he studied and documented. In the 20th century Robert Parker wrote a well-known work about the rock crystals of the Swiss Alps; in the same period a commission was established to control and standardize the naming of Alpine minerals. Glaciers In the Miocene Epoch the mountains underwent severe erosion because of glaciation, which was noted in the mid-19th century by naturalist Louis Agassiz, who presented a paper proclaiming the Alps were covered in ice at various intervals—a theory he formed when studying rocks near his Neuchâtel home, which he believed originated to the west in the Bernese Oberland. Because of his work he came to be known as the "father of the ice-age concept", although other naturalists before him put forth similar ideas. Agassiz studied glacier movement in the 1840s at the Unteraar Glacier, where he found the glacier moved per year, more rapidly in the middle than at the edges. His work was continued by other scientists and now a permanent laboratory exists inside a glacier under the Jungfraujoch, devoted exclusively to the study of Alpine glaciers. Glaciers pick up rocks and sediment with them as they flow. This causes erosion and the formation of valleys over time. The Inn valley is an example of a valley carved by glaciers during the ice ages, with a typical terraced structure caused by erosion. Eroded rocks from the most recent ice age lie at the bottom of the valley, while the top of the valley consists of erosion from earlier ice ages. Glacial valleys have characteristically steep walls (reliefs); valleys with lower reliefs and talus slopes are remnants of glacial troughs or previously infilled valleys.
Moraines, piles of rock picked up during the movement of the glacier, accumulate at the edges, centre, and terminus of glaciers. Alpine glaciers can be straight rivers of ice, long sweeping rivers, spread in a fan-like shape (Piedmont glaciers), and curtains of ice that hang from vertical slopes of the mountain peaks. The stress of the movement causes the ice to break and crack loudly, perhaps explaining why the mountains were believed to be home to dragons in the medieval period. The cracking creates unpredictable and dangerous crevasses, often invisible under new snowfall, which cause the greatest danger to mountaineers. Glaciers end in ice caves (the Rhône Glacier), by trailing into a lake or river, or by shedding snowmelt on a meadow. Sometimes a piece of glacier will detach or break off, resulting in flooding, property damage and loss of life. High levels of precipitation cause the glaciers to descend to permafrost levels in some areas, whereas in other, more arid regions, glaciers remain above about the level. The of the Alps covered by glaciers in 1876 had shrunk to by 1973, resulting in decreased river run-off levels. Forty percent of the glaciation in Austria has disappeared since 1850, and 30% of that in Switzerland. Rivers and lakes The Alps provide lowland Europe with drinking water, irrigation, and hydroelectric power. Although the area is only about 11% of the surface area of Europe, the Alps provide up to 90% of water to lowland Europe, particularly to arid areas and during the summer months. Cities such as Milan depend on Alpine runoff for 80% of their water. Water from the rivers is used in at least 550 hydroelectric power plants, considering only those producing at least 10 MW of electricity. Major European rivers flow from the Alps, such as the Rhine, the Rhône, the Inn, and the Po, all of which have headwaters in the Alps and flow into neighbouring countries, finally emptying into the North Sea, the Mediterranean Sea, the Adriatic Sea and the Black Sea.
Other rivers such as the Danube have major tributaries flowing into them that originate in the Alps. The Rhône is second to the Nile as a freshwater source to the Mediterranean Sea; the river begins as glacial meltwater, flows into Lake Geneva, and from there to France, where one of its uses is to cool nuclear power plants. The Rhine originates in a area in Switzerland and represents almost 60% of water exported from the country. Tributary valleys, some of which are complicated, channel water to the main valleys, which can experience flooding during the snowmelt season when rapid runoff causes debris torrents and swollen rivers. The rivers form lakes, such as Lake Geneva, a crescent-shaped lake crossing the Swiss border with Lausanne on the Swiss side and the town of Evian-les-Bains on the French side. In Germany, the medieval St. Bartholomew's chapel was built on the south side of the Königssee, accessible only by boat or by climbing over the abutting peaks. Additionally, the Alps have led to the creation of large lakes in Italy. For instance, the Sarca, the primary inflow of Lake Garda, originates in the Italian Alps. The Italian Lakes have been a popular tourist destination since the Roman era because of their mild climate. Scientists have been studying the impact of climate change and water use. For example, each year more water is diverted from rivers for snowmaking in the ski resorts, the effect of which is as yet unknown. Furthermore, the decrease of glaciated areas combined with a succession of winters with lower-than-expected precipitation may have a future impact on the rivers in the Alps, as well as an effect on the water availability to the lowlands. Climate The Alps are a classic example of what happens when a temperate area at lower altitude gives way to higher-elevation terrain. Elevations around the world that have cold climates similar to those of the polar regions have been called Alpine.
A rise from sea level into the upper regions of the atmosphere causes the temperature to decrease (see adiabatic lapse rate). The effect of mountain chains on prevailing winds is to carry warm air belonging to the lower region into an upper zone, where it expands in volume at the cost of a proportionate loss of temperature, often accompanied by precipitation in the form of snow or rain. The height of the Alps is sufficient to divide the weather patterns in Europe into a wet north and a dry south, because moisture is sucked from the air as it flows over the high peaks. The severe weather in the Alps has been studied since the 18th century, particularly weather patterns such as the seasonal foehn wind. Numerous weather stations were placed in the mountains in the early 20th century, providing continuous data for climatologists. Some of the valleys are quite arid, such as the Aosta valley in Italy, the Maurienne in France, the Valais in Switzerland, and northern Tyrol. The areas that are not arid and receive high precipitation experience periodic flooding from rapid snowmelt and runoff. The mean precipitation in the Alps ranges from a low of per year to per year, with the higher levels occurring at high altitudes. At altitudes between , snowfall begins in November and accumulates through to April or May, when the melt begins. Snow lines vary from , above which the snow is permanent and the temperatures hover around the freezing point even during July and August. High-water levels in streams and rivers peak in June and July when the snow is still melting at the higher altitudes. The Alps are split into five climatic zones, each with different vegetation. The climate, plant life and animal life vary among the different sections or zones of the mountains. The lowest zone is the colline zone, which exists between , depending on the location. The montane zone extends from , followed by the sub-Alpine zone from .
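The lapse-rate effect described above can be put in rough numbers. The sketch below is illustrative only: it assumes the standard-atmosphere environmental lapse rate of about 6.5 °C per 1,000 m, a figure not given in the text, and the function name is hypothetical; real Alpine temperatures also vary with season, humidity, and local winds such as the foehn.

```python
def temperature_at_altitude(t_lowland_c, altitude_m, lapse_c_per_m=0.0065):
    """Estimate air temperature at a given altitude from a lowland reading.

    Assumes a constant environmental lapse rate (the standard-atmosphere
    value of ~6.5 degC per 1,000 m) -- a deliberate simplification.
    """
    return t_lowland_c - lapse_c_per_m * altitude_m

# A 25 degC day at sea level implies near-freezing air around 3,800 m,
# consistent with snow lines where temperatures hover near 0 degC in summer.
print(round(temperature_at_altitude(25.0, 3800), 1))  # → 0.3
```

This steady trade of warmth for elevation is what stacks the colline, montane, sub-Alpine, Alpine, and glacial zones one above the other.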
The Alpine zone, extending from tree line to snow line, is followed by the glacial zone, which covers the glaciated areas of the mountain. Climatic conditions show variances within the same zones; for example, weather conditions at the head of a mountain valley, extending directly from the peaks, are colder and more severe than those at the mouth of a valley, which tend to be less severe and receive less snowfall. Various models of climate change have been projected into the 22nd century for the Alps, with an expectation that a trend toward increased temperatures will have an effect on snowfall, snowpack, glaciation, and river runoff. Significant changes, of both natural and anthropogenic origins, have already been diagnosed from observations. Ecology Flora Thirteen thousand species of plants have been identified in the Alpine regions. Alpine plants are grouped by habitat and soil type, which can be limestone or non-calcareous. The habitats range from meadows, bogs, and woodland (deciduous and coniferous) areas to soil-less scree and moraines, and rock faces and ridges. A natural vegetation limit with altitude is given by the presence of the chief deciduous trees—oak, beech, ash and sycamore maple. These do not reach exactly to the same elevation, nor are they often found growing together; but their upper limit corresponds accurately enough to the change from a temperate to a colder climate that is further proved by a change in the presence of wild herbaceous vegetation. This limit usually lies about above the sea on the north side of the Alps, but on the southern slopes it often rises to , sometimes even to . Above the forest, there is often a band of short pine trees (Pinus mugo), which is in turn superseded by Alpenrosen, dwarf shrubs, typically Rhododendron ferrugineum (on acid soils) or Rhododendron hirsutum (on alkaline soils). Although the Alpenrose prefers acidic soil, the plants are found throughout the region.
Above the tree line is the area defined as "alpine", where the alpine meadow plants are found that have adapted well to harsh conditions of cold temperatures, aridity, and high altitudes. The alpine area fluctuates greatly because of regional fluctuations in tree lines. Alpine plants such as the Alpine gentian grow in abundance in areas such as the meadows above the Lauterbrunnental. Gentians are named after the Illyrian king Gentius, and 40 species of the early-spring blooming flower grow in the Alps, in a range of . Writing about the gentians in Switzerland, D. H. Lawrence described them as "darkening the day-time, torch-like with the smoking blueness of Pluto's gloom." Gentians tend to "appear" repeatedly as the spring blooming takes place at progressively later dates, moving from the lower altitude to the higher altitude meadows where the snow melts much later than in the valleys. On the highest rocky ledges the spring flowers bloom in the summer. At these higher altitudes, the plants tend to form isolated cushions. In the Alps, several species of flowering plants have been recorded above , including Ranunculus glacialis, Androsace alpina and Saxifraga biflora. Eritrichium nanum, commonly known as the King of the Alps, is the most elusive of the alpine flowers, growing on rocky ridges at . Perhaps the best known of the alpine plants is Edelweiss, which grows in rocky areas and can be found at altitudes as low as and as high as . The plants that grow at the highest altitudes have adapted to conditions by specialization such as growing in rock screes that give protection from winds. The extreme and stressful climatic conditions favour the growth of plant species with secondary metabolites important for medicinal purposes. Origanum vulgare, Prunella vulgaris, Solanum nigrum and Urtica dioica are some of the more useful medicinal species found in the Alps.
Human interference has nearly exterminated the trees in many areas, and, except for the beech forests of the Austrian Alps, forests of deciduous trees are rarely found after the extreme deforestation between the 17th and 19th centuries. The vegetation has changed since the second half of the 20th century, as the high alpine meadows cease to be harvested for hay or used for grazing, which eventually might result in a regrowth of forest. In some areas, the modern practice of building ski runs by mechanical means has destroyed the underlying tundra from which the plant life cannot recover during the non-skiing months, whereas areas that still practice a natural piste type of ski slope building preserve the fragile underlayers. Fauna The Alps are a habitat for 30,000 species of wildlife, ranging from the tiniest snow fleas to brown bears, many of which have made adaptations to the harsh cold conditions and high altitudes to the point that some only survive in specific micro-climates either directly above or below the snow line. The largest mammals to live at the highest altitudes are the alpine ibex, which have been sighted as high as . The ibex live in caves and descend to eat the succulent alpine grasses. Classified as goat-antelopes, chamois are smaller than ibex; they live above the tree line and are common throughout the entire alpine range. Areas of the eastern Alps are still home to brown bears. In Switzerland the canton of Bern was named for the bears, but the last bear is recorded as having been killed in 1792 above Kleine Scheidegg by three hunters from Grindelwald. Many rodents such as voles live underground. Marmots live almost exclusively above the tree line, as high as . They hibernate in large groups to provide warmth, and can be found in all areas of the Alps, in large colonies that they build beneath the alpine pastures.
Golden eagles and bearded vultures are the largest birds to be found in the Alps; they nest high on rocky ledges and can be found at altitudes of . The most common bird is the alpine chough, which can be found scavenging at climbers' huts or at the Jungfraujoch, a high-altitude tourist destination. Reptiles such as adders and vipers live up to the snow line; because they cannot bear the cold temperatures, they hibernate underground and soak up the warmth on rocky ledges. The high-altitude Alpine salamanders have adapted to living above the snow line by giving birth to fully developed young rather
and climatic conditions consist of distinct zones. Wildlife such as ibex live in the higher peaks to elevations of , and plants such as Edelweiss grow in rocky areas in lower elevations as well as in higher elevations. Evidence of human habitation in the Alps goes back to the Palaeolithic era. A mummified man, determined to be 5,000 years old, was discovered on a glacier at the Austrian–Italian border in 1991. By the 6th century BC, the Celtic La Tène culture was well established. Hannibal famously crossed the Alps with a herd of elephants, and the Romans had settlements in the region. In 1800, Napoleon crossed one of the mountain passes with an army of 40,000. The 18th and 19th centuries saw an influx of naturalists, writers, and artists, in particular, the Romantics, followed by the golden age of alpinism as mountaineers began to ascend the peaks. The Alpine region has a strong cultural identity. The traditional culture of farming, cheesemaking, and woodworking still exists in Alpine villages, although the tourist industry began to grow early in the 20th century and expanded greatly after World War II to become the dominant industry by the end of the century. The Winter Olympic Games have been hosted in the Swiss, French, Italian, Austrian and German Alps. At present, the region is home to 14 million people and has 120 million annual visitors. Etymology and toponymy The English word Alps comes from the Latin Alpes. The Latin word Alpes could possibly come from the adjective albus ("white"), or could possibly come from the Greek goddess Alphito, whose name is related to alphita, the "white flour"; alphos, a dull white leprosy; and finally the Proto-Indo-European word *albʰós. Similarly, the river god Alpheus is also supposed to derive from the Greek alphos and means whitish. In his commentary on the Aeneid of Vergil, the late fourth-century grammarian Maurus Servius Honoratus says that all high mountains are called Alpes by Celts. 
According to the Oxford English Dictionary, the Latin Alpes might possibly derive from a pre-Indo-European word *alb "hill"; "Albania" is a related derivation. Albania, a name not native to the region known as the country of Albania, has been used as a name for a number of mountainous areas across Europe. In Roman times, "Albania" was a name for the eastern Caucasus, while in the English language "Albania" (or "Albany") was occasionally used as a name for Scotland, although it is more likely derived from the Latin word albus, the color white. In modern languages the term alp, alm, albe or alpe refers to grazing pastures in the alpine regions below the glaciers, not the peaks. An alp refers to a high mountain pasture, typically near or above the tree line, where cows and other livestock are taken to be grazed during the summer months and where huts and hay barns can be found, sometimes constituting tiny hamlets. Therefore, the term "the Alps", as a reference to the mountains, is a misnomer. The term for the mountain peaks varies by nation and language: words such as Horn, Kogel, Kopf, Gipfel, Spitze, Stock, and Berg are used in German-speaking regions; Mont, Pic, Tête, Pointe, Dent, Roche, and Aiguille in French-speaking regions; and Monte, Picco, Corno, Punta, Pizzo, or Cima in Italian-speaking regions. Geography The Alps are a crescent-shaped geographic feature of central Europe that ranges in an arc from east to west and is in width. The mean height of the mountain peaks is . The range stretches from the Mediterranean Sea north above the Po basin, extending through France from Grenoble, and stretching eastward through mid and southern Switzerland. The range continues onward toward Vienna, Austria, and east to the Adriatic Sea and Slovenia. To the south it dips into northern Italy and to the north extends to the southern border of Bavaria in Germany.
In areas like Chiasso, Switzerland, and Allgäu, Bavaria, the demarcation between the mountain range and the flatlands is clear; in other places such as Geneva, the demarcation is less clear. The countries with the greatest alpine territory are Austria (28.7% of the total area), Italy (27.2%), France (21.4%) and Switzerland (13.2%). The highest portion of the range is divided by the glacial trough of the Rhône valley, with Mont Blanc to the Matterhorn and Monte Rosa on the southern side, and the Bernese Alps on the northern. The peaks in the easterly portion of the range, in Austria and Slovenia, are smaller than those in the central and western portions. The variances in nomenclature in the region spanned by the Alps make classification of the mountains and subregions difficult, but a general classification is that of the Eastern Alps and Western Alps, with the divide between the two occurring in eastern Switzerland, according to geologist Stefan Schmid, near the Splügen Pass. The highest peaks of the Western Alps and Eastern Alps, respectively, are Mont Blanc, at , and Piz Bernina, at . The second-highest major peaks are Monte Rosa, at , and Ortler, at , respectively. A series of lower mountain ranges runs parallel to the main chain of the Alps, including the French Prealps in France and the Jura Mountains in Switzerland and France. The main chain of the Alps follows the watershed from the Mediterranean Sea to the Wienerwald, passing over many of the highest and most well-known peaks in the Alps. From the Colle di Cadibona to Col de Tende it runs westwards, before turning to the northwest and then, near the Colle della Maddalena, to the north. Upon reaching the Swiss border, the line of the main chain heads approximately east-northeast, a heading it follows until its end near Vienna. The northeastern end of the Alpine arc, directly on the Danube (which flows into the Black Sea), is the Leopoldsberg near Vienna.
In contrast, the southeastern part of the Alps ends on the Adriatic Sea in the area around Trieste towards Duino and Barcola. Passes The Alps have been crossed for war and commerce, and by pilgrims, students and tourists. Crossing routes by road, train or foot are known as passes, and usually consist of depressions in the mountains in which a valley leads from the plains and hilly pre-mountainous zones. In the medieval period, hospices were established by religious orders at the summits of many of the main passes. The most important passes are the Col de l'Iseran (the highest), the Col Agnel, the Brenner Pass, the Mont-Cenis, the Great St. Bernard Pass, the Col de Tende, the Gotthard Pass, the Semmering Pass, the Simplon Pass, and the Stelvio Pass. Crossing the Italian-Austrian border, the Brenner Pass separates the Ötztal Alps and Zillertal Alps and has been in use as a trading route since the 14th century. The lowest of the Alpine passes at , the Semmering crosses from Lower Austria to Styria; since the 12th century, when a hospice was built there, it has seen continuous use. A railroad with a tunnel long was built along the route of the pass in the mid-19th century. With a summit of , the Great St. Bernard Pass is one of the highest in the Alps, crossing the Italian-Swiss border east of the Pennine Alps along the flanks of Mont Blanc. The pass was used by Napoleon Bonaparte to cross 40,000 troops in 1800. The Mont Cenis pass has been a major commercial and military road between Western Europe and Italy, crossed by many armies on their way to the Italian peninsula, from Constantine I, Pepin the Short and Charlemagne to Henry IV and Napoléon, and more recently by the German Gebirgsjäger during World War II. The pass has now been supplanted by the Fréjus Highway Tunnel (opened 1980) and Rail Tunnel (opened 1871).
The Saint Gotthard Pass crosses from Central Switzerland to Ticino; in 1882 the Saint Gotthard Railway Tunnel was opened, connecting Lucerne in Switzerland with Milan in Italy. The Gotthard Road Tunnel ( long) followed 98 years later, connecting the A2 motorway from Göschenen on the north side to Airolo on the south side along the same route as the railway tunnel. On 1 June 2016 the world's longest railway tunnel, the Gotthard Base Tunnel, was opened; it connects Erstfeld in the canton of Uri with Bodio in the canton of Ticino through two single-track tubes of . It is the first tunnel to traverse the Alps on a flat route. Since 11 December 2016, it has been part of the regular railway timetable, with hourly services between Basel/Lucerne/Zurich and Bellinzona/Lugano/Milan. The highest pass in the Alps is the Col de l'Iseran in Savoy (France) at , followed by the Stelvio Pass in northern Italy at ; the road was built in the 1820s. Highest mountains The Union Internationale des Associations d'Alpinisme (UIAA) has defined a list of 82 "official" Alpine summits that reach at least . The list includes not only mountains, but also subpeaks with little prominence that are considered important mountaineering objectives. Below are listed the 29 "four-thousanders" with at least of prominence. While Mont Blanc was first climbed in 1786 and the Jungfrau in 1811, most of the Alpine four-thousanders were climbed during the second half of the 19th century, notably Piz Bernina (1850), the Dom (1858), the Grand Combin (1859), the Weisshorn (1861) and the Barre des Écrins (1864); the ascent of the Matterhorn in 1865 marked the end of the golden age of alpinism. Karl Blodig (1859–1956) was among the first to successfully climb all the major 4,000 m peaks. He completed his series of ascents in 1911.
Many of the big Alpine three-thousanders were climbed in the early 19th century, notably the Grossglockner (1800) and the Ortler (1804), although some of them were climbed only much later, such as Mont Pelvoux (1848), Monte Viso (1861) and La Meije (1877). The first British Mont Blanc ascent was in 1788; the first female ascent in 1819. By the mid-1850s Swiss mountaineers had ascended most of the peaks and were eagerly sought as mountain guides. Edward Whymper reached the top of the Matterhorn in 1865 (after seven attempts), and in 1938 the last of the six great north faces of the Alps was climbed with the first ascent of the Eiger Nordwand (north face of the Eiger). Geology and orogeny Important geological concepts were established as naturalists began studying the rock formations of the Alps in the 18th century. In the mid-19th century the now-defunct theory of geosynclines was used to explain the presence of "folded" mountain chains, but by the mid-20th century the theory of plate tectonics became widely accepted. The formation of the Alps (the Alpine orogeny) was an episodic process that began about 300 million years ago. In the Paleozoic Era the Pangaean supercontinent consisted of a single tectonic plate; it broke into separate plates during the Mesozoic Era, and the Tethys sea developed between Laurasia and Gondwana during the Jurassic Period. The Tethys was later squeezed between colliding plates, causing the formation of mountain ranges called the Alpide belt, from Gibraltar through the Himalayas to Indonesia, a process that began at the end of the Mesozoic and continues into the present. The formation of the Alps was a segment of this orogenic process, caused by the collision between the African and the Eurasian plates that began in the late Cretaceous Period. Under extreme compressive stresses and pressure, marine sedimentary rocks were uplifted, creating characteristic recumbent folds, or nappes, and thrust faults.
As the rising peaks underwent erosion, a layer of marine flysch sediments was deposited in the foreland basin, and the sediments became involved in younger nappes (folds) as the orogeny progressed. Coarse sediments from the continual uplift and erosion were later deposited in foreland areas as molasse. The molasse regions in Switzerland and Bavaria were well-developed and saw further upthrusting of flysch. The Alpine orogeny occurred in ongoing cycles through to the Paleogene, causing differences in nappe structures, with a late-stage orogeny causing the development of the Jura Mountains. A series of tectonic events in the Triassic, Jurassic and Cretaceous periods produced different paleogeographic regions. The Alps are subdivided by different lithology (rock composition) and nappe structure according to the orogenic events that affected them. The geological subdivision differentiates the Western Alps, Eastern Alps and Southern Alps: the Helveticum in the north, the Penninicum and Austroalpine system in the centre and, south of the Periadriatic Seam, the Southern Alpine system. According to geologist Stefan Schmid, because the Western Alps underwent a metamorphic event in the Cenozoic Era while the Austroalpine peaks underwent an event in the Cretaceous Period, the two areas show distinct differences in nappe formations. Flysch deposits in the Southern Alps of Lombardy probably occurred in the Cretaceous or later. Peaks in France, Italy and Switzerland lie in the "Houillière zone", which consists of basement with sediments from the Mesozoic Era. High "massifs" with external sedimentary cover are more common in the Western Alps and were affected by Neogene Period thin-skinned thrusting, whereas the Eastern Alps have comparatively few high-peaked massifs. Similarly, the peaks in eastern Switzerland extending to western Austria (Helvetic nappes) consist of thin-skinned sedimentary folding that detached from former basement rock.
In simple terms, the structure of the Alps consists of layers of rock of European, African and oceanic (Tethyan) origin. The bottom nappe structure is of continental European origin, above which are stacked marine sediment nappes, topped off by nappes derived from the African plate. The Matterhorn is an example of the ongoing orogeny and shows evidence of great folding. The tip of the mountain consists of gneisses from the African plate; the base of the peak, below the glaciated area, consists of European basement rock. The sequence of Tethyan marine sediments and their oceanic basement is sandwiched between rock derived from the African and European plates. The core regions of the Alpine orogenic belt have been folded and fractured in such a manner that erosion created the characteristic steep vertical peaks of the Swiss Alps that rise seemingly straight out of the foreland areas. Peaks such as Mont Blanc, the Matterhorn, and high peaks in the Pennine Alps, the Briançonnais, and Hohe Tauern consist of layers of rock from the various orogenies, including exposures of basement rock. Due to the ever-present geologic instability, earthquakes continue in the Alps to this day. Typically, the largest earthquakes in the Alps have been between magnitude 6 and 7 on the Richter scale. Minerals The Alps are a source of minerals that have been mined for thousands of years. In the 8th to 6th centuries BC, during the Hallstatt culture, Celtic tribes mined copper; later the Romans mined gold for coins in the Bad Gastein area. Erzberg in Styria furnishes high-quality iron ore for the steel industry. Crystals, such as cinnabar, amethyst, and quartz, are found throughout much of the Alpine region. The cinnabar deposits in Slovenia are a notable source of cinnabar pigments. Alpine crystals have been studied and collected for hundreds of years, and began to be classified in the 18th century.
Leonhard Euler studied the shapes of crystals, and by the 19th century crystal hunting was common in Alpine regions. David Friedrich Wiser amassed a collection of 8000 crystals that he studied and documented. In the 20th century Robert Parker wrote a well-known work about the rock crystals of the Swiss Alps; in the same period a commission was established to control and standardize the naming of Alpine minerals. Glaciers In the Miocene Epoch the mountains underwent severe erosion because of glaciation, which was noted in the mid-19th century by naturalist Louis Agassiz, who presented a paper proclaiming that the Alps were covered in ice at various intervals—a theory he formed when studying rocks near his Neuchâtel home, which he believed originated to the west in the Bernese Oberland. Because of his work he came to be known as the "father of the ice-age concept", although other naturalists before him put forth similar ideas. Agassiz studied glacier movement in the 1840s at the Unteraar Glacier, where he found the glacier moved per year, more rapidly in the middle than at the edges. His work was continued by other scientists, and now a permanent laboratory exists inside a glacier under the Jungfraujoch, devoted exclusively to the study of Alpine glaciers. Glaciers pick up rocks and sediment as they flow. This causes erosion and the formation of valleys over time. The Inn valley is an example of a valley carved by glaciers during the ice ages, with a typical terraced structure caused by erosion. Eroded rocks from the most recent ice age lie at the bottom of the valley, while the top of the valley consists of erosion from earlier ice ages. Glacial valleys have characteristically steep walls (reliefs); valleys with lower reliefs and talus slopes are remnants of glacial troughs or previously infilled valleys. Moraines, piles of rock picked up during the movement of the glacier, accumulate at the edges, centre and terminus of glaciers.
Alpine glaciers can be straight rivers of ice, long sweeping rivers, spread in a fan-like shape (Piedmont glaciers), or curtains of ice that hang from vertical slopes of the mountain peaks. The stress of the movement causes the ice to break and crack loudly, perhaps explaining why the mountains were believed to be home to dragons in the medieval period. The cracking creates unpredictable and dangerous crevasses, often invisible under new snowfall, which pose the greatest danger to mountaineers. Glaciers end in ice caves (as with the Rhône Glacier), by trailing into a lake or river, or by shedding snowmelt on a meadow. Sometimes a piece of glacier will break off, resulting in flooding, property damage and loss of life. High levels of precipitation cause the glaciers to descend to permafrost levels in some areas, whereas in other, more arid regions, glaciers remain above about the level. The of the Alps covered by glaciers in 1876 had shrunk to by 1973, resulting in decreased river run-off levels. Forty percent of the glaciation in Austria has disappeared since 1850, and 30% of that in Switzerland. Rivers and lakes The Alps provide lowland Europe with drinking water, irrigation, and hydroelectric power. Although the area is only about 11% of the surface area of Europe, the Alps provide up to 90% of water to lowland Europe, particularly to arid areas and during the summer months. Cities such as Milan depend on Alpine runoff for 80% of their water. Water from the rivers is used in at least 550 hydroelectric power plants, counting only those producing at least 10 MW of electricity. Major European rivers flow from the Alps, such as the Rhine, the Rhône, the Inn, and the Po, all of which have headwaters in the Alps and flow into neighbouring countries, finally emptying into the North Sea, the Mediterranean Sea, the Adriatic Sea and the Black Sea. Other rivers such as the Danube have major tributaries flowing into them that originate in the Alps.
The Rhône is second only to the Nile as a freshwater source for the Mediterranean Sea; the river begins as glacial meltwater, flows into Lake Geneva, and from there to France, where one of its uses is to cool nuclear power plants. The Rhine originates in a area in Switzerland and represents almost 60% of water exported from the country. Tributary valleys, some of which are complicated, channel water to the main valleys, which can experience flooding during the snowmelt season when rapid runoff causes debris torrents and swollen rivers. The rivers form lakes, such as Lake Geneva, a crescent-shaped lake crossing the Swiss border, with Lausanne on the Swiss side and the town of Evian-les-Bains on the French side. In Germany, the medieval St. Bartholomew's chapel was built on the south side of the Königssee, accessible only by boat or by climbing over the abutting peaks. Additionally, the Alps have led to the creation of large lakes in Italy. For instance, the Sarca, the primary inflow of Lake Garda, originates in the Italian Alps. The Italian Lakes have been a popular tourist destination since the Roman era for their mild climate. Scientists have been studying the impact of climate change and water use. For example, each year more water is diverted from rivers for snowmaking in the ski resorts, the effect of which is as yet unknown. Furthermore, the decrease of glaciated areas combined with a succession of winters with lower-than-expected precipitation may have a future impact on the rivers in the Alps as well as an effect on the water availability to the lowlands. Climate The Alps are a classic example of what happens when a temperate area at lower altitude gives way to higher-elevation terrain. Elevations around the world that have cold climates similar to those of the polar regions have been called Alpine. A rise from sea level into the upper regions of the atmosphere causes the temperature to decrease (see adiabatic lapse rate).
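The link between elevation and temperature can be illustrated with a quick calculation. The sketch below is a simplified linear model assuming the standard-atmosphere lapse rate of roughly 6.5 °C per 1,000 m of ascent; the function name and the example values are illustrative assumptions, not figures taken from this article.

```python
# Illustrative sketch: linear temperature estimate using the standard
# environmental lapse rate (~6.5 degrees C per 1000 m). Real lapse rates
# vary with humidity and weather, so this is only a first approximation.

LAPSE_RATE_C_PER_M = 0.0065  # standard-atmosphere lapse rate (assumed)

def temperature_at(elevation_m: float, sea_level_temp_c: float) -> float:
    """Estimate air temperature at a given elevation above sea level."""
    return sea_level_temp_c - LAPSE_RATE_C_PER_M * elevation_m

# A 20 degree C day at sea level corresponds to roughly -6 degrees C
# near a 4,000 m Alpine summit under this simple model.
print(round(temperature_at(4000, 20.0), 1))
```

Under this approximation, even a warm summer day in the Po basin leaves the four-thousanders hovering around or below freezing, which is consistent with the permanent snow above the snow line described below.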
The effect of mountain chains on prevailing winds is to carry warm air belonging to the lower region into an upper zone, where it expands in volume at the cost of a proportionate loss of temperature, often accompanied by precipitation in the form of snow or rain. The height of the Alps is sufficient to divide the weather patterns in Europe into a wet north and a dry south, because moisture is drawn from the air as it flows over the high peaks. The severe weather in the Alps has been studied since the 18th century, particularly weather patterns such as the seasonal foehn wind. Numerous weather stations were placed in the mountains in the early 20th century, providing continuous data for climatologists. Some of the valleys are quite arid, such as the Aosta valley in Italy, the Maurienne in France, the Valais in Switzerland, and northern Tyrol. The areas that are not arid and receive high precipitation experience periodic flooding from rapid snowmelt and runoff. The mean precipitation in the Alps ranges from a low of per year to per year, with the higher levels occurring at high altitudes. At altitudes between , snowfall begins in November and accumulates through to April or May, when the melt begins. Snow lines vary from , above which the snow is permanent and the temperatures hover around the freezing point even during July and August. High-water levels in streams and rivers peak in June and July when the snow is still melting at the higher altitudes. The Alps are split into five climatic zones, each with different vegetation. The climate, plant life and animal life vary among the different sections or zones of the mountains. The lowest zone is the colline zone, which exists between , depending on the location. The montane zone extends from , followed by the sub-Alpine zone from . The Alpine zone, extending from tree line to snow line, is followed by the glacial zone, which covers the glaciated areas of the mountain.