https://en.wikipedia.org/wiki/Art
Art
Art is a diverse range of human activity, and its resulting products, that involves creative or imaginative talent expressive of technical proficiency, beauty, emotional power, or conceptual ideas. There is no generally agreed definition of what constitutes art, and ideas have changed over time. The three classical branches of visual art are painting, sculpture, and architecture. Theatre, dance, and other performing arts, as well as literature, music, film and other media such as interactive media, are included in a broader definition of the arts. Until the 17th century, art referred to any skill or mastery and was not differentiated from crafts or sciences. In modern usage after the 17th century, where aesthetic considerations are paramount, the fine arts are separated and distinguished from acquired skills in general, such as the decorative or applied arts. The nature of art and related concepts, such as creativity and interpretation, are explored in a branch of philosophy known as aesthetics. The resulting artworks are studied in the professional fields of art criticism and the history of art.

Overview

From the perspective of the history of art, artistic works have existed for almost as long as humankind: from early prehistoric art to contemporary art; however, some theorists feel that the typical concept of "artistic works" fits less well outside modern Western societies. One early sense of the definition of art is closely related to the older Latin meaning, which roughly translates to "skill" or "craft", as associated with words such as "artisan". English words derived from this meaning include artifact, artificial, artifice, medical arts, and military arts. However, there are many other colloquial uses of the word, all with some relation to its etymology. Over time, philosophers such as Plato, Aristotle, Socrates and Kant have questioned the meaning of art.
Several of Plato's dialogues tackle questions about art: Socrates says that poetry is inspired by the muses, and is not rational. He speaks approvingly of this, and of other forms of divine madness (drunkenness, eroticism, and dreaming) in the Phaedrus (265a–c), and yet in the Republic wants to outlaw Homer's great poetic art, and laughter as well. In Ion, Socrates gives no hint of the disapproval of Homer that he expresses in the Republic. The dialogue Ion suggests that Homer's Iliad functioned in the ancient Greek world as the Bible does today in the modern Christian world: as divinely inspired literary art that can provide moral guidance, if only it can be properly interpreted. With regard to the literary and musical arts, Aristotle considered epic poetry, tragedy, comedy, dithyrambic poetry and music to be mimetic or imitative art, each varying in imitation by medium, object, and manner. For example, music imitates with the media of rhythm and harmony, whereas dance imitates with rhythm alone, and poetry with language. The forms also differ in their object of imitation. Comedy, for instance, is a dramatic imitation of men worse than average; whereas tragedy imitates men slightly better than average. Lastly, the forms differ in their manner of imitation—through narrative or character, through change or no change, and through drama or no drama. Aristotle believed that imitation is natural to mankind and constitutes one of mankind's advantages over animals. The more recent and specific sense of the word art as an abbreviation for creative art or fine art emerged in the early 17th century. Fine art refers to a skill used to express the artist's creativity, or to engage the audience's aesthetic sensibilities, or to draw the audience towards consideration of more refined or finer works of art.
Within this latter sense, the word art may refer to several things: (i) a study of a creative skill, (ii) a process of using the creative skill, (iii) a product of the creative skill, or (iv) the audience's experience with the creative skill. The creative arts (art as discipline) are a collection of disciplines which produce artworks (art as objects) that are compelled by a personal drive (art as activity) and convey a message, mood, or symbolism for the perceiver to interpret (art as experience). Art is something that stimulates an individual's thoughts, emotions, beliefs, or ideas through the senses. Works of art can be explicitly made for this purpose or interpreted on the basis of images or objects. For some scholars, such as Kant, the sciences and the arts could be distinguished by taking science as representing the domain of knowledge and the arts as representing the domain of the freedom of artistic expression. Often, if the skill is being used in a common or practical way, people will consider it a craft instead of art. Likewise, if the skill is being used in a commercial or industrial way, it may be considered commercial art instead of fine art. On the other hand, crafts and design are sometimes considered applied art. Some art followers have argued that the difference between fine art and applied art has more to do with value judgments made about the art than any clear definitional difference. However, even fine art often has goals beyond pure creativity and self-expression. The purpose of works of art may be to communicate ideas, such as in politically, spiritually, or philosophically motivated art; to create a sense of beauty (see aesthetics); to explore the nature of perception; for pleasure; or to generate strong emotions. The purpose may also be seemingly nonexistent. The nature of art has been described by philosopher Richard Wollheim as "one of the most elusive of the traditional problems of human culture". 
Art has been defined as a vehicle for the expression or communication of emotions and ideas, as a means for exploring and appreciating formal elements for their own sake, and as mimesis or representation. Art as mimesis has deep roots in the philosophy of Aristotle. Leo Tolstoy identified art as a use of indirect means to communicate from one person to another. Benedetto Croce and R. G. Collingwood advanced the idealist view that art expresses emotions, and that the work of art therefore essentially exists in the mind of the creator. The theory of art as form has its roots in the philosophy of Kant, and was developed in the early 20th century by Roger Fry and Clive Bell. More recently, thinkers influenced by Martin Heidegger have interpreted art as the means by which a community develops for itself a medium for self-expression and interpretation. George Dickie has offered an institutional theory of art that defines a work of art as any artifact upon which a qualified person or persons acting on behalf of the social institution commonly referred to as "the art world" has conferred "the status of candidate for appreciation". Larry Shiner has described fine art as "not an essence or a fate but something we have made. Art as we have generally understood it is a European invention barely two hundred years old." Art may be characterized in terms of mimesis (its representation of reality), narrative (storytelling), expression, communication of emotion, or other qualities. During the Romantic period, art came to be seen as "a special faculty of the human mind to be classified with religion and science".

History

A shell engraved by Homo erectus was determined to be between 430,000 and 540,000 years old. A set of eight 130,000-year-old white-tailed eagle talons bears cut marks and abrasions that indicate manipulation by Neanderthals, possibly for use as jewelry. A series of tiny, drilled snail shells about 75,000 years old was discovered in a South African cave.
Containers that may have been used to hold paints have been found dating as far back as 100,000 years. Sculptures, cave paintings, rock paintings and petroglyphs from the Upper Paleolithic dating to roughly 40,000 years ago have been found, but the precise meaning of such art is often disputed because so little is known about the cultures that produced them. Many great traditions in art have a foundation in the art of one of the great ancient civilizations: Ancient Egypt, Mesopotamia, Persia, India, China, Ancient Greece, Rome, as well as Inca, Maya, and Olmec. Each of these centers of early civilization developed a unique and characteristic style in its art. Because of the size and duration of these civilizations, more of their art works have survived and more of their influence has been transmitted to other cultures and later times. Some also have provided the first records of how artists worked. For example, the classical period of Greek art saw a veneration of the human physical form and the development of equivalent skills to show musculature, poise, beauty, and anatomically correct proportions. In Byzantine and Medieval art of the Western Middle Ages, much art focused on the expression of subjects about Biblical and religious culture, and used styles that showed the higher glory of a heavenly world, such as the use of gold in the background of paintings, or glass in mosaics or windows, which also presented figures in idealized, patterned (flat) forms. Nevertheless, a classical realist tradition persisted in small Byzantine works, and realism steadily grew in the art of Catholic Europe. Renaissance art had a greatly increased emphasis on the realistic depiction of the material world, and the place of humans in it, reflected in the corporeality of the human body, and the development of a systematic method of graphical perspective to depict recession in a three-dimensional picture space.
In the east, Islamic art's rejection of iconography led to an emphasis on geometric patterns, calligraphy, and architecture. Further east, religion dominated artistic styles and forms too. India and Tibet saw emphasis on painted sculptures and dance, while religious painting borrowed many conventions from sculpture and tended toward bright, contrasting colors with an emphasis on outlines. China saw the flourishing of many art forms: jade carving, bronzework, pottery (including the stunning terracotta army of Emperor Qin), poetry, calligraphy, music, painting, drama, fiction, etc. Chinese styles vary greatly from era to era and each one is traditionally named after the ruling dynasty. So, for example, Tang dynasty paintings are monochromatic and sparse, emphasizing idealized landscapes, but Ming dynasty paintings are busy and colorful, and focus on telling stories via setting and composition. Japan names its styles after imperial dynasties too, and also saw much interplay between the styles of calligraphy and painting. Woodblock printing became important in Japan after the 17th century. The western Age of Enlightenment in the 18th century saw artistic depictions of physical and rational certainties of the clockwork universe, as well as politically revolutionary visions of a post-monarchist world, such as Blake's portrayal of Newton as a divine geometer, or David's propagandistic paintings. This in turn provoked Romantic rejections in favor of pictures of the emotional side and individuality of humans, exemplified in the novels of Goethe. The late 19th century then saw a host of artistic movements, such as academic art, Symbolism, impressionism and fauvism among others. The history of 20th-century art is a narrative of endless possibilities and the search for new standards, each being torn down in succession by the next. Thus the parameters of Impressionism, Expressionism, Fauvism, Cubism, Dadaism, Surrealism, etc. 
cannot be maintained very much beyond the time of their invention. Increasing global interaction during this time saw an equivalent influence of other cultures on Western art. Thus, Japanese woodblock prints (themselves influenced by Western Renaissance draftsmanship) had an immense influence on impressionism and subsequent development. Later, African sculptures were taken up by Picasso and to some extent by Matisse. Similarly, in the 19th and 20th centuries the West had huge impacts on Eastern art, with originally Western ideas like Communism and Post-Modernism exerting a powerful influence. Modernism, the idealistic search for truth, gave way in the latter half of the 20th century to a realization of its unattainability. Theodor W. Adorno said in 1970, "It is now taken for granted that nothing which concerns art can be taken for granted any more: neither art itself, nor art in relationship to the whole, nor even the right of art to exist." Relativism was accepted as an unavoidable truth, which led to the period of contemporary art and postmodern criticism, where cultures of the world and of history are seen as changing forms, which can be appreciated and drawn from only with skepticism and irony. Furthermore, the separation of cultures is increasingly blurred and some argue it is now more appropriate to think in terms of a global culture, rather than of regional ones. In The Origin of the Work of Art, Martin Heidegger, a German philosopher and a seminal thinker, describes the essence of art in terms of the concepts of being and truth. He argues that art is not only a way of expressing the element of truth in a culture, but the means of creating it and providing a springboard from which "that which is" can be revealed. Works of art are not merely representations of the way things are, but actually produce a community's shared understanding. Each time a new artwork is added to any culture, the meaning of what it is to exist is inherently changed.
Historically, art and artistic skills and ideas have often been spread through trade. An example of this is the Silk Road, where Hellenistic, Iranian, Indian and Chinese influences could mix. Greco-Buddhist art is one of the most vivid examples of this interaction. The meeting of different cultures and worldviews also influenced artistic creation. Examples of this are the multicultural port metropolis of Trieste at the beginning of the 20th century, where James Joyce met writers from Central Europe, and the artistic development of New York City as a cultural melting pot.

Forms, genres, media, and styles

The creative arts are often divided into more specific categories, typically along perceptually distinguishable lines such as media, genre, style, and form. Art form refers to the elements of art that are independent of its interpretation or significance. It covers the methods adopted by the artist and the physical composition of the artwork, primarily the non-semantic aspects of the work (i.e., figurae), such as color, contour, dimension, medium, melody, space, texture, and value. Form may also include visual design principles, such as arrangement, balance, contrast, emphasis, harmony, proportion, proximity, and rhythm.

In general there are three schools of philosophy regarding art, focusing respectively on form, content, and context. Extreme Formalism is the view that all aesthetic properties of art are formal (that is, part of the art form). Philosophers almost universally reject this view and hold that the properties and aesthetics of art extend beyond materials, techniques, and form. Unfortunately, there is little consensus on terminology for these informal properties. Some authors refer to subject matter and content – i.e., denotations and connotations – while others prefer terms like meaning and significance.
Extreme Intentionalism holds that authorial intent plays a decisive role in the meaning of a work of art, conveying the content or essential main idea, while all other interpretations can be discarded. It defines the subject as the persons or idea represented, and the content as the artist's experience of that subject. For example, the composition of Napoleon I on his Imperial Throne is partly borrowed from the Statue of Zeus at Olympia. As evidenced by the title, the subject is Napoleon, and the content is Ingres's representation of Napoleon as "Emperor-God beyond time and space". Similarly to extreme formalism, philosophers typically reject extreme intentionalism, because art may have multiple ambiguous meanings and authorial intent may be unknowable and thus irrelevant. Its restrictive interpretation is "socially unhealthy, philosophically unreal, and politically unwise". Finally, the developing theory of post-structuralism studies art's significance in a cultural context, such as the ideas, emotions, and reactions prompted by a work. The cultural context often reduces to the artist's techniques and intentions, in which case analysis proceeds along lines similar to formalism and intentionalism. However, in other cases historical and material conditions may predominate, such as religious and philosophical convictions, sociopolitical and economic structures, or even climate and geography. Art criticism continues to grow and develop alongside art.

Skill and craft

Art can connote a sense of trained ability or mastery of a medium. Art can also simply refer to the developed and efficient use of a language to convey meaning with immediacy or depth. Art can be defined as an act of expressing feelings, thoughts, and observations. There is an understanding that is reached with the material as a result of handling it, which facilitates one's thought processes.
A common view is that the epithet "art", particularly in its elevated sense, requires a certain level of creative expertise by the artist, whether this be a demonstration of technical ability, an originality in stylistic approach, or a combination of these two. Traditionally skill of execution was viewed as a quality inseparable from art and thus necessary for its success; for Leonardo da Vinci, art, neither more nor less than his other endeavors, was a manifestation of skill. Rembrandt's work, now praised for its ephemeral virtues, was most admired by his contemporaries for its virtuosity. At the turn of the 20th century, the adroit performances of John Singer Sargent were alternately admired and viewed with skepticism for their manual fluency, yet at nearly the same time the artist who would become the era's most recognized and peripatetic iconoclast, Pablo Picasso, was completing a traditional academic training at which he excelled. A common contemporary criticism of some modern art occurs along the lines of objecting to the apparent lack of skill or ability required in the production of the artistic object. In conceptual art, Marcel Duchamp's "Fountain" is among the first examples of pieces wherein the artist used found objects ("ready-mades") and exercised no traditionally recognized set of skills. Tracey Emin's My Bed and Damien Hirst's The Physical Impossibility of Death in the Mind of Someone Living follow this example and also manipulate the mass media. Emin slept (and engaged in other activities) in her bed before placing the result in a gallery as a work of art. Hirst came up with the conceptual design for the artwork but has left most of the eventual creation of many works to employed artisans. Hirst's celebrity is founded entirely on his ability to produce shocking concepts. The actual production in many conceptual and contemporary works of art is a matter of assembly of found objects.
However, there are many modernist and contemporary artists who continue to excel in the skills of drawing and painting and in creating hands-on works of art.

Purpose

Art has had a great number of different functions throughout its history, making its purpose difficult to abstract or quantify to any single concept. This does not imply that the purpose of art is "vague", but that it has had many unique, different reasons for being created. Some of these functions of art are provided in the following outline. The different purposes of art may be grouped according to those that are non-motivated and those that are motivated (Lévi-Strauss).

Non-motivated functions

The non-motivated purposes of art are those that are integral to being human, transcend the individual, or do not fulfill a specific external purpose. In this sense, art, as creativity, is something humans must do by their very nature (i.e., no other species creates art), and is therefore beyond utility.

Basic human instinct for harmony, balance, rhythm. Art at this level is not an action or an object, but an internal appreciation of balance and harmony (beauty), and therefore an aspect of being human beyond utility. "Imitation, then, is one instinct of our nature. Next, there is the instinct for 'harmony' and rhythm, meters being manifestly sections of rhythm. Persons, therefore, starting with this natural gift developed by degrees their special aptitudes, till their rude improvisations gave birth to Poetry." – Aristotle

Experience of the mysterious. Art provides a way to experience one's self in relation to the universe. This experience may often come unmotivated, as one appreciates art, music or poetry. "The most beautiful thing we can experience is the mysterious. It is the source of all true art and science." – Albert Einstein

Expression of the imagination. Art provides a means to express the imagination in non-grammatic ways that are not tied to the formality of spoken or written language.
Unlike words, which come in sequences and each of which has a definite meaning, art provides a range of forms, symbols and ideas with meanings that are malleable. "Jupiter's eagle [as an example of art] is not, like logical (aesthetic) attributes of an object, the concept of the sublimity and majesty of creation, but rather something else—something that gives the imagination an incentive to spread its flight over a whole host of kindred representations that provoke more thought than admits of expression in a concept determined by words. They furnish an aesthetic idea, which serves the above rational idea as a substitute for logical presentation, but with the proper function, however, of animating the mind by opening out for it a prospect into a field of kindred representations stretching beyond its ken." – Immanuel Kant

Ritualistic and symbolic functions. In many cultures, art is used in rituals, performances and dances as a decoration or symbol. While these often have no specific utilitarian (motivated) purpose, anthropologists know that they often serve a purpose at the level of meaning within a particular culture. This meaning is not furnished by any one individual, but is often the result of many generations of change, and of a cosmological relationship within the culture. "Most scholars who deal with rock paintings or objects recovered from prehistoric contexts that cannot be explained in utilitarian terms and are thus categorized as decorative, ritual or symbolic, are aware of the trap posed by the term 'art'." – Silva Tomaskova

Motivated functions

Motivated purposes of art refer to intentional, conscious actions on the part of the artists or creator. These may be to bring about political change, to comment on an aspect of society, to convey a specific emotion or mood, to address personal psychology, to illustrate another discipline, to (with commercial arts) sell a product, or simply as a form of communication.

Communication.
Art, at its simplest, is a form of communication. As most forms of communication have an intent or goal directed toward another individual, this is a motivated purpose. Illustrative arts, such as scientific illustration, are a form of art as communication. Maps are another example. However, the content need not be scientific. Emotions, moods and feelings are also communicated through art. "[Art is a set of] artefacts or images with symbolic meanings as a means of communication." – Steve Mithen

Art as entertainment. Art may seek to bring about a particular emotion or mood, for the purpose of relaxing or entertaining the viewer. This is often the function of the art industries of motion pictures and video games.

The Avant-Garde. Art for political change. One of the defining functions of early 20th-century art has been to use visual images to bring about political change. Art movements that had this goal—Dadaism, Surrealism, Russian constructivism, and Abstract Expressionism, among others—are collectively referred to as the avant-garde arts. "By contrast, the realistic attitude, inspired by positivism, from Saint Thomas Aquinas to Anatole France, clearly seems to me to be hostile to any intellectual or moral advancement. I loathe it, for it is made up of mediocrity, hate, and dull conceit. It is this attitude which today gives birth to these ridiculous books, these insulting plays. It constantly feeds on and derives strength from the newspapers and stultifies both science and art by assiduously flattering the lowest of tastes; clarity bordering on stupidity, a dog's life." – André Breton (Surrealism)

Art as a "free zone", removed from the action of social censure.
Unlike the avant-garde movements, which wanted to erase cultural differences in order to produce new universal values, contemporary art has enhanced its tolerance towards cultural differences as well as its critical and liberating functions (social inquiry, activism, subversion, deconstruction ...), becoming a more open place for research and experimentation.

Art for social inquiry, subversion or anarchy. While similar to art for political change, subversive or deconstructivist art may seek to question aspects of society without any specific political goal. In this case, the function of art may be simply to criticize some aspect of society. Graffiti art and other types of street art are graphics and images that are spray-painted or stencilled on publicly viewable walls, buildings, buses, trains, and bridges, usually without permission. Certain art forms, such as graffiti, may also be illegal when they break laws (in this case vandalism).

Art for social causes. Art can be used to raise awareness for a large variety of causes. A number of art activities have aimed at raising awareness of autism, cancer, human trafficking, and a variety of other topics, such as ocean conservation, human rights in Darfur, murdered and missing Aboriginal women, elder abuse, and pollution. Trashion, using trash to make fashion, practiced by artists such as Marina DeBris, is one example of using art to raise awareness about pollution.

Art for psychological and healing purposes. Art is also used by art therapists, psychotherapists and clinical psychologists as art therapy. The Diagnostic Drawing Series, for example, is used to determine the personality and emotional functioning of a patient. The end product is not the principal goal in this case; rather, a process of healing through creative acts is sought.
The resultant piece of artwork may also offer insight into the troubles experienced by the subject and may suggest suitable approaches to be used in more conventional forms of psychiatric therapy.

Art for propaganda, or commercialism. Art is often utilized as a form of propaganda, and thus can be used to subtly influence popular conceptions or mood. In a similar way, art that tries to sell a product also influences mood and emotion. In both cases, the purpose of art here is to subtly manipulate the viewer into a particular emotional or psychological response toward a particular idea or object.

Art as a fitness indicator. It has been argued that the ability of the human brain far exceeds what was needed for survival in the ancestral environment. One evolutionary psychology explanation for this is that the human brain and associated traits (such as artistic ability and creativity) are the human equivalent of the peacock's tail. The purpose of the male peacock's extravagant tail has been argued to be to attract females (see also Fisherian runaway and handicap principle). According to this theory, superior execution of art was evolutionarily important because it attracted mates.

The functions of art described above are not mutually exclusive, as many of them may overlap. For example, art for the purpose of entertainment may also seek to sell a product, i.e. the movie or video game.

Public access

Since ancient times, much of the finest art has represented a deliberate display of wealth or power, often achieved by using massive scale and expensive materials. Much art has been commissioned by political rulers or religious establishments, with more modest versions only available to the most wealthy in society. Nevertheless, there have been many periods where art of very high quality was available, in terms of ownership, across large parts of society, above all in cheap media such as pottery, which persists in the ground, and in perishable media such as textiles and wood.
In many different cultures, the ceramics of indigenous peoples of the Americas are found in such a wide range of graves that they were clearly not restricted to a social elite, though other forms of art may have been. Reproductive methods such as moulds made mass-production easier, and were used to bring high-quality Ancient Roman pottery and Greek Tanagra figurines to a very wide market. Cylinder seals were both artistic and practical, and very widely used by what can be loosely called the middle class in the Ancient Near East. Once coins were widely used, these also became an art form that reached the widest range of society. Another important innovation came in the 15th century in Europe, when printmaking began with small woodcuts, mostly religious, that were often very small and hand-colored, and affordable even by peasants who glued them to the walls of their homes. Printed books were initially very expensive, but fell steadily in price until by the 19th century even the poorest could afford some with printed illustrations. Popular prints of many different sorts have decorated homes and other places for centuries. In 1661, the city of Basel, in Switzerland, opened the first public museum of art in the world, the Kunstmuseum Basel. Today, its collection is distinguished by an impressively wide historic span, from the early 15th century up to the immediate present. Its various areas of emphasis give it international standing as one of the most significant museums of its kind; these encompass paintings and drawings by artists active in the Upper Rhine region between 1400 and 1600, as well as art of the 19th to 21st centuries. Public buildings and monuments, secular and religious, by their nature normally address the whole of society, and visitors as viewers, and display to the general public has long been an important factor in their design.
Egyptian temples are typical in that the largest and most lavish decoration was placed on the parts that could be seen by the general public, rather than on the areas seen only by the priests. Many areas of royal palaces, castles and the houses of the social elite were generally accessible, and large parts of the art collections of such people could often be seen, either by anybody, or by those able to pay a small price, or by those wearing the correct clothes, regardless of who they were, as at the Palace of Versailles, where the appropriate extra accessories (silver shoe buckles and a sword) could be hired from shops outside. Special arrangements were made to allow the public to see many royal or private collections placed in galleries, as with the Orleans Collection, mostly housed in a wing of the Palais Royal in Paris, which could be visited for most of the 18th century. In Italy the art tourism of the Grand Tour became a major industry from the Renaissance onwards, and governments and cities made efforts to make their key works accessible. The British Royal Collection remains distinct, but large donations such as the Old Royal Library were made from it to the British Museum, established in 1753. The Uffizi in Florence opened entirely as a gallery in 1765, though this function had been gradually taking the building over from the original civil servants' offices for a long time before. The building now occupied by the Prado in Madrid was built before the French Revolution for the public display of parts of the royal art collection, and similar royal galleries open to the public existed in Vienna, Munich and other capitals. The opening of the Musée du Louvre during the French Revolution (in 1793) as a public museum for much of the former French royal collection certainly marked an important stage in the development of public access to art, transferring ownership to a republican state, but it was a continuation of trends already well established.
Most modern public museums and art education programs for children in schools can be traced back to this impulse to have art available to everyone. However, museums do not only provide access to art: as studies have found, they also influence the way art is perceived by the audience. Thus, the museum itself is not merely a neutral stage for the presentation of art, but plays an active and vital role in the overall perception of art in modern society. Museums in the United States tend to be gifts from the very rich to the masses. (The Metropolitan Museum of Art in New York City, for example, was created by John Taylor Johnston, a railroad executive whose personal art collection seeded the museum.) But despite all this, at least one of the important functions of art in the 21st century remains a marker of wealth and social status. There have been attempts by artists to create art that cannot be bought by the wealthy as a status object. One of the prime original motivators of much of the art of the late 1960s and 1970s was to create art that could not be bought and sold. It is "necessary to present something more than mere objects" said the major postwar German artist Joseph Beuys. This time period saw the rise of such things as performance art, video art, and conceptual art. The idea was that if the artwork was a performance that would leave nothing behind, or was simply an idea, it could not be bought and sold. "Democratic precepts revolving around the idea that a work of art is a commodity impelled the aesthetic innovation which germinated in the mid-1960s and was reaped throughout the 1970s. Artists broadly identified under the heading of Conceptual art ... substituting performance and publishing activities for engagement with both the material and materialistic concerns of painted or sculptural form ... [have] endeavored to undermine the art object qua object." 
In the decades since, these ideas have been somewhat lost as the art market has learned to sell limited edition DVDs of video works, invitations to exclusive performance art pieces, and the objects left over from conceptual pieces. Many of these performances create works that are only understood by the elite who have been educated as to why an idea or video or piece of apparent garbage may be considered art. The marker of status becomes understanding the work instead of necessarily owning it, and the artwork remains an upper-class activity. "With the widespread use of DVD recording technology in the early 2000s, artists, and the gallery system that derives its profits from the sale of artworks, gained an important means of controlling the sale of video and computer artworks in limited editions to collectors."

Controversies

Art has long been controversial, that is to say disliked by some viewers, for a wide variety of reasons, though most pre-modern controversies are dimly recorded, or completely lost to a modern view. Iconoclasm is the destruction of art that is disliked for a variety of reasons, including religious ones. Aniconism is a general dislike of either all figurative images, or often just religious ones, and has been a thread in many major religions. It has been a crucial factor in the history of Islamic art, where depictions of Muhammad remain especially controversial. Much art has been disliked purely because it depicted or otherwise stood for unpopular rulers, parties or other groups. Artistic conventions have often been conservative and taken very seriously by art critics, though often much less so by a wider public. The iconographic content of art could cause controversy, as with late medieval depictions of the new motif of the Swoon of the Virgin in scenes of the Crucifixion of Jesus. The Last Judgment by Michelangelo was controversial for various reasons, including breaches of decorum through nudity and the Apollo-like pose of Christ. 
The content of much formal art through history was dictated by the patron or commissioner rather than just the artist, but with the advent of Romanticism, and economic changes in the production of art, the artist's vision became the usual determinant of the content of their art, increasing the incidence of controversies, though often reducing their significance. Strong incentives for perceived originality and publicity also encouraged artists to court controversy. Théodore Géricault's Raft of the Medusa (c. 1820) was in part a political commentary on a recent event. Édouard Manet's Le Déjeuner sur l'Herbe (1863) was considered scandalous not because of the nude woman, but because she is seated next to men fully dressed in the clothing of the time, rather than in robes of the antique world. John Singer Sargent's Madame Pierre Gautreau (Madam X) (1884) caused a controversy over the reddish pink used to color the woman's ear lobe, considered far too suggestive and supposedly ruining the high-society model's reputation. The gradual abandonment of naturalism and the depiction of realistic representations of the visual appearance of subjects in the 19th and 20th centuries led to a rolling controversy lasting for over a century. In the 20th century, Pablo Picasso's Guernica (1937) used arresting cubist techniques and stark monochromatic oils, to depict the harrowing consequences of a contemporary bombing of a small, ancient Basque town. Leon Golub's Interrogation III (1981), depicts a female nude, hooded detainee strapped to a chair, her legs open to reveal her sexual organs, surrounded by two tormentors dressed in everyday clothing. Andres Serrano's Piss Christ (1989) is a photograph of a crucifix, sacred to the Christian religion and representing Christ's sacrifice and final suffering, submerged in a glass of the artist's own urine. The resulting uproar led to comments in the United States Senate about public funding of the arts. 
Theory

Before Modernism, aesthetics in Western art was greatly concerned with achieving the appropriate balance between different aspects of realism or truth to nature and the ideal; ideas as to what the appropriate balance is have shifted to and fro over the centuries. This concern is largely absent in other traditions of art. The aesthetic theorist John Ruskin, who championed what he saw as the naturalism of J. M. W. Turner, saw art's role as the communication by artifice of an essential truth that could only be found in nature. The definition and evaluation of art has become especially problematic since the 20th century. Richard Wollheim distinguishes three approaches to assessing the aesthetic value of art: the Realist, whereby aesthetic quality is an absolute value independent of any human view; the Objectivist, whereby it is also an absolute value, but is dependent on general human experience; and the Relativist position, whereby it is not an absolute value, but depends on, and varies with, the human experience of different humans.

Arrival of Modernism

The arrival of Modernism in the late 19th century led to a radical break in the conception of the function of art, and then again in the late 20th century with the advent of postmodernism. Clement Greenberg's 1960 article "Modernist Painting" defines modern art as "the use of characteristic methods of a discipline to criticize the discipline itself". Greenberg originally applied this idea to the Abstract Expressionist movement and used it as a way to understand and justify flat (non-illusionistic) abstract painting. After Greenberg, several important art theorists emerged, such as Michael Fried, T. J. Clark, Rosalind Krauss, Linda Nochlin and Griselda Pollock among others. Though only originally intended as a way of understanding a specific set of artists, Greenberg's definition of modern art is important to many of the ideas of art within the various art movements of the 20th century and early 21st century. 
Pop artists like Andy Warhol became both noteworthy and influential through work including and possibly critiquing popular culture, as well as the art world. Artists of the 1980s, 1990s, and 2000s expanded this technique of self-criticism beyond high art to all cultural image-making, including fashion images, comics, billboards and pornography. Duchamp once proposed that art is any activity of any kind: everything. However, the way that only certain activities are classified today as art is a social construction, and there is evidence that there may be an element of truth to this. In The Invention of Art: A Cultural History, Larry Shiner examines the construction of the modern system of the arts, i.e. fine art. He finds evidence that the older system of the arts before our modern system (fine art) held art to be any skilled human activity; for example, Ancient Greek society did not possess the term art, but techne. Techne can be understood neither as art nor craft, because the distinction between art and craft is a historical product that came later in human history. Techne included painting, sculpting and music, but also cooking, medicine, horsemanship, geometry, carpentry, prophecy, and farming.

New Criticism and the "intentional fallacy"

Following Duchamp during the first half of the 20th century, a significant shift toward general aesthetic theory took place, which attempted to apply aesthetic theory across various forms of art, including the literary arts and the visual arts. This resulted in the rise of the New Criticism school and debate concerning the intentional fallacy. At issue was the question of whether the aesthetic intentions of the artist in creating the work of art, whatever its specific form, should be associated with the criticism and evaluation of the final product, or whether the work of art should be evaluated on its own merits independent of the intentions of the artist. In 1946, William K. 
Wimsatt and Monroe Beardsley published a classic and controversial New Critical essay entitled "The Intentional Fallacy", in which they argued strongly against the relevance of an author's intention, or "intended meaning", in the analysis of a literary work. For Wimsatt and Beardsley, the words on the page were all that mattered; importation of meanings from outside the text was considered irrelevant, and potentially distracting. In another essay, "The Affective Fallacy", which served as a kind of sister essay to "The Intentional Fallacy", Wimsatt and Beardsley also discounted the reader's personal/emotional reaction to a literary work as a valid means of analyzing a text. This fallacy would later be repudiated by theorists from the reader-response school of literary theory. Ironically, one of the leading theorists from this school, Stanley Fish, was himself trained by New Critics. Fish criticizes Wimsatt and Beardsley in his 1970 essay "Literature in the Reader". As summarized by Gaut and Livingston in their essay "The Creation of Art": "Structuralist and post-structuralist theorists and critics were sharply critical of many aspects of New Criticism, beginning with the emphasis on aesthetic appreciation and the so-called autonomy of art, but they reiterated the attack on biographical criticisms' assumption that the artist's activities and experience were a privileged critical topic." These authors contend that: "Anti-intentionalists, such as formalists, hold that the intentions involved in the making of art are irrelevant or peripheral to correctly interpreting art. So details of the act of creating a work, though possibly of interest in themselves, have no bearing on the correct interpretation of the work." Gaut and Livingston define the intentionalists as distinct from formalists, stating that: "Intentionalists, unlike formalists, hold that reference to intentions is essential in fixing the correct interpretation of works." 
They quote Richard Wollheim as stating that, "The task of criticism is the reconstruction of the creative process, where the creative process must in turn be thought of as something not stopping short of, but terminating on, the work of art itself."

"Linguistic turn" and its debate

The end of the 20th century fostered an extensive debate known as the linguistic turn controversy, or the "innocent eye debate", in the philosophy of art. This debate concerned the relative extent to which the conceptual encounter with the work of art dominates over the perceptual encounter. Decisive for the linguistic turn debate in art history and the humanities were the works of yet another tradition, namely the structuralism of Ferdinand de Saussure and the ensuing movement of poststructuralism. In 1981, the artist Mark Tansey created a work of art titled "The Innocent Eye" as a criticism of the prevailing climate of disagreement in the philosophy of art during the closing decades of the 20th century. Influential theorists include Judith Butler, Luce Irigaray, Julia Kristeva, Michel Foucault and Jacques Derrida. The power of language, more specifically of certain rhetorical tropes, in art history and historical discourse was explored by Hayden White. The fact that language is not a transparent medium of thought had been stressed by a very different form of philosophy of language which originated in the works of Johann Georg Hamann and Wilhelm von Humboldt. During the 1960s and 1970s, Ernst Gombrich and Nelson Goodman (the latter in his book Languages of Art: An Approach to a Theory of Symbols) came to hold that the conceptual encounter with the work of art predominated over the perceptual and visual encounter with it. 
He was challenged on the basis of research done by the Nobel Prize-winning psychologist Roger Sperry, who maintained that the human visual encounter was not limited to concepts represented in language alone (the linguistic turn) and that other forms of psychological representations of the work of art were equally defensible and demonstrable. Sperry's view eventually prevailed by the end of the 20th century, with aesthetic philosophers such as Nick Zangwill strongly defending a return to moderate aesthetic formalism among other alternatives.

Classification disputes

Disputes as to whether or not to classify something as a work of art are referred to as classificatory disputes about art. Classificatory disputes in the 20th century have included cubist and impressionist paintings, Duchamp's Fountain, the movies, superlative imitations of banknotes, conceptual art, and video games. Philosopher David Novitz has argued that disagreements about the definition of art are rarely the heart of the problem. Rather, "the passionate concerns and interests that humans vest in their social life" are "so much a part of all classificatory disputes about art." According to Novitz, classificatory disputes are more often disputes about societal values and where society is trying to go than they are about theory proper. For example, when the Daily Mail criticized Hirst's and Emin's work by arguing "For 1,000 years art has been one of our great civilising forces. Today, pickled sheep and soiled beds threaten to make barbarians of us all", it was not advancing a definition or theory about art, but questioning the value of Hirst's and Emin's work. In 1998, Arthur Danto suggested a thought experiment showing that "the status of an artifact as work of art results from the ideas a culture applies to it, rather than its inherent physical or perceptible qualities. Cultural interpretation (an art theory of some kind) is therefore constitutive of an object's arthood." 
Anti-art is a label for art that intentionally challenges the established parameters and values of art; it is a term associated with Dadaism and attributed to Marcel Duchamp just before World War I, when he was making art from found objects. One of these, Fountain (1917), an ordinary urinal, has achieved considerable prominence and influence on art. Anti-art is a feature of work by the Situationist International, the lo-fi Mail art movement, and the Young British Artists, though it is a form still rejected by the Stuckists, who describe themselves as anti-anti-art. Architecture is often included as one of the visual arts; however, like the decorative arts, or advertising, it involves the creation of objects where the practical considerations of use are essential in a way that they usually are not in a painting, for example.

Value judgment

Somewhat in relation to the above, the word art is also used to apply judgments of value, as in such expressions as "that meal was a work of art" (the cook is an artist), or "the art of deception" (the highly attained level of skill of the deceiver is praised). It is this use of the word as a measure of high quality and high value that gives the term its flavor of subjectivity. Making judgments of value requires a basis for criticism. At the simplest level, a way to determine whether the impact of the object on the senses meets the criteria to be considered art is whether it is perceived to be attractive or repulsive. Though perception is always colored by experience, and is necessarily subjective, it is commonly understood that what is not somehow aesthetically satisfying cannot be art. However, "good" art is not always or even regularly aesthetically appealing to a majority of viewers. In other words, an artist's prime motivation need not be the pursuit of the aesthetic. Also, art often depicts terrible images made for social, moral, or thought-provoking reasons. 
For example, Francisco Goya's painting depicting the Spanish shootings of 3 May 1808 is a graphic depiction of a firing squad executing several pleading civilians. Yet at the same time, the horrific imagery demonstrates Goya's keen artistic ability in composition and execution and produces fitting social and political outrage. Thus, the debate continues as to what mode of aesthetic satisfaction, if any, is required to define 'art'. The assumption of new values or the rebellion against accepted notions of what is aesthetically superior need not occur concurrently with a complete abandonment of the pursuit of what is aesthetically appealing. Indeed, the reverse is often true: the revision of what is popularly conceived of as being aesthetically appealing allows for a re-invigoration of aesthetic sensibility, and a new appreciation for the standards of art itself. Countless schools have proposed their own ways to define quality, yet they all seem to agree on at least one point: once their aesthetic choices are accepted, the value of the work of art is determined by its capacity to transcend the limits of its chosen medium to strike some universal chord by the rarity of the skill of the artist or in its accurate reflection of what is termed the zeitgeist. Art is often intended to appeal to and connect with human emotion. It can arouse aesthetic or moral feelings, and can be understood as a way of communicating these feelings. Artists express something so that their audience is aroused to some extent, but they do not have to do so consciously. Art may be considered an exploration of the human condition; that is, what it is to be human. By extension, it has been argued by Emily L. Spratt that the development of artificial intelligence, especially in regard to its uses with images, necessitates a re-evaluation of aesthetic theory in art history today and a reconsideration of the limits of human creativity. 
Art and law

Essential legal issues include art forgeries, plagiarism, replicas, and works that are strongly based on other works of art. The trade in works of art, or their export from a country, may be subject to legal regulation. Internationally, there are also extensive efforts to protect works of art. The UN, UNESCO and Blue Shield International try to ensure effective protection at the national level and to intervene directly in the event of armed conflicts or disasters. This can particularly affect museums, archives, art collections and excavation sites. It is also intended to secure the economic basis of a country, especially because works of art are often of tourist importance. The founding president of Blue Shield International, Karl von Habsburg, explained an additional connection between the destruction of cultural property and the cause of flight during a mission in Lebanon in April 2019: “Cultural goods are part of the identity of the people who live in a certain place. If you destroy their culture, you also destroy their identity. Many people are uprooted, often no longer have any prospects and as a result flee from their homeland.”

See also

Applied arts
Art movement
Artist in residence
Artistic freedom
Cultural tourism
Craftivism
Formal analysis
History of art
List of artistic media
List of art techniques
Mathematics and art
Street art (or "independent public art")
Outline of the visual arts, a guide to the subject of art presented as a tree structured list of its subtopics.
Visual impairment in art

Notes

Bibliography

Oscar Wilde, Intentions, 1891
Stephen Davies, Definitions of Art, 1991
Nina Felshin, ed. But is it Art?, 1995
Catherine de Zegher (ed.). Inside the Visible. MIT Press, 1996
Evelyn Hatcher, ed. Art as Culture: An Introduction to the Anthropology of Art, 1999
Noel Carroll, Theories of Art Today, 2000
John Whitehead. Grasping for the Wind, 2001
Michael Ann Holly and Keith Moxey (eds.) Art History Aesthetics Visual Studies. 
New Haven: Yale University Press, 2002.
Shiner, Larry. The Invention of Art: A Cultural History. Chicago: University of Chicago Press, 2003.
Arthur Danto, The Abuse of Beauty: Aesthetics and the Concept of Art. 2003
Dana Arnold and Margaret Iverson, eds. Art and Thought. London: Blackwell, 2003.
Jean Robertson and Craig McDaniel, Themes of Contemporary Art, Visual Art after 1980, 2005

Further reading

Antony Briant and Griselda Pollock, eds. Digital and Other Virtualities: Renegotiating the image. London and NY: I.B.Tauris, 2010.
Augros, Robert M., Stanciu, George N. The New Story of Science: mind and the universe, Lake Bluff, Ill.: Regnery Gateway, 1984. (this book has significant material on art and science)
Benedetto Croce. Aesthetic as Science of Expression and General Linguistic, 2002
Botar, Oliver A.I. Technical Detours: The Early Moholy-Nagy Reconsidered. Art Gallery of The Graduate Center, The City University of New York and The Salgo Trust for Education, 2006.
Burguete, Maria, and Lam, Lui, eds. (2011). Arts: A Science Matter. World Scientific: Singapore.
Carol Armstrong and Catherine de Zegher, eds. Women Artists at the Millennium. Massachusetts: October Books/The MIT Press, 2006.
Carl Jung, Man and His Symbols. London: Pan Books, 1978.
E.H. Gombrich, The Story of Art. London: Phaidon Press, 1995.
Florian Dombois, Ute Meta Bauer, Claudia Mareis and Michael Schwab, eds. Intellectual Birdhouse. Artistic Practice as Research. London: Koenig Books, 2012.
Katharine Everett Gilbert and Helmut Kuhn, A History of Esthetics. Edition 2, revised. Indiana: Indiana University Press, 1953.
Kristine Stiles and Peter Selz, eds. Theories and Documents of Contemporary Art. Berkeley: University of California Press, 1986
Kleiner, Gardner, Mamiya and Tansey. Art Through the Ages, Twelfth Edition (2 volumes) Wadsworth, 2004. (vol 1) and (vol 2)
Richard Wollheim, Art and its Objects: An introduction to aesthetics. New York: Harper & Row, 1968.
Will Gompertz. 
What Are You Looking At?: 150 Years of Modern Art in the Blink of an Eye. New York: Viking, 2012.
Władysław Tatarkiewicz, A History of Six Ideas: an Essay in Aesthetics, translated from the Polish by Christopher Kasparek, The Hague, Martinus Nijhoff, 1980

External links

Art and Play from the Dictionary of the History of Ideas
In-depth directory of art
Art and Artist Files in the Smithsonian Libraries Collection (2005) Smithsonian Digital Libraries
Visual Arts Data Service (VADS) – online collections from UK museums, galleries, universities
RevolutionArt – Art magazines with worldwide exhibitions, callings and competitions
764
https://en.wikipedia.org/wiki/Agnostida
Agnostida
Agnostida is an order of arthropods which first developed near the end of the Early Cambrian period and thrived during the Middle Cambrian. They are present in the Lower Cambrian fossil record along with trilobites from the Redlichiida, Corynexochida, and Ptychopariida orders. The last agnostids went extinct in the Late Ordovician.

Systematics

The Agnostida are divided into two suborders — Agnostina and Eodiscina — which are then subdivided into a number of families. As a group, agnostids are isopygous, meaning their pygidium is similar in size and shape to their cephalon. Most agnostid species were eyeless. The systematic position of the order Agnostida within the class Trilobita remains uncertain, and there has been continuing debate whether they are trilobites or a stem group. The challenge to their status has focused on the Agnostina, partly because juveniles of one genus have been found with legs differing dramatically from those of adult trilobites, suggesting they are not members of the lamellipedian clade, of which trilobites are a part. Instead, the limbs of agnostids closely resemble those of stem group crustaceans, although they lack the proximal endite, which defines that group. They are likely the sister taxon to the crustacean stem lineage, and, as such, part of the clade Crustaceomorpha. Other researchers have suggested, based on cladistic analyses of dorsal exoskeletal features, that Eodiscina and Agnostida are closely united, and that Eodiscina descended from the trilobite order Ptychopariida.

Ecology

Scientists have long debated whether the agnostids lived a pelagic or a benthic lifestyle. Their lack of eyes, a morphology not well-suited for swimming, and their fossils found in association with other benthic trilobites suggest a benthic (bottom-dwelling) mode of life. They are likely to have lived on areas of the ocean floor which received little or no light and fed on detritus which descended from upper layers of the sea to the bottom. 
Their wide geographic dispersion in the fossil record is uncharacteristic of benthic animals, suggesting a pelagic existence. The thoracic segment appears to form a hinge between the head and pygidium, allowing for a bivalved, ostracodan-type lifestyle. The orientation of the thoracic appendages appears ill-suited for benthic living. Recent work suggests that some agnostids were benthic predators, engaging in cannibalism and possibly pack-hunting behavior. They are sometimes preserved within the voids of other organisms, for instance within empty hyolith conchs, within sponges, worm tubes and under the carapaces of bivalved arthropods, presumably in order to hide from predators or strong storm currents, or perhaps whilst scavenging for food. In the case of the tapering worm tubes Selkirkia, trilobites are always found with their heads directed towards the opening of the tube, suggesting that they reversed in; the absence of any moulted carapaces suggests that moulting was not their primary reason for seeking shelter.

References

External links

Order Agnostida by Sam Gon III.
The Virtual Fossil Museum – Trilobite Order Agnostida
Agnostida fact sheet by Sam Gon III.
"Earth's Early Cannibals Caught in the Act", by Larry O'Hanlon, news.discovery.com.

Trilobite orders Cambrian trilobites Ordovician trilobites Fossil taxa described in 1864 Cambrian first appearances Late Ordovician extinctions Taxa named by John William Salter
765
https://en.wikipedia.org/wiki/Abortion
Abortion
Abortion is the termination of a pregnancy by removal or expulsion of an embryo or fetus. An abortion that occurs without intervention is known as a miscarriage or "spontaneous abortion" and occurs in approximately 30% to 40% of pregnancies. When deliberate steps are taken to end a pregnancy, it is called an induced abortion, or less frequently "induced miscarriage". The unmodified word abortion generally refers to an induced abortion. Although it prevents the birth of a child, abortion is not generally considered birth control (another term for contraception). When properly done, abortion is one of the safest procedures in medicine, but unsafe abortion is a major cause of maternal death, especially in the developing world, while making safe abortion legal and accessible reduces maternal deaths. It is safer than childbirth, which has a 14 times higher risk of death in the United States. Modern methods use medication or surgery for abortions. The drug mifepristone in combination with prostaglandin appears to be as safe and effective as surgery during the first and second trimester of pregnancy. The most common surgical technique involves dilating the cervix and using a suction device. Birth control, such as the pill or intrauterine devices, can be used immediately following abortion. When performed legally and safely on a woman who desires it, induced abortions do not increase the risk of long-term mental or physical problems. In contrast, unsafe abortions (those performed by unskilled individuals, with hazardous equipment, or in unsanitary facilities) cause 47,000 deaths and 5 million hospital admissions each year. The World Health Organization states that "access to legal, safe and comprehensive abortion care, including post-abortion care, is essential for the attainment of the highest possible level of sexual and reproductive health". Around 56 million abortions are performed each year in the world, with about 45% done unsafely. 
Abortion rates changed little between 2003 and 2008, before which they decreased for at least two decades as access to family planning and birth control increased. Thirty-seven percent of the world's women had access to legal abortions without limits as to reason. Countries that permit abortions have different limits on how late in pregnancy abortion is allowed. Abortion rates are similar between countries that ban abortion and countries that allow it. Historically, abortions have been attempted using herbal medicines, sharp tools, forceful massage, or through other traditional methods. Abortion laws and cultural or religious views of abortions are different around the world. In some areas, abortion is legal only in specific cases such as rape, fetal defects, poverty, risk to a woman's health, or incest. There is debate over the moral, ethical, and legal issues of abortion. Those who oppose abortion often argue that an embryo or fetus is a person with a right to life, and thus equate abortion with murder. Those who support the legality of abortion often argue that it is part of a woman's right to make decisions about her own body. Others favor legal and accessible abortion as a public health measure.

Types

Induced

Approximately 205 million pregnancies occur each year worldwide. Over a third are unintended and about a fifth end in induced abortion. Most abortions result from unintended pregnancies. In the United Kingdom, 1 to 2% of abortions are done due to genetic problems in the fetus. A pregnancy can be intentionally aborted in several ways. The manner selected often depends upon the gestational age of the embryo or fetus, which increases in size as the pregnancy progresses. Specific procedures may also be selected due to legality, regional availability, and doctor or a woman's personal preference. Reasons for procuring induced abortions are typically characterized as either therapeutic or elective. 
An abortion is medically referred to as a therapeutic abortion when it is performed to save the life of the pregnant woman; to prevent harm to the woman's physical or mental health; to terminate a pregnancy where indications are that the child will have a significantly increased chance of mortality or morbidity; or to selectively reduce the number of fetuses to lessen health risks associated with multiple pregnancy. An abortion is referred to as an elective or voluntary abortion when it is performed at the request of the woman for non-medical reasons. Confusion sometimes arises over the term "elective" because "elective surgery" generally refers to all scheduled surgery, whether medically necessary or not.

Spontaneous

Miscarriage, also known as spontaneous abortion, is the unintentional expulsion of an embryo or fetus before the 24th week of gestation. A pregnancy that ends before 37 weeks of gestation resulting in a live-born infant is a "premature birth" or a "preterm birth". When a fetus dies in utero after viability, or during delivery, it is usually termed "stillborn". Premature births and stillbirths are generally not considered to be miscarriages, although usage of these terms can sometimes overlap. Only 30% to 50% of conceptions progress past the first trimester. The vast majority of those that do not progress are lost before the woman is aware of the conception, and many pregnancies are lost before medical practitioners can detect an embryo. Between 15% and 30% of known pregnancies end in clinically apparent miscarriage, depending upon the age and health of the pregnant woman. 80% of these spontaneous abortions happen in the first trimester. The most common cause of spontaneous abortion during the first trimester is chromosomal abnormalities of the embryo or fetus, accounting for at least 50% of sampled early pregnancy losses. 
Other causes include vascular disease (such as lupus), diabetes, other hormonal problems, infection, and abnormalities of the uterus. Advancing maternal age and a woman's history of previous spontaneous abortions are the two leading factors associated with a greater risk of spontaneous abortion. A spontaneous abortion can also be caused by accidental trauma; intentional trauma or stress to cause miscarriage is considered induced abortion or feticide.

Methods

Medical

Medical abortions are those induced by abortifacient pharmaceuticals. Medical abortion became an alternative method of abortion with the availability of prostaglandin analogs in the 1970s and the antiprogestogen mifepristone (also known as RU-486) in the 1980s. The most common early first-trimester medical abortion regimens use mifepristone in combination with misoprostol (or sometimes another prostaglandin analog, gemeprost) up to 10 weeks (70 days) gestational age, methotrexate in combination with a prostaglandin analog up to 7 weeks gestation, or a prostaglandin analog alone. Mifepristone–misoprostol combination regimens work faster and are more effective at later gestational ages than methotrexate–misoprostol combination regimens, and combination regimens are more effective than misoprostol alone. This regimen is effective in the second trimester. Medical abortion regimens involving mifepristone followed by misoprostol in the cheek between 24 and 48 hours later are effective when performed before 70 days' gestation. In very early abortions, up to 7 weeks gestation, medical abortion using a mifepristone–misoprostol combination regimen is considered to be more effective than surgical abortion (vacuum aspiration), especially when clinical practice does not include detailed inspection of aspirated tissue.
Early medical abortion regimens using mifepristone, followed 24–48 hours later by buccal or vaginal misoprostol, are 98% effective up to 9 weeks gestational age; from 9 to 10 weeks efficacy decreases modestly to 94%. If medical abortion fails, surgical abortion must be used to complete the procedure. Early medical abortions account for the majority of abortions before 9 weeks gestation in Britain, France, Switzerland, the United States, and the Nordic countries. Medical abortion regimens using mifepristone in combination with a prostaglandin analog are the most common methods used for second-trimester abortions in Canada, most of Europe, China and India, in contrast to the United States where 96% of second-trimester abortions are performed surgically by dilation and evacuation. A 2020 Cochrane Systematic Review concluded that providing women with medications to take home to complete the second stage of the procedure for an early medical abortion results in an effective abortion. Further research is required to determine if self-administered medical abortion is as safe as provider-administered medical abortion, where a health care professional is present to help manage the medical abortion. Safely permitting women to self-administer abortion medication has the potential to improve access to abortion. Other research gaps that were identified include how best to support women who choose to take the medication home for a self-administered abortion.

Surgical

Up to 15 weeks' gestation, suction-aspiration or vacuum aspiration are the most common surgical methods of induced abortion. Manual vacuum aspiration (MVA) consists of removing the fetus or embryo, placenta, and membranes by suction using a manual syringe, while electric vacuum aspiration (EVA) uses an electric pump. These techniques can both be used very early in pregnancy. MVA can be used up to 14 weeks but is more often used earlier in the U.S. EVA can be used later.
MVA, also known as "mini-suction" or "menstrual extraction", or EVA, can be used in very early pregnancy when cervical dilation may not be required. Dilation and curettage (D&C) refers to opening the cervix (dilation) and removing tissue (curettage) via suction or sharp instruments. D&C is a standard gynecological procedure performed for a variety of reasons, including examination of the uterine lining for possible malignancy, investigation of abnormal bleeding, and abortion. The World Health Organization recommends sharp curettage only when suction aspiration is unavailable. Dilation and evacuation (D&E), used after 12 to 16 weeks, consists of opening the cervix and emptying the uterus using surgical instruments and suction. D&E is performed vaginally and does not require an incision. Intact dilation and extraction (D&X) refers to a variant of D&E sometimes used after 18 to 20 weeks when removal of an intact fetus improves surgical safety or for other reasons. Abortion may also be performed surgically by hysterotomy or gravid hysterectomy. Hysterotomy abortion is a procedure similar to a caesarean section and is performed under general anesthesia. It requires a smaller incision than a caesarean section and can be used during later stages of pregnancy. Gravid hysterectomy refers to removal of the whole uterus while still containing the pregnancy. Hysterotomy and hysterectomy are associated with much higher rates of maternal morbidity and mortality than D&E or induction abortion. First-trimester procedures can generally be performed using local anesthesia, while second-trimester methods may require deep sedation or general anesthesia.

Labor induction abortion

In places lacking the necessary medical skill for dilation and extraction, or where preferred by practitioners, an abortion can be induced by first inducing labor and then inducing fetal demise if necessary. This is sometimes called "induced miscarriage".
This procedure may be performed from 13 weeks gestation to the third trimester. Although it is very uncommon in the United States, more than 80% of induced abortions throughout the second trimester are labor-induced abortions in Sweden and other nearby countries. Only limited data are available comparing this method with dilation and extraction. Unlike D&E, labor-induced abortions after 18 weeks may be complicated by the occurrence of brief fetal survival, which may be legally characterized as live birth. For this reason, labor-induced abortion is legally risky in the United States.

Other methods

Historically, a number of herbs reputed to possess abortifacient properties have been used in folk medicine. Among these are tansy, pennyroyal, black cohosh, and the now-extinct silphium. In 1978, one woman in Colorado died and another developed organ damage when they attempted to terminate their pregnancies by taking pennyroyal oil. Because the indiscriminate use of herbs as abortifacients can cause serious—even lethal—side effects, such as multiple organ failure, such use is not recommended by physicians. Abortion is sometimes attempted by causing trauma to the abdomen. The degree of force, if severe, can cause serious internal injuries without necessarily succeeding in inducing miscarriage. In Southeast Asia, there is an ancient tradition of attempting abortion through forceful abdominal massage. One of the bas reliefs decorating the temple of Angkor Wat in Cambodia depicts a demon performing such an abortion upon a woman who has been sent to the underworld. Reported methods of unsafe, self-induced abortion include misuse of misoprostol and insertion of non-surgical implements such as knitting needles and clothes hangers into the uterus. These and other methods to terminate pregnancy may be called "induced miscarriage". Such methods are rarely used in countries where surgical abortion is legal and available.
Safety

The health risks of abortion depend principally upon whether the procedure is performed safely or unsafely. The World Health Organization (WHO) defines unsafe abortions as those performed by unskilled individuals, with hazardous equipment, or in unsanitary facilities. Legal abortions performed in the developed world are among the safest procedures in medicine. In the United States as of 2012, abortion was estimated to be about 14 times safer for women than childbirth. The CDC estimated in 2019 that US pregnancy-related mortality was 17.2 maternal deaths per 100,000 live births, while the US abortion mortality rate is 0.7 maternal deaths per 100,000 procedures. In the UK, guidelines of the Royal College of Obstetricians and Gynaecologists state that "Women should be advised that abortion is generally safer than continuing a pregnancy to term." Worldwide, on average, abortion is safer than carrying a pregnancy to term. A 2007 study reported that "26% of all pregnancies worldwide are terminated by induced abortion," whereas "deaths from improperly performed [abortion] procedures constitute 13% of maternal mortality globally." In Indonesia in 2000 it was estimated that 2 million pregnancies ended in abortion, 4.5 million pregnancies were carried to term, and 14–16 percent of maternal deaths resulted from abortion. In the US from 2000 to 2009, abortion had a mortality rate lower than plastic surgery, lower or similar to running a marathon, and about equivalent to traveling 760 miles in a passenger car. Five years after seeking abortion services, women who gave birth after being denied an abortion reported worse health than women who had either first or second trimester abortions. The risk of abortion-related mortality increases with gestational age, but remains lower than that of childbirth. Outpatient abortion is as safe from 64 to 70 days' gestation as it is before 63 days.
There is little difference in terms of safety and efficacy between medical abortion using a combined regimen of mifepristone and misoprostol and surgical abortion (vacuum aspiration) in early first trimester abortions up to 10 weeks gestation. Medical abortion using the prostaglandin analog misoprostol alone is less effective and more painful than medical abortion using a combined regimen of mifepristone and misoprostol or surgical abortion. Vacuum aspiration in the first trimester is the safest method of surgical abortion, and can be performed in a primary care office, abortion clinic, or hospital. Complications, which are rare, can include uterine perforation, pelvic infection, and retained products of conception requiring a second procedure to evacuate. Infections account for one-third of abortion-related deaths in the United States. The rate of complications of vacuum aspiration abortion in the first trimester is similar regardless of whether the procedure is performed in a hospital, surgical center, or office. Preventive antibiotics (such as doxycycline or metronidazole) are typically given before abortion procedures, as they are believed to substantially reduce the risk of postoperative uterine infection; however, antibiotics are not routinely given with abortion pills. The rate of failed procedures does not appear to vary significantly depending on whether the abortion is performed by a doctor or a mid-level practitioner. Complications after second-trimester abortion are similar to those after first-trimester abortion, and depend somewhat on the method chosen. The risk of death from abortion approaches roughly half the risk of death from childbirth the farther along a woman is in pregnancy; from one in a million before 9 weeks gestation to nearly one in ten thousand at 21 weeks or more (as measured from the last menstrual period). 
It appears that having had a prior surgical uterine evacuation (whether because of induced abortion or treatment of miscarriage) correlates with a small increase in the risk of preterm birth in future pregnancies. The studies supporting this did not control for factors not related to abortion or miscarriage, and hence the causes of this correlation have not been determined, although multiple possibilities have been suggested. Some purported risks of abortion are promoted primarily by anti-abortion groups, but lack scientific support. For example, the question of a link between induced abortion and breast cancer has been investigated extensively. Major medical and scientific bodies (including the WHO, National Cancer Institute, American Cancer Society, Royal College of OBGYN and American Congress of OBGYN) have concluded that abortion does not cause breast cancer. In the past even illegality has not automatically meant that the abortions were unsafe. Referring to the U.S., historian Linda Gordon states: "In fact, illegal abortions in this country have an impressive safety record." Authors Jerome Bates and Edward Zawadzki describe the case of an illegal abortionist in the eastern U.S. in the early 20th century who was proud of having successfully completed 13,844 abortions without any fatality. In 1870s New York City the famous abortionist/midwife Madame Restell (Anna Trow Lohman) appears to have lost very few women among her more than 100,000 patients—a lower mortality rate than the childbirth mortality rate at the time. In 1936, the prominent professor of obstetrics and gynecology Frederick J. Taussig wrote about the causes of increasing abortion mortality during the years of illegality in the U.S.

Mental health

Current evidence finds no relationship between most induced abortions and mental health problems other than those expected for any unwanted pregnancy.
A report by the American Psychological Association concluded that a woman's first abortion is not a threat to mental health when carried out in the first trimester, with such women no more likely to have mental-health problems than those carrying an unwanted pregnancy to term; the mental-health outcome of a woman's second or greater abortion is less certain. Some older reviews concluded that abortion was associated with an increased risk of psychological problems; however, they did not use an appropriate control group. Although some studies show negative mental-health outcomes in women who choose abortions after the first trimester because of fetal abnormalities, more rigorous research would be needed to show this conclusively. Some proposed negative psychological effects of abortion have been referred to by anti-abortion advocates as a separate condition called "post-abortion syndrome", but this is not recognized by medical or psychological professionals in the United States. A long-term study among US women found that about 99% of women felt that they made the right decision five years after they had an abortion. Relief was the primary emotion, with few women feeling sadness or guilt. Social stigma was a main factor predicting negative emotions and regret years later.

Unsafe abortion

Women seeking an abortion may use unsafe methods, especially where abortion is legally restricted. They may attempt self-induced abortion or seek the help of a person without proper medical training or facilities. This can lead to severe complications, such as incomplete abortion, sepsis, hemorrhage, and damage to internal organs. Unsafe abortions are a major cause of injury and death among women worldwide. Although data are imprecise, it is estimated that approximately 20 million unsafe abortions are performed annually, with 97% taking place in developing countries. Unsafe abortions are believed to result in millions of injuries.
Estimates of deaths vary according to methodology, and have ranged from 37,000 to 70,000 in the past decade; deaths from unsafe abortion account for around 13% of all maternal deaths. The World Health Organization believes that mortality has fallen since the 1990s. To reduce the number of unsafe abortions, public health organizations have generally advocated emphasizing the legalization of abortion, training of medical personnel, and ensuring access to reproductive-health services. In response, opponents of abortion point out that abortion bans in no way affect prenatal care for women who choose to carry their fetus to term. The Dublin Declaration on Maternal Health, signed in 2012, notes, "the prohibition of abortion does not affect, in any way, the availability of optimal care to pregnant women." A major factor in whether abortions are performed safely or not is the legal standing of abortion. Countries with restrictive abortion laws have higher rates of unsafe abortion and similar overall abortion rates compared to those where abortion is legal and available. For example, the 1996 legalization of abortion in South Africa had an immediate positive impact on the frequency of abortion-related complications, with abortion-related deaths dropping by more than 90%. Similar reductions in maternal mortality have been observed after other countries have liberalized their abortion laws, such as Romania and Nepal. A 2011 study concluded that in the United States, some state-level anti-abortion laws are correlated with lower rates of abortion in that state. The analysis, however, did not take into account travel to other states without such laws to obtain an abortion. In addition, a lack of access to effective contraception contributes to unsafe abortion. It has been estimated that the incidence of unsafe abortion could be reduced by up to 75% (from 20 million to 5 million annually) if modern family planning and maternal health services were readily available globally. 
Rates of such abortions may be difficult to measure because they can be reported variously as miscarriage, "induced miscarriage", "menstrual regulation", "mini-abortion", and "regulation of a delayed/suspended menstruation". Forty percent of the world's women are able to access therapeutic and elective abortions within gestational limits, while an additional 35 percent have access to legal abortion if they meet certain physical, mental, or socioeconomic criteria. While maternal mortality seldom results from safe abortions, unsafe abortions result in 70,000 deaths and 5 million disabilities per year. Complications of unsafe abortion account for approximately an eighth of maternal mortalities worldwide, though this varies by region. Secondary infertility caused by an unsafe abortion affects an estimated 24 million women. The proportion of abortions that are unsafe increased from 44% to 49% between 1995 and 2008. Health education, access to family planning, and improvements in health care during and after abortion have been proposed to address this phenomenon.

Incidence

There are two commonly used methods of measuring the incidence of abortion:

Abortion rate – number of abortions annually per 1000 women between 15 and 44 years of age (some sources use a range of 15–49)
Abortion percentage – number of abortions out of 100 known pregnancies (pregnancies include live births, abortions and miscarriages)

In many places where abortion is illegal or carries a heavy social stigma, medical reporting of abortion is not reliable. For this reason, estimates of the incidence of abortion must be made without determining certainty related to standard error. The number of abortions performed worldwide seems to have remained stable in recent years, with 41.6 million having been performed in 2003 and 43.8 million having been performed in 2008.
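The two measures defined above are simple ratios. As an illustrative sketch (the function names are hypothetical, and the sample figures are chosen only to echo the worldwide 2008 estimates quoted nearby, not drawn from any cited dataset), they can be computed as:

```python
def abortion_rate(abortions: int, women_of_reproductive_age: int) -> float:
    """Abortions per year per 1000 women aged 15-44 (some sources use 15-49)."""
    return abortions / women_of_reproductive_age * 1000

def abortion_percentage(abortions: int, live_births: int, miscarriages: int) -> float:
    """Abortions per 100 known pregnancies, where known pregnancies
    are the sum of live births, abortions, and miscarriages."""
    known_pregnancies = live_births + abortions + miscarriages
    return abortions / known_pregnancies * 100

# Hypothetical population: 28,000 abortions among 1,000,000 women aged 15-44,
# and 21 abortions per 70 live births and 9 miscarriages.
print(abortion_rate(28_000, 1_000_000))   # 28.0 per 1000 women per year
print(abortion_percentage(21, 70, 9))     # 21.0 per 100 known pregnancies
```

Note that the denominators differ: the rate is relative to the female population of reproductive age, while the percentage is relative to known pregnancies, so the two figures for the same population generally do not match.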
The abortion rate worldwide was 28 per 1000 women per year, though it was 24 per 1000 women per year for developed countries and 29 per 1000 women per year for developing countries. The same 2012 study indicated that in 2008, the estimated abortion percentage of known pregnancies was 21% worldwide, with 26% in developed countries and 20% in developing countries. On average, the incidence of abortion is similar in countries with restrictive abortion laws and those with more liberal access to abortion. However, restrictive abortion laws are associated with increases in the percentage of abortions performed unsafely. The unsafe abortion rate in developing countries is partly attributable to lack of access to modern contraceptives; according to the Guttmacher Institute, providing access to contraceptives would result in about 14.5 million fewer unsafe abortions and 38,000 fewer deaths from unsafe abortion annually worldwide. The rate of legal, induced abortion varies extensively worldwide. According to a report by Guttmacher Institute researchers, it ranged from 7 per 1000 women per year (Germany and Switzerland) to 30 per 1000 women per year (Estonia) in countries with complete statistics in 2008. The proportion of pregnancies that ended in induced abortion ranged from about 10% (Israel, the Netherlands and Switzerland) to 30% (Estonia) in the same group, though it might be as high as 36% in Hungary and Romania, whose statistics were deemed incomplete. An American study in 2002 concluded that about half of women having abortions were using a form of contraception at the time of becoming pregnant. Inconsistent use was reported by half of those using condoms and three-quarters of those using the birth control pill; 42% of those using condoms reported failure through slipping or breakage.
The Guttmacher Institute estimated that "most abortions in the United States are obtained by minority women" because minority women "have much higher rates of unintended pregnancy". In 2022, while people of color comprise 44% of the population in Mississippi, 59% of the population in Texas, 42% of the population in Louisiana, and 35% of the population in Alabama, they comprise 80%, 74%, 72%, and 70% of those receiving abortions. The abortion rate may also be expressed as the average number of abortions a woman has during her reproductive years; this is referred to as total abortion rate (TAR).

Gestational age and method

Abortion rates also vary depending on the stage of pregnancy and the method practiced. In 2003, the Centers for Disease Control and Prevention (CDC) reported that 26% of reported legal induced abortions in the United States were known to have been obtained at less than 6 weeks' gestation, 18% at 7 weeks, 15% at 8 weeks, 18% at 9 through 10 weeks, 10% at 11 through 12 weeks, 6% at 13 through 15 weeks, 4% at 16 through 20 weeks and 1% at more than 21 weeks. 91% of these were classified as having been done by "curettage" (suction-aspiration, dilation and curettage, dilation and evacuation), 8% by "medical" means (mifepristone), >1% by "intrauterine instillation" (saline or prostaglandin), and 1% by "other" (including hysterotomy and hysterectomy). According to the CDC, due to data collection difficulties the data must be viewed as tentative and some fetal deaths reported beyond 20 weeks may be natural deaths erroneously classified as abortions if the removal of the dead fetus is accomplished by the same procedure as an induced abortion. The Guttmacher Institute estimated there were 2,200 intact dilation and extraction procedures in the US during 2000; this accounts for <0.2% of the total number of abortions performed that year.
Similarly, in England and Wales in 2006, 89% of terminations occurred at or under 12 weeks, 9% between 13 and 19 weeks, and 2% at or over 20 weeks. 64% of those reported were by vacuum aspiration, 6% by D&E, and 30% were medical. There are more second trimester abortions in developing countries such as China, India and Vietnam than in developed countries.

Motivation

Personal

The reasons why women have abortions are diverse and vary across the world. Some of the reasons may include an inability to afford a child, domestic violence, lack of support, feeling they are too young, and the wish to complete education or advance a career. Additional reasons include not being able or willing to raise a child conceived as a result of rape or incest.

Societal

Some abortions are undergone as the result of societal pressures. These might include the preference for children of a specific sex or race, disapproval of single or early motherhood, stigmatization of people with disabilities, insufficient economic support for families, lack of access to or rejection of contraceptive methods, or efforts toward population control (such as China's one-child policy). These factors can sometimes result in compulsory abortion or sex-selective abortion.

Maternal and fetal health

An additional factor is maternal health, which was listed as the main reason by about a third of women in 3 of 27 countries and about 7% of women in a further 7 of these 27 countries. In the U.S., the Supreme Court decisions in Roe v. Wade and Doe v. Bolton ruled that the state's interest in the life of the fetus became compelling only at the point of viability, defined as the point at which the fetus can survive independently of its mother. Even after the point of viability, the state cannot favor the life of the fetus over the life or health of the pregnant woman. Under the right of privacy, physicians must be free to use their "medical judgment for the preservation of the life or health of the mother."
On the same day that the Court decided Roe, it also decided Doe v. Bolton, in which the Court defined health very broadly: "The medical judgment may be exercised in the light of all factors—physical, emotional, psychological, familial, and the woman's age—relevant to the well-being of the patient. All these factors may relate to health. This allows the attending physician the room he needs to make his best medical judgment." Public opinion shifted in America following television personality Sherri Finkbine's discovery during her fifth month of pregnancy that she had been exposed to thalidomide. Unable to obtain a legal abortion in the United States, she traveled to Sweden. From 1962 to 1965, an outbreak of German measles left 15,000 babies with severe birth defects. In 1967, the American Medical Association publicly supported liberalization of abortion laws. A National Opinion Research Center poll in 1965 showed 73% supported abortion when the mother's life was at risk, 57% when birth defects were present and 59% for pregnancies resulting from rape or incest.

Cancer

The rate of cancer during pregnancy is 0.02–1%, and in many cases, cancer of the mother leads to consideration of abortion to protect the life of the mother, or in response to the potential damage that may occur to the fetus during treatment. This is particularly true for cervical cancer, the most common type of which occurs in 1 of every 2,000–13,000 pregnancies, for which initiation of treatment "cannot co-exist with preservation of fetal life (unless neoadjuvant chemotherapy is chosen)". Very early stage cervical cancers (I and IIa) may be treated by radical hysterectomy and pelvic lymph node dissection, radiation therapy, or both, while later stages are treated by radiotherapy. Chemotherapy may be used simultaneously.
Treatment of breast cancer during pregnancy also involves fetal considerations, because lumpectomy is discouraged in favor of modified radical mastectomy unless late-term pregnancy allows follow-up radiation therapy to be administered after the birth. Exposure to a single chemotherapy drug is estimated to cause a 7.5–17% risk of teratogenic effects on the fetus, with higher risks for multiple drug treatments. Treatment with more than 40 Gy of radiation usually causes spontaneous abortion. Exposure to much lower doses during the first trimester, especially 8 to 15 weeks of development, can cause intellectual disability or microcephaly, and exposure at this or subsequent stages can cause reduced intrauterine growth and birth weight. Exposures above 0.005–0.025 Gy cause a dose-dependent reduction in IQ. It is possible to greatly reduce exposure to radiation with abdominal shielding, depending on how far the area to be irradiated is from the fetus. The process of birth itself may also put the mother at risk. "Vaginal delivery may result in dissemination of neoplastic cells into lymphovascular channels, haemorrhage, cervical laceration and implantation of malignant cells in the episiotomy site, while abdominal delivery may delay the initiation of non-surgical treatment."

History and religion

Since ancient times abortions have been done using a number of methods, including herbal medicines, sharp tools, physical force, or other traditional methods. Induced abortion has a long history and can be traced back to civilizations as varied as ancient China (abortifacient knowledge is often attributed to the mythological ruler Shennong), ancient India since its Vedic age, ancient Egypt with its Ebers Papyrus (c. 1550 BCE), and the Roman Empire in the time of Juvenal (c. 200 CE). One of the earliest known artistic representations of abortion is in a bas relief at Angkor Wat (c. 1150).
Found in a series of friezes that represent judgment after death in Hindu and Buddhist culture, it depicts the technique of abdominal abortion. Some medical scholars and abortion opponents have suggested that the Hippocratic Oath forbade Ancient Greek physicians from performing abortions; other scholars disagree with this interpretation, and state that the medical texts of Hippocratic Corpus contain descriptions of abortive techniques right alongside the Oath. The physician Scribonius Largus wrote in 43 CE that the Hippocratic Oath prohibits abortion, as did Soranus, although apparently not all doctors adhered to it strictly at the time. According to Soranus' 1st or 2nd century CE work Gynaecology, one party of medical practitioners banished all abortives as required by the Hippocratic Oath; the other party—to which he belonged—was willing to prescribe abortions, but only for the sake of the mother's health. Aristotle, in his treatise on government Politics (350 BCE), condemns infanticide as a means of population control. He preferred abortion in such cases, with the restriction "[that it] must be practised on it before it has developed sensation and life; for the line between lawful and unlawful abortion will be marked by the fact of having sensation and being alive". In Christianity, Pope Sixtus V (1585–90) was the first Pope before 1869 to declare that abortion is homicide regardless of the stage of pregnancy; and his pronouncement of 1588 was reversed three years later by Pope Gregory XIV. Through most of its history the Catholic Church was divided on whether it believed that early abortion was murder, and it did not begin vigorously opposing abortion until the 19th century. Several historians have written that prior to the 19th century most Catholic authors did not regard termination of pregnancy before "quickening" or "ensoulment" as an abortion. From 1750, excommunication became the punishment for abortions. 
Statements made in 1992 in the Catechism of the Catholic Church, the codified summary of the Church's teachings, opposed abortion. A 2014 Guttmacher survey of US abortion patients found that many reported a religious affiliation—24% were Catholic while 30% were Protestant. A 1995 survey reported that Catholic women are as likely as the general population to terminate a pregnancy, Protestants are less likely to do so, and Evangelical Christians are the least likely to do so. Islamic tradition has generally permitted abortion until the point at which Muslims believe the soul enters the fetus, considered by various theologians to be at conception, 40 days after conception, 120 days after conception, or quickening. However, abortion is heavily restricted or forbidden in many areas of high Islamic faith such as the Middle East and North Africa. In Europe and North America, abortion techniques advanced starting in the 17th century. However, the conservatism of most in the medical profession with regard to sexual matters prevented the wide expansion of abortion techniques. Other medical practitioners in addition to some physicians advertised their services, and they were not widely regulated until the 19th century, when the practice (sometimes called restellism) was banned in both the United States and the United Kingdom. Church groups as well as physicians were highly influential in anti-abortion movements. In the US, according to some sources, abortion was more dangerous than childbirth until about 1930, when incremental improvements in abortion procedures relative to childbirth made abortion safer. However, other sources maintain that in the 19th century early abortions under the hygienic conditions in which midwives usually worked were relatively safe.
In addition, some commentators have written that, despite improved medical procedures, the period from the 1930s until legalization also saw more zealous enforcement of anti-abortion laws and, concomitantly, an increasing control of abortion providers by organized crime. Soviet Russia (1919), Iceland (1935), and Sweden (1938) were among the first countries to legalize certain or all forms of abortion. In 1935, Nazi Germany passed a law permitting abortions for those deemed "hereditarily ill", while women considered of German stock were specifically prohibited from having abortions. Beginning in the second half of the twentieth century, abortion was legalized in a greater number of countries.

Society and culture

Abortion debate

Induced abortion has long been the source of considerable debate. Ethical, moral, philosophical, biological, religious and legal issues surrounding abortion are related to value systems. Opinions on abortion may concern fetal rights, governmental authority, and women's rights. In both public and private debate, arguments presented in favor of or against abortion access focus on either the moral permissibility of an induced abortion, or the justification of laws permitting or restricting abortion. The World Medical Association Declaration on Therapeutic Abortion notes, "circumstances bringing the interests of a mother into conflict with the interests of her unborn child create a dilemma and raise the question as to whether or not the pregnancy should be deliberately terminated." Abortion debates, especially pertaining to abortion laws, are often spearheaded by groups advocating one of these two positions. Groups who favor greater legal restrictions on abortion, including complete prohibition, most often describe themselves as "pro-life", while groups who oppose such legal restrictions describe themselves as "pro-choice".
Generally, the former position argues that a human fetus is a human person with a right to live, making abortion morally the same as murder. The latter position argues that a woman has certain reproductive rights, especially the right to decide whether or not to carry a pregnancy to term.

Modern abortion law

Current laws pertaining to abortion are diverse. Religious, moral, and cultural factors continue to influence abortion laws throughout the world. The right to life, the right to liberty, the right to security of person, and the right to reproductive health are major issues of human rights that sometimes constitute the basis for the existence or absence of abortion laws. In jurisdictions where abortion is legal, certain requirements must often be met before a woman may obtain a legal abortion (an abortion performed without the woman's consent is considered feticide). These requirements usually depend on the age of the fetus, often using a trimester-based system to regulate the window of legality, or, as in the U.S., on a doctor's evaluation of the fetus' viability. Some jurisdictions require a waiting period before the procedure, prescribe the distribution of information on fetal development, or require that parents be contacted if their minor daughter requests an abortion. Other jurisdictions may require that a woman obtain the consent of the fetus' father before aborting the fetus, that abortion providers inform women of health risks of the procedure (sometimes including "risks" not supported by the medical literature), and that multiple medical authorities certify that the abortion is either medically or socially necessary. Many restrictions are waived in emergency situations. China, which has replaced its one-child policy with a two-child policy, has at times incorporated mandatory abortions into its population control strategy. Other jurisdictions ban abortion almost entirely.
Many, but not all, of these allow legal abortions in a variety of circumstances. These circumstances vary based on jurisdiction, but may include whether the pregnancy is a result of rape or incest, the fetus' development is impaired, the woman's physical or mental well-being is endangered, or socioeconomic considerations make childbirth a hardship. In countries where abortion is banned entirely, such as Nicaragua, medical authorities have recorded rises in maternal death directly and indirectly due to pregnancy, as well as deaths due to doctors' fears of prosecution if they treat other gynecological emergencies. Some countries that nominally ban abortion, such as Bangladesh, may also support clinics that perform abortions under the guise of menstrual hygiene, a term also used in traditional medicine. In places where abortion is illegal or carries heavy social stigma, pregnant women may engage in medical tourism and travel to countries where they can terminate their pregnancies. Women without the means to travel can resort to providers of illegal abortions or attempt to perform an abortion by themselves. The organization Women on Waves has been providing education about medical abortions since 1999. The NGO created a mobile medical clinic inside a shipping container, which then travels on rented ships to countries with restrictive abortion laws. Because the ships are registered in the Netherlands, Dutch law prevails when the ship is in international waters. While in port, the organization provides free workshops and education; while in international waters, medical personnel are legally able to prescribe medical abortion drugs and provide counseling.

Sex-selective abortion

Sonography and amniocentesis allow parents to determine sex before childbirth. The development of this technology has led to sex-selective abortion, or the termination of a fetus based on its sex. The selective termination of a female fetus is most common.
Sex-selective abortion is partially responsible for the noticeable disparities between the birth rates of male and female children in some countries. The preference for male children is reported in many areas of Asia, and abortion used to limit female births has been reported in Taiwan, South Korea, India, and China. This deviation from the standard birth rates of males and females occurs despite the fact that the country in question may have officially banned sex-selective abortion or even sex-screening. In China, a historical preference for a male child has been exacerbated by the one-child policy, which was enacted in 1979. Many countries have taken legislative steps to reduce the incidence of sex-selective abortion. At the International Conference on Population and Development in 1994, over 180 states agreed to eliminate "all forms of discrimination against the girl child and the root causes of son preference", conditions also condemned by a PACE resolution in 2011. The World Health Organization and UNICEF, along with other United Nations agencies, have found that measures to reduce access to abortion are much less effective at reducing sex-selective abortions than measures to reduce gender inequality.

Anti-abortion violence

In a number of cases, abortion providers and their facilities have been subjected to various forms of violence, including murder, attempted murder, kidnapping, stalking, assault, arson, and bombing. Anti-abortion violence is classified by both governmental and scholarly sources as terrorism. In the U.S. and Canada, over 8,000 incidents of violence, trespassing, and death threats have been recorded by providers since 1977, including over 200 bombings/arsons and hundreds of assaults. The majority of abortion opponents have not been involved in violent acts. In the United States, four physicians who performed abortions have been murdered: David Gunn (1993), John Britton (1994), Barnett Slepian (1998), and George Tiller (2009).
Also murdered, in the U.S. and Australia, have been other personnel at abortion clinics, including receptionists and security guards such as James Barrett, Shannon Lowney, Lee Ann Nichols, and Robert Sanderson. Woundings (e.g., Garson Romalis) and attempted murders have also taken place in the United States and Canada. Hundreds of bombings, arsons, acid attacks, invasions, and incidents of vandalism against abortion providers have occurred. Notable perpetrators of anti-abortion violence include Eric Robert Rudolph, Scott Roeder, Shelley Shannon, and Paul Jennings Hill, the first person to be executed in the United States for murdering an abortion provider. Legal protection of access to abortion has been introduced in some countries where abortion is legal. These laws typically seek to protect abortion clinics from obstruction, vandalism, picketing, and other actions, or to protect women and employees of such facilities from threats and harassment. Far more common than physical violence is psychological pressure. In 2003, Chris Danze organized anti-abortion organizations throughout Texas to prevent the construction of a Planned Parenthood facility in Austin. The organizations released the personal information of those involved with the construction online, subjecting them to up to 1,200 phone calls a day and contacting their churches. Some protesters film women entering clinics.

Non-human examples

Spontaneous abortion occurs in various animals. For example, in sheep it may be caused by stress or physical exertion, such as crowding through doors or being chased by dogs. In cows, abortion may be caused by contagious disease, such as brucellosis or Campylobacter, but can often be controlled by vaccination. Eating pine needles can also induce abortions in cows. Several plants, including broomweed, skunk cabbage, poison hemlock, and tree tobacco, are known to cause fetal deformities and abortion in cattle and in sheep and goats.
In horses, a fetus may be aborted or resorbed if it has lethal white syndrome (congenital intestinal aganglionosis). Foal embryos that are homozygous for the dominant white gene (WW) are theorized to also be aborted or resorbed before birth. In many species of sharks and rays, stress-induced abortions occur frequently on capture. Viral infection can cause abortion in dogs. Cats can experience spontaneous abortion for many reasons, including hormonal imbalance. A combined abortion and spaying is performed on pregnant cats, especially in trap–neuter–return programs, to prevent unwanted kittens from being born. Female rodents may terminate a pregnancy when exposed to the smell of a male not responsible for the pregnancy, known as the Bruce effect. Abortion may also be induced in animals, in the context of animal husbandry. For example, abortion may be induced in mares that have been mated improperly, or that have been purchased by owners who did not realize the mares were pregnant, or that are pregnant with twin foals. Feticide can occur in horses and zebras due to male harassment of pregnant mares or forced copulation, although the frequency in the wild has been questioned. Male gray langur monkeys may attack females following male takeover, causing miscarriage.
https://en.wikipedia.org/wiki/Abstract%20%28law%29
Abstract (law)
In law, an abstract is a brief statement that contains the most important points of a long legal document or of several related legal papers.

Abstract of title

The abstract of title, used in real estate transactions, is the more common form of abstract. An abstract of title lists all the owners of a piece of land, a house, or a building before it came into possession of the present owner. The abstract also records all deeds, wills, mortgages, and other documents that affect ownership of the property. An abstract describes a chain of transfers from owner to owner and any agreements by former owners that are binding on later owners.

Patent law

In the context of patent law, and specifically in prior art searches, searching through abstracts is a common way to find relevant prior art documents with which to question the novelty or inventive step (or non-obviousness in United States patent law) of an invention. Under United States patent law, the abstract may be called an "Abstract of the Disclosure".
https://en.wikipedia.org/wiki/American%20Revolutionary%20War
American Revolutionary War
The American Revolutionary War (April 19, 1775 – September 3, 1783), also known as the Revolutionary War or American War of Independence, secured a United States of America independent from Great Britain. Fighting began on April 19, 1775, followed by the Declaration of Independence on July 4, 1776. The American Patriots were supported by France and Spain, with conflict taking place in North America, the Caribbean, and the Atlantic Ocean. It ended on September 3, 1783, when Britain accepted American independence in the Treaty of Paris, while the Treaties of Versailles resolved separate conflicts with France and Spain. Established by Royal charter in the 17th and 18th centuries, the American colonies were largely autonomous in domestic affairs and commercially prosperous, trading with Britain and its Caribbean colonies, as well as other European powers via their Caribbean entrepôts. After British victory in the Seven Years' War in 1763, tensions arose over trade, colonial policy in the Northwest Territory and taxation measures, including the Stamp Act and Townshend Acts. Colonial opposition led to the 1770 Boston Massacre and 1773 Boston Tea Party, with Parliament responding by imposing the so-called Intolerable Acts. Established on September 5, 1774, the First Continental Congress drafted a Petition to the King and organized a boycott of British goods. Despite attempts to achieve a peaceful solution, fighting began with the Battle of Lexington on April 19, 1775, and in June Congress authorized George Washington to create a Continental Army. Although the "coercion policy" advocated by the North ministry was opposed by a faction within Parliament, both sides increasingly viewed conflict as inevitable. The Olive Branch Petition sent by Congress to George III in July 1775 was rejected, and in August Parliament declared the colonies to be in a state of rebellion.
Following the loss of Boston in March 1776, Sir William Howe, the new British commander-in-chief, launched the New York and New Jersey campaign. He captured New York City in November, before Washington won small but significant victories at Trenton and Princeton, which restored Patriot confidence. In summer 1777, Howe succeeded in taking Philadelphia, but in October a separate force under John Burgoyne was forced to surrender at Saratoga. This victory was crucial in convincing powers like France and Spain that an independent United States was a viable entity. France had provided the US with informal economic and military support from the beginning of the rebellion, and after Saratoga the two countries signed a commercial agreement and a Treaty of Alliance in February 1778. In return for a guarantee of independence, Congress joined France in its global war with Britain and agreed to defend the French West Indies. Spain also allied with France against Britain in the Treaty of Aranjuez (1779), though it did not formally ally with the Americans. Nevertheless, access to ports in Spanish Louisiana allowed the Patriots to import arms and supplies, while the Spanish Gulf Coast campaign deprived the Royal Navy of key bases in the south. This undermined the 1778 strategy devised by Howe's replacement, Sir Henry Clinton, which took the war into the Southern United States. Despite some initial success, by September 1781 General Charles Cornwallis was besieged by a Franco-American force at Yorktown. After an attempt to resupply the garrison failed, Cornwallis surrendered in October, and although the British wars with France and Spain continued for another two years, this ended fighting in North America. In April 1782, the North ministry was replaced by a new British government which accepted American independence and began negotiating the Treaty of Paris, ratified on September 3, 1783.
Prelude to revolution

The French and Indian War, part of the wider global conflict known as the Seven Years' War, ended with the 1763 Peace of Paris, which expelled France from its possessions in New France. Acquisition of territories in Atlantic Canada and West Florida, inhabited largely by French- or Spanish-speaking Catholics, led the British authorities to consolidate their hold by populating them with English-speaking settlers. Preventing conflict between settlers and Native American tribes west of the Appalachian Mountains would also avoid the cost of an expensive military occupation. The Proclamation Line of 1763 was designed to achieve these aims by refocusing colonial expansion north into Nova Scotia and south into Florida, with the Mississippi River as the dividing line between British and Spanish possessions in the Americas. Settlement beyond the 1763 limits was tightly restricted, while claims by individual colonies west of this line were rescinded, most significantly those of Virginia and Massachusetts, which argued that their boundaries extended from the Atlantic to the Pacific. Ultimately the vast exchange of territory destabilized existing alliances and trade networks between settlers and Native Americans in the west, while it proved impossible to prevent encroachment beyond the Proclamation Line. With the exception of Virginia and others "deprived" of their rights in the western lands, the colonial legislatures generally agreed on the principle of boundaries but disagreed on where to set them, while many settlers resented the restrictions. Since enforcement required permanent garrisons along the frontier, it led to increasingly bitter disputes over who should pay for them.

Taxation and legislation

Although directly administered by the Crown, acting through a local Governor, the colonies were largely governed by native-born property owners.
While external affairs were managed by London, colonial militia were funded locally, but with the ending of the French threat in 1763 the legislatures expected less taxation, not more. At the same time, the huge debt incurred by the Seven Years' War and demands from British taxpayers for cuts in government expenditure meant Parliament expected the colonies to fund their own defense. The 1763 to 1765 Grenville ministry instructed the Royal Navy to intercept smuggled goods and enforce customs duties levied in American ports. The most important was the 1733 Molasses Act; routinely ignored prior to 1763, it had a significant economic impact since 85% of New England rum exports were manufactured from imported molasses. These measures were followed by the Sugar Act and Stamp Act, which imposed additional taxes on the colonies to pay for defending the western frontier. In July 1765, the Whigs formed the First Rockingham ministry, which repealed the Stamp Act and reduced tax on foreign molasses to help the New England economy, but re-asserted Parliamentary authority in the Declaratory Act. However, this did little to end the discontent; in 1768, a riot started in Boston when the authorities seized the sloop Liberty on suspicion of smuggling. Tensions escalated further in March 1770 when British troops fired on rock-throwing civilians, killing five in what became known as the Boston Massacre. The Massacre coincided with the partial repeal of the Townshend Acts by the Tory-based North Ministry, which came to power in January 1770 and remained in office until 1781. North insisted on retaining the duty on tea to enshrine Parliament's right to tax the colonies; the amount was minor, but this ignored the fact that it was that very principle which Americans found objectionable. Tensions escalated following the destruction of a customs vessel in the June 1772 Gaspee Affair, then came to a head in 1773.
A banking crisis led to the near-collapse of the East India Company, which dominated the British economy; to support it, Parliament passed the Tea Act, giving it a trading monopoly in the Thirteen Colonies. Since most American tea was smuggled by the Dutch, the Act was opposed by those who managed the illegal trade, while being seen as yet another attempt to impose the principle of taxation by Parliament. In December 1773, a group called the Sons of Liberty, disguised as Mohawk natives, dumped 342 crates of tea into Boston Harbor, an event later known as the Boston Tea Party. Parliament responded by passing the so-called Intolerable Acts, aimed specifically at Massachusetts, although many colonists and members of the Whig opposition considered them a threat to liberty in general. This led to increased sympathy for the Patriot cause locally, as well as in Parliament and the London press.

Break with the British Crown

Over the course of the 18th century, the elected lower houses in the colonial legislatures gradually wrested power from their Royal Governors. Dominated by smaller landowners and merchants, these Assemblies now established ad hoc provincial legislatures, variously called Congresses, Conventions, and Conferences, effectively replacing Royal control. With the exception of Georgia, twelve colonies sent representatives to the First Continental Congress to agree on a unified response to the crisis. Many of the delegates feared that an all-out boycott would result in war and sent a Petition to the King calling for the repeal of the Intolerable Acts. However, after some debate, on September 17, 1774, Congress endorsed the Massachusetts Suffolk Resolves and on October 20 passed the Continental Association; based on a draft prepared by the First Virginia Convention in August, this instituted economic sanctions against Britain.
While denying Parliament's authority over internal American affairs, a faction led by James Duane and future Loyalist Joseph Galloway insisted Congress recognize its right to regulate colonial trade. Expecting concessions by the North administration, Congress authorized the extralegal committees and conventions of the colonial legislatures to enforce the boycott; this succeeded in reducing British imports by 97% from 1774 to 1775. However, on February 9 Parliament declared Massachusetts to be in a state of rebellion and instituted a blockade of the colony. In July, the Restraining Acts limited colonial trade with the British West Indies and Britain and barred New England ships from the Newfoundland cod fisheries. The increase in tension led to a scramble for control of militia stores, which each Assembly was legally obliged to maintain for defense. On April 19, a British attempt to secure the Concord arsenal culminated in the Battles of Lexington and Concord, which began the war.

Political reactions

After the Patriot victory at Concord, moderates in Congress led by John Dickinson drafted the Olive Branch Petition, offering to accept royal authority in return for George III mediating in the dispute. However, since it was immediately followed by the Declaration of the Causes and Necessity of Taking Up Arms, Colonial Secretary Dartmouth viewed the offer as insincere; he refused to present the petition to the king, which was therefore rejected in early September. Although constitutionally correct, since George could not oppose his own government, the refusal disappointed those Americans who hoped he would mediate in the dispute, while the hostility of his language annoyed even Loyalist members of Congress. Combined with the Proclamation of Rebellion, issued on August 23 in response to the Battle of Bunker Hill, it ended hopes of a peaceful settlement.
Backed by the Whigs, Parliament initially rejected the imposition of coercive measures by 170 votes, fearing an aggressive policy would simply drive the Americans towards independence. However, by the end of 1774 the collapse of British authority meant both North and George III were convinced war was inevitable. After the fighting around Boston, Gage halted operations and awaited reinforcements; the Irish Parliament approved the recruitment of new regiments, while allowing Catholics to enlist for the first time. Britain also signed a series of treaties with German states to supply additional troops. Within a year it had an army of over 32,000 men in America, the largest ever sent outside Europe at the time. The employment of German mercenaries and Catholics against people viewed as British citizens was opposed by many in Parliament, as well as by the colonial assemblies; combined with the lack of activity by Gage, it allowed the Patriots to take control of the legislatures. Support for independence was boosted by Thomas Paine's pamphlet Common Sense, which argued for American self-government and was widely reprinted. To draft the Declaration of Independence, Congress appointed the Committee of Five, consisting of Thomas Jefferson, John Adams, Benjamin Franklin, Roger Sherman and Robert Livingston. Identifying the inhabitants of the Thirteen Colonies as "one people", it simultaneously dissolved political links with Britain, while including a long list of alleged violations of "English rights" committed by George III. On July 2, Congress voted for independence and published the declaration on July 4, which Washington read to his troops in New York City on July 9. At this point, the Revolution ceased to be an internal dispute over trade and tax policies and became a civil war, since each state represented in Congress was engaged in a struggle with Britain, but also split between Patriots and Loyalists.
Patriots generally supported independence from Britain and a new national union in Congress, while Loyalists remained faithful to British rule. Estimates of numbers vary, one suggestion being that the population as a whole was split evenly between committed Patriots, committed Loyalists and those who were indifferent. Others calculate the split as 40% Patriot, 40% neutral, 20% Loyalist, but with considerable regional variations. At the onset of the war, Congress realized defeating Britain required foreign alliances and intelligence-gathering. The Committee of Secret Correspondence was formed for "the sole purpose of corresponding with our friends in Great Britain and other parts of the world". From 1775 to 1776, it shared information and built alliances through secret correspondence, as well as employing secret agents in Europe to gather intelligence, conduct undercover operations, analyze foreign publications and initiate Patriot propaganda campaigns. Paine served as secretary, while Silas Deane was instrumental in securing French aid in Paris.

War breaks out

As the American Revolutionary War unfolded in North America, there were two principal campaign theaters within the thirteen states, and a smaller but strategically important one west of the Appalachian Mountains to the Mississippi River and north to the Great Lakes. Full-scale military campaigning began in the states north of Maryland, and fighting was most frequent and most severe there between 1775 and 1778. Patriots achieved several strategic victories in the South, the British lost their first army at Saratoga, and the French entered the war as an American ally. After wintering at Valley Forge in the expanded Northern theater, General Washington observed British operations coming out of New York at the 1778 Battle of Monmouth. He then closed off British initiatives with a series of raids that contained the British army in New York City.
The same year, the Spanish-supplied Virginia Colonel George Rogers Clark, joined by Francophone settlers and their Indian allies, conquered Western Quebec, the US Northwest Territory. Starting in 1779, the British initiated a southern strategy to begin at Savannah, gather Loyalist support, and reoccupy Patriot-controlled territory north to Chesapeake Bay. Initially the British were successful, and the Americans lost an entire army at the siege of Charleston, which caused a severe setback for Patriots in the region. But then British maneuvering north led to a combined American and French force cornering a second British army at the Battle of Yorktown, and their surrender effectively ended the Revolutionary War.

Early engagements

On April 14, 1775, General Thomas Gage, Commander-in-Chief, North America since 1763 and also Governor of Massachusetts from 1774, received orders to take action against the Patriots. He decided to destroy militia ordnance stored at Concord, Massachusetts, and capture John Hancock and Samuel Adams, who were considered the principal instigators of the rebellion. The operation was to begin around midnight on April 19, in the hope of completing it before the Patriots could respond. However, Paul Revere learned of the plan and notified Captain Parker, commander of the Lexington militia, who prepared to resist the attempted seizure. British troops clashed with colonial forces at Lexington and Concord, suffering around 300 casualties before withdrawing to Boston, which was then besieged by the militia. In May, 4,500 British reinforcements arrived under Generals William Howe, John Burgoyne, and Sir Henry Clinton. On June 17, they seized the Charlestown Peninsula at the Battle of Bunker Hill, a frontal assault in which they suffered over 1,000 casualties. Dismayed at the costly attack which had gained them little, Gage appealed to London for a larger army to suppress the revolt, but instead was replaced as commander by Howe.
On June 14, 1775, Congress took control of Patriot forces outside Boston, and Congressional leader John Adams nominated George Washington as commander-in-chief of the new Continental Army. Washington had previously commanded Virginia militia regiments in the French and Indian War, and on June 16, John Hancock officially proclaimed him "General and Commander in Chief of the army of the United Colonies." Washington assumed command on July 3, preferring to fortify Dorchester Heights outside Boston rather than assaulting the city. In early March 1776, Colonel Henry Knox arrived with heavy artillery acquired in the Capture of Fort Ticonderoga. Under cover of darkness, on March 5 Washington placed these guns on Dorchester Heights, from where they could fire on the town and British ships in Boston Harbor. Fearing another Bunker Hill, Howe evacuated the city on March 17 without further loss and sailed to Halifax, Nova Scotia, while Washington moved south to New York City. Beginning in August 1775, American privateers raided towns in Nova Scotia, including Saint John, Charlottetown and Yarmouth. In 1776, John Paul Jones and Jonathan Eddy attacked Canso and Fort Cumberland respectively. British officials in Quebec began negotiating with the Iroquois for their support, while the Americans urged them to maintain neutrality. Aware of Native American leanings toward the British and fearing an Anglo-Indian attack from Canada, Congress authorized an invasion of Quebec in April 1775. A second American invasion was defeated at the Battle of Quebec on December 31, and after a loose siege the Americans withdrew on May 6, 1776. A failed counter-attack at Trois-Rivières on June 8 ended American operations in Quebec. However, British pursuit was blocked by American ships on Lake Champlain until they were cleared on October 11 at the Battle of Valcour Island. The American troops were forced to withdraw to Fort Ticonderoga, ending the campaign.
In November 1776, a Massachusetts-sponsored uprising in Nova Scotia during the Battle of Fort Cumberland was dispersed. The cumulative failures cost the Patriots support in local public opinion, and aggressive anti-Loyalist policies in the New England colonies alienated the Canadians. The Patriots made no further attempts to invade north. In Virginia, an attempt by Governor Lord Dunmore to seize militia stores on April 20, 1775, led to an increase in tension, although conflict was avoided for the time being. This changed after the publication of Dunmore's Proclamation on November 7, 1775, promising freedom to any slaves who fled their Patriot masters and agreed to fight for the Crown. British forces were defeated at Great Bridge on December 9 and took refuge on British ships anchored near the port of Norfolk. When the Third Virginia Convention refused to disband its militia or accept martial law, Dunmore ordered the Burning of Norfolk on January 1, 1776. The siege of Savage's Old Fields began on November 19 in South Carolina between Loyalist and Patriot militias, and the Loyalists were subsequently driven out of the colony in the Snow Campaign. Loyalists were recruited in North Carolina to reassert British rule in the South, but they were decisively defeated in the Battle of Moore's Creek Bridge. A British expedition sent to reconquer South Carolina launched an attack on Charleston in the Battle of Sullivan's Island on June 28, 1776, but it failed and left the South under Patriot control until 1780. A shortage of gunpowder led Congress to authorize a naval expedition against The Bahamas to secure ordnance stored there. On March 3, 1776, an American squadron landed at Nassau and encountered minimal resistance, confiscating what supplies they could before sailing for home on March 17. A month later, after a brief skirmish with a British warship, they returned to New London, Connecticut, the base for American naval operations during the Revolution.
British New York counter-offensive
After regrouping at Halifax, Nova Scotia, William Howe was determined to take the fight to the Americans. He sailed for New York in June 1776 and began landing troops on Staten Island near the entrance to New York Harbor on July 2. The Americans rejected Howe's informal attempt to negotiate peace on July 30; Washington knew that an attack on the city was imminent and realized that he needed advance information to deal with disciplined British regular troops. On August 12, 1776, Patriot Thomas Knowlton was given orders to form an elite group for reconnaissance and secret missions. Knowlton's Rangers, which included Nathan Hale, became the Army's first intelligence unit. When Washington was driven off Long Island, he soon realized that he would need more than military might and amateur spies to defeat the British. He was committed to professionalizing military intelligence, and with the aid of Benjamin Tallmadge he launched the six-man Culper spy ring. The efforts of Washington and the Culper spy ring substantially increased the effective allocation and deployment of Continental regiments in the field. Over the course of the war, Washington spent more than 10 percent of his total military funds on intelligence operations. Washington split his army into positions on Manhattan Island and across the East River in western Long Island. On August 27, at the Battle of Long Island, Howe outflanked Washington and forced him back to Brooklyn Heights, but he did not attempt to encircle Washington's forces. Through the night of August 28, General Henry Knox bombarded the British. Knowing they were up against overwhelming odds, Washington ordered the assembly of a war council on August 29; all agreed to retreat to Manhattan. Washington quickly had his troops assembled and ferried them across the East River to Manhattan on flat-bottomed freight boats without any losses in men or ordnance, leaving General Thomas Mifflin's regiments as a rearguard.
General Howe officially met with a delegation from Congress at the September Staten Island Peace Conference, but it failed to conclude peace as the British delegates only had the authority to offer pardons and could not recognize independence. On September 15, Howe seized control of New York City when the British landed at Kip's Bay and unsuccessfully engaged the Americans at the Battle of Harlem Heights the following day. On October 18, Howe failed to encircle the Americans at the Battle of Pell's Point, and the Americans withdrew. Howe declined to close with Washington's army on October 28 at the Battle of White Plains, and instead attacked a hill that was of no strategic value. Washington's retreat isolated his remaining forces, and the British captured Fort Washington on November 16. The British victory there amounted to Washington's most disastrous defeat of the war, with the loss of 3,000 prisoners. The remaining American regiments on Long Island fell back four days later. General Sir Henry Clinton wanted to pursue Washington's disorganized army, but he was first required to commit 6,000 troops to capture Newport, Rhode Island, to secure the Loyalist port. General Charles Cornwallis pursued Washington, but Howe ordered him to halt, leaving Washington unmolested. The outlook was bleak for the American cause: the army had dwindled to fewer than 5,000 men and would be reduced further when enlistments expired at the end of the year. Popular support wavered, morale declined, and Congress abandoned Philadelphia and moved to Baltimore. Loyalist activity surged in the wake of the American defeat, especially in New York state. In London, news of the victorious Long Island campaign was well received, with festivities held in the capital. Public support reached a peak, and King George III awarded the Order of the Bath to Howe.
Strategic deficiencies among Patriot forces were evident: Washington divided a numerically weaker army in the face of a stronger one, his inexperienced staff misread the military situation, and American troops fled in the face of enemy fire. The successes led to predictions that the British could win within a year. In the meantime, the British established winter quarters in the New York City area and anticipated renewed campaigning the following spring. Two weeks after Congress withdrew to safer Maryland, Washington crossed the ice-choked Delaware River about 30 miles upriver from Philadelphia on the night of December 25–26, 1776. His approach over frozen trails surprised Hessian Colonel Johann Rall. The Continentals overwhelmed the Hessian garrison at Trenton, New Jersey, and took 900 prisoners. The celebrated victory rescued the American army's flagging morale, gave new hope to the Patriot cause, and dispelled much of the fear of professional Hessian "mercenaries". Cornwallis marched to retake Trenton but was repulsed at the Battle of the Assunpink Creek; on the night of January 2, Washington outmaneuvered Cornwallis and defeated his rearguard in the Battle of Princeton the following day. The two victories helped to convince the French that the Americans were worthy military allies. Washington entered winter quarters at Morristown, New Jersey, from January to May 1777, where he received the Congressional direction to inoculate all Continental troops against smallpox. Although a Forage War between the armies continued until March, Howe did not attempt to attack the Americans over the winter of 1776–1777.
British northern strategy fails
The 1776 campaign demonstrated that regaining New England would be a prolonged affair, which led to a change in British strategy. This involved isolating the north from the rest of the country by taking control of the Hudson River, allowing the British to focus on the south, where Loyalist support was believed to be substantial.
In December 1776, Howe wrote to the Colonial Secretary Lord Germain, proposing a limited offensive against Philadelphia while a second force moved down the Hudson from Canada. Germain received this on February 23, 1777, followed a few days later by a memorandum from Burgoyne, then in London on leave. Burgoyne supplied several alternatives, all of which gave him responsibility for the offensive, with Howe remaining on the defensive. The option selected required him to lead the main force south from Montreal down the Hudson Valley, while a detachment under Barry St. Leger moved east from Lake Ontario. The two would meet at Albany, leaving Howe to decide whether to join them. Reasonable in principle, the plan did not account for the logistical difficulties involved, and Burgoyne erroneously assumed Howe would remain on the defensive; Germain's failure to make this clear meant Howe opted to attack Philadelphia instead. Burgoyne set out on June 14, 1777, with a mixed force of British regulars, German auxiliaries and Canadian militia, and captured Fort Ticonderoga on July 5. As the Americans retreated, they blocked roads, destroyed bridges, dammed streams, and stripped the area of food. This slowed Burgoyne's progress and forced him to send out large foraging expeditions; on one of these, more than 700 British troops were captured at the Battle of Bennington on August 16. St. Leger moved east and besieged Fort Stanwix; despite defeating an American relief force at the Battle of Oriskany on August 6, he was abandoned by his Indian allies and withdrew to Quebec on August 22. Now isolated and outnumbered by General Horatio Gates, Burgoyne continued on toward Albany rather than retreating to Fort Ticonderoga, reaching Saratoga on September 13. He asked Clinton for support while constructing defenses around the town. Morale among his troops rapidly declined, and an unsuccessful attempt to break past Gates at the Battle of Freeman's Farm on September 19 resulted in 600 British casualties.
When Clinton advised that he could not reach them, Burgoyne's subordinates urged retreat; a reconnaissance in force on October 7 was repulsed by Gates at the Battle of Bemis Heights, forcing the British back into Saratoga with heavy losses. By October 11, all hope of escape had vanished; persistent rain reduced the camp to a "squalid hell" of mud and starving cattle, supplies were dangerously low, and many of the wounded were in agony. Burgoyne capitulated on October 17; around 6,222 soldiers, including German forces commanded by General Riedesel, surrendered their arms before being taken to Boston, where they were to be transported to England. After securing additional supplies, Howe made another attempt on Philadelphia by landing his troops in Chesapeake Bay on August 24. He now compounded his failure to support Burgoyne by missing repeated opportunities to destroy his opponent: he defeated Washington at the Battle of Brandywine on September 11, then allowed him to withdraw in good order. After dispersing an American detachment at Paoli on September 20, Cornwallis occupied Philadelphia on September 26, with the main force of 9,000 under Howe based just to the north at Germantown. Washington attacked them on October 4 but was repulsed. To prevent Howe's forces in Philadelphia from being resupplied by sea, the Patriots erected Fort Mifflin and nearby Fort Mercer on the east and west banks of the Delaware respectively, and placed obstacles in the river south of the city. This was supported by a small flotilla of Continental Navy ships on the Delaware, supplemented by the Pennsylvania State Navy, commanded by John Hazelwood. An attempt by the Royal Navy to take the forts in the October 20 to 22 Battle of Red Bank failed; a second attack captured Fort Mifflin on November 16, while Fort Mercer was abandoned two days later when Cornwallis breached the walls.
His supply lines secured, Howe tried to tempt Washington into giving battle, but after inconclusive skirmishing at the Battle of White Marsh from December 5 to 8, he withdrew to Philadelphia for the winter. On December 19, the Americans followed suit and entered winter quarters at Valley Forge; while Washington's domestic opponents contrasted his lack of battlefield success with Gates' victory at Saratoga, foreign observers such as Frederick the Great were equally impressed with Germantown, which demonstrated resilience and determination. Over the winter, poor conditions, supply problems and low morale resulted in 2,000 deaths, with another 3,000 unfit for duty due to lack of shoes. However, Baron Friedrich Wilhelm von Steuben took the opportunity to introduce Prussian Army drill and infantry tactics to the entire Continental Army; he did this by training "model companies" in each regiment, who then instructed their home units. Despite Valley Forge being only twenty miles away, Howe made no effort to attack the American camp, though some critics argue such an attack could have ended the war.
Foreign intervention
Like his predecessors, French foreign minister Vergennes considered the 1763 Peace a national humiliation and viewed the war as an opportunity to weaken Britain. He initially avoided open conflict, but allowed American ships to take on cargoes in French ports, a technical violation of neutrality. Although public opinion favored the American cause, Finance Minister Turgot argued the Americans did not need French help to gain independence and that war was too expensive. Instead, Vergennes persuaded Louis XVI to secretly fund a government front company to purchase munitions for the Patriots, carried in neutral Dutch ships and imported through Sint Eustatius in the Caribbean. Many Americans opposed a French alliance, fearing to "exchange one tyranny for another", but this changed after a series of military setbacks in early 1776.
As France had nothing to gain from the colonies reconciling with Britain, Congress had three choices: making peace on British terms, continuing the struggle on their own, or proclaiming independence, guaranteed by France. Although the Declaration of Independence in July 1776 had wide public support, Adams was among those reluctant to pay the price of an alliance with France, and over 20% of Congressmen voted against it. Congress agreed to the treaty with reluctance and, as the war moved in their favor, increasingly lost interest in it. Silas Deane was sent to Paris to begin negotiations with Vergennes, whose key objectives were replacing Britain as the United States' primary commercial and military partner while securing the French West Indies from American expansion. These islands were extremely valuable; in 1772, the value of sugar and coffee produced by Saint-Domingue on its own exceeded that of all American exports combined. Talks progressed slowly until October 1777, when the British defeat at Saratoga and their apparent willingness to negotiate peace convinced Vergennes that only a permanent alliance could prevent the "disaster" of Anglo-American rapprochement. Assurances of formal French support allowed Congress to reject the Carlisle Peace Commission and insist on nothing short of complete independence. On February 6, 1778, France and the United States signed the Treaty of Amity and Commerce regulating trade between the two countries, followed by a defensive military alliance against Britain, the Treaty of Alliance. In return for French guarantees of American independence, Congress undertook to defend French interests in the West Indies, while both sides agreed not to make a separate peace; conflict over these provisions would lead to the 1798 to 1800 Quasi-War. Charles III of Spain was invited to join on the same terms but refused, largely due to concerns over the impact of the Revolution on Spanish colonies in the Americas.
Spain had complained on multiple occasions about encroachment by American settlers into Louisiana, a problem that could only get worse once the United States replaced Britain. Although Spain ultimately made important contributions to American success, in the Treaty of Aranjuez (1779) Charles agreed only to support France's war with Britain outside America, in return for help in recovering Gibraltar, Menorca and Spanish Florida. The terms were confidential since several conflicted with American aims; for example, the French claimed exclusive control of the Newfoundland cod fisheries, a non-negotiable issue for colonies like Massachusetts. One less well-known impact of this agreement was the abiding American distrust of 'foreign entanglements'; the US would not sign another treaty until the NATO agreement in 1949. This was because the US had agreed not to make peace without France, while Aranjuez committed France to keep fighting until Spain recovered Gibraltar, effectively making it a condition of US independence without the knowledge of Congress. To encourage French participation in the struggle for independence, the US representative in Paris, Silas Deane, promised promotion and command positions to any French officer who joined the Continental Army. Although many proved incompetent, one outstanding exception was Gilbert du Motier, Marquis de Lafayette, whom Congress appointed a major general. In addition to his military ability, Lafayette showed considerable political skill in building support for Washington among his officers and within Congress, liaising with French army and naval commanders, and promoting the Patriot cause in France. When the war started, Britain tried to borrow the Dutch-based Scots Brigade for service in America, but pro-Patriot sentiment led the States General to refuse.
Although the Republic was no longer a major power, prior to 1774 the Dutch still dominated the European carrying trade, and Dutch merchants made large profits shipping French-supplied munitions to the Patriots. This ended when Britain declared war in December 1780, a conflict that proved disastrous to the Dutch economy. The Dutch were also excluded from the First League of Armed Neutrality, formed by Russia, Sweden and Denmark in March 1780 to protect neutral shipping from being stopped and searched for contraband by Britain and France. The British government failed to take into account the strength of the American merchant marine and support from European countries, which allowed the colonies to import munitions and continue trading with relative impunity. While well aware of this, the North administration delayed placing the Royal Navy on a war footing for cost reasons; this prevented the institution of an effective blockade and restricted the British to ineffectual diplomatic protests. Traditional British policy was to employ European land-based allies to divert the opposition, a role filled by Prussia in the Seven Years' War; in 1778, Britain was diplomatically isolated and faced war on multiple fronts. Meanwhile, George III had given up on subduing America while Britain had a European war to fight. He did not welcome war with France, but he saw Britain's victories over France in the Seven Years' War as reason to believe in ultimate victory. Unable to find a powerful ally among the Great Powers to engage France on the European continent, Britain shifted its focus to the Caribbean theater and diverted major military resources away from America. A colleague of Vergennes summed up the French position: "For her honour, France had to seize this opportunity to rise from her degradation... If she neglected it, if fear overcame duty, she would add debasement to humiliation, and become an object of contempt to her own century and to all future peoples."
Stalemate in the North
At the end of 1777, Howe resigned and was replaced by Sir Henry Clinton on May 24, 1778; with French entry into the war, he was ordered to consolidate his forces in New York. On June 18, the British departed Philadelphia with the reinvigorated Americans in pursuit; the Battle of Monmouth on June 28 was inconclusive but boosted Patriot morale. Washington had rallied Charles Lee's broken regiments, the Continentals repulsed British bayonet charges, the British rear guard suffered perhaps 50 percent greater casualties, and the Americans held the field at the end of the day. That midnight, the newly installed Clinton continued his retreat to New York. A French naval force under Admiral Charles Henri Hector d'Estaing was sent to assist Washington; deciding New York was too formidable a target, in August they launched a combined attack on Newport, with General John Sullivan commanding land forces. The resulting Battle of Rhode Island was indecisive; badly damaged by a storm, the French withdrew to avoid putting their ships at risk. Further activity was limited to British raids on Chestnut Neck and Little Egg Harbor in October. In July 1779, the Americans captured British positions at Stony Point and Paulus Hook. Clinton unsuccessfully tried to tempt Washington into a decisive engagement by sending General William Tryon to raid Connecticut. In July, a large American naval operation, the Penobscot Expedition, attempted to retake Maine, then part of Massachusetts, but was defeated. Persistent Iroquois raids along the border with Quebec led to the punitive Sullivan Expedition in April 1779, which destroyed many settlements but failed to stop the raids. During the winter of 1779–1780, the Continental Army suffered greater hardships than at Valley Forge.
Morale was poor, public support fell away in the long war, the Continental dollar was virtually worthless, the army was plagued with supply problems, desertion was common, and mutinies occurred in the Pennsylvania Line and New Jersey Line regiments over conditions in early 1780. In June 1780, Clinton sent 6,000 men under Wilhelm von Knyphausen to retake New Jersey, but they were halted by local militia at the Battle of Connecticut Farms; although the Americans withdrew, Knyphausen felt he was not strong enough to engage Washington's main force and retreated. A second attempt two weeks later ended in a British defeat at the Battle of Springfield, effectively ending British ambitions in New Jersey. In July, Washington appointed Benedict Arnold commander of West Point; Arnold's attempt to betray the fort to the British failed due to incompetent planning, and the plot was revealed when his British contact John André was captured and later executed. Arnold escaped to New York and switched sides, an action he justified in a pamphlet addressed "To the Inhabitants of America"; the Patriots condemned his betrayal, while he found himself almost as unpopular with the British. The war to the west of the Appalachians was largely confined to skirmishing and raids. In February 1778, an expedition of militia to destroy British military supplies in settlements along the Cuyahoga River was halted by adverse weather. Later in the year, a second campaign was undertaken to seize the Illinois Country from the British. Virginia militia, Canadien settlers, and Indian allies commanded by Colonel George Rogers Clark captured Kaskaskia on July 4 and then secured Vincennes, though Vincennes was soon recaptured by Quebec Governor Henry Hamilton. In early 1779, the Virginians counterattacked in the siege of Fort Vincennes and took Hamilton prisoner. Clark's campaign secured western British Quebec for the United States as the Northwest Territory in the Treaty of Paris that concluded the war.
On May 25, 1780, British Colonel Henry Bird invaded Kentucky as part of a wider operation to clear American resistance from Quebec to the Gulf coast. The British advance from Pensacola toward New Orleans was preempted by Spanish Governor Gálvez's offensive against Mobile. Simultaneous British attacks on St. Louis and on the Virginia county courthouse at Cahokia were repulsed by Spanish Lieutenant Governor de Leyba and Lieutenant Colonel Clark respectively. Bird's initiative from Detroit was ended at the rumored approach of Clark. The scale of violence in the Licking River Valley, such as during the Battle of Blue Licks, was extreme "even for frontier standards". It led men from English and German settlements to join Clark's militia when the British and their auxiliaries withdrew to the Great Lakes. The Americans responded with a major offensive along the Mad River in August, which met with some success in the Battle of Piqua but did not end Indian raids. French soldier Augustin de La Balme led a Canadian militia in an attempt to capture Detroit, but they dispersed when Miami natives led by Little Turtle attacked the encamped settlers on November 5. The war in the west had become a stalemate, with the British garrison sitting in Detroit and the Virginians expanding westward settlements north of the Ohio River in the face of British-allied Indian resistance.
War in the South
The "Southern Strategy" was developed by Lord Germain, based on input from London-based Loyalists like Joseph Galloway. They argued it made no sense to fight the Patriots in the north, where they were strongest, while the New England economy was reliant on trade with Britain regardless of who governed it. On the other hand, duties on tobacco made the South far more profitable for Britain, while local support meant securing it would require only small numbers of regular troops.
Victory would leave a truncated United States facing British possessions in the south, Canada to the north, and Ohio on their western border; with the Atlantic seaboard controlled by the Royal Navy, Congress would be forced to agree to terms. However, assumptions about the level of Loyalist support proved wildly optimistic. Germain accordingly ordered Augustine Prévost, the British commander in East Florida, to advance into Georgia in December 1778. Lieutenant-Colonel Archibald Campbell, an experienced officer taken prisoner earlier in the war before being exchanged for Ethan Allen, captured Savannah on December 29, 1778. He recruited a Loyalist militia of nearly 1,100, many of whom allegedly joined only after Campbell threatened to confiscate their property. Poor motivation and training made them unreliable troops, as demonstrated in their defeat by Patriot militia at the Battle of Kettle Creek on February 14, 1779, although this was offset by British victory at Brier Creek on March 3. In June, Prévost launched an abortive assault on Charleston before retreating to Savannah, an operation notorious for widespread looting by British troops that enraged both Loyalists and Patriots. In October, a joint French and American operation under Admiral d'Estaing and General Benjamin Lincoln failed to recapture Savannah. Prévost was replaced by Lord Cornwallis, who assumed responsibility for Germain's strategy; he soon realized estimates of Loyalist support were considerably overstated, and that he needed far larger numbers of regular forces. Reinforced by Clinton, his troops captured Charleston in May 1780, inflicting the most serious Patriot defeat of the war; over 5,000 prisoners were taken and the Continental Army in the south was effectively destroyed. On May 29, Banastre Tarleton's mainly Loyalist force defeated 400 Americans at the Battle of Waxhaws; over 120 were killed, many allegedly after surrendering.
Responsibility is disputed: Loyalists claimed Tarleton was shot at while negotiating terms of surrender, but the Patriots later used the incident as a recruiting tool. Clinton returned to New York, leaving Cornwallis to oversee the south; despite their success, the two men were left barely on speaking terms, with dire consequences for the future conduct of the war. The Southern strategy depended on local support, but this was undermined by a series of coercive measures. Previously, captured Patriots had been sent home after swearing not to take up arms against the king; they were now required to fight their former comrades, while the confiscation of Patriot-owned plantations led formerly neutral "grandees" to side with the Patriots. Skirmishes at Williamson's Plantation, Cedar Springs, Rocky Mount, and Hanging Rock signaled widespread resistance to the new oaths throughout South Carolina. In July, Congress appointed General Horatio Gates commander in the south; he was defeated at the Battle of Camden on August 16, leaving Cornwallis free to enter North Carolina. Despite battlefield success, the British could not control the countryside and Patriot attacks continued; before moving north, Cornwallis sent Loyalist militia under Major Patrick Ferguson to cover his left flank, leaving their forces too far apart to provide mutual support. In early October, Ferguson was defeated at the Battle of Kings Mountain, dispersing organized Loyalist resistance in the region. Despite this, Cornwallis continued into North Carolina hoping for Loyalist support, while Washington replaced Gates with General Nathanael Greene in December 1780. Greene divided his army, leading his main force southeast pursued by Cornwallis; a detachment was sent southwest under Daniel Morgan, who defeated Tarleton's British Legion at Cowpens on January 17, 1781, nearly eliminating it as a fighting force.
The Patriots now held the initiative in the south, with the exception of a raid on Richmond led by Benedict Arnold in January 1781. Greene led Cornwallis on a series of countermarches around North Carolina; by early March, the British were exhausted and short of supplies, and Greene felt strong enough to fight the Battle of Guilford Court House on March 15. Although victorious, Cornwallis suffered heavy casualties and retreated to Wilmington, North Carolina, seeking supplies and reinforcements. The Patriots now controlled most of the Carolinas and Georgia outside the coastal areas; after a minor reversal at the Battle of Hobkirk's Hill, they recaptured Fort Watson and Fort Motte on April 15. On June 6, Brigadier General Andrew Pickens captured Augusta, leaving the British in Georgia confined to Charleston and Savannah. The assumption that Loyalists would do most of the fighting left the British short of troops, and battlefield victories came at the cost of losses they could not replace. Despite halting Greene's advance at the Battle of Eutaw Springs on September 8, the British withdrew to Charleston with little to show for the campaign.
Western campaign
When Spain joined France's war against Britain in 1779, their treaty specifically excluded Spanish military action in North America. However, from the beginning of the war, Bernardo de Gálvez, the Governor of Spanish Louisiana, allowed the Americans to import supplies and munitions into New Orleans, then ship them to Pittsburgh. This provided an alternative transportation route for the Continental Army, bypassing the British blockade of the Atlantic Coast. The trade was organized by Oliver Pollock, a successful merchant in Havana and New Orleans who was appointed US "commercial agent". It also helped support the American campaign in the west; in the 1778 Illinois campaign, militia under General George Rogers Clark cleared the British from what was then part of Quebec, creating Illinois County, Virginia.
Despite official neutrality, Gálvez initiated offensive operations against British outposts. First, he cleared British garrisons in Baton Rouge, Louisiana, Fort Bute, and Natchez, Mississippi, capturing five forts. In doing so, Gálvez opened navigation on the Mississippi River north to the American settlement at Pittsburgh. In 1781, Gálvez and Pollock campaigned east along the Gulf Coast to secure West Florida, including British-held Mobile and Pensacola. The Spanish operations crippled the British supply of armaments to their Indian allies, which effectively suspended a military alliance to attack settlers between the Mississippi River and the Appalachian Mountains.
British defeat in the United States
Clinton spent most of 1781 based in New York City; he failed to construct a coherent operational strategy, partly due to his difficult relationship with Admiral Marriot Arbuthnot. In Charleston, Cornwallis independently developed an aggressive plan for a campaign in Virginia, which he hoped would isolate Greene's army in the Carolinas and cause the collapse of Patriot resistance in the South. This was approved by Lord Germain in London, but neither of them informed Clinton. Washington and Rochambeau now discussed their options; the former wanted to attack New York, the latter Virginia, where Cornwallis' forces were less well established and thus easier to defeat. Washington eventually gave way, and Lafayette took a combined Franco-American force into Virginia, but Clinton misinterpreted his movements as preparations for an attack on New York. Concerned by this threat, he instructed Cornwallis to establish a fortified sea base where the Royal Navy could evacuate his troops to help defend New York. When Lafayette entered Virginia, Cornwallis complied with Clinton's orders and withdrew to Yorktown, where he constructed strong defenses and awaited evacuation.
An agreement by the Spanish navy to defend the French West Indies allowed Admiral de Grasse to relocate to the Atlantic seaboard, a move Arbuthnot did not anticipate. This provided Lafayette with naval support, while the failure of previous combined operations at Newport and Savannah meant their co-ordination was planned more carefully. Despite repeated urging from his subordinates, Cornwallis made no attempt to engage Lafayette before he could establish siege lines. Even worse, expecting to be withdrawn within a few days, he abandoned the outer defenses, which were promptly occupied by the besiegers and hastened the British defeat. On August 31, a British fleet under Thomas Graves left New York for Yorktown. After landing troops and munitions for the besiegers on August 30, de Grasse had remained in Chesapeake Bay and intercepted him on September 5; although the Battle of the Chesapeake was indecisive in terms of losses, Graves was forced to retreat, leaving Cornwallis isolated. An attempted breakout over the York River at Gloucester Point failed due to bad weather. Under heavy bombardment with dwindling supplies, Cornwallis felt his situation was hopeless, and on October 16 sent emissaries to Washington to negotiate surrender; after twelve hours of negotiations, the terms were finalized the next day. Responsibility for the defeat was the subject of fierce public debate between Cornwallis, Clinton and Germain. Despite criticism from his junior officers, Cornwallis retained the confidence of his peers and later held a series of senior government positions; Clinton ultimately took most of the blame and spent the rest of his life in obscurity. After Yorktown, American forces were assigned to supervise the armistice between Washington and Clinton, arranged to facilitate the British departure after the January 1782 act of Parliament forbidding any further British offensive action in North America.
British-American negotiations in Paris led to preliminaries signed in November 1782 acknowledging US independence. The Congressional war aim of British withdrawal from the North American territory ceded to the US was completed in stages for the coastal cities. In the South, Generals Greene and Wayne loosely invested the withdrawing British at Savannah and Charleston, where they observed the British finally taking off their regulars from Charleston on December 14, 1782. Loyalist provincial militias of whites and free blacks, as well as Loyalists with their slaves, were transported in a relocation to Nova Scotia and the British Caribbean. Native American allies of the British and some freed blacks were left to escape through the American lines unaided. Washington moved his army to New Windsor on the Hudson River, about sixty miles north of New York City, and there the substance of the American army was furloughed home with officers at half pay until the Treaty of Paris formally ended the war on September 3, 1783. At that time, Congress decommissioned the regiments of Washington's Continental Army and began issuing land grants to veterans in the Northwest Territory for their war service. The British occupation of New York City ended on November 25, 1783, with the departure of Clinton's replacement, General Sir Guy Carleton. Strategy and commanders To win their insurrection, the Americans needed to outlast the British will to continue the fight. To restore the empire, the British had to defeat the Continental Army in the early months and compel the Congress to dissolve itself. Historian Terry M. Mays identifies three separate types of warfare, the first being a colonial conflict in which objections to Imperial trade regulation were as significant as taxation policy. The second was a civil war with all thirteen states split between Patriots, Loyalists, and those who preferred to remain neutral.
Particularly in the south, many battles were fought between Patriots and Loyalists with no British involvement, leading to divisions that continued after independence was achieved. The third element was a global war between France, Spain, the Dutch Republic, and Britain, with America as one of a number of different theaters. After entering the war in 1778, France provided the Americans money, weapons, soldiers, and naval assistance, while French troops fought under US command in North America. While Spain did not formally join the war in America, it provided access to the Mississippi River and, by capturing British possessions on the Gulf of Mexico, denied bases to the Royal Navy, as well as retaking Menorca and besieging Gibraltar in Europe. Although the Dutch Republic was no longer a major power, prior to 1774 it still dominated the European carrying trade, and Dutch merchants made large profits shipping French-supplied munitions to the Patriots. This ended when Britain declared war in December 1780, and the conflict proved disastrous to the Dutch economy. The Dutch were also excluded from the First League of Armed Neutrality, formed by Russia, Sweden, and Denmark in March 1780 to protect neutral shipping from being stopped and searched for contraband by Britain and France. While of limited effect, these interventions forced the British to divert men and resources away from North America. American strategy Congress had multiple advantages if the rebellion turned into a protracted war. Their prosperous state populations depended on local production for food and supplies rather than on imports from their mother country that lay six to twelve weeks away by sail. They were spread across most of the North American Atlantic seaboard, stretching 1,000 miles. Most farms were remote from the seaports, and controlling four or five major ports did not give British armies control over the inland areas. Each state had established internal distribution systems.
Each former colony had a long-established system of local militia, combat-tested in support of British regulars thirteen years before to secure an expanded British Empire. Together they had taken away French claims in North America west to the Mississippi River in the French and Indian War. The state legislatures independently funded and controlled their local militias. In the American Revolution, they trained and provided Continental Line regiments to the regular army, each with their own state officer corps. Motivation was also a major asset: each colonial capital had its own newspapers and printers, and the Patriots had more popular support than the Loyalists. The British hoped that the Loyalists would do much of the fighting, but they fought less than expected. Continental Army When the war began, Congress lacked a professional army or navy, and each colony only maintained local militias. Militiamen were lightly armed, had little training, and usually did not have uniforms. Their units served for only a few weeks or months at a time and lacked the training and discipline of more experienced soldiers. Local county militias were reluctant to travel far from home and were unavailable for extended operations. To compensate for this, Congress established a regular force known as the Continental Army on June 14, 1775, the origin of the modern United States Army, and appointed Washington as commander-in-chief. However, it suffered significantly from the lack of an effective training program and from largely inexperienced officers and sergeants, offset by a few senior officers. Each state legislature appointed officers for both county and state militias and their regimental Continental Line officers; although Washington was required to accept Congressional appointments, he was still permitted to choose and command his own generals, such as Nathanael Greene; his chief of artillery, Henry Knox; and his chief of staff, Alexander Hamilton.
One of Washington's most successful recruits to general officer was Baron Friedrich Wilhelm von Steuben, a veteran of the Prussian general staff who wrote the Revolutionary War Drill Manual. The development of the Continental Army was always a work in progress, and Washington used both his regulars and state militia throughout the war; when properly employed, the combination allowed them to overwhelm smaller British forces, as at Concord, Boston, Bennington, and Saratoga. Both sides used partisan warfare, but the state militias effectively suppressed Loyalist activity when British regulars were not in the area. Washington designed the overall military strategy of the war in cooperation with Congress, established the principle of civilian supremacy in military affairs, personally recruited his senior officer corps, and kept the states focused on a common goal. For the first three years, until after Valley Forge, the Continental Army was largely supplemented by local state militias. Initially, Washington employed the inexperienced officers and untrained troops in Fabian strategies rather than risk frontal assaults against Britain's professional soldiers and officers. Over the course of the entire war, Washington lost more battles than he won, but he never surrendered his troops, maintained a fighting force in the face of British field armies, and never gave up on the American cause. By prevailing European standards, the armies in America were relatively small, limited by lack of supplies and logistics; the British in particular were constrained by the difficulty of transporting troops across the Atlantic and dependence on local supplies. Washington never directly commanded more than 17,000 men, while the combined Franco-American army at Yorktown was only about 19,000. At the beginning of 1776, Patriot forces consisted of 20,000 men, with two-thirds in the Continental Army and the other third in the various state militias.
About 250,000 men served as regulars or as militia for the Revolutionary cause over eight years of war, but there were never more than 90,000 men under arms at one time. As a whole, American officers never equaled their opponents in tactics and maneuvers, and they lost most of the pitched battles. The great successes at Boston (1776), Saratoga (1777), and Yorktown (1781) were won by trapping the British far from their base with a greater number of troops. Nevertheless, after 1778, Washington's army was transformed into a more disciplined and effective force, mostly through Baron von Steuben's training. Immediately after the Army emerged from Valley Forge, it proved its ability to match the British troops in action at the Battle of Monmouth, where a black Rhode Island regiment fended off a British bayonet attack and then counter-charged, a first for Washington's army. Here Washington came to realize that saving entire towns was not necessary, and that preserving his army and keeping the revolutionary spirit alive was more important in the long run. Washington informed Henry Laurens "that the possession of our towns, while we have an army in the field, will avail them little." Although Congress was responsible for the war effort and provided supplies to the troops, Washington took it upon himself to pressure the Congress and state legislatures to provide the essentials of war; there was never nearly enough. Congress evolved in its committee oversight and established the Board of War, which included members of the military. Because the Board of War was itself a committee ensnared in its own internal procedures, Congress also created the post of Secretary of War and appointed Major General Benjamin Lincoln to the position in February 1781. Washington worked closely with Lincoln to coordinate civilian and military authorities, and Lincoln took charge of training and supplying the army.
Continental Navy During the first summer of the war, Washington began outfitting schooners and other small seagoing vessels to prey on ships supplying the British in Boston. Congress established the Continental Navy on October 13, 1775, and appointed Esek Hopkins as its first commander; for most of the war, it consisted of a handful of small frigates and sloops, supported by numerous privateers. On November 10, 1775, Congress authorized the creation of the Continental Marines, forerunner of the United States Marine Corps. John Paul Jones became the first American naval hero by capturing HMS Drake on April 24, 1778, the first victory for any American military vessel in British waters. The last was won by the frigate USS Alliance, commanded by Captain John Barry: on March 10, 1783, the Alliance outgunned HMS Sybil in a 45-minute duel while escorting Spanish gold from Havana to Congress. After Yorktown, all US Navy ships were sold or given away; it was the first time in America's history that it had no fighting forces on the high seas. Congress primarily commissioned privateers to reduce costs and to take advantage of the large proportion of colonial sailors found in the British Empire. In total, some 1,700 privateer ships captured 2,283 enemy vessels, damaging the British effort and enriching their owners and crews with the proceeds from the sale of the cargo and the ships themselves. About 55,000 sailors served aboard American privateers during the war. France At the beginning of the war, the Americans had no major international allies, as most nation-states watched and waited to see how developments would unfold in British North America. Over time, the Continental Army acquitted itself well in the face of British regulars and their German auxiliaries, a performance noted by all the European great powers.
Battles such as the Battle of Bennington, the Battles of Saratoga, and even defeats such as the Battle of Germantown proved decisive in gaining the attention and support of powerful European nations, including France, Spain, and the Dutch Republic; the latter moved from covertly supplying the Americans with weapons and supplies to overtly supporting them. The decisive American victory at Saratoga convinced France, already a long-time rival of Britain, to offer the Americans the Treaty of Amity and Commerce. The two nations also agreed to a defensive Treaty of Alliance to protect their trade and guarantee American independence from Britain. The treaty committed the United States to fight as a French ally only if Britain initiated war on France to stop it from trading with the US. Spain and the Dutch Republic were invited by both France and the United States to join the treaty, but neither made a formal reply. On June 13, 1778, France declared war on Great Britain and invoked the French military alliance with the US, which ensured additional US privateer support for French possessions in the Caribbean. Washington worked closely with the soldiers and navy that France sent to America, primarily through Lafayette on his staff. French assistance made critical contributions required to defeat General Charles Cornwallis at Yorktown in 1781. British strategy The British military had considerable experience of fighting in North America, most recently during the Seven Years' War, which forced France to give up New France in 1763. However, in previous conflicts they had benefited from local logistics, as well as support from the colonial militia, which was not available in the American Revolutionary War. Reinforcements had to come from Europe, and maintaining large armies over such distances was extremely complex; ships could take three months to cross the Atlantic, and orders from London were often outdated by the time they arrived.
Prior to the conflict, the colonies were largely autonomous economic and political entities, with no centralized area of ultimate strategic importance. This meant that, unlike in Europe, where the fall of a capital city often ended wars, the war in America continued even after the loss of major settlements such as Philadelphia, the seat of Congress, New York, and Charleston. British power was reliant on the Royal Navy, whose dominance allowed them to resupply their own expeditionary forces while preventing access to enemy ports. However, the majority of the American population was agrarian rather than urban; supported by the French navy and blockade runners based in the Dutch Caribbean, their economy was able to survive. The geographical size of the colonies and limited manpower meant the British could not simultaneously conduct military operations and occupy territory without local support. Debate persists over whether their defeat was inevitable; one British statesman described it as "like trying to conquer a map". While Ferling argues Patriot victory was nothing short of a miracle, Ellis suggests the odds always favored the Americans, especially after Howe squandered the chance of a decisive British success in 1776, an "opportunity that would never come again". The official US military history speculates that the additional commitment of 10,000 fresh troops in 1780 would have placed British victory "within the realm of possibility". British Army The expulsion of France from North America in 1763 led to a drastic reduction in British troop levels in the colonies; in 1775, there were only 8,500 regular soldiers among a civilian population of 2.8 million. The bulk of military resources in the Americas were focused on defending sugar islands in the Caribbean; Jamaica alone generated more revenue than all thirteen American colonies combined.
With the end of the Seven Years' War, the permanent army in Britain was also cut back, which resulted in administrative difficulties when the war began a decade later. Over the course of the war, there were four separate British commanders-in-chief, the first of whom was Thomas Gage; appointed in 1763, his initial focus was establishing British rule in former French areas of Canada. Rightly or wrongly, many in London blamed the revolt on his failure to take firm action earlier, and he was relieved after the heavy losses incurred at Bunker Hill. His replacement was Sir William Howe, a member of the Whig faction in Parliament who opposed the policy of coercion advocated by Lord North; Cornwallis, who later surrendered at Yorktown, was one of many senior officers who initially refused to serve in North America. The 1775 campaign showed the British had overestimated the capabilities of their own troops and underestimated the colonial militia, requiring a reassessment of tactics and strategy. In the meantime, it allowed the Patriots to take the initiative, and British authorities rapidly lost control over every colony. Howe's responsibility is still debated; despite receiving large numbers of reinforcements, Bunker Hill seems to have permanently affected his self-confidence, and a lack of tactical flexibility meant he often failed to follow up opportunities. Many of his decisions were attributed to supply problems, such as the delay in launching the New York campaign and the failure to pursue Washington's beaten army. Having lost the confidence of his subordinates, he was recalled after Burgoyne surrendered at Saratoga. Following the failure of the Carlisle Commission, British policy changed from treating the Patriots as subjects who needed to be reconciled to enemies who had to be defeated. In 1778, Howe was replaced by Sir Henry Clinton, appointed instead of Carleton, who was considered overly cautious.
Regarded as an expert on tactics and strategy, Clinton, like his predecessors, was handicapped by chronic supply issues. As a result, he was largely inactive in 1779 and much of 1780; in October 1780, he warned Germain of "fatal consequences" if matters did not improve. In addition, Clinton's strategy was compromised by conflict with political superiors in London and his colleagues in North America, especially Admiral Marriot Arbuthnot, replaced in early 1781 by Rodney. He was neither notified nor consulted when Germain approved Cornwallis' invasion of the south in 1781, and he delayed sending reinforcements believing the bulk of Washington's army was still outside New York City. After the surrender at Yorktown, Clinton was relieved by Carleton, whose major task was to oversee the evacuation of Loyalists and British troops from Savannah, Charleston, and New York City. German Troops During the 18th century, all states commonly hired foreign soldiers, especially Britain; during the Seven Years' War, they comprised 10% of the British army, and their use caused little debate. When it became clear additional troops were needed to suppress the revolt in America, it was decided to employ mercenaries. There were several reasons for this, including public sympathy for the Patriot cause, a historical reluctance to expand the British army, and the time needed to recruit and train new regiments. An alternative source was readily available in the Holy Roman Empire, where many smaller states had a long tradition of renting their armies to the highest bidder. The most important was Hesse-Cassel, known as "the Mercenary State". The first supply agreements were signed by the North administration in late 1775; over the next decade, more than 40,000 Germans fought in North America, Gibraltar, South Africa, and India, of whom 30,000 served in the American War. Often generically referred to as "Hessians", they included men from many other states, including Hanover and Brunswick.
Sir Henry Clinton recommended recruiting Russian troops, whom he rated very highly, having seen them in action against the Ottomans; however, negotiations with Catherine the Great made little progress. Unlike in previous wars, the use of these foreign troops led to intense political debate in Britain, France, and even Germany, where Frederick the Great refused to provide passage through his territories for troops hired for the American war. In March 1776, the agreements were challenged in Parliament by Whigs who objected to "coercion" in general, and the use of foreign soldiers to subdue "British subjects". The debates were covered in detail by American newspapers, which reprinted key speeches, and in May 1776 they received copies of the treaties themselves. Provided by British sympathizers, these were smuggled into North America from London by George Merchant, a recently released American prisoner. The prospect of mercenaries being used in the colonies bolstered support for independence, more so than taxation and other acts combined; the King was accused of declaring war on his own subjects, leading to the idea there were now two separate governments. By apparently showing Britain was determined to go to war, it made hopes of reconciliation seem naive and hopeless, while the employment of 'foreign mercenaries' became one of the charges levelled against George III in the Declaration of Independence. The Hessian reputation within Germany for brutality also increased support for the Patriot cause among German-American immigrants. The presence of over 150,000 German-Americans meant both sides felt these mercenaries might be persuaded to desert; one reason Clinton suggested employing Russians was that he felt they were less likely to defect. When the first German troops arrived on Staten Island in August 1776, Congress approved the printing of "handbills" promising land and citizenship to any willing to join the Patriot cause.
The British launched a counter-campaign claiming deserters could well be executed for meddling in a war that was not theirs. Desertion among the Germans occurred throughout the war, with the highest rate occurring between the surrender at Yorktown and the Treaty of Paris. German regiments were central to the British war effort; of the estimated 30,000 sent to America, some 13,000 became casualties. Revolution as civil war Loyalists Wealthy Loyalists convinced the British government that most of the colonists were sympathetic toward the Crown; consequently, British military planners relied on recruiting Loyalists, but had trouble recruiting sufficient numbers, as the Patriots had widespread support. Nevertheless, the British continued to deceive themselves about the level of American support as late as 1780, a year before hostilities ended. Approximately 25,000 Loyalists fought for the British throughout the war. Although Loyalists constituted about twenty percent of the colonial population, they were concentrated in distinct communities. Many of them lived among large plantation owners in the Tidewater region and South Carolina who produced cash crops of tobacco and indigo for global markets comparable to those for Caribbean sugar. When the British began probing the backcountry in 1777–1778, they faced a major problem: any significant level of organized Loyalist activity required a continued presence of British regulars. The available British manpower in America was insufficient to protect Loyalist territory and counter American offensives. The Loyalist militias in the South were constantly defeated by neighboring Patriot militias. The most critical combat between the two partisan militias was at the Battle of Kings Mountain; the Patriot victory irreversibly crippled any further Loyalist militia capability in the South.
When the early war policy was administered by General William Howe, the Crown's need to maintain Loyalist support prevented it from using the traditional methods of suppressing revolts. The British cause suffered when their troops ransacked local homes during an aborted attack on Charleston in 1779, enraging both Patriots and Loyalists. After Congress rejected the Carlisle Peace Commission in 1778 and Westminster turned to "hard war" during Clinton's command, neutral colonists in the Carolinas often allied with the Patriots whenever brutal combat broke out between Tories and Whigs. Conversely, Loyalists gained support when Patriots intimidated suspected Tories by destroying property or tarring and feathering. One Loyalist militia unit, the British Legion, provided some of the best troops in British service and was so effective that it received a commission in the British Army: it was a mixed regiment of 250 dragoons and 200 infantry supported by batteries of flying artillery. It was commanded by Banastre Tarleton and gained a fearsome reputation in the colonies for "brutality and needless slaughter". In May 1779 the British Legion was one of five regiments that formed the American Establishment. Women Women played various roles during the Revolutionary War; they often accompanied their husbands when permitted to do so. For example, throughout the war Martha Washington was known to visit and provide aid to her husband George at various American camps, and Frederika Charlotte Riedesel documented the Saratoga campaign. Women often accompanied armies as camp followers to sell goods and perform necessary tasks in hospitals and camps. They were a necessary part of eighteenth-century armies and numbered in the thousands during the war. Women also assumed military roles: aside from auxiliary tasks like treating the wounded or setting up camp, some dressed as men to directly support combat, fight, or act as spies on both sides of the Revolutionary War.
Anna Maria Lane joined her husband in the Army and was wearing men's clothes by the time of the Battle of Germantown. The Virginia General Assembly later cited her bravery: she fought while dressed as a man and "performed extraordinary military services, and received a severe wound at the battle of Germantown ... with the courage of a soldier". On April 26, 1777, Sybil Ludington rode to alert the militia forces of Putnam County, New York, and Danbury, Connecticut, of the approach of British forces; she has been called the "female Paul Revere". A few other women disguised themselves as men. Deborah Sampson fought until her gender was discovered, and she was discharged as a result; Sally St. Clair was killed in action during the war. African Americans When the war began, the population of the Thirteen Colonies included an estimated 500,000 slaves, predominantly used as labor on Southern plantations. In November 1775, Lord Dunmore, the Royal Governor of Virginia, issued a proclamation that promised freedom to any Patriot-owned slaves willing to bear arms. Although the announcement helped to fill a temporary manpower shortage, white Loyalist prejudice meant recruits were eventually redirected to non-combatant roles. The Loyalists' motive was to deprive Patriot planters of labor rather than to end slavery; Loyalist-owned slaves were returned. The 1779 Philipsburg Proclamation issued by Clinton extended the offer of freedom to Patriot-owned slaves throughout the colonies. By removing the requirement for military service, it persuaded entire families to escape to British lines; many of these were employed on farms to grow food for the army. While Clinton organized the Black Pioneers, he also ensured fugitive slaves were returned to Loyalist owners, with orders that they were not to be punished for their attempted escape. As the war progressed, service as regular soldiers in British units became increasingly common; black Loyalists formed two regiments of the Charleston garrison in 1783.
Estimates of the numbers who served the British during the war vary from 25,000 to 50,000, excluding those who escaped during wartime. Thomas Jefferson estimated that Virginia may have lost 30,000 slaves in total escapes. In South Carolina, nearly 25,000 slaves (about 30 percent of the enslaved population) either fled, migrated, or died, which significantly disrupted the plantation economies both during and after the war. Black Patriots were barred from the Continental Army until Washington convinced Congress in January 1778 that there was no other way to replace losses from disease and desertion. The 1st Rhode Island Regiment, formed in February, included former slaves whose owners were compensated; however, only 140 of its 225 soldiers were black, and recruitment stopped in June 1778. Ultimately, around 5,000 African-Americans served in the Continental Army and Navy in a variety of roles, while another 4,000 were employed in Patriot militia units, aboard privateers, or as teamsters, servants, and spies. After the war, a small minority received land grants or Congressional pensions in old age; many others were returned to their masters post-war despite earlier promises of freedom. As a Patriot victory became increasingly likely, the treatment of Black Loyalists became a point of contention; after the surrender at Yorktown in 1781, Washington insisted all escapees be returned, but Cornwallis refused. In 1782 and 1783, around 8,000 to 10,000 freed blacks were evacuated by the British from Charleston, Savannah, and New York; some moved on to London, while 3,000 to 4,000 settled in Nova Scotia, where they founded settlements such as Birchtown. White Loyalists transported 15,000 enslaved blacks to Jamaica and the Bahamas. The free Black Loyalists who migrated to the British West Indies included regular soldiers from Dunmore's Ethiopian Regiment, and those from Charleston who helped garrison the Leeward Islands.
Native Americans Most Native Americans east of the Mississippi River were affected by the war, and many tribes were divided over how to respond to the conflict. A few tribes were friendly with the colonists, but most Natives opposed the union of the Colonies as a potential threat to their territory. Approximately 13,000 Natives fought on the British side, with the largest group coming from the Iroquois tribes, who deployed around 1,500 men. Early in July 1776, Cherokee allies of Britain attacked the short-lived Washington District of North Carolina. Their defeat splintered both Cherokee settlements and people, and was directly responsible for the rise of the Chickamauga Cherokee, who perpetuated the Cherokee–American wars against American settlers for decades after hostilities with Britain ended. Creek and Seminole allies of Britain fought against Americans in Georgia and South Carolina. In 1778, a force of 800 Creeks destroyed American settlements along the Broad River in Georgia. Creek warriors also joined Thomas Brown's raids into South Carolina and assisted Britain during the Siege of Savannah. Many Native Americans were involved in the fighting between Britain and Spain on the Gulf Coast and along the British side of the Mississippi River. Thousands of Creeks, Chickasaws, and Choctaws fought in major battles such as the Battle of Fort Charlotte, the Battle of Mobile, and the Siege of Pensacola. The Iroquois Confederacy was shattered as a result of the American Revolutionary War, whatever side its members took; the Seneca, Onondaga, and Cayuga tribes sided with the British; members of the Mohawks fought on both sides; and many Tuscarora and Oneida sided with the Americans. To retaliate against raids on American settlements by Loyalists and their Indian allies, the Continental Army dispatched the Sullivan Expedition on a punitive campaign through New York to cripple the Iroquois tribes that had sided with the British.
Mohawk leaders Joseph Louis Cook and Joseph Brant sided with the Americans and the British respectively, which further exacerbated the split. In the western theater of the American Revolutionary War, conflicts between settlers and Native Americans led to lingering distrust. In the 1783 Treaty of Paris, Great Britain ceded control of the disputed lands between the Great Lakes and the Ohio River, but the Indian inhabitants were not a part of the peace negotiations. Tribes in the Northwest Territory joined as the Western Confederacy and allied with the British to resist American settlement, and their conflict continued after the Revolutionary War as the Northwest Indian War. Britain's "American war" and peace Changing Prime Ministers Lord North, Prime Minister since 1770, delegated control of the war in North America to Lord George Germain and the Earl of Sandwich, who was head of the Royal Navy from 1771 to 1782. Defeat at Saratoga in 1777 made it clear the revolt would not be easily suppressed, especially after the Franco-American alliance of February 1778, and French declaration of war in June. With Spain also expected to join the conflict, the Royal Navy needed to prioritize either the war in America or in Europe; Germain advocated the former, Sandwich the latter. British negotiators now proposed a second peace settlement to Congress. The terms presented by the Carlisle Peace Commission included acceptance of the principle of self-government. Parliament would recognize Congress as the governing body, suspend any objectionable legislation, surrender its right to local colonial taxation, and discuss including American representatives in the House of Commons. In return, all property confiscated from Loyalists would be returned, British debts honored, and locally enforced martial law accepted. 
However, Congress demanded either immediate recognition of independence or the withdrawal of all British troops; they knew the commission was not authorized to accept either, bringing negotiations to a rapid end. When the commissioners returned to London in November 1778, they recommended a change in policy. Sir Henry Clinton, the new British Commander-in-Chief in America, was ordered to stop treating the rebels as enemies and to treat them instead as subjects whose loyalty might be regained. Those standing orders would be in effect for three years until Clinton was relieved. North initially backed the Southern strategy, attempting to exploit divisions between the mercantile north and slave-owning south, but after the defeat at Yorktown, he was forced to accept that this policy had failed. It was clear the war was lost, although the Royal Navy forced the French to relocate their fleet to the Caribbean in November 1781 and resumed a close blockade of American trade. The resulting economic damage and rising inflation meant the US was now eager to end the war, while France was unable to provide further loans; Congress could no longer pay its soldiers. On February 27, 1782, a Whig motion to end the offensive war in America was carried by 19 votes. North then resigned, obliging the king to invite Lord Rockingham to form a government; a consistent supporter of the Patriot cause, Rockingham made a commitment to US independence a condition of doing so. George III reluctantly accepted, and the new government took office on March 27, 1782; however, Rockingham died unexpectedly on July 1 and was replaced by Lord Shelburne, who acknowledged American independence.

American Congress signs a peace
When Lord Rockingham, the Whig leader and friend of the American cause, was elevated to Prime Minister, Congress consolidated its diplomatic consuls in Europe into a peace delegation at Paris. All were experienced in Congressional leadership.
The dean of the delegation was Benjamin Franklin of Pennsylvania. He had become a celebrity in the French Court, but he was also an Enlightenment scientist with influence in the courts of European great powers in Prussia, England's former ally, and Austria, a Catholic empire like Spain. Since the 1760s he had been an organizer of British American inter-colony cooperation, and then a colonial lobbyist to Parliament in London. John Adams of Massachusetts had been consul to the Dutch Republic and was a prominent early New England Patriot. John Jay of New York had been consul to Spain and was a past president of the Continental Congress. As consul to the Dutch Republic, Henry Laurens of South Carolina had secured a preliminary trade agreement. He had been a successor to John Jay as president of Congress and, with Franklin, was a member of the American Philosophical Society. Although active in the preliminaries, he was not a signer of the conclusive treaty. The Whig negotiators for Lord Rockingham and his successor, Prime Minister Lord Shelburne, included David Hartley, a long-time friend of Benjamin Franklin from his time in London, and Richard Oswald, who had negotiated Laurens' release from the Tower of London. The Preliminary Peace signed on November 30 met four key Congressional demands: independence, territory up to the Mississippi, navigation rights into the Gulf of Mexico, and fishing rights in Newfoundland. British strategy was to strengthen the US sufficiently to prevent France from regaining a foothold in North America, and Britain had little interest in proposals that would weaken it. However, divisions between their opponents allowed them to negotiate separately with each to improve their overall position, starting with the American delegation in September 1782. The French and Spanish sought to improve their own position by making the US dependent on them for support against Britain, thus reversing the losses of 1763.
Both parties tried to negotiate a settlement with Britain excluding the Americans: France proposed setting the western boundary of the US along the Appalachians, matching the British 1763 Proclamation Line. The Spanish suggested additional concessions in the vital Mississippi River Basin, but required the cession of Georgia, in violation of the Franco-American alliance. Facing difficulties with Spain over claims involving the Mississippi River, and with France, which was still reluctant to agree to American independence until all her demands were met, John Jay promptly told the British that he was willing to negotiate directly with them, cutting out France and Spain, and Prime Minister Lord Shelburne, in charge of the British negotiations, agreed. Key terms for America in obtaining peace included the recognition of United States independence; possession of all of the area east of the Mississippi River, north of Florida, and south of Canada; the granting of fishing rights in the Grand Banks off the coast of Newfoundland and in the Gulf of Saint Lawrence; and perpetual access to the Mississippi River for both the United States and Great Britain. An Anglo-American Preliminary Peace was formally entered into in November 1782, and Congress endorsed the settlement on April 15, 1783. It announced the achievement of peace with independence; the "conclusive" treaty was signed on September 2, 1783, in Paris, effective the next day, September 3, when Britain signed its treaty with France. John Adams, who helped draft the treaty, claimed it represented "one of the most important political events that ever happened on the globe". Ratified respectively by Congress and Parliament, the final versions were exchanged in Paris the following spring. On November 25, the last British troops remaining in the US were evacuated from New York to Halifax.
Aftermath
Washington expressed astonishment that the Americans had won a war against a leading world power, referring to the American victory as "little short of a standing miracle". The conflict between British subjects loyal to the Crown and those siding with Congress had lasted over eight years, from 1775 to 1783. The last uniformed British troops departed from the east coast port cities of Savannah, Charleston, and New York City by November 25, 1783, marking the end of British occupation in the new United States. On April 9, 1783, Washington issued orders that he had long waited to give: that "all acts of hostility" were to cease immediately. That same day, by arrangement with Washington, General Carleton issued a similar order to British troops. British troops, however, were not to evacuate until a prisoner of war exchange occurred, an effort that involved much negotiation and would take some seven months to effect. As directed by a Congressional resolution of May 26, 1783, all non-commissioned officers and enlisted men were furloughed "to their homes" until the "definitive treaty of peace", when they would be automatically discharged. The US armies were disbanded in the field as of Washington's General Orders of Monday, June 2, 1783. Once the conclusive Treaty of Paris was signed with Britain, Washington resigned as commander-in-chief before Congress and retired to Mount Vernon.

Territory
The expanse of territory that was now the United States was ceded by its colonial mother country alone. It included millions of sparsely settled acres south of the Great Lakes, between the Appalachian Mountains and the Mississippi River. The tentative colonial migration west became a flood during the years of the Revolutionary War. Virginia's Kentucky County counted 150 men in 1775. By 1790, fifteen years later, it numbered over 73,000 and was seeking statehood in the United States.
Britain's extended post-war policy for the US continued to try to establish an Indian buffer state below the Great Lakes as late as 1814, during the War of 1812. The formally acquired western American lands continued to be populated by a dozen or so American Indian tribes that had, for the most part, been British allies. Though the forts on their lands had been ceded by either the French or the British prior to the creation of the United States, Natives were not referred to in the British cession to the US. While tribes were not consulted by the British for the treaty, in practice the British refused to abandon the forts on territory they formally transferred. Instead, they provisioned military allies for continuing frontier raids and sponsored the Northwest Indian War (1785–1795), including erecting an additional British fort, Fort Miami (Ohio). British sponsorship of local warfare against the United States continued until the Anglo-American Jay Treaty went into effect. At the same time, the Spanish also sponsored war within the US by Indian proxies in its Southwest Territory, ceded by France to Britain and then by Britain to the Americans. Of the European powers with American colonies adjacent to the newly created United States, Spain was most threatened by American independence, and it was correspondingly the most hostile to it. Its territory adjacent to the US was relatively undefended, so Spanish policy developed a combination of initiatives. Spanish soft power diplomatically challenged the British territorial cession west to the Mississippi and the previous northern boundaries of Spanish Florida. It imposed a high tariff on American goods, then blocked American settler access to the port of New Orleans. Spanish hard power extended war alliances and arms to Southwestern Natives to resist American settlement.
A former Continental Army general, James Wilkinson settled in Kentucky County, Virginia, in 1784, and there he fostered settler secession from Virginia during the Spanish-allied Chickamauga Cherokee war. Beginning in 1787, he received pay as Spanish Agent 13, and subsequently expanded his efforts to persuade American settlers west of the Appalachians to secede from the United States, first during the Washington administration and again during the Jefferson administration.

Casualties and losses
The total loss of life throughout the conflict is largely unknown. As was typical in wars of the era, diseases such as smallpox claimed more lives than battle. Between 1775 and 1782, a smallpox epidemic broke out throughout North America, killing an estimated 130,000 among all its populations during those years. Historian Joseph Ellis suggests that Washington's decision to have his troops inoculated against the disease was one of his most important decisions. Up to 70,000 American Patriots died during active military service. Of these, approximately 6,800 were killed in battle, while at least 17,000 died from disease. The majority of the latter died while prisoners of war of the British, mostly in the prison ships in New York Harbor. The number of Patriots seriously wounded or disabled by the war has been estimated at from 8,500 to 25,000. The French suffered 2,112 killed in combat in the United States. The Spanish lost a total of 124 killed and 247 wounded in West Florida. A British report in 1781 put total Army deaths in North America (1775–1779) at 6,046. Approximately 7,774 Germans died in British service, in addition to 4,888 deserters; of the former, an estimated 1,800 were killed in combat.

Legacy
The American Revolution established the United States with its numerous civil liberties and set an example to overthrow both monarchy and colonial governments.
The United States has the world's oldest written constitution, and the constitutions of other free countries often bear a striking resemblance to the US Constitution, often word-for-word in places. It inspired the French, Haitian, and Latin American revolutions, and others into the modern era. Although the Revolution eliminated many forms of inequality, it did little to change the status of women, despite the role they played in winning independence. Most significantly, it failed to end slavery, which continued to be a serious social and political issue and caused divisions that would ultimately end in civil war. While many were uneasy over the contradiction of demanding liberty for some yet denying it to others, the dependence of southern states on slave labor made abolition too great a challenge. Between 1774 and 1780, many of the states banned the importation of slaves, but the institution itself continued. In 1782, Virginia passed a law permitting manumission, and over the next eight years more than 10,000 slaves were given their freedom. With support from Benjamin Franklin, in 1790 the Quakers petitioned Congress to abolish slavery; the number of abolitionist movements greatly increased, and by 1804 all the northern states had outlawed it. However, even many like Adams, who viewed slavery as a 'foul contagion', opposed the 1790 petition as a threat to the Union. In 1807, Jefferson signed legislation banning the importation of slaves effective 1808, but allowed the domestic slave trade to continue, arguing the federal government had no right to regulate individual states.

Historiography
A large historiography concerns the reasons the Americans revolted and successfully broke away. The "Patriots", an insulting term used by the British that was proudly adopted by the Americans, stressed the constitutional rights of Englishmen, especially "No taxation without representation."
Contemporaries credited the American Enlightenment with laying the intellectual, moral and ethical foundations of the Revolution among the Founding Fathers. The Founders cited the liberalism in the philosophy of John Locke as a powerful influence. Although Two Treatises of Government has long been cited as a major influence on American thinkers, historians David Lundberg and Henry F. May demonstrate that Locke's Essay Concerning Human Understanding was far more widely read than were his political Treatises. Historians since the 1960s have emphasized that the Patriot constitutional argument was made possible by the emergence of a sense of American nationalism that united all 13 colonies. In turn, that nationalism was rooted in a republican value system that demanded consent of the governed and opposed aristocratic control. In Britain itself, republicanism was a fringe view, since it challenged the aristocratic control of the British political system. Political power was not controlled by an aristocracy or nobility in the 13 colonies; instead, the colonial political system was based on the winners of free elections, which were open to the majority of white men. In the analysis of the coming of the Revolution, historians in recent decades have mostly used one of three approaches. The Atlantic history view places the American story in a broader context, including revolutions in France and Haiti; it tends to reintegrate the historiographies of the American Revolution and the British Empire. The "new social history" approach looks at community social structure to find cleavages that were magnified into colonial divisions. The ideological approach centers on republicanism in the United States. Republicanism dictated there would be no royalty, aristocracy or national church, but allowed for continuation of the British common law, which American lawyers and jurists understood, approved, and used in their everyday practice.
Historians have examined how the rising American legal profession adopted British common law to incorporate republicanism by selective revision of legal customs and by introducing more choices for courts.

Commemorations of the Revolutionary War
After the first U.S. postage stamp was issued in 1849, the U.S. Post Office frequently issued commemorative stamps celebrating the various people and events of the Revolutionary War. However, it would be more than 140 years after the Revolution before any stamp commemorating the war itself was issued. The first such stamp was the 'Liberty Bell' issue of 1926.

See also
1776 in the United States: events, births, deaths, and other years
Timeline of the American Revolution

Topics of the Revolution
Committee of safety (American Revolution)
Financial costs of the American Revolutionary War
Flags of the American Revolution
Naval operations in the American Revolutionary War
Social history of the Revolution
Black Patriot
Christianity in the United States#American Revolution
The Colored Patriots of the American Revolution
History of Poles in the United States#American Revolution
List of clergy in the American Revolution
List of Patriots (American Revolution)
Quakers in the American Revolution
Scotch-Irish Americans#American Revolution

Others in the American Revolution
Nova Scotia in the American Revolution
Watauga Association

Lists of Revolutionary military
List of American Revolutionary War battles
List of British Forces in the American Revolutionary War
List of Continental Forces in the American Revolutionary War
List of infantry weapons in the American Revolution
List of United States militia units in the American Revolutionary War

"Thirteen Colony" economy
Economic history of the US: Colonial economy to 1780
Shipbuilding in the American colonies
Slavery in the United States

Legacy and related
American Revolution Statuary
Commemoration of the American Revolution
Founders Online
Independence Day (United States)
The Last Men of the Revolution
List of plays and films about the American Revolution
Museum of the American Revolution
Tomb of the Unknown Soldier of the American Revolution
United States Bicentennial
List of wars of independence

Bibliographies
Bibliography of the American Revolutionary War
Bibliography of Thomas Jefferson
Bibliography of George Washington
Ampere
The ampere (symbol: A), often shortened to amp, is the base unit of electric current in the International System of Units (SI). It is named after André-Marie Ampère (1775–1836), French mathematician and physicist, considered the father of electromagnetism along with the Danish physicist Hans Christian Ørsted. The International System of Units defines the ampere in terms of other base units by measuring the electromagnetic force between electrical conductors carrying electric current. The earlier CGS system had two definitions of current, one essentially the same as the SI's and the other using electric charge as the base unit, with the unit of charge defined by measuring the force between two charged metal plates. The ampere was then defined as one coulomb of charge per second. In SI, the unit of charge, the coulomb, is defined as the charge carried by one ampere during one second. New definitions, in terms of invariant constants of nature, specifically the elementary charge, took effect on 20 May 2019.

Definition
The ampere is defined by taking the fixed numerical value of the elementary charge e to be 1.602 176 634 × 10⁻¹⁹ when expressed in the unit C, which is equal to A⋅s, where the second is defined in terms of ΔνCs, the unperturbed ground state hyperfine transition frequency of the caesium-133 atom. The SI unit of charge, the coulomb, "is the quantity of electricity carried in 1 second by a current of 1 ampere". Conversely, a current of one ampere is one coulomb of charge going past a given point per second. In general, charge Q is determined by a steady current I flowing for a time t as Q = I t. Constant, instantaneous and average current are expressed in amperes (as in "the charging current is 1.2 A") and the charge accumulated (or passed through a circuit) over a period of time is expressed in coulombs (as in "the battery charge is …"). The relation of the ampere (C/s) to the coulomb is the same as that of the watt (J/s) to the joule.
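These relations can be checked numerically. The sketch below (the function name `charge` is ours, for illustration) uses the exact SI value of the elementary charge to count how many elementary charges pass a point each second at one ampere, and applies Q = I t to the 1.2 A example above:

```python
# SI defining constant: elementary charge in coulombs (exact since 2019)
E = 1.602176634e-19  # C

# One ampere is one coulomb per second, so the number of elementary
# charges passing a given point each second at 1 A is 1/e.
electrons_per_second = 1.0 / E
print(f"{electrons_per_second:.6e}")  # ≈ 6.241509e+18

def charge(current_amperes: float, seconds: float) -> float:
    """Charge Q (coulombs) carried by a steady current I over time t: Q = I*t."""
    return current_amperes * seconds

# A 1.2 A charging current sustained for one hour accumulates I*t coulombs.
print(charge(1.2, 3600.0))  # ≈ 4320 C
```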
History
The ampere is named for French physicist and mathematician André-Marie Ampère (1775–1836), who studied electromagnetism and laid the foundation of electrodynamics. In recognition of Ampère's contributions to the creation of modern electrical science, an international convention, signed at the 1881 International Exposition of Electricity, established the ampere as a standard unit of electrical measurement for electric current. The ampere was originally defined as one tenth of the unit of electric current in the centimetre–gram–second system of units. That unit, now known as the abampere, was defined as the amount of current that generates a force of two dynes per centimetre of length between two wires one centimetre apart. The size of the unit was chosen so that the units derived from it in the MKSA system would be conveniently sized. The "international ampere" was an early realization of the ampere, defined as the current that would deposit 1.118 milligrams of silver per second from a silver nitrate solution. Later, more accurate measurements revealed that this current is 0.99985 A. Since power is defined as the product of current and voltage, the ampere can alternatively be expressed in terms of the other units using the relationship I = P/V, and thus 1 A = 1 W/V. Current can be measured by a multimeter, a device that can measure electrical voltage, current, and resistance.

Former definition in the SI
Until 2019, the SI defined the ampere as follows: The ampere is that constant current which, if maintained in two straight parallel conductors of infinite length, of negligible circular cross-section, and placed one metre apart in vacuum, would produce between these conductors a force equal to 2 × 10⁻⁷ newtons per metre of length. Ampère's force law states that there is an attractive or repulsive force between two parallel wires carrying an electric current. This force is used in the formal definition of the ampere.
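The former definition can be recovered from Ampère's force law, F/L = μ₀I₁I₂/(2πd) for two long parallel wires. A small sketch, using the pre-2019 exact value of the magnetic constant (the function name is illustrative, not standard):

```python
import math

MU_0 = 4 * math.pi * 1e-7  # pre-2019 exact magnetic constant, N/A^2

def force_per_metre(i1: float, i2: float, d: float) -> float:
    """Ampère's force law: force per unit length (N/m) between two long
    parallel wires carrying currents i1, i2 (A), separated by d (m)."""
    return MU_0 * i1 * i2 / (2 * math.pi * d)

# Two wires carrying 1 A each, one metre apart: the force per metre that
# formerly defined the ampere.
print(force_per_metre(1.0, 1.0, 1.0))  # ≈ 2e-7 N/m
```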
The SI unit of charge, the coulomb, was then defined as "the quantity of electricity carried in 1 second by a current of 1 ampere". Conversely, a current of one ampere is one coulomb of charge going past a given point per second; in general, charge Q was determined by a steady current I flowing for a time t as Q = I t.

Realisation
The standard ampere is most accurately realised using a Kibble balance, but is in practice maintained via Ohm's law from the units of electromotive force and resistance, the volt and the ohm, since the latter two can be tied to physical phenomena that are relatively easy to reproduce: the Josephson effect and the quantum Hall effect, respectively. Techniques to establish the realisation of an ampere have a relative uncertainty of approximately a few parts in 10⁷, and involve realisations of the watt, the ohm and the volt.

See also
Ammeter
Ampacity (current-carrying capacity)
Electric current
Electric shock
Hydraulic analogy
Magnetic constant
Orders of magnitude (current)
Algorithm
In mathematics and computer science, an algorithm is a finite sequence of well-defined instructions, typically used to solve a class of specific problems or to perform a computation. Algorithms are used as specifications for performing calculations and data processing. By making use of artificial intelligence, algorithms can perform automated deductions (referred to as automated reasoning) and use mathematical and logical tests to divert the code through various routes (referred to as automated decision-making). Using human characteristics as descriptors of machines in metaphorical ways was already practiced by Alan Turing with terms such as "memory", "search" and "stimulus". In contrast, a heuristic is an approach to problem solving that may not be fully specified or may not guarantee correct or optimal results, especially in problem domains where there is no well-defined correct or optimal result. As an effective method, an algorithm can be expressed within a finite amount of space and time, and in a well-defined formal language for calculating a function. Starting from an initial state and initial input (perhaps empty), the instructions describe a computation that, when executed, proceeds through a finite number of well-defined successive states, eventually producing "output" and terminating at a final ending state. The transition from one state to the next is not necessarily deterministic; some algorithms, known as randomized algorithms, incorporate random input.

History
The concept of algorithm has existed since antiquity. Arithmetic algorithms, such as a division algorithm, were used by ancient Babylonian mathematicians c. 2500 BC and Egyptian mathematicians c. 1550 BC. Greek mathematicians later used algorithms in 240 BC in the sieve of Eratosthenes for finding prime numbers, and the Euclidean algorithm for finding the greatest common divisor of two numbers.
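The Euclidean algorithm mentioned above can be written in a few lines; a minimal sketch (one of many equivalent formulations):

```python
def gcd(m: int, n: int) -> int:
    """Euclidean algorithm: repeatedly replace the pair (m, n)
    with (n, m mod n) until the remainder n is zero; the last
    nonzero remainder is the greatest common divisor."""
    while n != 0:
        m, n = n, m % n
    return m

print(gcd(1071, 462))  # 21
```

Each loop iteration strictly decreases the remainder, which is why this procedure is guaranteed to terminate, one of the defining properties of an algorithm discussed below.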
Arabic mathematicians such as al-Kindi in the 9th century used cryptographic algorithms for code-breaking, based on frequency analysis. The word algorithm is derived from the name of the 9th-century Persian mathematician Muḥammad ibn Mūsā al-Khwārizmī, whose nisba (identifying him as from Khwarazm) was Latinized as Algoritmi (Arabized Persian الخوارزمی c. 780–850). Muḥammad ibn Mūsā al-Khwārizmī was a mathematician, astronomer, geographer, and scholar in the House of Wisdom in Baghdad, whose name means 'the native of Khwarazm', a region that was part of Greater Iran and is now in Uzbekistan. About 825, al-Khwarizmi wrote an Arabic language treatise on the Hindu–Arabic numeral system, which was translated into Latin during the 12th century. The manuscript starts with the phrase Dixit Algorizmi ('Thus spake Al-Khwarizmi'), where "Algorizmi" was the translator's Latinization of Al-Khwarizmi's name. Al-Khwarizmi was the most widely read mathematician in Europe in the late Middle Ages, primarily through another of his books, the Algebra. In late medieval Latin, algorismus, English 'algorism', the corruption of his name, simply meant the "decimal number system". In the 15th century, under the influence of the Greek word ἀριθμός (arithmos), 'number' (cf. 'arithmetic'), the Latin word was altered to algorithmus, and the corresponding English term 'algorithm' is first attested in the 17th century; the modern sense was introduced in the 19th century. Indian mathematics was predominantly algorithmic. Algorithms that are representative of the Indian mathematical tradition range from the ancient Śulbasūtrās to the medieval texts of the Kerala School. In English, the word algorithm was first used in about 1230 and then by Chaucer in 1391. English adopted the French term, but it was not until the late 19th century that "algorithm" took on the meaning that it has in modern English. 
Another early use of the word is from 1240, in a manual titled Carmen de Algorismo composed by Alexandre de Villedieu. The poem is a few hundred lines long and summarizes the art of calculating with the new styled Indian dice (Tali Indorum), or Hindu numerals. A partial formalization of the modern concept of algorithm began with attempts to solve the Entscheidungsproblem (decision problem) posed by David Hilbert in 1928. Later formalizations were framed as attempts to define "effective calculability" or "effective method". Those formalizations included the Gödel–Herbrand–Kleene recursive functions of 1930, 1934 and 1935, Alonzo Church's lambda calculus of 1936, Emil Post's Formulation 1 of 1936, and Alan Turing's Turing machines of 1936–37 and 1939.

Informal definition
An informal definition could be "a set of rules that precisely defines a sequence of operations", which would include all computer programs (including programs that do not perform numeric calculations), and (for example) any prescribed bureaucratic procedure or cook-book recipe. In general, a program is only an algorithm if it stops eventually—even though infinite loops may sometimes prove desirable. A prototypical example of an algorithm is the Euclidean algorithm, which is used to determine the greatest common divisor of two integers; an example (there are others) is described in a later section. Boolos and Jeffrey (1974, 1999) offer an informal meaning of the word "algorithm" in the following quotation: No human being can write fast enough, or long enough, or small enough† (†"smaller and smaller without limit ... you'd be trying to write on molecules, on atoms, on electrons") to list all members of an enumerably infinite set by writing out their names, one after another, in some notation.
But humans can do something equally useful, in the case of certain enumerably infinite sets: They can give explicit instructions for determining the nth member of the set, for arbitrary finite n. Such instructions are to be given quite explicitly, in a form in which they could be followed by a computing machine, or by a human who is capable of carrying out only very elementary operations on symbols. An "enumerably infinite set" is one whose elements can be put into one-to-one correspondence with the integers. Thus Boolos and Jeffrey are saying that an algorithm implies instructions for a process that "creates" output integers from an arbitrary "input" integer or integers that, in theory, can be arbitrarily large. For example, an algorithm can be an algebraic equation such as y = m + n (i.e., two arbitrary "input variables" m and n that produce an output y), but various authors' attempts to define the notion indicate that the word implies much more than this, something on the order of (for the addition example): Precise instructions (in a language understood by "the computer") for a fast, efficient, "good" process that specifies the "moves" of "the computer" (machine or human, equipped with the necessary internally contained information and capabilities) to find, decode, and then process arbitrary input integers/symbols m and n, symbols + and = ... and "effectively" produce, in a "reasonable" time, output-integer y at a specified place and in a specified format. The concept of algorithm is also used to define the notion of decidability—a notion that is central for explaining how formal systems come into being starting from a small set of axioms and rules. In logic, the time that an algorithm requires to complete cannot be measured, as it is not apparently related to the customary physical dimension. 
From such uncertainties, which characterize ongoing work, stems the unavailability of a definition of algorithm that suits both concrete (in some sense) and abstract usage of the term. Most algorithms are intended to be implemented as computer programs. However, algorithms are also implemented by other means, such as in a biological neural network (for example, the human brain implementing arithmetic or an insect looking for food), in an electrical circuit, or in a mechanical device.

Formalization

Algorithms are essential to the way computers process data. Many computer programs contain algorithms that detail the specific instructions a computer should perform—in a specific order—to carry out a specified task, such as calculating employees' paychecks or printing students' report cards. Thus, an algorithm can be considered to be any sequence of operations that can be simulated by a Turing-complete system. Authors who assert this thesis include Minsky (1967), Savage (1987) and Gurevich (2000):

Minsky: "But we will also maintain, with Turing ... that any procedure which could "naturally" be called effective, can, in fact, be realized by a (simple) machine. Although this may seem extreme, the arguments ... in its favor are hard to refute".

Gurevich: "… Turing's informal argument in favor of his thesis justifies a stronger thesis: every algorithm can be simulated by a Turing machine … according to Savage [1987], an algorithm is a computational process defined by a Turing machine".

Turing machines can define computational processes that do not terminate. The informal definitions of algorithms generally require that the algorithm always terminates. This requirement renders the task of deciding whether a formal procedure is an algorithm impossible in the general case—due to a major theorem of computability theory known as the halting problem.
Typically, when an algorithm is associated with processing information, data can be read from an input source, written to an output device and stored for further processing. Stored data are regarded as part of the internal state of the entity performing the algorithm. In practice, the state is stored in one or more data structures. For some of these computational processes, the algorithm must be rigorously defined: specified in the way it applies in all possible circumstances that could arise. This means that any conditional steps must be systematically dealt with, case by case; the criteria for each case must be clear (and computable). Because an algorithm is a precise list of precise steps, the order of computation is always crucial to the functioning of the algorithm. Instructions are usually assumed to be listed explicitly, and are described as starting "from the top" and going "down to the bottom"—an idea that is described more formally by flow of control. So far, the discussion on the formalization of an algorithm has assumed the premises of imperative programming. This is the most common conception—one which attempts to describe a task in discrete, "mechanical" means. Unique to this conception of formalized algorithms is the assignment operation, which sets the value of a variable. It derives from the intuition of "memory" as a scratchpad. An example of such an assignment can be found below. For some alternate conceptions of what constitutes an algorithm, see functional programming and logic programming.

Expressing algorithms

Algorithms can be expressed in many kinds of notation, including natural languages, pseudocode, flowcharts, drakon-charts, programming languages or control tables (processed by interpreters). Natural language expressions of algorithms tend to be verbose and ambiguous, and are rarely used for complex or technical algorithms.
Pseudocode, flowcharts, drakon-charts and control tables are structured ways to express algorithms that avoid many of the ambiguities common in statements based on natural language. Programming languages are primarily intended for expressing algorithms in a form that can be executed by a computer, but are also often used as a way to define or document algorithms. There is a wide variety of representations possible and one can express a given Turing machine program as a sequence of machine tables (see finite-state machine, state transition table and control table for more), as flowcharts and drakon-charts (see state diagram for more), or as a form of rudimentary machine code or assembly code called "sets of quadruples" (see Turing machine for more). Representations of algorithms can be classed into three accepted levels of Turing machine description, as follows:

1 High-level description: "...prose to describe an algorithm, ignoring the implementation details. At this level, we do not need to mention how the machine manages its tape or head."

2 Implementation description: "...prose used to define the way the Turing machine uses its head and the way that it stores data on its tape. At this level, we do not give details of states or transition function."

3 Formal description: Most detailed, "lowest level", gives the Turing machine's "state table".

For an example of the simple algorithm "Add m+n" described in all three levels, see Examples.

Design

Algorithm design refers to a method or a mathematical process for problem-solving and engineering algorithms. The design of algorithms is part of many solution theories of operations research, such as dynamic programming and divide-and-conquer. Techniques for designing and implementing algorithm designs are also called algorithm design patterns, with examples including the template method pattern and the decorator pattern.
One of the most important aspects of algorithm design is resource (run-time, memory usage) efficiency; the big O notation is used to describe e.g. an algorithm's run-time growth as the size of its input increases. Typical steps in the development of algorithms:

Problem definition
Development of a model
Specification of the algorithm
Designing an algorithm
Checking the correctness of the algorithm
Analysis of algorithm
Implementation of algorithm
Program testing
Documentation preparation

Computer algorithms

"Elegant" (compact) programs, "good" (fast) programs: The notion of "simplicity and elegance" appears informally in Knuth and precisely in Chaitin:

Knuth: " ... we want good algorithms in some loosely defined aesthetic sense. One criterion ... is the length of time taken to perform the algorithm .... Other criteria are adaptability of the algorithm to computers, its simplicity and elegance, etc."

Chaitin: " ... a program is 'elegant,' by which I mean that it's the smallest possible program for producing the output that it does"

Chaitin prefaces his definition with: "I'll show you can't prove that a program is 'elegant'"—such a proof would solve the Halting problem (ibid).

Algorithm versus function computable by an algorithm: For a given function multiple algorithms may exist. This is true even without expanding the instruction set available to the programmer. Rogers observes that "It is ... important to distinguish between the notion of algorithm, i.e. procedure and the notion of function computable by algorithm, i.e. mapping yielded by procedure. The same function may have several different algorithms". Unfortunately, there may be a tradeoff between goodness (speed) and elegance (compactness)—an elegant program may take more steps to complete a computation than one less elegant. An example that uses Euclid's algorithm appears below.
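Rogers' distinction between an algorithm and the function it computes can be made concrete. The two routines below (the names and the choice of function are illustrative, not from the source) are different algorithms, yet they compute the same function, the sum 1 + 2 + ... + n; they also hint at the speed/elegance tradeoff, since the second takes constant time:

```c
/* Two different algorithms for the same function:
 * the mapping n -> 1 + 2 + ... + n. */

/* Algorithm 1: repeated addition -- takes n steps. */
long sum_iterative(long n) {
    long total = 0;
    for (long i = 1; i <= n; i++)
        total += i;
    return total;
}

/* Algorithm 2: Gauss's closed form -- one multiplication
 * and one division. Same mapping, different procedure. */
long sum_formula(long n) {
    return n * (n + 1) / 2;
}
```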
Computers (and computors), models of computation: A computer (or human "computor") is a restricted type of machine, a "discrete deterministic mechanical device" that blindly follows its instructions. Melzak's and Lambek's primitive models reduced this notion to four elements: (i) discrete, distinguishable locations, (ii) discrete, indistinguishable counters (iii) an agent, and (iv) a list of instructions that are effective relative to the capability of the agent. Minsky describes a more congenial variation of Lambek's "abacus" model in his "Very Simple Bases for Computability". Minsky's machine proceeds sequentially through its five (or six, depending on how one counts) instructions unless either a conditional IF-THEN GOTO or an unconditional GOTO changes program flow out of sequence. Besides HALT, Minsky's machine includes three assignment (replacement, substitution) operations: ZERO (e.g. the contents of location replaced by 0: L ← 0), SUCCESSOR (e.g. L ← L+1), and DECREMENT (e.g. L ← L − 1). Rarely must a programmer write "code" with such a limited instruction set. But Minsky shows (as do Melzak and Lambek) that his machine is Turing complete with only four general types of instructions: conditional GOTO, unconditional GOTO, assignment/replacement/substitution, and HALT. However, a few different assignment instructions (e.g. DECREMENT, INCREMENT, and ZERO/CLEAR/EMPTY for a Minsky machine) are also required for Turing-completeness; their exact specification is somewhat up to the designer. The unconditional GOTO is a convenience; it can be constructed by initializing a dedicated location to zero e.g. the instruction " Z ← 0 "; thereafter the instruction IF Z=0 THEN GOTO xxx is unconditional. Simulation of an algorithm: computer (computor) language: Knuth advises the reader that "the best way to learn an algorithm is to try it . . . immediately take pen and paper and work through an example". But what about a simulation or execution of the real thing? 
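A machine of this kind can be sketched directly. The encoding below is my own illustration (opcode names, program layout and the addition program are assumptions, not Minsky's notation), but it follows the description above: sequential execution, a conditional GOTO, and the "Z ← 0" trick for building an unconditional GOTO from a conditional one:

```c
#include <stddef.h>

/* Sketch of a Minsky-style counter machine: a few registers
 * ("locations") and a short list of instructions.
 * JZ means IF register = 0 THEN GOTO target. */
enum { ZERO, INCR, DECR, JZ, HALT };   /* ZERO included for completeness */

typedef struct { int op; int reg; int target; } Instr;

/* Execute a program; returns the final contents of register 0. */
long run(const Instr *prog, long reg[]) {
    size_t pc = 0;                     /* program counter: sequential flow */
    for (;;) {
        Instr in = prog[pc];
        switch (in.op) {
        case ZERO: reg[in.reg] = 0; pc++; break;
        case INCR: reg[in.reg]++;   pc++; break;
        case DECR: reg[in.reg]--;   pc++; break;
        case JZ:   pc = (reg[in.reg] == 0) ? (size_t)in.target : pc + 1; break;
        case HALT: return reg[0];
        }
    }
}

/* Addition m + n as a counter-machine program: drain reg1 into reg0.
 * reg2 is held at zero so that "JZ reg2" acts as an unconditional GOTO. */
long machine_add(long m, long n) {
    long reg[3] = { m, n, 0 };
    const Instr prog[] = {
        { JZ,   1, 4 },   /* 0: IF reg1 = 0 THEN GOTO 4 (done)       */
        { DECR, 1, 0 },   /* 1: reg1 <- reg1 - 1                     */
        { INCR, 0, 0 },   /* 2: reg0 <- reg0 + 1                     */
        { JZ,   2, 0 },   /* 3: IF reg2 = 0 THEN GOTO 0 (unconditional) */
        { HALT, 0, 0 },   /* 4: result is in reg0                    */
    };
    return run(prog, reg);
}
```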
The programmer must translate the algorithm into a language that the simulator/computer/computor can effectively execute. Stone gives an example of this: when computing the roots of a quadratic equation the computor must know how to take a square root. If they don't, then the algorithm, to be effective, must provide a set of rules for extracting a square root. This means that the programmer must know a "language" that is effective relative to the target computing agent (computer/computor). But what model should be used for the simulation? Van Emde Boas observes "even if we base complexity theory on abstract instead of concrete machines, arbitrariness of the choice of a model remains. It is at this point that the notion of simulation enters". When speed is being measured, the instruction set matters. For example, the subprogram in Euclid's algorithm to compute the remainder would execute much faster if the programmer had a "modulus" instruction available rather than just subtraction (or worse: just Minsky's "decrement"). Structured programming, canonical structures: Per the Church–Turing thesis, any algorithm can be computed by a model known to be Turing complete, and per Minsky's demonstrations, Turing completeness requires only four instruction types—conditional GOTO, unconditional GOTO, assignment, HALT. Kemeny and Kurtz observe that, while "undisciplined" use of unconditional GOTOs and conditional IF-THEN GOTOs can result in "spaghetti code", a programmer can write structured programs using only these instructions; on the other hand "it is also possible, and not too hard, to write badly structured programs in a structured language". Tausworthe augments the three Böhm-Jacopini canonical structures: SEQUENCE, IF-THEN-ELSE, and WHILE-DO, with two more: DO-WHILE and CASE. An additional benefit of a structured program is that it lends itself to proofs of correctness using mathematical induction. 
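The point about the instruction set can be sketched concretely: the remainder that a "modulus" instruction delivers in one step costs roughly r/s loop iterations when only subtraction is available (the function names are mine):

```c
/* The remainder subprogram of Euclid's algorithm, two ways.
 * Assumes s > 0. */

/* With a "modulus" instruction: a single operation. */
long remainder_mod(long r, long s) {
    return r % s;
}

/* With subtraction only: about r/s iterations instead of one step. */
long remainder_sub(long r, long s) {
    while (r >= s)
        r = r - s;
    return r;
}
```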
Canonical flowchart symbols: The graphical aid called a flowchart offers a way to describe and document an algorithm (and a computer program that implements it). Like the program flow of a Minsky machine, a flowchart always starts at the top of a page and proceeds down. Its primary symbols are only four: the directed arrow showing program flow, the rectangle (SEQUENCE, GOTO), the diamond (IF-THEN-ELSE), and the dot (OR-tie). The Böhm–Jacopini canonical structures are made of these primitive shapes. Sub-structures can "nest" in rectangles, but only if a single exit occurs from the superstructure. The symbols, and their use to build the canonical structures, are shown in the diagram.

Examples

Algorithm example

One of the simplest algorithms is to find the largest number in a list of numbers of random order. Finding the solution requires looking at every number in the list. From this follows a simple algorithm, which can be stated in a high-level description in English prose, as:

High-level description:
If there are no numbers in the set, then there is no highest number.
Assume the first number in the set is the largest number in the set.
For each remaining number in the set: if this number is larger than the current largest number, consider this number to be the largest number in the set.
When there are no numbers left in the set to iterate over, consider the current largest number to be the largest number of the set.

(Quasi-)formal description: Written in prose but much closer to the high-level language of a computer program, the following is the more formal coding of the algorithm in pseudocode or pidgin code:

Input: A list of numbers L.
Output: The largest number in the list L.
if L.size = 0 return null
largest ← L[0]
for each item in L, do
    if item > largest, then
        largest ← item
return largest

Euclid's algorithm

In mathematics, the Euclidean algorithm, or Euclid's algorithm, is an efficient method for computing the greatest common divisor (GCD) of two integers (numbers), the largest number that divides them both without a remainder. It is named after the ancient Greek mathematician Euclid, who first described it in his Elements (c. 300 BC). It is one of the oldest algorithms in common use. It can be used to reduce fractions to their simplest form, and is a part of many other number-theoretic and cryptographic calculations. Euclid poses the problem thus: "Given two numbers not prime to one another, to find their greatest common measure". He defines "A number [to be] a multitude composed of units": a counting number, a positive integer not including zero. To "measure" is to place a shorter measuring length s successively (q times) along longer length l until the remaining portion r is less than the shorter length s. In modern words, remainder r = l − q×s, q being the quotient, or remainder r is the "modulus", the integer-fractional part left over after the division. For Euclid's method to succeed, the starting lengths must satisfy two requirements: (i) the lengths must not be zero, AND (ii) the subtraction must be "proper"; i.e., a test must guarantee that the smaller of the two numbers is subtracted from the larger (or the two can be equal so their subtraction yields zero). Euclid's original proof adds a third requirement: the two lengths must not be prime to one another. Euclid stipulated this so that he could construct a reductio ad absurdum proof that the two numbers' common measure is in fact the greatest. While Nicomachus' algorithm is the same as Euclid's, when the numbers are prime to one another, it yields the number "1" for their common measure. So, to be precise, the following is really Nicomachus' algorithm.
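Stated with the modern remainder operation r = l − q×s, Euclid's measuring process can be sketched as a short C routine (a minimal modulus-based version, for comparison with the subtraction-based programs that the source develops):

```c
/* Euclid's algorithm with the modern remainder ("modulus")
 * operation: repeatedly replace (l, s) by (s, l mod s).
 * Assumes non-negative inputs, not both zero. */
long gcd(long l, long s) {
    while (s != 0) {
        long r = l % s;   /* remainder r = l - q*s */
        l = s;            /* the old shorter length becomes the longer */
        s = r;            /* the remainder becomes the new measure */
    }
    return l;
}
```

When the inputs are prime to one another, this yields 1 for their common measure, matching the remark about Nicomachus' version above.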
Computer language for Euclid's algorithm

Only a few instruction types are required to execute Euclid's algorithm—some logical tests (conditional GOTO), unconditional GOTO, assignment (replacement), and subtraction. A location is symbolized by upper case letter(s), e.g. S, A, etc. The varying quantity (number) in a location is written in lower case letter(s) and (usually) associated with the location's name. For example, location L at the start might contain the number l = 3009.

An inelegant program for Euclid's algorithm

The following algorithm is framed as Knuth's four-step version of Euclid's and Nicomachus', but, rather than using division to find the remainder, it uses successive subtractions of the shorter length s from the remaining length r until r is less than s. The high-level description, shown in boldface, is adapted from Knuth 1973:2–4:

INPUT:
1 [Into two locations L and S put the numbers l and s that represent the two lengths]: INPUT L, S
2 [Initialize R: make the remaining length r equal to the starting/initial/input length l]: R ← L

E0: [Ensure r ≥ s.]
3 [Ensure the smaller of the two numbers is in S and the larger in R]: IF R > S THEN the contents of L is the larger number so skip over the exchange-steps 4, 5 and 6: GOTO step 7 ELSE swap the contents of R and S.
4 L ← R (this first step is redundant, but is useful for later discussion).
5 R ← S
6 S ← L

E1: [Find remainder]: Until the remaining length r in R is less than the shorter length s in S, repeatedly subtract the measuring number s in S from the remaining length r in R.
7 IF S > R THEN done measuring so GOTO 10 ELSE measure again,
8 R ← R − S
9 [Remainder-loop]: GOTO 7.

E2: [Is the remainder zero?]: EITHER (i) the last measure was exact, the remainder in R is zero, and the program can halt, OR (ii) the algorithm must continue: the last measure left a remainder in R less than measuring number in S.
10 IF R = 0 THEN done so GOTO step 15 ELSE CONTINUE TO step 11,

E3: [Interchange s and r]: The nut of Euclid's algorithm. Use remainder r to measure what was previously smaller number s; L serves as a temporary location.
11 L ← R
12 R ← S
13 S ← L
14 [Repeat the measuring process]: GOTO 7

OUTPUT:
15 [Done. S contains the greatest common divisor]: PRINT S

DONE:
16 HALT, END, STOP.

An elegant program for Euclid's algorithm

The flowchart of "Elegant" can be found at the top of this article. In the (unstructured) Basic language, the steps are numbered, and the instruction LET [] = [] is the assignment instruction symbolized by ←.

5 REM Euclid's algorithm for greatest common divisor
6 PRINT "Type two integers greater than 0"
10 INPUT A,B
20 IF B=0 THEN GOTO 80
30 IF A > B THEN GOTO 60
40 LET B=B-A
50 GOTO 20
60 LET A=A-B
70 GOTO 20
80 PRINT A
90 END

How "Elegant" works: In place of an outer "Euclid loop", "Elegant" shifts back and forth between two "co-loops", an A > B loop that computes A ← A − B, and a B ≤ A loop that computes B ← B − A. This works because, when at last the minuend M is less than or equal to the subtrahend S (Difference = Minuend − Subtrahend), the minuend can become s (the new measuring length) and the subtrahend can become the new r (the length to be measured); in other words the "sense" of the subtraction reverses.

The following version can be used with programming languages from the C-family (abs requires <stdlib.h>):

// Euclid's algorithm for greatest common divisor
int euclidAlgorithm (int A, int B) {
    A = abs(A);
    B = abs(B);
    while (B != 0) {
        while (A > B)
            A = A - B;
        B = B - A;
    }
    return A;
}

Testing the Euclid algorithms

Does an algorithm do what its author wants it to do? A few test cases usually give some confidence in the core functionality. But tests are not enough. For test cases, one source uses 3009 and 884. Knuth suggested 40902, 24140. Another interesting case is the two relatively prime numbers 14157 and 5950. But "exceptional cases" must be identified and tested.
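Those test cases can be checked mechanically. The sketch below restates the C-family version and asserts the expected divisors: 3009 and 884 share 17, Knuth's pair 40902 and 24140 shares 34, and the relatively prime pair yields 1. Note that a zero first argument would make the loop run forever, so such inputs are deliberately excluded here:

```c
#include <stdlib.h>

/* Subtraction-only Euclid, as in the C-family version above.
 * Caution: a first argument of zero makes the loop run forever. */
int euclid(int a, int b) {
    a = abs(a);
    b = abs(b);
    while (b != 0) {
        while (a > b)
            a = a - b;
        b = b - a;
    }
    return a;
}
```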
Will "Inelegant" perform properly when R > S, S > R, R = S? Ditto for "Elegant": B > A, A > B, A = B? (Yes to all). What happens when one number is zero, or both numbers are zero? ("Inelegant" computes forever in all cases; "Elegant" computes forever when A = 0.) What happens if negative numbers are entered? Fractional numbers? If the input numbers, i.e. the domain of the function computed by the algorithm/program, is to include only positive integers including zero, then the failures at zero indicate that the algorithm (and the program that instantiates it) is a partial function rather than a total function. A notable failure due to exceptions is the Ariane 5 Flight 501 rocket failure (June 4, 1996).

Proof of program correctness by use of mathematical induction: Knuth demonstrates the application of mathematical induction to an "extended" version of Euclid's algorithm, and he proposes "a general method applicable to proving the validity of any algorithm". Tausworthe proposes that a measure of the complexity of a program be the length of its correctness proof.

Measuring and improving the Euclid algorithms

Elegance (compactness) versus goodness (speed): With only six core instructions, "Elegant" is the clear winner, compared to "Inelegant" at thirteen instructions. However, "Inelegant" is faster (it arrives at HALT in fewer steps). Algorithm analysis indicates why this is the case: "Elegant" does two conditional tests in every subtraction loop, whereas "Inelegant" only does one. As the algorithm (usually) requires many loop-throughs, on average much time is wasted doing a "B = 0?" test that is needed only after the remainder is computed.

Can the algorithms be improved?: Once the programmer judges a program "fit" and "effective"—that is, it computes the function intended by its author—then the question becomes, can it be improved? The compactness of "Inelegant" can be improved by the elimination of five steps.
But Chaitin proved that compacting an algorithm cannot be automated by a generalized algorithm; rather, it can only be done heuristically; i.e., by exhaustive search (examples to be found at Busy beaver), trial and error, cleverness, insight, application of inductive reasoning, etc. Observe that steps 4, 5 and 6 are repeated in steps 11, 12 and 13. Comparison with "Elegant" provides a hint that these steps, together with steps 2 and 3, can be eliminated. This reduces the number of core instructions from thirteen to eight, which makes it "more elegant" than "Elegant", at nine steps. The speed of "Elegant" can be improved by moving the "B=0?" test outside of the two subtraction loops. This change calls for the addition of three instructions (B = 0?, A = 0?, GOTO). Now "Elegant" computes the example-numbers faster; whether this is always the case for any given A, B, and R, S would require a detailed analysis.

Algorithmic analysis

It is frequently important to know how much of a particular resource (such as time or storage) is theoretically required for a given algorithm. Methods have been developed for the analysis of algorithms to obtain such quantitative answers (estimates); for example, an algorithm which adds up the elements of a list of n numbers would have a time requirement of O(n), using big O notation. At all times the algorithm only needs to remember two values: the sum of all the elements so far, and its current position in the input list. Therefore, it is said to have a space requirement of O(1), if the space required to store the input numbers is not counted, or O(n) if it is counted. Different algorithms may complete the same task with a different set of instructions in less or more time, space, or 'effort' than others. For example, a binary search algorithm (with cost O(log n)) outperforms a sequential search (cost O(n)) when used for table lookups on sorted lists or arrays.
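The O(log n) versus O(n) comparison can be sketched concretely; the sample array and function names below are my own choices for illustration:

```c
/* Sequential search: examines up to n elements -- O(n). */
int search_linear(const int *a, int n, int key) {
    for (int i = 0; i < n; i++)
        if (a[i] == key) return i;
    return -1;                         /* not found */
}

/* Binary search on a sorted array: halves the range
 * at every step -- O(log n) comparisons. */
int search_binary(const int *a, int n, int key) {
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;  /* midpoint without overflow */
        if (a[mid] == key)      return mid;
        else if (a[mid] < key)  lo = mid + 1;
        else                    hi = mid - 1;
    }
    return -1;                         /* not found */
}

/* a small sorted table for demonstration */
static const int primes[6] = { 2, 3, 5, 7, 11, 13 };
```

Both functions compute the same result on a sorted table; only their cost differs, and the difference grows with n.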
Formal versus empirical

The analysis and study of algorithms is a discipline of computer science, and is often practiced abstractly without the use of a specific programming language or implementation. In this sense, algorithm analysis resembles other mathematical disciplines in that it focuses on the underlying properties of the algorithm and not on the specifics of any particular implementation. Usually pseudocode is used for analysis as it is the simplest and most general representation. However, ultimately, most algorithms are implemented on particular hardware/software platforms and their algorithmic efficiency is eventually put to the test using real code. For the solution of a "one off" problem, the efficiency of a particular algorithm may not have significant consequences (unless n is extremely large), but for algorithms designed for fast interactive, commercial or long-life scientific usage it may be critical. Scaling from small n to large n frequently exposes inefficient algorithms that are otherwise benign. Empirical testing is useful because it may uncover unexpected interactions that affect performance. Benchmarks may be used to compare an algorithm's performance before and after program optimization. Empirical tests cannot replace formal analysis, though, and are not trivial to perform in a fair manner.

Execution efficiency

To illustrate the potential improvements possible even in well-established algorithms, a recent significant innovation relating to FFT algorithms (used heavily in the field of image processing) can decrease processing time up to 1,000 times for applications like medical imaging. In general, speed improvements depend on special properties of the problem, which are very common in practical applications. Speedups of this magnitude enable computing devices that make extensive use of image processing (like digital cameras and medical equipment) to consume less power.
Classification

There are various ways to classify algorithms, each with its own merits.

By implementation

One way to classify algorithms is by implementation means.

Recursion: A recursive algorithm is one that invokes (makes reference to) itself repeatedly until a certain condition (also known as termination condition) matches, which is a method common to functional programming. Iterative algorithms use repetitive constructs like loops and sometimes additional data structures like stacks to solve the given problems. Some problems are naturally suited for one implementation or the other. For example, towers of Hanoi is well understood using recursive implementation. Every recursive version has an equivalent (but possibly more or less complex) iterative version, and vice versa.

Logical: An algorithm may be viewed as controlled logical deduction. This notion may be expressed as: Algorithm = logic + control. The logic component expresses the axioms that may be used in the computation and the control component determines the way in which deduction is applied to the axioms. This is the basis for the logic programming paradigm. In pure logic programming languages, the control component is fixed and algorithms are specified by supplying only the logic component. The appeal of this approach is the elegant semantics: a change in the axioms produces a well-defined change in the algorithm.

Serial, parallel or distributed: Algorithms are usually discussed with the assumption that computers execute one instruction of an algorithm at a time. Those computers are sometimes called serial computers. An algorithm designed for such an environment is called a serial algorithm, as opposed to parallel algorithms or distributed algorithms. Parallel algorithms take advantage of computer architectures where several processors can work on a problem at the same time, whereas distributed algorithms utilize multiple machines connected with a computer network.
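The towers of Hanoi case mentioned under recursion illustrates the recursive/iterative equivalence. Below, the number of moves needed for n disks (which is 2^n − 1) is computed once recursively and once with a loop; the function names are mine:

```c
/* Towers of Hanoi is naturally recursive: moving n disks means
 * moving n-1 disks aside, one disk across, then n-1 disks back. */
long hanoi_moves_recursive(int n) {
    if (n == 0) return 0;
    return 2 * hanoi_moves_recursive(n - 1) + 1;
}

/* The equivalent iterative version: the same recurrence, unrolled. */
long hanoi_moves_iterative(int n) {
    long moves = 0;
    for (int i = 0; i < n; i++)
        moves = 2 * moves + 1;
    return moves;
}
```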
Parallel or distributed algorithms divide the problem into more symmetrical or asymmetrical subproblems and collect the results back together. The resource consumption in such algorithms is not only processor cycles on each processor but also the communication overhead between the processors. Some sorting algorithms can be parallelized efficiently, but their communication overhead is expensive. Iterative algorithms are generally parallelizable. Some problems have no parallel algorithms and are called inherently serial problems.

Deterministic or non-deterministic: Deterministic algorithms solve the problem with exact decision at every step of the algorithm, whereas non-deterministic algorithms solve problems via guessing, although typical guesses are made more accurate through the use of heuristics.

Exact or approximate: While many algorithms reach an exact solution, approximation algorithms seek an approximation that is close to the true solution. The approximation can be reached by either using a deterministic or a random strategy. Such algorithms have practical value for many hard problems. One example of a problem often solved approximately is the knapsack problem, where there is a set of given items. The goal is to pack the knapsack to get the maximum total value. Each item has some weight and some value. The total weight that can be carried is no more than some fixed number X. So, the solution must consider the weights of items as well as their value.

Quantum algorithm: Quantum algorithms run on a realistic model of quantum computation. The term is usually used for those algorithms which seem inherently quantum, or use some essential feature of quantum computing such as quantum superposition or quantum entanglement.

By design paradigm

Another way of classifying algorithms is by their design methodology or paradigm. There are a number of paradigms, each different from the other. Furthermore, each of these categories includes many different types of algorithms.
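A non-exact strategy for the knapsack problem just described can be sketched as a greedy heuristic: take items in decreasing value-per-weight order while they still fit. The item data and function names below are illustrative assumptions, and the result is fast but not guaranteed optimal:

```c
/* Greedy approximation sketch for the 0/1 knapsack problem:
 * sort items by value-per-weight ratio (highest first), then
 * take each item that still fits. Not guaranteed optimal. */
typedef struct { int weight; int value; } Item;

int knapsack_greedy(Item items[], int n, int capacity) {
    /* insertion sort, descending by value/weight ratio;
     * cross-multiplication avoids floating point */
    for (int i = 1; i < n; i++) {
        Item key = items[i];
        int j = i - 1;
        while (j >= 0 &&
               (long)items[j].value * key.weight <
               (long)key.value * items[j].weight) {
            items[j + 1] = items[j];
            j--;
        }
        items[j + 1] = key;
    }
    int total = 0;
    for (int i = 0; i < n; i++) {
        if (items[i].weight <= capacity) {   /* take it if it fits */
            capacity -= items[i].weight;
            total += items[i].value;
        }
    }
    return total;
}

/* demo instance: weights and values are made up for illustration */
int knapsack_demo(void) {
    Item demo[] = { {2, 3}, {3, 4}, {4, 5}, {5, 8} };
    return knapsack_greedy(demo, 4, 5);   /* capacity 5 */
}
```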
Some common paradigms are:

Brute-force or exhaustive search: This is the naive method of trying every possible solution to see which is best.

Divide and conquer: A divide and conquer algorithm repeatedly reduces an instance of a problem to one or more smaller instances of the same problem (usually recursively) until the instances are small enough to solve easily. One such example of divide and conquer is merge sorting. Sorting can be done on each segment of data after dividing data into segments, and sorting of the entire data can be obtained in the conquer phase by merging the segments. A simpler variant of divide and conquer is called a decrease and conquer algorithm, which solves an identical subproblem and uses the solution of this subproblem to solve the bigger problem. Divide and conquer divides the problem into multiple subproblems, so the conquer stage is more complex than in decrease and conquer algorithms. An example of a decrease and conquer algorithm is the binary search algorithm.

Search and enumeration: Many problems (such as playing chess) can be modeled as problems on graphs. A graph exploration algorithm specifies rules for moving around a graph and is useful for such problems. This category also includes search algorithms, branch and bound enumeration and backtracking.

Randomized algorithm: Such algorithms make some choices randomly (or pseudo-randomly). They can be very useful in finding approximate solutions for problems where finding exact solutions can be impractical (see heuristic method below). For some of these problems, it is known that the fastest approximations must involve some randomness. Whether randomized algorithms with polynomial time complexity can be the fastest algorithms for some problems is an open question known as the P versus NP problem. There are two large classes of such algorithms:

Monte Carlo algorithms return a correct answer with high probability. E.g. RP is the subclass of these that run in polynomial time.
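The merge sorting example above can be sketched directly: split the data, sort each half recursively, then merge the sorted halves in the conquer phase (a minimal version; the demo function is my own):

```c
#include <stdlib.h>
#include <string.h>

/* Divide and conquer sketch: merge sort.
 * Divide: sort each half recursively.
 * Conquer: merge the two sorted halves. */
void merge_sort(int *a, int n) {
    if (n < 2) return;                 /* small enough: already sorted */
    int mid = n / 2;
    merge_sort(a, mid);
    merge_sort(a + mid, n - mid);
    int *tmp = malloc(n * sizeof(int));
    int i = 0, j = mid, k = 0;
    while (i < mid && j < n)           /* merge the two runs */
        tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
    while (i < mid) tmp[k++] = a[i++];
    while (j < n)   tmp[k++] = a[j++];
    memcpy(a, tmp, n * sizeof(int));
    free(tmp);
}

/* Returns 1 if a small sample array sorts correctly. */
int merge_sort_demo(void) {
    int v[] = { 5, 1, 4, 2, 3 };
    merge_sort(v, 5);
    for (int i = 0; i < 5; i++)
        if (v[i] != i + 1) return 0;
    return 1;
}
```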
Las Vegas algorithms always return the correct answer, but their running time is only probabilistically bound, e.g. ZPP.

Reduction of complexity: This technique involves solving a difficult problem by transforming it into a better-known problem for which we have (hopefully) asymptotically optimal algorithms. The goal is to find a reducing algorithm whose complexity is not dominated by the resulting reduced algorithm's. For example, one selection algorithm for finding the median in an unsorted list involves first sorting the list (the expensive portion) and then pulling out the middle element in the sorted list (the cheap portion). This technique is also known as transform and conquer.

Backtracking: In this approach, multiple solutions are built incrementally and abandoned when it is determined that they cannot lead to a valid full solution.

Optimization problems

For optimization problems there is a more specific classification of algorithms; an algorithm for such problems may fall into one or more of the general categories described above as well as into one of the following:

Linear programming: When searching for optimal solutions to a linear function bound to linear equality and inequality constraints, the constraints of the problem can be used directly in producing the optimal solutions. There are algorithms that can solve any problem in this category, such as the popular simplex algorithm. Problems that can be solved with linear programming include the maximum flow problem for directed graphs. If a problem additionally requires that one or more of the unknowns must be an integer, then it is classified in integer programming. A linear programming algorithm can solve such a problem if it can be proved that all restrictions for integer values are superficial, i.e., the solutions satisfy these restrictions anyway. In the general case, a specialized algorithm or an algorithm that finds approximate solutions is used, depending on the difficulty of the problem.
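The transform and conquer example above, finding a median by sorting first, can be sketched as follows (n is assumed odd for simplicity, and qsort performs the expensive transforming step; the demo data are my own):

```c
#include <stdlib.h>

/* "Transform and conquer" sketch: find the median by first
 * sorting the list (the expensive portion), then pulling out
 * the middle element (the cheap portion). */
static int cmp_int(const void *p, const void *q) {
    int a = *(const int *)p, b = *(const int *)q;
    return (a > b) - (a < b);   /* negative, zero, or positive */
}

int median(int *a, int n) {     /* n assumed odd; a is modified */
    qsort(a, n, sizeof(int), cmp_int);
    return a[n / 2];
}

/* demo: median of an unsorted sample */
int median_demo(void) {
    int v[] = { 9, 1, 7, 3, 5 };
    return median(v, 5);
}
```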
Dynamic programming
When a problem shows optimal substructure (the optimal solution to the problem can be constructed from optimal solutions to subproblems) and overlapping subproblems (the same subproblems are used to solve many different problem instances), a quicker approach called dynamic programming avoids recomputing solutions that have already been computed. For example, in the Floyd–Warshall algorithm, the shortest path to a goal from a vertex in a weighted graph can be found by using the shortest path to the goal from all adjacent vertices. Dynamic programming and memoization go together. The main difference between dynamic programming and divide and conquer is that subproblems are more or less independent in divide and conquer, whereas subproblems overlap in dynamic programming. The difference between dynamic programming and straightforward recursion is the caching, or memoization, of recursive calls. When subproblems are independent and there is no repetition, memoization does not help; hence dynamic programming is not a solution for all complex problems. By using memoization, or maintaining a table of subproblems already solved, dynamic programming reduces the exponential nature of many problems to polynomial complexity.

The greedy method
A greedy algorithm is similar to a dynamic programming algorithm in that it works by examining substructures, in this case not of the problem but of a given solution. Such algorithms start with some solution, which may be given or have been constructed in some way, and improve it by making small modifications. For some problems they can find the optimal solution, while for others they stop at local optima, that is, at solutions that cannot be improved by the algorithm but are not optimal. The most popular use of greedy algorithms is finding the minimum spanning tree, where finding the optimal solution is possible with this method.
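A minimal Python sketch of both ideas just described (the functions and sample graph are illustrative, not from the article): memoization turns the exponential naive Fibonacci recursion into a linear-time computation, and Kruskal's greedy algorithm builds a minimum spanning tree by repeatedly taking the cheapest edge that joins two separate components.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    """Dynamic programming via memoization: each overlapping subproblem
    is solved once and cached instead of being recomputed."""
    return n if n < 2 else fib(n - 1) + fib(n - 2)

def kruskal(n, edges):
    """Greedy method: consider edges cheapest-first and keep each edge
    that does not close a cycle, yielding a minimum spanning tree of an
    n-vertex graph. Cycle detection uses a simple union-find structure."""
    parent = list(range(n))

    def find(v):                        # root of v's component
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v

    tree = []
    for w, u, v in sorted(edges):       # greedy choice: cheapest edge first
        ru, rv = find(u), find(v)
        if ru != rv:                    # edge joins two components: keep it
            parent[ru] = rv
            tree.append((w, u, v))
    return tree

print(fib(90))  # 2880067194370816120; the uncached recursion could not finish
edges = [(4, 0, 1), (1, 1, 2), (3, 0, 2), (2, 2, 3)]  # (weight, u, v)
print(sum(w for w, _, _ in kruskal(4, edges)))  # total MST weight: 6
```

For the minimum spanning tree, the locally greedy choice is provably globally optimal; for most other problems a greedy strategy only guarantees a local optimum.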
Huffman's, Kruskal's, Prim's, and Sollin's algorithms are greedy algorithms that can solve this optimization problem.

The heuristic method
In optimization problems, heuristic algorithms can be used to find a solution close to the optimal solution in cases where finding the optimal solution is impractical. These algorithms work by getting closer and closer to the optimal solution as they progress. In principle, if run for an infinite amount of time, they will find the optimal solution. Their merit is that they can find a solution very close to the optimal solution in a relatively short time. Such algorithms include local search, tabu search, simulated annealing, and genetic algorithms. Some of them, like simulated annealing, are non-deterministic algorithms, while others, like tabu search, are deterministic. When a bound on the error of the non-optimal solution is known, the algorithm is further categorized as an approximation algorithm.

By field of study
Every field of science has its own problems and needs efficient algorithms. Related problems in one field are often studied together. Some example classes are search algorithms, sorting algorithms, merge algorithms, numerical algorithms, graph algorithms, string algorithms, computational geometric algorithms, combinatorial algorithms, medical algorithms, machine learning, cryptography, data compression algorithms, and parsing techniques. Fields tend to overlap with each other, and algorithmic advances in one field may improve those of other, sometimes completely unrelated, fields. For example, dynamic programming was invented for optimization of resource consumption in industry but is now used in solving a broad range of problems in many fields.

By complexity
Algorithms can be classified by the amount of time they need to complete compared to their input size:
Constant time: if the time needed by the algorithm is the same regardless of the input size. E.g. an access to an array element.
Logarithmic time: if the time is a logarithmic function of the input size. E.g. the binary search algorithm.
Linear time: if the time is proportional to the input size. E.g. traversing a list.
Polynomial time: if the time is a power of the input size. E.g. the bubble sort algorithm has quadratic time complexity.
Exponential time: if the time is an exponential function of the input size. E.g. brute-force search.

Some problems may have multiple algorithms of differing complexity, while other problems might have no algorithms or no known efficient algorithms. There are also mappings from some problems to other problems. For this reason, it has proved more suitable to classify the problems themselves, rather than the algorithms, into equivalence classes based on the complexity of the best possible algorithms for them.

Continuous algorithms
The adjective "continuous" when applied to the word "algorithm" can mean:
An algorithm operating on data that represents continuous quantities, even though this data is represented by discrete approximations (such algorithms are studied in numerical analysis); or
An algorithm in the form of a differential equation that operates continuously on the data, running on an analog computer.

Legal issues
Algorithms, by themselves, are not usually patentable. In the United States, a claim consisting solely of simple manipulations of abstract concepts, numbers, or signals does not constitute a "process" (USPTO 2006), and hence algorithms are not patentable (as in Gottschalk v. Benson). However, practical applications of algorithms are sometimes patentable. For example, in Diamond v. Diehr, the application of a simple feedback algorithm to aid in the curing of synthetic rubber was deemed patentable. The patenting of software is highly controversial, and there are highly criticized patents involving algorithms, especially data compression algorithms, such as Unisys' LZW patent.
Additionally, some cryptographic algorithms have export restrictions (see export of cryptography).

History: Development of the notion of "algorithm"

Ancient Near East
The earliest evidence of algorithms is found in the Babylonian mathematics of ancient Mesopotamia (modern Iraq). A Sumerian clay tablet found in Shuruppak near Baghdad and dated to circa 2500 BC described the earliest division algorithm. During the Hammurabi dynasty circa 1800–1600 BC, Babylonian clay tablets described algorithms for computing formulas. Algorithms were also used in Babylonian astronomy. Babylonian clay tablets describe and employ algorithmic procedures to compute the time and place of significant astronomical events.

Algorithms for arithmetic are also found in ancient Egyptian mathematics, dating back to the Rhind Mathematical Papyrus circa 1550 BC. Algorithms were later used in ancient Hellenistic mathematics. Two examples are the Sieve of Eratosthenes, which was described in the Introduction to Arithmetic by Nicomachus, and the Euclidean algorithm, which was first described in Euclid's Elements (c. 300 BC).

Discrete and distinguishable symbols
Tally-marks: To keep track of their flocks, their sacks of grain and their money the ancients used tallying: accumulating stones or marks scratched on sticks or making discrete symbols in clay. Through the Babylonian and Egyptian use of marks and symbols, eventually Roman numerals and the abacus evolved (Dilson, p. 16–41). Tally marks appear prominently in unary numeral system arithmetic used in Turing machine and Post–Turing machine computations.

Manipulation of symbols as "place holders" for numbers: algebra
Muhammad ibn Mūsā al-Khwārizmī, a Persian mathematician, wrote the Al-jabr in the 9th century. The terms "algorism" and "algorithm" are derived from the name al-Khwārizmī, while the term "algebra" is derived from the book Al-jabr.
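The Euclidean algorithm mentioned earlier in this section remains a standard first example; a minimal modern rendering in Python (an illustrative sketch, not part of the original article):

```python
def gcd(a, b):
    """Euclidean algorithm (Euclid's Elements, c. 300 BC): repeatedly
    replace the pair (a, b) by (b, a mod b) until the remainder is zero;
    the last nonzero value is the greatest common divisor."""
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(252, 105))  # 21
```

Euclid's original formulation used repeated subtraction rather than the remainder operation, but the two are equivalent since taking a remainder is just repeated subtraction done at once.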
In Europe, the word "algorithm" was originally used to refer to the sets of rules and techniques used by Al-Khwarizmi to solve algebraic equations, before later being generalized to refer to any set of rules or techniques. This eventually culminated in Leibniz's notion of the calculus ratiocinator (ca 1680).

Cryptographic algorithms
The first cryptographic algorithm for deciphering encrypted code was developed by Al-Kindi, a 9th-century Arab mathematician, in A Manuscript On Deciphering Cryptographic Messages. He gave the first description of cryptanalysis by frequency analysis, the earliest codebreaking algorithm.

Mechanical contrivances with discrete states
The clock: Bolter credits the invention of the weight-driven clock as "The key invention [of Europe in the Middle Ages]", in particular, the verge escapement that provides us with the tick and tock of a mechanical clock. "The accurate automatic machine" led immediately to "mechanical automata" beginning in the 13th century and finally to "computational machines": the difference engine and analytical engines of Charles Babbage and Countess Ada Lovelace, mid-19th century. Lovelace is credited with the first creation of an algorithm intended for processing on a computer (Babbage's analytical engine, the first device considered a real Turing-complete computer instead of just a calculator) and is sometimes called "history's first programmer" as a result, though a full implementation of Babbage's second device would not be realized until decades after her lifetime.

Logical machines
1870 – Stanley Jevons' "logical abacus" and "logical machine": The technical problem was to reduce Boolean equations when presented in a form similar to what is now known as Karnaugh maps. Jevons (1880) first describes a simple "abacus" of "slips of wood furnished with pins, contrived so that any part or class of the [logical] combinations can be picked out mechanically ...
More recently, however, I have reduced the system to a completely mechanical form, and have thus embodied the whole of the indirect process of inference in what may be called a Logical Machine" His machine came equipped with "certain moveable wooden rods" and "at the foot are 21 keys like those of a piano [etc.] ...". With this machine he could analyze a "syllogism or any other simple logical argument". This machine he displayed in 1870 before the Fellows of the Royal Society. Another logician John Venn, however, in his 1881 Symbolic Logic, turned a jaundiced eye to this effort: "I have no high estimate myself of the interest or importance of what are sometimes called logical machines ... it does not seem to me that any contrivances at present known or likely to be discovered really deserve the name of logical machines"; see more at Algorithm characterizations. But not to be outdone he too presented "a plan somewhat analogous, I apprehend, to Prof. Jevon's abacus ... [And] [a]gain, corresponding to Prof. Jevons's logical machine, the following contrivance may be described. I prefer to call it merely a logical-diagram machine ... but I suppose that it could do very completely all that can be rationally expected of any logical machine". Jacquard loom, Hollerith punch cards, telegraphy and telephony – the electromechanical relay: Bell and Newell (1971) indicate that the Jacquard loom (1801), precursor to Hollerith cards (punch cards, 1887), and "telephone switching technologies" were the roots of a tree leading to the development of the first computers. By the mid-19th century the telegraph, the precursor of the telephone, was in use throughout the world, its discrete and distinguishable encoding of letters as "dots and dashes" a common sound. By the late 19th century the ticker tape (ca 1870s) was in use, as was the use of Hollerith cards in the 1890 U.S. census. Then came the teleprinter (ca. 1910) with its punched-paper use of Baudot code on tape. 
Telephone-switching networks of electromechanical relays (invented 1835) were behind the work of George Stibitz (1937), the inventor of the digital adding device. As he worked in Bell Laboratories, he observed the "burdensome" use of mechanical calculators with gears. "He went home one evening in 1937 intending to test his idea... When the tinkering was over, Stibitz had constructed a binary adding device". Davis (2000) observes the particular importance of the electromechanical relay (with its two "binary states" open and closed): "It was only with the development, beginning in the 1930s, of electromechanical calculators using electrical relays, that machines were built having the scope Babbage had envisioned."

Mathematics during the 19th century up to the mid-20th century
Symbols and rules: In rapid succession, the mathematics of George Boole (1847, 1854), Gottlob Frege (1879), and Giuseppe Peano (1888–1889) reduced arithmetic to a sequence of symbols manipulated by rules. Peano's The principles of arithmetic, presented by a new method (1888) was "the first attempt at an axiomatization of mathematics in a symbolic language". But Heijenoort gives Frege (1879) this kudos: Frege's is "perhaps the most important single work ever written in logic. ... in which we see a 'formula language', that is a lingua characterica, a language written with special symbols, "for pure thought", that is, free from rhetorical embellishments ... constructed from specific symbols that are manipulated according to definite rules". The work of Frege was further simplified and amplified by Alfred North Whitehead and Bertrand Russell in their Principia Mathematica (1910–1913).

The paradoxes: At the same time a number of disturbing paradoxes appeared in the literature, in particular, the Burali-Forti paradox (1897), the Russell paradox (1902–03), and the Richard paradox.
The resultant considerations led to Kurt Gödel's paper (1931), in which he specifically cites the paradox of the liar, a paper that completely reduces rules of recursion to numbers.

Effective calculability: In an effort to solve the Entscheidungsproblem, defined precisely by Hilbert in 1928, mathematicians first set about to define what was meant by an "effective method" or "effective calculation" or "effective calculability" (i.e., a calculation that would succeed). In rapid succession the following appeared:
Alonzo Church, Stephen Kleene and J.B. Rosser's λ-calculus;
a finely honed definition of "general recursion" from the work of Gödel acting on suggestions of Jacques Herbrand (cf. Gödel's Princeton lectures of 1934) and subsequent simplifications by Kleene;
Church's proof that the Entscheidungsproblem was unsolvable;
Emil Post's definition of effective calculability as a worker mindlessly following a list of instructions to move left or right through a sequence of rooms and while there either mark or erase a paper or observe the paper and make a yes-no decision about the next instruction;
Alan Turing's proof that the Entscheidungsproblem was unsolvable by use of his "a- [automatic-] machine", in effect almost identical to Post's "formulation";
J. Barkley Rosser's definition of "effective method" in terms of "a machine";
Kleene's proposal of a precursor to the "Church thesis" that he called "Thesis I"; and, a few years later, Kleene's renaming of his thesis "Church's Thesis" and proposing "Turing's Thesis".

Emil Post (1936) and Alan Turing (1936–37, 1939)
Emil Post (1936) described the actions of a "computer" (human being) as follows: "...two concepts are involved: that of a symbol space in which the work leading from problem to answer is to be carried out, and a fixed unalterable set of directions. His symbol space would be "a two-way infinite sequence of spaces or boxes...
The problem solver or worker is to move and work in this symbol space, being capable of being in, and operating in but one box at a time.... a box is to admit of but two possible conditions, i.e., being empty or unmarked, and having a single mark in it, say a vertical stroke. "One box is to be singled out and called the starting point. ...a specific problem is to be given in symbolic form by a finite number of boxes [i.e., INPUT] being marked with a stroke. Likewise, the answer [i.e., OUTPUT] is to be given in symbolic form by such a configuration of marked boxes... "A set of directions applicable to a general problem sets up a deterministic process when applied to each specific problem. This process terminates only when it comes to the direction of type (C ) [i.e., STOP]". See more at Post–Turing machine Alan Turing's work preceded that of Stibitz (1937); it is unknown whether Stibitz knew of the work of Turing. Turing's biographer believed that Turing's use of a typewriter-like model derived from a youthful interest: "Alan had dreamt of inventing typewriters as a boy; Mrs. Turing had a typewriter, and he could well have begun by asking himself what was meant by calling a typewriter 'mechanical'". Given the prevalence of Morse code and telegraphy, ticker tape machines, and teletypewriters we might conjecture that all were influences. Turing—his model of computation is now called a Turing machine—begins, as did Post, with an analysis of a human computer that he whittles down to a simple set of basic motions and "states of mind". But he continues a step further and creates a machine as a model of computation of numbers. "Computing is normally done by writing certain symbols on paper. We may suppose this paper is divided into squares like a child's arithmetic book...I assume then that the computation is carried out on one-dimensional paper, i.e., on a tape divided into squares. I shall also suppose that the number of symbols which may be printed is finite... 
"The behavior of the computer at any moment is determined by the symbols which he is observing, and his "state of mind" at that moment. We may suppose that there is a bound B to the number of symbols or squares which the computer can observe at one moment. If he wishes to observe more, he must use successive observations. We will also suppose that the number of states of mind which need be taken into account is finite...

"Let us imagine the operations performed by the computer to be split up into 'simple operations' which are so elementary that it is not easy to imagine them further divided."

Turing's reduction yields the following:

"The simple operations must therefore include:
"(a) Changes of the symbol on one of the observed squares
"(b) Changes of one of the squares observed to another square within L squares of one of the previously observed squares.

"It may be that some of these changes necessarily involve a change of state of mind. The most general single operation must, therefore, be taken to be one of the following:
"(A) A possible change (a) of symbol together with a possible change of state of mind.
"(B) A possible change (b) of observed squares, together with a possible change of state of mind"

"We may now construct a machine to do the work of this computer."

A few years later, Turing expanded his analysis (thesis, definition) with this forceful expression of it:

"A function is said to be "effectively calculable" if its values can be found by some purely mechanical process. Though it is fairly easy to get an intuitive grasp of this idea, it is nevertheless desirable to have some more definite, mathematically expressible definition ... [he discusses the history of the definition pretty much as presented above with respect to Gödel, Herbrand, Kleene, Church, Turing, and Post] ... We may take this statement literally, understanding by a purely mechanical process one which could be carried out by a machine.
It is possible to give a mathematical description, in a certain normal form, of the structures of these machines. The development of these ideas leads to the author's definition of a computable function, and to an identification of computability † with effective calculability ... .

"† We shall use the expression "computable function" to mean a function calculable by a machine, and we let "effectively calculable" refer to the intuitive idea without particular identification with any one of these definitions".

J.B. Rosser (1939) and S.C. Kleene (1943)
J. Barkley Rosser defined an 'effective [mathematical] method' in the following manner (italicization added): "'Effective method' is used here in the rather special sense of a method each step of which is precisely determined and which is certain to produce the answer in a finite number of steps. With this special meaning, three different precise definitions have been given to date. [his footnote #5; see discussion immediately below]. The simplest of these to state (due to Post and Turing) says essentially that an effective method of solving certain sets of problems exists if one can build a machine which will then solve any problem of the set with no human intervention beyond inserting the question and (later) reading the answer. All three definitions are equivalent, so it doesn't matter which one is used. Moreover, the fact that all three are equivalent is a very strong argument for the correctness of any one." (Rosser 1939:225–226)

Rosser's footnote No. 5 references the work of (1) Church and Kleene and their definition of λ-definability, in particular, Church's use of it in his An Unsolvable Problem of Elementary Number Theory (1936); (2) Herbrand and Gödel and their use of recursion, in particular, Gödel's use in his famous paper On Formally Undecidable Propositions of Principia Mathematica and Related Systems I (1931); and (3) Post (1936) and Turing (1936–37) in their mechanism-models of computation. Stephen C.
Kleene defined his now-famous "Thesis I", known as the Church–Turing thesis. But he did this in the following context (boldface in original): "12. Algorithmic theories... In setting up a complete algorithmic theory, what we do is to describe a procedure, performable for each set of values of the independent variables, which procedure necessarily terminates and in such manner that from the outcome we can read a definite answer, "yes" or "no," to the question, "is the predicate value true?"" (Kleene 1943:273)

History after 1950
A number of efforts have been directed toward further refinement of the definition of "algorithm", and activity is on-going because of issues surrounding, in particular, foundations of mathematics (especially the Church–Turing thesis) and philosophy of mind (especially arguments about artificial intelligence). For more, see Algorithm characterizations.

See also
Abstract machine
Algorithm engineering
Algorithm characterizations
Algorithmic bias
Algorithmic composition
Algorithmic entities
Algorithmic synthesis
Algorithmic technique
Algorithmic topology
Garbage in, garbage out
Introduction to Algorithms (textbook)
List of algorithms
List of algorithm general topics
List of important publications in theoretical computer science – Algorithms
Regulation of algorithms
Theory of computation
Computability theory
Computational complexity theory

Notes

Bibliography
Bell, C. Gordon and Newell, Allen (1971), Computer Structures: Readings and Examples, McGraw–Hill Book Company, New York. Includes an excellent bibliography of 56 references.
Cf. Chapter 3, Turing machines, where they discuss "certain enumerable sets not effectively (mechanically) enumerable".
Campagnolo, M.L., Moore, C., and Costa, J.F. (2000) An analog characterization of the subrecursive functions. In Proc. of the 4th Conference on Real Numbers and Computers, Odense University, pp. 91–109
Reprinted in The Undecidable, p. 89ff. The first expression of "Church's Thesis".
See in particular page 100 (The Undecidable) where he defines the notion of "effective calculability" in terms of "an algorithm", and he uses the word "terminates", etc. Reprinted in The Undecidable, p. 110ff. Church shows that the Entscheidungsproblem is unsolvable in about 3 pages of text and 3 pages of footnotes. Davis gives commentary before each article. Papers of Gödel, Alonzo Church, Turing, Rosser, Kleene, and Emil Post are included; those cited in the article are listed here by author's name. Davis offers concise biographies of Leibniz, Boole, Frege, Cantor, Hilbert, Gödel and Turing with von Neumann as the show-stealing villain. Very brief bios of Joseph-Marie Jacquard, Babbage, Ada Lovelace, Claude Shannon, Howard Aiken, etc. , Yuri Gurevich, Sequential Abstract State Machines Capture Sequential Algorithms, ACM Transactions on Computational Logic, Vol 1, no 1 (July 2000), pp. 77–111. Includes bibliography of 33 sources. , 3rd edition 1976[?], (pbk.) , . Cf. Chapter "The Spirit of Truth" for a history leading to, and a discussion of, his proof. Presented to the American Mathematical Society, September 1935. Reprinted in The Undecidable, p. 237ff. Kleene's definition of "general recursion" (known now as mu-recursion) was used by Church in his 1935 paper An Unsolvable Problem of Elementary Number Theory that proved the "decision problem" to be "undecidable" (i.e., a negative result). Reprinted in The Undecidable, p. 255ff. Kleene refined his definition of "general recursion" and proceeded in his chapter "12. Algorithmic theories" to posit "Thesis I" (p. 274); he would later repeat this thesis (in Kleene 1952:300) and name it "Church's Thesis"(Kleene 1952:317) (i.e., the Church thesis). Kosovsky, N.K. Elements of Mathematical Logic and its Application to the theory of Subrecursive Algorithms, LSU Publ., Leningrad, 1981 A.A. Markov (1954) Theory of algorithms. [Translated by Jacques J. 
Schorr-Kon and PST staff] Imprint Moscow, Academy of Sciences of the USSR, 1954 [i.e., Jerusalem, Israel Program for Scientific Translations, 1961; available from the Office of Technical Services, U.S. Dept. of Commerce, Washington] Description 444 p. 28 cm. Added t.p. in Russian Translation of Works of the Mathematical Institute, Academy of Sciences of the USSR, v. 42. Original title: Teoriya algerifmov. [QA248.M2943 Dartmouth College library. U.S. Dept. of Commerce, Office of Technical Services, number OTS .] Minsky expands his "...idea of an algorithm – an effective procedure..." in chapter 5.1 Computability, Effective Procedures and Algorithms. Infinite machines. Reprinted in The Undecidable, pp. 289ff. Post defines a simple algorithmic-like process of a man writing marks or erasing marks and going from box to box and eventually halting, as he follows a list of simple instructions. This is cited by Kleene as one source of his "Thesis I", the so-called Church–Turing thesis. Reprinted in The Undecidable, p. 223ff. Herein is Rosser's famous definition of "effective method": "...a method each step of which is precisely predetermined and which is certain to produce the answer in a finite number of steps... a machine which will then solve any problem of the set with no human intervention beyond inserting the question and (later) reading the answer" (p. 225–226, The Undecidable) Cf. in particular the first chapter titled: Algorithms, Turing Machines, and Programs. His succinct informal definition: "...any sequence of instructions that can be obeyed by a robot, is called an algorithm" (p. 4). . Corrections, ibid, vol. 43(1937) pp. 544–546. Reprinted in The Undecidable, p. 116ff. Turing's famous paper completed as a Master's dissertation while at King's College Cambridge UK. Reprinted in The Undecidable, pp. 155ff. Turing's paper that defined "the oracle" was his PhD thesis while at Princeton. 
United States Patent and Trademark Office (2006), 2106.02 Mathematical Algorithms: 2100 Patentability, Manual of Patent Examining Procedure (MPEP). Latest revision August 2006

Further reading
Knuth, Donald E. (2000). Selected Papers on Analysis of Algorithms. Stanford, California: Center for the Study of Language and Information.
Knuth, Donald E. (2010). Selected Papers on Design of Algorithms. Stanford, California: Center for the Study of Language and Information.

External links
Dictionary of Algorithms and Data Structures – National Institute of Standards and Technology

Algorithm repositories
The Stony Brook Algorithm Repository – State University of New York at Stony Brook
Collected Algorithms of the ACM – Association for Computing Machinery
The Stanford GraphBase – Stanford University

Articles with example pseudocode
Mathematical logic
Theoretical computer science
777
https://en.wikipedia.org/wiki/Annual%20plant
Annual plant
An annual plant is a plant that completes its life cycle, from germination to the production of seeds, within one growing season, and then dies. The length of growing seasons and the periods in which they take place vary according to geographical location, and may not correspond to the four traditional seasonal divisions of the year. With respect to the traditional seasons, annual plants are generally categorized into summer annuals and winter annuals. Summer annuals germinate during spring or early summer and mature by autumn of the same year. Winter annuals germinate during the autumn and mature during the spring or summer of the following calendar year.

One seed-to-seed life cycle for an annual can occur in as little as a month in some species, though most last several months. Oilseed rapa can go from seed to seed in about five weeks under a bank of fluorescent lamps. This style of growing is often used in classrooms for education. Many desert annuals are therophytes, because their seed-to-seed life cycle is only weeks and they spend most of the year as seeds to survive dry conditions.

Cultivation
In cultivation, many food plants are, or are grown as, annuals, including virtually all domesticated grains. Some perennials and biennials are grown in gardens as annuals for convenience, particularly if they are not considered cold hardy for the local climate. Carrot, celery and parsley are true biennials that are usually grown as annual crops for their edible roots, petioles and leaves, respectively. Tomato, sweet potato and bell pepper are tender perennials usually grown as annuals. Ornamental perennials commonly grown as annuals are impatiens, mirabilis, wax begonia, snapdragon, pelargonium, coleus and petunia. Examples of true annuals include corn, wheat, rice, lettuce, peas, watermelon, beans, zinnia and marigold.

Summer
Summer annuals sprout, flower, produce seed, and die during the warmer months of the year. The lawn weed crabgrass is a summer annual.
Winter
Winter annuals germinate in autumn or winter, live through the winter, then bloom in winter or spring. The plants grow and bloom during the cool season when most other plants are dormant or other annuals are in seed form waiting for warmer weather to germinate. Winter annuals die after flowering and setting seed. The seeds germinate in the autumn or winter when the soil temperature is cool.

Winter annuals typically grow low to the ground, where they are usually sheltered from the coldest nights by snow cover, and make use of warm periods in winter for growth when the snow melts. Some common winter annuals include henbit, deadnettle, chickweed, and winter cress.

Winter annuals are important ecologically, as they provide vegetative cover that prevents soil erosion during winter and early spring when no other cover exists, and they provide fresh vegetation for animals and birds that feed on them. Although they are often considered to be weeds in gardens, this viewpoint is not always warranted, as most of them die when the soil temperature warms up again in early to late spring, while other plants are still dormant and have not yet leafed out. Even though they do not compete directly with cultivated plants, winter annuals are sometimes considered a pest in commercial agriculture, because they can be hosts for insect pests or fungal diseases (such as ovary smut, Microbotryum sp.) which attack crops being cultivated. The fact that they prevent the soil from drying out can also be problematic for commercial agriculture.

Molecular genetics
In 2008, it was discovered that the inactivation of only two genes in one species of annual plant leads to its conversion into a perennial plant. Researchers deactivated the SOC1 and FUL genes (which control flowering time) of Arabidopsis thaliana. This switch established phenotypes common in perennial plants, such as wood formation.

See also

References

External links

Garden plants
779
https://en.wikipedia.org/wiki/Anthophyta
Anthophyta
The anthophytes are a grouping of plant taxa bearing flower-like reproductive structures. They were formerly thought to be a clade comprising plants bearing flower-like structures. The group contained the angiosperms (the extant flowering plants, such as roses and grasses) as well as the Gnetales and the extinct Bennettitales. Detailed morphological and molecular studies have shown that the group is not actually monophyletic, with the proposed floral homologies of the gnetophytes and the angiosperms having evolved in parallel. This makes it easier to reconcile molecular clock data that suggests that the angiosperms diverged from the gymnosperms around 320–300 mya. Some more recent studies have used the word anthophyte to describe a group which includes the angiosperms and a variety of fossils (glossopterids, Pentoxylon, Bennettitales, and Caytonia), but not the Gnetales.
780
https://en.wikipedia.org/wiki/Atlas%20%28disambiguation%29
Atlas (disambiguation)
An atlas is a collection of maps, originally named after the Ancient Greek deity. Atlas may also refer to: Mythology Atlas (mythology), an Ancient Greek Titanic deity who held up the celestial sphere Atlas, the first legendary king of Atlantis and further variant of the mythical Titan Atlas of Mauretania, a legendary king of Mauretania and variant of the mythical Titan Places United States Atlas, California Atlas, Illinois Atlas, Texas Atlas, West Virginia Atlas, Wisconsin Atlas District, an area in Washington, D.C. Atlas Peak AVA, a California wine region Atlas Township, Michigan Other Atlas Cinema, a historic movie theatre in Istanbul, Turkey Atlas Mountains, a set of mountain ranges in northwestern Africa Atlas, Nilüfer, a village in Nilüfer district of Bursa Province, Turkey People with the name Atlas (graffiti artist), American graffiti artist Atlas DaBone, American wrestler and football player Charles Atlas (1892–1972), Italian-American bodybuilder Charles Atlas (artist) David Atlas (born 1924), American meteorologist who pioneered weather radar James Atlas (1949-2019), American writer, editor and publisher Meir Atlas (1848–1926), Lithuanian rabbi Natacha Atlas (born 1964), Belgian singer Nava Atlas, American book artist and author Omar Atlas (born 1938), former Venezuelan professional wrestler Scott Atlas (born 1955), American conservative health care policy advisor Teddy Atlas (born 1956), American boxing trainer and commentator Tony Atlas (born 1954), American wrestler and bodybuilder Arts, entertainment, and media Comics Atlas (Drawn and Quarterly), a comic book series by Dylan Horrocks Agents of Atlas, a Marvel Comics mini-series Atlas Comics (1950s), a publisher Atlas/Seaboard Comics, a 1970s line of comics Fictional characters Atlas (DC Comics), the name of several of DC Comics' fictional characters, comic book superheroes, and deities Atlas (Teen Titans), Teen Titans character Atlas, an Astro Boy character Atlas, a BioShock character Atlas, a 
BattleMech in the BattleTech universe Atlas, an antagonist in Mega Man ZX Advent Atlas, a Portal 2 character Atlas, a PS238 character Erik Josten, a.k.a. Atlas, a Marvel Comics supervillain The Atlas, a strong driving force from No Man's Sky Literature Atlas, a photography book by Gerhard Richter ATLAS of Finite Groups, a group theory book Atlas Shrugged, a novel by Ayn Rand The Atlas (novel), by William T. Vollmann Music Groups Atlas (band), a New Zealand rock band Atlas Sound, the solo musical project of Bradford Cox, lead singer and guitarist of the indie rock band Deerhunter Musicians Black Atlass, a Canadian musician Albums Atlas (Kinky album) Atlas (Parkway Drive album), Parkway Drive's fourth album Atlas (Real Estate album) Atlas (RÜFÜS album) Operas Atlas (opera), 1991 opera by Meredith Monk Atlas: An Opera in Three Parts, 1993 recording of Monk's opera Songs "Atlas" (Battles song), 2007 song by Battles on the album Mirrored "Atlas" (Coldplay song), 2013 song by Coldplay from The Hunger Games: Catching Fire soundtrack "Atlas", a song by Caligula's Horse from the album The Tide, the Thief & River's End "Atlas", the titular song from Parkway Drive's fourth album "Atlas", a song by Man Overboard from Man Overboard "Atlas", a song by Jake Chudnow used as main theme in the YouTube series Mind Field Periodicals Atlas (magazine) The Atlas, a newspaper published in England from 1826 to 1869 Other uses in arts, entertainment, and media Atlas (film) Atlas (statue), iconic statue by Lee Lawrie in Rockefeller Center Atlas, a book about flora and/or fauna of a region, such as atlases of the flora and fauna of Britain and Ireland Atlas Entertainment, a film production company Atlas folio, a book size Atlas Media Corp., a non-fiction entertainment company Atlas Press, a UK publisher RTV Atlas, a broadcaster in Montenegro Atlas Sound, a solo musical project by Bradford Cox The Atlas (video game), a 1991 multiplatform strategy video game Atlas (video game), an upcoming 
massively-multiplayer online video game Atlas Corporation, a fictional arms manufacturer in the Borderlands video game series Brands and enterprises Atlas (appliance company), a Belarusian company Atlas Consortium, a group of technology companies Atlas Copco, Swedish company founded in 1873 Atlas Corporation, an investment company Atlas Elektronik, a German naval/marine electronics and systems business Atlas Group, a Pakistani business group Atlas Mara Limited, formerly Atlas Mara Co-Nvest Limited, a financial holding company that owns banks in Africa Atlas Model Railroad, American maker of model trains and accessories Atlas Network, formerly Atlas Economic Research Foundation Atlas Press (tool company) Atlas Solutions, a subsidiary of Facebook for digital online advertising, formerly owned by Microsoft Atlas Telecom, a worldwide communications company Atlas Van Lines, a moving company Atlas-Imperial, an American diesel engine manufacturer Dresser Atlas, a provider of oilfield and factory automation services Tele Atlas, a Dutch mapping company Western Atlas, an oilfield services company Computing and technology Atlas (computer), an early supercomputer, built in the 1960s Atlas (robot), a humanoid robot developed by Boston Dynamics and DARPA ATLAS (software), software that flags naturalized Americans for denaturalization Atlas, a computer used at the Lawrence Livermore National Laboratory in 2006 Abbreviated Test Language for All Systems, or ATLAS, a MILSPEC language for avionics equipment testing Advanced Technology Leisure Application Simulator, or ATLAS, a hydraulic motion simulator used in theme parks ASP.NET AJAX (formerly "Atlas"), a set of ASP.NET extensions ATLAS Transformation Language, a programming language Atlas.ti, a qualitative analysis program Automatically Tuned Linear Algebra Software, or ATLAS, Texture atlas, or image sprite sheet UNIVAC 1101, an early American computer, built in the 1950s Science Astronomy Atlas (comet) (C/2019 Y4) Atlas
(crater) on the near side of the Moon Atlas (moon), a satellite of Saturn Atlas (star), also designated 27 Tauri, a triple star system in the constellation of Taurus and a member of the Pleiades Advanced Technology Large-Aperture Space Telescope (ATLAST) Advanced Topographic Laser Altimeter System (ATLAS), a space-based lidar instrument on ICESat-2 Asteroid Terrestrial-impact Last Alert System (ATLAS) Mathematics Atlas (manifolds), a set of smooth charts Atlas (topology), a set of charts Smooth atlas Physics Argonne Tandem Linear Accelerator System, or ATLAS, a linear accelerator at the Argonne National Laboratory ATLAS experiment, a particle detector for the Large Hadron Collider at CERN Atomic-terrace low-angle shadowing, or ATLAS, a nanofabrication technique Biology and healthcare Atlas (anatomy), part of the spine Atlas personality, a term used in psychology to describe the personality of someone whose childhood was characterized by excessive responsibilities Brain atlas, a neuroanatomical map of the brain of a human or other animal Animals and plants Atlas bear Atlas beetle Atlas cedar Atlas moth Atlas pied flycatcher, a bird Atlas turtle Sport Atlas Delmenhorst, a German association football club Atlas F.C., a Mexican professional football club Club Atlético Atlas, an Argentine amateur football club KK Atlas, a former men's professional basketball club based in Belgrade (today's Serbia) Transport Aerospace Atlas (rocket family) SM-65 Atlas intercontinental ballistic missile (ICBM) AeroVelo Atlas, a human-powered helicopter Airbus A400M Atlas, a military aircraft produced 2007–present Armstrong Whitworth Atlas, a British military aeroplane produced 1927–1933 Atlas Air, an American cargo airline Atlas Aircraft, a 1940s aircraft manufacturer Atlas Aircraft Corporation, a South African military aircraft manufacturer Atlas Aviation, an aircraft maintenance firm Atlas Blue, a Moroccan low-cost airline Atlasjet, a Turkish airline Birdman Atlas, an ultralight 
aircraft HMLAT-303, U.S. Marine Corps helicopter training squadron La Mouette Atlas, a French hang glider design Automotive Atlas (1951 automobile), a French mini-car Atlas (light trucks), a Greek motor vehicle manufacturer Atlas (Pittsburgh automobile), produced 1906–1907 Atlas (Springfield automobile), produced 1907–1913 Atlas, a British van by the Standard Motor Company produced 1958–1962 Atlas Drop Forge Company, a parts subsidiary of REO Motor Car Company Atlas Motor Buggy, an American highwheeler produced in 1909 General Motors Atlas engine Honda Atlas Cars Pakistan, a Pakistani car manufacturer Nissan Atlas, a Japanese light truck Volkswagen Atlas, a sport utility vehicle Geely Atlas, a sport utility vehicle Ships and boats Atlas Werke, a former German shipbuilding company , the name of several Royal Navy ships ST Atlas, a Swedish tugboat , the name of several U.S. Navy ships Trains Atlas, an 1863–1885 South Devon Railway Dido class locomotive Atlas, a 1927–1962 LMS Royal Scot Class locomotive Atlas Car and Manufacturing Company, a locomotive manufacturer Atlas Model Railroad Other uses Atlas (architecture) ATLAS (simulation) (Army Tactical Level Advanced Simulation), a Thai military system Atlas (storm), which hit the Midwestern United States in October 2013, named by The Weather Channel Agrupación de Trabajadores Latinoamericanos Sindicalistas, or ATLAS, a former Latin American trade union confederation in the early 1950s Atlas languages, Berber languages spoken in the Atlas Mountains of Morocco ATLAS Network, a network of European special police units Atlas Uranium Mill See also Altas (disambiguation)
782
https://en.wikipedia.org/wiki/Mouthwash
Mouthwash
Mouthwash, mouth rinse, oral rinse, or mouth bath is a liquid which is held in the mouth passively or swilled around the mouth by contraction of the perioral muscles and/or movement of the head, and may be gargled, where the head is tilted back and the liquid bubbled at the back of the mouth. Usually mouthwashes are antiseptic solutions intended to reduce the microbial load in the mouth, although other mouthwashes might be given for other reasons such as for their analgesic, anti-inflammatory or anti-fungal action. Additionally, some rinses act as saliva substitutes to neutralize acid and keep the mouth moist in xerostomia (dry mouth). Cosmetic mouthrinses temporarily control or reduce bad breath and leave the mouth with a pleasant taste. Rinsing with water or mouthwash after brushing with a fluoride toothpaste can reduce the availability of salivary fluoride. This can lower the anti-cavity re-mineralization and antibacterial effects of fluoride. Fluoridated mouthwash may mitigate this effect or in high concentrations increase available fluoride, but is not as cost effective as leaving the fluoride toothpaste on the teeth after brushing. A group of experts discussing post brushing rinsing in 2012 found that although there was clear guidance given in many public health advice publications to "spit, avoid rinsing with water/excessive rinsing with water" they believed there was a limited evidence base for best practice. Use Common use involves rinsing the mouth with about 20–50 ml (0.7–1.7 fl oz) of mouthwash. The wash is typically swished or gargled for about half a minute and then spat out. Most companies suggest not drinking water immediately after using mouthwash. In some brands, the expectorate is stained, so that one can see the bacteria and debris. Mouthwash should not be used immediately after brushing the teeth so as not to wash away the beneficial fluoride residue left from the toothpaste. Similarly, the mouth should not be rinsed out with water after brushing.
Patients were told to "spit don't rinse" after toothbrushing as part of a National Health Service campaign in the UK. A fluoride mouthrinse can be used at a different time of the day to brushing. Gargling is where the head is tilted back, allowing the mouthwash to sit in the back of the mouth while exhaling, causing the liquid to bubble. Gargling is practiced in Japan for perceived prevention of viral infection. One common method uses herbal infusions or tea. In some cultures, gargling is usually done in private, typically in a bathroom at a sink so the liquid can be rinsed away. Effects The most commonly used mouthwashes are commercial antiseptics, which are used at home as part of an oral hygiene routine. Mouthwashes combine ingredients to treat a variety of oral conditions. Variations are common, and mouthwash has no standard formulation, so its use and recommendation involve concerns about patient safety. Some manufacturers of mouthwash state that their antiseptic and antiplaque mouthwashes kill the bacterial plaque that causes cavities, gingivitis, and bad breath. It is, however, generally agreed that the use of mouthwash does not eliminate the need for both brushing and flossing. The American Dental Association asserts that regular brushing and proper flossing are enough in most cases, in addition to regular dental check-ups, although they approve many mouthwashes. For many patients, however, the mechanical methods could be tedious and time-consuming, and, additionally, some local conditions may render them especially difficult. Chemotherapeutic agents, including mouthwashes, could have a key role as adjuncts to daily home care, preventing and controlling supragingival plaque, gingivitis and oral malodor. Minor and transient side effects of mouthwashes are very common, such as taste disturbance, tooth staining, sensation of a dry mouth, etc. Alcohol-containing mouthwashes may make dry mouth and halitosis worse, as they dry out the mouth.
Soreness, ulceration and redness may sometimes occur (e.g., aphthous stomatitis or allergic contact stomatitis) if the person is allergic or sensitive to mouthwash ingredients, such as preservatives, coloring, flavors and fragrances. Such effects might be reduced or eliminated by diluting the mouthwash with water, using a different mouthwash (e.g. saltwater), or foregoing mouthwash entirely. Prescription mouthwashes are used prior to and after oral surgery procedures, such as tooth extraction, or to treat the pain associated with mucositis caused by radiation therapy or chemotherapy. They are also prescribed for aphthous ulcers, other oral ulcers, and other mouth pain. "Magic mouthwashes" are prescription mouthwashes compounded in a pharmacy from a list of ingredients specified by a doctor. Despite a lack of evidence that prescription mouthwashes are more effective in decreasing the pain of oral lesions, many patients and prescribers continue to use them. There has been only one controlled study to evaluate the efficacy of magic mouthwash; it shows no difference in efficacy between the most common magic-mouthwash formulation, on the one hand, and commercial mouthwashes (such as chlorhexidine) or a saline/baking soda solution, on the other. Current guidelines suggest that saline solution is just as effective as magic mouthwash in pain relief and in shortening the healing time of oral mucositis from cancer therapies. History The first known references to mouth rinsing are in Ayurveda, for the treatment of gingivitis. Later, in the Greek and Roman periods, mouth rinsing following mechanical cleansing became common among the upper classes, and Hippocrates recommended a mixture of salt, alum, and vinegar. The Jewish Talmud, dating back about 1,800 years, suggests a cure for gum ailments containing "dough water" and olive oil. Before Europeans came to the Americas, Native North American and Mesoamerican cultures used mouthwashes, often made from plants such as Coptis trifolia.
Indeed, Aztec dentistry was more advanced than European dentistry of the age. Peoples of the Americas used salt water mouthwashes for sore throats, and other mouthwashes for problems such as teething and mouth ulcers. Anton van Leeuwenhoek, the famous 17th century microscopist, discovered living organisms (living, because they were mobile) in deposits on the teeth (what we now call dental plaque). He also found organisms in water from the canal next to his home in Delft. He experimented with samples by adding vinegar or brandy and found that this resulted in the immediate immobilization or killing of the organisms suspended in water. Next he rinsed his own mouth, and that of somebody else, with a mouthwash containing vinegar or brandy, and found that living organisms remained in the dental plaque. He concluded—correctly—that the mouthwash either did not reach, or was not present long enough, to kill the plaque organisms. In 1892, the German Richard Seifert invented the mouthwash product Odol, which was produced by company founder Karl August Lingner (1861–1916) in Dresden. That remained the state of affairs until the late 1960s when Harald Loe (at the time a professor at the Royal Dental College in Aarhus, Denmark) demonstrated that a chlorhexidine compound could prevent the build-up of dental plaque. The reason for chlorhexidine's effectiveness is that it strongly adheres to surfaces in the mouth and thus remains present in effective concentrations for many hours. Since then commercial interest in mouthwashes has been intense and several newer products claim effectiveness in reducing the build-up of dental plaque and the associated severity of gingivitis, in addition to fighting bad breath. Many of these solutions aim to control the Volatile Sulfur Compound (VSC)-creating anaerobic bacteria that live in the mouth and excrete substances that lead to bad breath and unpleasant mouth taste.
For example, the number of mouthwash variants in the United States of America has grown from 15 (1970) to 66 (1998) to 113 (2012). Research Research in the field of microbiotas shows that only a limited set of microbes cause tooth decay, with most of the bacteria in the human mouth being harmless. Focused attention on cavity-causing bacteria such as Streptococcus mutans has led to research into new mouthwash treatments that prevent these bacteria from initially growing. While current mouthwash treatments must be used with a degree of frequency to prevent these bacteria from regrowing, future treatments could provide a viable long-term solution. Ingredients Alcohol Alcohol is added to mouthwash not to destroy bacteria but to act as a carrier agent for essential active ingredients such as menthol, eucalyptol and thymol, which help to penetrate plaque. Sometimes a significant amount of alcohol (up to 27% vol) is added, as a carrier for the flavor, to provide "bite". Because of the alcohol content, it is possible to fail a breathalyzer test after rinsing, although breath alcohol levels return to normal after 10 minutes. In addition, alcohol is a drying agent, which encourages bacterial activity in the mouth, releasing more malodorous volatile sulfur compounds. Therefore, alcohol-containing mouthwash may temporarily worsen halitosis in those who already have it, or, indeed, be the sole cause of halitosis in other individuals. It is hypothesized that alcohol in mouthwashes acts as a carcinogen (cancer-inducing agent), though there is no general scientific consensus about this. Researchers in one review state that the risk of acquiring oral cancer rises almost five times for users of alcohol-containing mouthwash who neither smoke nor drink (with a higher rate of increase for those who do). In addition, the authors highlight side effects from several mainstream mouthwashes that included dental erosion and accidental poisoning of children.
The review garnered media attention and conflicting opinions from other researchers. Yinka Ebo of Cancer Research UK disputed the findings, concluding that "there is still not enough evidence to suggest that using mouthwash that contains alcohol will increase the risk of mouth cancer". Studies conducted in 1985, 1995, 2003, and 2012 did not support an association between alcohol-containing mouth rinses and oral cancer. Andrew Penman, chief executive of The Cancer Council New South Wales, called for further research on the matter. In a March 2009 brief, the American Dental Association said "the available evidence does not support a connection between oral cancer and alcohol-containing mouthrinse". Many newer brands of mouthwash are alcohol free, not just in response to consumer concerns about oral cancer, but also to cater for religious groups who abstain from alcohol consumption. Benzydamine (analgesic) In painful oral conditions such as aphthous stomatitis, analgesic mouthrinses (e.g. benzydamine mouthwash, or "Difflam") are sometimes used to ease pain, commonly used before meals to reduce discomfort while eating. Benzoic acid Benzoic acid acts as a buffer. Betamethasone Betamethasone is sometimes used as an anti-inflammatory, corticosteroid mouthwash. It may be used for severe inflammatory conditions of the oral mucosa such as the severe forms of aphthous stomatitis. Cetylpyridinium chloride (antiseptic, antimalodor) Cetylpyridinium chloride containing mouthwash (e.g. 0.05%) is used in some specialized mouthwashes for halitosis. Cetylpyridinium chloride mouthwash has less anti-plaque effect than chlorhexidine and may cause staining of teeth, or sometimes an oral burning sensation or ulceration. Chlorhexidine digluconate and hexetidine (antiseptic) Chlorhexidine digluconate is a chemical antiseptic and is used in a 0.12–0.2% solution as a mouthwash. 
However, there is no evidence to support that higher concentrations are more effective in controlling dental plaque and gingivitis. It has anti-plaque action, but also some anti-fungal action. It is especially effective against Gram-negative rods. The proportion of Gram-negative rods increases as gingivitis develops, so it is also used to reduce gingivitis. It is sometimes used as an adjunct to prevent dental caries and to treat gingivitis and periodontal disease, although it does not penetrate into periodontal pockets well. Chlorhexidine mouthwash alone is unable to prevent plaque, so it is not a substitute for regular toothbrushing and flossing. Instead, chlorhexidine mouthwash is more effective when used as an adjunctive treatment with toothbrushing and flossing. In the short term, if toothbrushing is impossible due to pain, as may occur in primary herpetic gingivostomatitis, chlorhexidine mouthwash is used as a temporary substitute for other oral hygiene measures. It is not suited for use in acute necrotizing ulcerative gingivitis, however. Rinsing with chlorhexidine mouthwash before a tooth extraction reduces the risk of a dry socket, a painful condition where the blood clot is lost from an extraction socket and bone is exposed to the oral cavity. Other uses of chlorhexidine mouthwash include prevention of oral candidiasis in immunocompromised persons, treatment of denture-related stomatitis, mucosal ulceration/erosions and oral mucosal lesions, general burning sensation and many other uses. Chlorhexidine has good substantivity (the ability of a mouthwash to bind to hard and soft tissues in the mouth). However, chlorhexidine binds to tannins, meaning that prolonged use in persons who consume coffee, tea or red wine is associated with extrinsic staining (i.e. removable staining) of teeth. Chlorhexidine mouthwash can also cause taste disturbance or alteration.
Chlorhexidine is rarely associated with other issues like overgrowth of enterobacteria in persons with leukemia, desquamation and irritation of oral mucosa, salivary gland pain and swelling, and hypersensitivity reactions including anaphylaxis. A randomized clinical trial conducted at Rabat University in Morocco found better results in plaque inhibition when chlorhexidine with alcohol base 0.12% was used, compared to an alcohol-free 0.1% chlorhexidine mouthrinse. Chlorhexidine mouthwashes increase staining of teeth over a period of time. However, many publications, including a recent systematic review (van Swaaij 2020), have shown that an AntiDiscoloration System (ADS) based on L-ascorbic acid and sodium metabisulfite is able to reduce tooth staining without affecting the antibacterial effect of chlorhexidine. Hexetidine also has anti-plaque, analgesic, astringent and anti-malodor properties, but is considered an inferior alternative to chlorhexidine. Edible oils In traditional Ayurvedic medicine, the use of oil mouthwashes is called "Kavala" ("oil swishing") or "Gandusha", and this practice has more recently been re-marketed by the complementary and alternative medicine industry as "oil pulling". Its promoters claim it works by "pulling out" "toxins", which are known as ama in Ayurvedic medicine, and thereby reducing inflammation. Ayurvedic literature claims that oil pulling is capable of improving oral and systemic health, including a benefit in conditions such as headaches, migraines, diabetes mellitus, asthma, and acne, as well as whitening teeth. Oil pulling has received little study and there is little evidence to support claims made by the technique's advocates. When compared with chlorhexidine in one small study, it was found to be less effective at reducing oral bacterial load, and the other health claims of oil pulling have failed scientific verification or have not been investigated.
There is a report of lipid pneumonia caused by accidental inhalation of the oil during oil pulling. The mouth is rinsed with approximately one tablespoon of oil for 10–20 minutes then spat out. Sesame oil, coconut oil and ghee are traditionally used, but newer oils such as sunflower oil are also used. Essential oils Phenolic compounds and monoterpenes include essential oil constituents that have some antibacterial properties, such as eucalyptol, eugenol, hinokitiol, menthol, phenol, or thymol. Essential oils are oils which have been extracted from plants. Mouthwashes based on essential oils could be more effective than traditional mouthcare as anti-gingival treatments. They have been found effective in reducing halitosis, and are being used in several commercial mouthwashes. Fluoride (anticavity) Anti-cavity mouthwashes use sodium fluoride to protect against tooth decay. Fluoride-containing mouthwashes are used as prevention for dental caries for individuals who are considered at higher risk for tooth decay, whether due to xerostomia related to salivary dysfunction or side effects of medication, to not drinking fluoridated water, or to being physically unable to care for their oral needs (brushing and flossing), and as treatment for those with dentinal hypersensitivity or gingival recession/root exposure. Flavoring agents and Xylitol Flavoring agents include sweeteners such as sorbitol, sucralose, sodium saccharin, and xylitol, which stimulate salivary function due to their sweetness and taste and help restore the mouth to a neutral level of acidity. Xylitol rinses double as a bacterial inhibitor, and have been used as a substitute for alcohol to avoid the dryness of mouth associated with alcohol. Hydrogen peroxide Hydrogen peroxide can be used as an oxidizing mouthwash (e.g. Peroxyl, 1.5%). It kills anaerobic bacteria, and also has a mechanical cleansing action when it froths as it comes into contact with debris in the mouth.
It is often used in the short term to treat acute necrotising ulcerative gingivitis. Side effects can occur with prolonged use, including hypertrophy of the lingual papillae. Lactoperoxidase (saliva substitute) Enzymes and non-enzymatic proteins, such as lactoperoxidase, lysozyme, and lactoferrin, have been used in mouthwashes (e.g., Biotene) to reduce levels of oral bacteria, and, hence, of the acids produced by these bacteria. Lidocaine/xylocaine Oral lidocaine is useful for the treatment of mucositis symptoms (inflammation of mucous membranes) induced by radiation or chemotherapy. There is evidence that lidocaine anesthetic mouthwash can be systemically absorbed; this was observed when it was tested in patients with oral mucositis who underwent a bone marrow transplant. Methyl salicylate Methyl salicylate functions as an antiseptic, antiinflammatory, and analgesic agent, a flavoring, and a fragrance. Methyl salicylate has some anti-plaque action, but less than chlorhexidine. Methyl salicylate does not stain teeth. Nystatin Nystatin suspension is an antifungal ingredient used for the treatment of oral candidiasis. Potassium oxalate A randomized clinical trial found promising results in controlling and reducing dentine hypersensitivity when potassium oxalate mouthwash was used in conjunction with toothbrushing. Povidone/iodine (PVP-I) A 2005 study found that gargling three times a day with simple water or with a povidone-iodine solution was effective in preventing upper respiratory infection and decreasing the severity of symptoms if contracted. Other sources attribute the benefit to a simple placebo effect. PVP-I in general covers "a wider virucidal spectrum, covering both enveloped and nonenveloped viruses, than the other commercially available antiseptics", which also includes the novel SARS-CoV-2 virus. Sanguinarine Sanguinarine-containing mouthwashes are marketed as anti-plaque and anti-malodor treatments.
Sanguinarine is a toxic alkaloid herbal extract, obtained from plants such as Sanguinaria canadensis (bloodroot), Argemone mexicana (Mexican prickly poppy), and others. However, its use is strongly associated with the development of leukoplakia (a white patch in the mouth), usually in the buccal sulcus. This type of leukoplakia has been termed "sanguinaria-associated keratosis", and more than 80% of people with leukoplakia in the vestibule of the mouth have used this substance. Upon stopping contact with the causative substance, the lesions may persist for years. Although this type of leukoplakia may show dysplasia, the potential for malignant transformation is unknown. Ironically, elements within the complementary and alternative medicine industry promote the use of sanguinaria as a therapy for cancer. Sodium bicarbonate (baking soda) Sodium bicarbonate is sometimes combined with salt to make a simple homemade mouthwash, indicated for any of the reasons that a saltwater mouthwash might be used. Pre-mixed mouthwashes of 1% sodium bicarbonate and 1.5% sodium chloride in aqueous solution are marketed, although pharmacists will easily be able to produce such a formulation from the base ingredients when required. Sodium bicarbonate mouthwash is sometimes used to remove viscous saliva and to aid visualization of the oral tissues during examination of the mouth. Sodium chloride (salt) Saltwater mouthwash, also known as salt rinse, is made by dissolving 0.5–1 teaspoon of table salt into a cup of water which is as hot as possible without causing discomfort in the mouth. Saline has a mechanical cleansing action and an antiseptic action, as it is a hypertonic solution in relation to bacteria, which undergo lysis. The heat of the solution produces a therapeutic increase in blood flow (hyperemia) to the surgical site, promoting healing. Hot saltwater mouthwashes also encourage the draining of pus from dental abscesses. 
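The "hypertonic" claim for the homemade salt rinse can be sanity-checked with rough arithmetic. A minimal sketch, assuming a level teaspoon of table salt is about 5.7 g and a cup of water is about 240 ml (approximations not given in the source):

```python
# Rough arithmetic behind the "hypertonic" claim for a homemade salt rinse.
# Assumed approximations (not from the source): 1 level teaspoon of table
# salt ~ 5.7 g, 1 cup of water ~ 240 ml, isotonic ("physiological") saline
# ~ 0.9 g of salt per 100 ml.

TSP_SALT_G = 5.7      # approximate mass of one level teaspoon of table salt
CUP_ML = 240.0        # approximate volume of one cup of water
ISOTONIC_PCT = 0.9    # % w/v of physiological saline

def salinity_pct(teaspoons: float) -> float:
    """Percent weight/volume of salt for the given teaspoons per cup of water."""
    return teaspoons * TSP_SALT_G / CUP_ML * 100.0

for tsp in (0.5, 1.0):
    pct = salinity_pct(tsp)
    print(f"{tsp} tsp/cup ≈ {pct:.1f}% w/v (isotonic saline is {ISOTONIC_PCT}%)")
```

Under these assumptions, both ends of the 0.5–1 teaspoon range come out well above the roughly 0.9% w/v of isotonic saline, which is consistent with the text's statement that the rinse is hypertonic relative to bacterial cells.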
Conversely, if heat is applied on the side of the face (e.g., hot water bottle) rather than inside the mouth, it may cause a dental abscess to drain extra-orally, which is later associated with an area of fibrosis on the face (see cutaneous sinus of dental origin). Gargling with saltwater is said to reduce the symptoms of a sore throat. Hot saltwater mouth baths (or hot saltwater mouthwashes, sometimes abbreviated to "HSWMW") are also routinely used after oral surgery, to keep food debris out of healing wounds and to prevent infection. Some oral surgeons consider saltwater mouthwashes the mainstay of wound cleanliness after surgery. Hot saltwater mouth baths should start about 24 hours after a dental extraction. The term mouth bath implies that the liquid is passively held in the mouth, rather than vigorously swilled around (which could dislodge a blood clot). Once the blood clot has stabilized, the mouthwash can be used more vigorously. These mouthwashes tend to be advised for use about 6 times per day, especially after meals (to remove food from the socket). Sodium lauryl sulfate (foaming agent) Sodium lauryl sulfate (SLS) is used as a foaming agent in many oral hygiene products, including many mouthwashes. It is probably advisable to use mouthwash at least an hour after brushing with toothpaste when the toothpaste contains SLS, since the anionic compounds in the SLS toothpaste can deactivate cationic agents present in the mouthwash. Sucralfate Sucralfate is a mucosal coating agent, composed of an aluminum salt of sulfated sucrose. It is not recommended for use in the prevention of oral mucositis in head and neck cancer patients receiving radiotherapy or chemoradiation, due to a lack of efficacy found in a well-designed, randomized controlled trial. Tetracycline (antibiotic) Tetracycline is an antibiotic which may sometimes be used as a mouthwash in adults (it causes red staining of teeth in children). 
It is sometimes used for herpetiform ulceration (an uncommon type of aphthous stomatitis), but prolonged use may lead to oral candidiasis, as the fungal population of the mouth overgrows in the absence of enough competing bacteria. Similarly, minocycline mouthwashes of 0.5% concentrations can relieve symptoms of recurrent aphthous stomatitis. Erythromycin is similar. Tranexamic acid A 4.8% tranexamic acid solution is sometimes used as an antifibrinolytic mouthwash to prevent bleeding during and after oral surgery in persons with coagulopathies (clotting disorders) or who are taking anticoagulants (blood thinners such as warfarin). Triclosan Triclosan is a non-ionic chlorinated bisphenol antiseptic found in some mouthwashes. When used in mouthwash (e.g. 0.03%), there is moderate substantivity, broad spectrum anti-bacterial action, some anti-fungal action, and significant anti-plaque effect, especially when combined with a copolymer or zinc citrate. Triclosan does not cause staining of the teeth. The safety of triclosan has been questioned. Zinc Astringents like zinc chloride provide a pleasant-tasting sensation and shrink tissues. Zinc, when used in combination with other antiseptic agents, can limit the buildup of tartar. See also Virucide References External links Article on Bad-Breath Prevention Products – from MSNBC Mayo Clinic Q&A on Magic Mouthwash for chemotherapy sores Gargle at the Centre for Cancer Education, University of Newcastle upon Tyne
783
https://en.wikipedia.org/wiki/Alexander%20the%20Great
Alexander the Great
Alexander III of Macedon ( ; 20/21 July 356 BC – 10/11 June 323 BC), commonly known as Alexander the Great, was a king of the ancient Greek kingdom of Macedon. A member of the Argead dynasty, he was born in Pella—a city in Ancient Greece—in 356 BC. He succeeded his father King Philip II to the throne at the age of 20, and spent most of his ruling years conducting a lengthy military campaign throughout Western Asia and Northeastern Africa. By the age of thirty, he had created one of the largest empires in history, stretching from Greece to northwestern India. He was undefeated in battle and is widely considered to be one of history's greatest and most successful military commanders. During his youth, Alexander was tutored by Aristotle until the age of 16. His father Philip was assassinated in 336 BC at the wedding of Cleopatra of Macedon, Alexander's sister, and Alexander assumed the throne of the Kingdom of Macedon. In 335 BC he campaigned in the Balkans, reasserting control over Thrace and Illyria before sacking the Greek city of Thebes. Alexander was then awarded the generalship of Greece. He used his authority to launch his father's pan-Hellenic project, assuming leadership over all the Greeks in their conquest of Persia. In 334 BC he invaded the Achaemenid Empire (Persian Empire) and began a series of campaigns that lasted 10 years. Following his conquest of Asia Minor (modern-day Turkey), Alexander broke the power of Persia in a series of decisive battles, including those at Issus and Gaugamela. He subsequently overthrew King Darius III and conquered the Achaemenid Empire in its entirety. At that point, his empire stretched from the Adriatic Sea to the Indus River. Alexander endeavored to reach the "ends of the world and the Great Outer Sea" and invaded India in 326 BC, achieving an important victory over King Porus at the Battle of the Hydaspes. 
He eventually turned back at the Beas River due to the demand of his homesick troops, dying in 323 BC in Babylon, the city he planned to establish as his capital. He did not manage to execute a series of planned campaigns that would have begun with an invasion of Arabia. In the years following his death, a series of civil wars tore his empire apart. Alexander's legacy includes the cultural diffusion and syncretism which his conquests engendered, such as Greco-Buddhism and Hellenistic Judaism. He founded more than twenty cities that bore his name, most notably Alexandria in Egypt. Alexander's settlement of Greek colonists and the resulting spread of Greek culture resulted in Hellenistic civilization, which developed through the Roman Empire into modern Western culture. The Greek language became the lingua franca of the region and was the predominant language of the Byzantine Empire up until its end in the mid-15th century AD. Greek-speaking communities in central and far eastern Anatolia survived until the Greek genocide and the population exchange in the 1920s. Alexander became legendary as a classical hero in the mould of Achilles, featuring prominently in the history and mythic traditions of both Greek and non-Greek cultures. His military achievements and enduring, unprecedented success in battle made him the measure against which many later military leaders would compare themselves. Military academies throughout the world still teach his tactics. Early life Lineage and childhood Alexander was born in Pella, the capital of the Kingdom of Macedon, on the sixth day of the ancient Greek month of Hekatombaion, which probably corresponds to 20 July 356 BC (although the exact date is uncertain). He was the son of the king of Macedon, Philip II, and his fourth wife, Olympias, daughter of Neoptolemus I, king of Epirus. Although Philip had seven or eight wives, Olympias was his principal wife for some time, likely because she gave birth to Alexander. 
Several legends surround Alexander's birth and childhood. According to the ancient Greek biographer Plutarch, on the eve of the consummation of her marriage to Philip, Olympias dreamed that her womb was struck by a thunderbolt that caused a flame to spread "far and wide" before dying away. Sometime after the wedding, Philip is said to have seen himself, in a dream, securing his wife's womb with a seal engraved with a lion's image. Plutarch offered a variety of interpretations for these dreams: that Olympias was pregnant before her marriage, indicated by the sealing of her womb; or that Alexander's father was Zeus. Ancient commentators were divided about whether the ambitious Olympias promulgated the story of Alexander's divine parentage, variously claiming that she had told Alexander, or that she dismissed the suggestion as impious. On the day Alexander was born, Philip was preparing a siege on the city of Potidaea on the peninsula of Chalcidice. That same day, Philip received news that his general Parmenion had defeated the combined Illyrian and Paeonian armies and that his horses had won at the Olympic Games. It was also said that on this day, the Temple of Artemis in Ephesus, one of the Seven Wonders of the World, burnt down. This led Hegesias of Magnesia to say that it had burnt down because Artemis was away, attending the birth of Alexander. Such legends may have emerged when Alexander was king, and possibly at his instigation, to show that he was superhuman and destined for greatness from conception. In his early years, Alexander was raised by a nurse, Lanike, sister of Alexander's future general Cleitus the Black. Later in his childhood, Alexander was tutored by the strict Leonidas, a relative of his mother, and by Lysimachus of Acarnania. Alexander was raised in the manner of noble Macedonian youths, learning to read, play the lyre, ride, fight, and hunt. 
When Alexander was ten years old, a trader from Thessaly brought Philip a horse, which he offered to sell for thirteen talents. The horse refused to be mounted, and Philip ordered it away. Alexander, however, detecting the horse's fear of its own shadow, asked to tame the horse, which he eventually managed. Plutarch stated that Philip, overjoyed at this display of courage and ambition, kissed his son tearfully, declaring: "My boy, you must find a kingdom big enough for your ambitions. Macedon is too small for you", and bought the horse for him. Alexander named it Bucephalas, meaning "ox-head". Bucephalas carried Alexander as far as India. When the animal died (because of old age, according to Plutarch, at age thirty), Alexander named a city after him, Bucephala. Education When Alexander was 13, Philip began to search for a tutor, and considered such academics as Isocrates and Speusippus, the latter offering to resign from his stewardship of the Academy to take up the post. In the end, Philip chose Aristotle and provided the Temple of the Nymphs at Mieza as a classroom. In return for teaching Alexander, Philip agreed to rebuild Aristotle's hometown of Stageira, which Philip had razed, and to repopulate it by buying and freeing the ex-citizens who were slaves, or pardoning those who were in exile. Mieza was like a boarding school for Alexander and the children of Macedonian nobles, such as Ptolemy, Hephaistion, and Cassander. Many of these students would become his friends and future generals, and are often known as the "Companions". Aristotle taught Alexander and his companions about medicine, philosophy, morals, religion, logic, and art. Under Aristotle's tutelage, Alexander developed a passion for the works of Homer, and in particular the Iliad; Aristotle gave him an annotated copy, which Alexander later carried on his campaigns. Alexander was able to quote Euripides from memory. 
During his youth, Alexander was also acquainted with Persian exiles at the Macedonian court, who received the protection of Philip II for several years as they opposed Artaxerxes III. Among them were Artabazos II and his daughter Barsine, possible future mistress of Alexander, who resided at the Macedonian court from 352 to 342 BC, as well as Amminapes, future satrap of Alexander, or a Persian nobleman named Sisines. This gave the Macedonian court a good knowledge of Persian issues, and may even have influenced some of the innovations in the management of the Macedonian state. The Suda writes that Anaximenes of Lampsacus was one of Alexander's teachers, and that Anaximenes also accompanied Alexander on his campaigns. Heir of Philip II Regency and ascent of Macedon When Alexander turned 16, his education under Aristotle ended. Philip II had waged war against the Thracians to the north, which left Alexander in charge as regent and heir apparent. During Philip's absence, the Thracian tribe of Maedi revolted against Macedonia. Alexander responded quickly and drove them from their territory. The territory was colonized, and a city, named Alexandropolis, was founded. Upon Philip's return, Alexander was dispatched with a small force to subdue the revolts in southern Thrace. Campaigning against the Greek city of Perinthus, Alexander reportedly saved his father's life. Meanwhile, the city of Amphissa began to work lands that were sacred to Apollo near Delphi, a sacrilege that gave Philip the opportunity to further intervene in Greek affairs. While Philip was occupied in Thrace, Alexander was ordered to muster an army for a campaign in southern Greece. Concerned that other Greek states might intervene, Alexander made it look as though he was preparing to attack Illyria instead. During this turmoil, the Illyrians invaded Macedonia, only to be repelled by Alexander. 
Philip and his army joined his son in 338 BC, and they marched south through Thermopylae, taking it after stubborn resistance from its Theban garrison. They went on to occupy the city of Elatea, only a few days' march from both Athens and Thebes. The Athenians, led by Demosthenes, voted to seek alliance with Thebes against Macedonia. Both Athens and Philip sent embassies to win Thebes's favour, but Athens won the contest. Philip marched on Amphissa (ostensibly acting on the request of the Amphictyonic League), capturing the mercenaries sent there by Demosthenes and accepting the city's surrender. Philip then returned to Elatea, sending a final offer of peace to Athens and Thebes, who both rejected it. As Philip marched south, his opponents blocked him near Chaeronea, Boeotia. During the ensuing Battle of Chaeronea, Philip commanded the right wing and Alexander the left, accompanied by a group of Philip's trusted generals. According to the ancient sources, the two sides fought bitterly for some time. Philip deliberately commanded his troops to retreat, counting on the untested Athenian hoplites to follow, thus breaking their line. Alexander was the first to break the Theban lines, followed by Philip's generals. Having damaged the enemy's cohesion, Philip ordered his troops to press forward and quickly routed them. With the Athenians lost, the Thebans were surrounded. Left to fight alone, they were defeated. After the victory at Chaeronea, Philip and Alexander marched unopposed into the Peloponnese, welcomed by all cities; however, when they reached Sparta, they were refused, but did not resort to war. At Corinth, Philip established a "Hellenic Alliance" (modelled on the old anti-Persian alliance of the Greco-Persian Wars), which included most Greek city-states except Sparta. Philip was then named Hegemon (often translated as "Supreme Commander") of this league (known by modern scholars as the League of Corinth), and announced his plans to attack the Persian Empire. 
Exile and return When Philip returned to Pella, he fell in love with and married Cleopatra Eurydice, the niece of his general Attalus, in 338 BC. The marriage made Alexander's position as heir less secure, since any son of Cleopatra Eurydice would be a fully Macedonian heir, while Alexander was only half-Macedonian. During the wedding banquet, a drunken Attalus publicly prayed to the gods that the union would produce a legitimate heir. In 337 BC, Alexander fled Macedon with his mother, dropping her off with her brother, King Alexander I of Epirus, in Dodona, capital of the Molossians. He continued to Illyria, where he sought refuge with one or more Illyrian kings, perhaps with Glaukias, and was treated as a guest, despite having defeated them in battle a few years before. However, it appears Philip never intended to disown his politically and militarily trained son. Accordingly, Alexander returned to Macedon after six months due to the efforts of a family friend, Demaratus, who mediated between the two parties. In the following year, the Persian satrap (governor) of Caria, Pixodarus, offered his eldest daughter to Alexander's half-brother, Philip Arrhidaeus. Olympias and several of Alexander's friends suggested this showed Philip intended to make Arrhidaeus his heir. Alexander reacted by sending an actor, Thessalus of Corinth, to tell Pixodarus that he should not offer his daughter's hand to an illegitimate son, but instead to Alexander. When Philip heard of this, he stopped the negotiations and scolded Alexander for wishing to marry the daughter of a Carian, explaining that he wanted a better bride for him. Philip exiled four of Alexander's friends, Harpalus, Nearchus, Ptolemy and Erigyius, and had the Corinthians bring Thessalus to him in chains. King of Macedon Accession In summer 336 BC, while at Aegae attending the wedding of his daughter Cleopatra to Olympias's brother, Alexander I of Epirus, Philip was assassinated by the captain of his bodyguards, Pausanias. 
As Pausanias tried to escape, he tripped over a vine and was killed by his pursuers, including two of Alexander's companions, Perdiccas and Leonnatus. Alexander was proclaimed king on the spot by the nobles and army at the age of 20. Consolidation of power Alexander began his reign by eliminating potential rivals to the throne. He had his cousin, the former Amyntas IV, executed. He also had two Macedonian princes from the region of Lyncestis killed, but spared a third, Alexander Lyncestes. Olympias had Cleopatra Eurydice and Europa, her daughter by Philip, burned alive. When Alexander learned about this, he was furious. Alexander also ordered the murder of Attalus, who was in command of the advance guard of the army in Asia Minor and Cleopatra's uncle. Attalus was at that time corresponding with Demosthenes, regarding the possibility of defecting to Athens. Attalus also had severely insulted Alexander, and following Cleopatra's murder, Alexander may have considered him too dangerous to leave alive. Alexander spared Arrhidaeus, who was by all accounts mentally disabled, possibly as a result of poisoning by Olympias. News of Philip's death roused many states into revolt, including Thebes, Athens, Thessaly, and the Thracian tribes north of Macedon. When news of the revolts reached Alexander, he responded quickly. Though advised to use diplomacy, Alexander mustered 3,000 Macedonian cavalry and rode south towards Thessaly. He found the Thessalian army occupying the pass between Mount Olympus and Mount Ossa, and ordered his men to ride over Mount Ossa. When the Thessalians awoke the next day, they found Alexander in their rear and promptly surrendered, adding their cavalry to Alexander's force. He then continued south towards the Peloponnese. Alexander stopped at Thermopylae, where he was recognized as the leader of the Amphictyonic League before heading south to Corinth. Athens sued for peace and Alexander pardoned the rebels. 
The famous encounter between Alexander and Diogenes the Cynic occurred during Alexander's stay in Corinth. When Alexander asked Diogenes what he could do for him, the philosopher disdainfully asked Alexander to stand a little to the side, as he was blocking the sunlight. This reply apparently delighted Alexander, who is reported to have said "But verily, if I were not Alexander, I would like to be Diogenes." At Corinth, Alexander took the title of Hegemon ("leader") and, like Philip, was appointed commander for the coming war against Persia. He also received news of a Thracian uprising. Balkan campaign Before crossing to Asia, Alexander wanted to safeguard his northern borders. In the spring of 335 BC, he advanced to suppress several revolts. Starting from Amphipolis, he travelled east into the country of the "Independent Thracians"; and at Mount Haemus, the Macedonian army attacked and defeated the Thracian forces manning the heights. The Macedonians marched into the country of the Triballi, and defeated their army near the Lyginus river (a tributary of the Danube). Alexander then marched for three days to the Danube, encountering the Getae tribe on the opposite shore. Crossing the river at night, he surprised them and forced their army to retreat after the first cavalry skirmish. News then reached Alexander that Cleitus, King of Illyria, and King Glaukias of the Taulantii were in open revolt against his authority. Marching west into Illyria, Alexander defeated each in turn, forcing the two rulers to flee with their troops. With these victories, he secured his northern frontier. While Alexander campaigned north, the Thebans and Athenians rebelled once again. Alexander immediately headed south. While the other cities again hesitated, Thebes decided to fight. The Theban resistance was ineffective, and Alexander razed the city and divided its territory between the other Boeotian cities. The end of Thebes cowed Athens, leaving all of Greece temporarily at peace. 
Alexander then set out on his Asian campaign, leaving Antipater as regent. Conquest of the Persian Empire Asia Minor After his victory at the Battle of Chaeronea (338 BC), Philip II began the work of establishing himself as hēgemṓn of a league which, according to Diodorus, was to wage a campaign against the Persians for the sundry grievances Greece suffered in 480 BC and free the Greek cities of the western coast and islands from Achaemenid rule. In 336 BC he sent Parmenion, with Amyntas, Andromenes and Attalus, and an army of 10,000 men into Anatolia to make preparations for an invasion. At first, all went well. The Greek cities on the western coast of Anatolia revolted until the news arrived that Philip had been murdered and had been succeeded by his young son Alexander. The Macedonians were demoralized by Philip's death and were subsequently defeated near Magnesia by the Achaemenids under the command of the mercenary Memnon of Rhodes. Taking over the invasion project of Philip II, Alexander's army crossed the Hellespont in 334 BC with approximately 48,100 soldiers, 6,100 cavalry and a fleet of 120 ships with crews numbering 38,000, drawn from Macedon and various Greek city-states, mercenaries, and feudally raised soldiers from Thrace, Paionia, and Illyria. He showed his intent to conquer the entirety of the Persian Empire by throwing a spear into Asian soil and saying he accepted Asia as a gift from the gods. This also showed Alexander's eagerness to fight, in contrast to his father's preference for diplomacy. After an initial victory against Persian forces at the Battle of the Granicus, Alexander accepted the surrender of the Persian provincial capital and treasury of Sardis; he then proceeded along the Ionian coast, granting autonomy and democracy to the cities. Miletus, held by Achaemenid forces, required a delicate siege operation, with Persian naval forces nearby. 
Further south, at Halicarnassus, in Caria, Alexander successfully waged his first large-scale siege, eventually forcing his opponents, the mercenary captain Memnon of Rhodes and the Persian satrap of Caria, Orontobates, to withdraw by sea. Alexander left the government of Caria to a member of the Hecatomnid dynasty, Ada, who adopted Alexander. From Halicarnassus, Alexander proceeded into mountainous Lycia and the Pamphylian plain, asserting control over all coastal cities to deny the Persians naval bases. From Pamphylia onwards the coast held no major ports and Alexander moved inland. At Termessos, Alexander humbled but did not storm the Pisidian city. At the ancient Phrygian capital of Gordium, Alexander "undid" the hitherto unsolvable Gordian Knot, a feat said to await the future "king of Asia". According to the story, Alexander proclaimed that it did not matter how the knot was undone and hacked it apart with his sword. The Levant and Syria In spring 333 BC, Alexander crossed the Taurus into Cilicia. After a long pause due to an illness, he marched on towards Syria. Though outmanoeuvred by Darius's significantly larger army, he marched back to Cilicia, where he defeated Darius at Issus. Darius fled the battle, causing his army to collapse, and left behind his wife, his two daughters, his mother Sisygambis, and a fabulous treasure. He offered a peace treaty that included the lands he had already lost, and a ransom of 10,000 talents for his family. Alexander replied that since he was now king of Asia, it was he alone who decided territorial divisions. Alexander proceeded to take possession of Syria, and most of the coast of the Levant. In the following year, 332 BC, he was forced to attack Tyre, which he captured after a long and difficult siege. The men of military age were massacred and the women and children sold into slavery. Egypt When Alexander destroyed Tyre, most of the towns on the route to Egypt quickly capitulated. 
However, Alexander was met with resistance at Gaza. The stronghold was heavily fortified and built on a hill, requiring a siege. When "his engineers pointed out to him that because of the height of the mound it would be impossible... this encouraged Alexander all the more to make the attempt". After three unsuccessful assaults, the stronghold fell, but not before Alexander had received a serious shoulder wound. As in Tyre, men of military age were put to the sword and the women and children were sold into slavery. Egypt was only one of a large number of territories taken by Alexander from the Persians. After his trip to Siwa, Alexander was crowned in the temple of Ptah at Memphis. It appears that the Egyptian people did not find it disturbing that he was a foreigner, nor that he was absent for virtually his entire reign. Alexander restored the temples neglected by the Persians and dedicated new monuments to the Egyptian gods. In the temple of Luxor, near Karnak, he built a chapel for the sacred barge. During his brief months in Egypt, he reformed the taxation system on Greek models and organized the military occupation of the country, but, early in 331 BC, he left for Asia in pursuit of the Persians. Alexander advanced on Egypt in late 332 BC, where he was regarded as a liberator. To legitimize taking power and be recognized as the descendant of the long line of pharaohs, Alexander made sacrifices to the gods at Memphis and went to consult the famous oracle of Amun-Ra at the Siwa Oasis. He was pronounced son of the deity Amun at the Oracle of Siwa Oasis in the Libyan desert. Henceforth, Alexander often referred to Zeus-Ammon as his true father, and after his death, currency depicted him adorned with the Horns of Ammon as a symbol of his divinity. The Greeks interpreted this message, one that the gods addressed to all pharaohs, as a prophecy. 
During his stay in Egypt, he founded Alexandria, which would become the prosperous capital of the Ptolemaic Kingdom after his death. Control of Egypt passed to Ptolemy I (son of Lagos), the founder of the Ptolemaic Dynasty (305–30 BC), after the death of Alexander. Assyria and Babylonia Leaving Egypt in 331 BC, Alexander marched eastward into Achaemenid Assyria in Upper Mesopotamia (now northern Iraq) and defeated Darius again at the Battle of Gaugamela. Darius once more fled the field, and Alexander chased him as far as Arbela. Gaugamela would be the final and decisive encounter between the two. Darius fled over the mountains to Ecbatana (modern Hamadan) while Alexander captured Babylon. The Babylonian astronomical diaries say that "the king of the world, Alexander" sent his scouts with a message to the people of Babylon before entering the city: "I shall not enter your houses". Persia From Babylon, Alexander went to Susa, one of the Achaemenid capitals, and captured its treasury. He sent the bulk of his army to the Persian ceremonial capital of Persepolis via the Persian Royal Road. Alexander himself took selected troops on the direct route to the city. He then stormed the pass of the Persian Gates (in the modern Zagros Mountains) which had been blocked by a Persian army under Ariobarzanes and then hurried to Persepolis before its garrison could loot the treasury. On entering Persepolis, Alexander allowed his troops to loot the city for several days. Alexander stayed in Persepolis for five months. During his stay a fire broke out in the eastern palace of Xerxes I and spread to the rest of the city. Possible causes include a drunken accident or deliberate revenge for the burning of the Acropolis of Athens during the Second Persian War by Xerxes; Plutarch and Diodorus allege that Alexander's companion, the hetaera Thaïs, instigated and started the fire. Even as he watched the city burn, Alexander immediately began to regret his decision. 
Plutarch claims that he ordered his men to put out the fires, but that the flames had already spread to most of the city. Curtius claims that Alexander did not regret his decision until the next morning. Plutarch recounts an anecdote in which Alexander pauses and talks to a fallen statue of Xerxes as if it were a live person. Fall of the Empire and the East Alexander then chased Darius, first into Media, and then Parthia. The Persian king no longer controlled his own destiny, and was taken prisoner by Bessus, his Bactrian satrap and kinsman. As Alexander approached, Bessus had his men fatally stab the Great King and then declared himself Darius's successor as Artaxerxes V, before retreating into Central Asia to launch a guerrilla campaign against Alexander. Alexander buried Darius's remains next to his Achaemenid predecessors in a regal funeral. He claimed that, while dying, Darius had named him as his successor to the Achaemenid throne. The Achaemenid Empire is normally considered to have fallen with Darius. However, as basic forms of community life and the general structure of government were maintained and resuscitated by Alexander under his own rule, he, in the words of the Iranologist Pierre Briant, "may therefore be considered to have acted in many ways as the last of the Achaemenids." Alexander viewed Bessus as a usurper and set out to defeat him. This campaign, initially against Bessus, turned into a grand tour of Central Asia. Alexander founded a series of new cities, all called Alexandria, including modern Kandahar in Afghanistan, and Alexandria Eschate ("The Furthest") in modern Tajikistan. The campaign took Alexander through Media, Parthia, Aria (West Afghanistan), Drangiana, Arachosia (South and Central Afghanistan), Bactria (North and Central Afghanistan), and Scythia. In 329 BC, Spitamenes, who held an undefined position in the satrapy of Sogdiana, betrayed Bessus to Ptolemy, one of Alexander's trusted companions, and Bessus was executed. 
However, when Alexander was later on the Jaxartes dealing with an incursion by a horse nomad army, Spitamenes raised Sogdiana in revolt. Alexander personally defeated the Scythians at the Battle of Jaxartes and immediately launched a campaign against Spitamenes, defeating him in the Battle of Gabai. After the defeat, Spitamenes was killed by his own men, who then sued for peace. Problems and plots During this time, Alexander adopted some elements of Persian dress and customs at his court, notably the custom of proskynesis, either a symbolic kissing of the hand, or prostration on the ground, that Persians showed to their social superiors. This was one aspect of Alexander's broad strategy aimed at securing the aid and support of the Iranian upper classes. The Greeks, however, regarded the gesture of proskynesis as the province of deities and believed that Alexander meant to deify himself by requiring it. This cost him the sympathies of many of his countrymen, and he eventually abandoned it. During the long rule of the Achaemenids, the elite positions in many segments of the empire, including the central government, the army, and the many satrapies, were specifically reserved for Iranians and to a major degree Persian noblemen. The latter were in many cases additionally connected through marriage alliances with the royal Achaemenid family. This created a problem for Alexander as to whether he had to make use of the various segments and people that had given the empire its solidity and unity for a lengthy period of time. Pierre Briant explains that Alexander realized that it was insufficient to merely exploit the internal contradictions within the imperial system as in Asia Minor, Babylonia or Egypt; he also had to (re)create a central government with or without the support of the Iranians. 
As early as 334 BC he demonstrated awareness of this, when he challenged incumbent King Darius III "by appropriating the main elements of the Achaemenid monarchy's ideology, particularly the theme of the king who protects the lands and the peasants". Alexander wrote a letter in 332 BC to Darius III, wherein he argued that he was worthier than Darius "to succeed to the Achaemenid throne". However, Alexander's eventual decision to burn the Achaemenid palace at Persepolis, in conjunction with the major rejection and opposition of the "entire Persian people", made it impracticable for him to present himself as Darius' legitimate successor. Against Bessus (Artaxerxes V), however, Briant adds, Alexander reasserted "his claim to legitimacy as the avenger of Darius III". A plot against his life was revealed, and one of his officers, Philotas, was executed for failing to alert Alexander. The death of the son necessitated the death of the father, and thus Parmenion, who had been charged with guarding the treasury at Ecbatana, was assassinated at Alexander's command, to prevent attempts at vengeance. Most infamously, Alexander personally killed the man who had saved his life at Granicus, Cleitus the Black, during a violent drunken altercation at Maracanda (modern-day Samarkand in Uzbekistan), in which Cleitus accused Alexander of several errors of judgment and, most especially, of having forgotten the Macedonian ways in favour of a corrupt oriental lifestyle. Later, in the Central Asian campaign, a second plot against his life was revealed, this one instigated by his own royal pages. His official historian, Callisthenes of Olynthus, was implicated in the plot, and in the Anabasis of Alexander, Arrian states that Callisthenes and the pages were then tortured on the rack as punishment, and likely died soon after.
It remains unclear if Callisthenes was actually involved in the plot, for prior to his accusation he had fallen out of favour by leading the opposition to the attempt to introduce proskynesis.

Macedon in Alexander's absence

When Alexander set out for Asia, he left his general Antipater, an experienced military and political leader and part of Philip II's "Old Guard", in charge of Macedon. Alexander's sacking of Thebes ensured that Greece remained quiet during his absence. The one exception was a call to arms by the Spartan king Agis III in 331 BC, whom Antipater defeated and killed in the battle of Megalopolis. Antipater referred the Spartans' punishment to the League of Corinth, which then deferred to Alexander, who chose to pardon them. There was also considerable friction between Antipater and Olympias, and each complained to Alexander about the other. In general, Greece enjoyed a period of peace and prosperity during Alexander's campaign in Asia. Alexander sent back vast sums from his conquest, which stimulated the economy and increased trade across his empire. However, Alexander's constant demands for troops and the migration of Macedonians throughout his empire depleted Macedon's strength, greatly weakening it in the years after Alexander, and ultimately led to its subjugation by Rome after the Third Macedonian War (171–168 BC).

Indian campaign

Forays into the Indian subcontinent

After the death of Spitamenes, Alexander married Roxana (Raoxshna in Old Iranian) to cement relations with his new satrapies, and then turned to the Indian subcontinent. He invited the chieftains of the former satrapy of Gandhara (a region presently straddling eastern Afghanistan and northern Pakistan) to come to him and submit to his authority.
Omphis (Indian name Ambhi), the ruler of Taxila, whose kingdom extended from the Indus to the Hydaspes (Jhelum), complied, but the chieftains of some hill clans, including the Aspasioi and Assakenoi sections of the Kambojas (known in Indian texts also as Ashvayanas and Ashvakayanas), refused to submit. Ambhi hastened to relieve Alexander of his apprehension and met him with valuable presents, placing himself and all his forces at his disposal. Alexander not only returned Ambhi his title and the gifts but also presented him with a wardrobe of "Persian robes, gold and silver ornaments, 30 horses and 1,000 talents in gold". Alexander was emboldened to divide his forces, and Ambhi assisted Hephaestion and Perdiccas in constructing a bridge over the Indus where it bends at Hund, supplied their troops with provisions, and received Alexander himself, and his whole army, in his capital city of Taxila, with every demonstration of friendship and the most liberal hospitality. On the subsequent advance of the Macedonian king, Taxiles accompanied him with a force of 5,000 men and took part in the battle of the Hydaspes River. After that victory he was sent by Alexander in pursuit of Porus, to whom he was charged to offer favourable terms, but narrowly escaped losing his life at the hands of his old enemy. Subsequently, however, the two rivals were reconciled by the personal mediation of Alexander; and Taxiles, after having contributed zealously to the equipment of the fleet on the Hydaspes, was entrusted by the king with the government of the whole territory between that river and the Indus. A considerable accession of power was granted him after the death of Philip, son of Machatas; and he was allowed to retain his authority at the death of Alexander himself (323 BC), as well as in the subsequent partition of the provinces at Triparadisus, 321 BC.
In the winter of 327/326 BC, Alexander personally led a campaign against the Aspasioi of the Kunar valleys, the Guraeans of the Guraeus valley, and the Assakenoi of the Swat and Buner valleys. A fierce contest ensued with the Aspasioi in which Alexander was wounded in the shoulder by a dart, but eventually the Aspasioi lost. Alexander then faced the Assakenoi, who fought against him from the strongholds of Massaga, Ora and Aornos. The fort of Massaga was reduced only after days of bloody fighting, in which Alexander was seriously wounded in the ankle. According to Curtius, "Not only did Alexander slaughter the entire population of Massaga, but also did he reduce its buildings to rubble." A similar slaughter followed at Ora. In the aftermath of Massaga and Ora, numerous Assakenians fled to the fortress of Aornos. Alexander followed close behind and captured the strategic hill-fort after four bloody days. After Aornos, Alexander crossed the Indus and fought and won an epic battle against King Porus, who ruled a region lying between the Hydaspes and the Acesines (Chenab), in what is now the Punjab, in the Battle of the Hydaspes in 326 BC. Alexander was impressed by Porus's bravery, and made him an ally. He appointed Porus as satrap, and added to Porus's territory land that he did not previously own, towards the south-east, up to the Hyphasis (Beas). Choosing a local helped him control these lands so distant from Greece. Alexander founded two cities on opposite sides of the Hydaspes river, naming one Bucephala, in honour of his horse, who died around this time. The other was Nicaea (Victory), thought to be located at the site of modern-day Mong, Punjab. Philostratus the Elder, in the Life of Apollonius of Tyana, writes that in the army of Porus there was an elephant that fought bravely against Alexander's army; Alexander dedicated it to Helios (the Sun) and named it Ajax, because he thought that so great an animal deserved a great name.
The elephant had gold rings around its tusks and an inscription on them written in Greek: "Alexander the son of Zeus dedicates Ajax to Helios" (ΑΛΕΞΑΝΔΡΟΣ Ο ΔΙΟΣ ΤΟΝ ΑΙΑΝΤΑ ΤΩΙ ΗΛΙΩΙ).

Revolt of the army

East of Porus's kingdom, near the Ganges River, was the Nanda Empire of Magadha, and further east, the Gangaridai Empire of the Bengal region of the Indian subcontinent. Fearing the prospect of facing other large armies and exhausted by years of campaigning, Alexander's army mutinied at the Hyphasis River (Beas), refusing to march farther east. This river thus marks the easternmost extent of Alexander's conquests. Alexander tried to persuade his soldiers to march farther, but his general Coenus pleaded with him to change his mind and return; the men, he said, "longed to again see their parents, their wives and children, their homeland". Alexander eventually agreed and turned south, marching along the Indus. Along the way his army conquered the Malhi (in modern-day Multan) and other Indian tribes, and Alexander sustained an injury during the siege. Alexander sent much of his army to Carmania (modern southern Iran) with general Craterus, and commissioned a fleet to explore the Persian Gulf shore under his admiral Nearchus, while he led the rest back to Persia through the more difficult southern route along the Gedrosian Desert and Makran. Alexander reached Susa in 324 BC, but not before losing many men to the harsh desert.

Last years in Persia

Discovering that many of his satraps and military governors had misbehaved in his absence, Alexander executed several of them as examples on his way to Susa. As a gesture of thanks, he paid off the debts of his soldiers, and announced that he would send over-aged and disabled veterans back to Macedon, led by Craterus. His troops misunderstood his intention and mutinied at the town of Opis.
They refused to be sent away and criticized his adoption of Persian customs and dress and the introduction of Persian officers and soldiers into Macedonian units. After three days, unable to persuade his men to back down, Alexander gave Persians command posts in the army and conferred Macedonian military titles upon Persian units. The Macedonians quickly begged forgiveness, which Alexander accepted, and held a great banquet with several thousand of his men. In an attempt to craft a lasting harmony between his Macedonian and Persian subjects, Alexander held a mass marriage of his senior officers to Persian and other noblewomen at Susa, but few of those marriages seem to have lasted much beyond a year. Meanwhile, upon his return to Persia, Alexander learned that guards of the tomb of Cyrus the Great in Pasargadae had desecrated it, and swiftly executed them. Alexander admired Cyrus the Great, having read from an early age Xenophon's Cyropaedia, which described Cyrus's heroism in battle and governance as a king and legislator. During his visit to Pasargadae, Alexander ordered his architect Aristobulus to decorate the interior of the sepulchral chamber of Cyrus's tomb. Afterwards, Alexander travelled to Ecbatana to retrieve the bulk of the Persian treasure. There, his closest friend and possible lover, Hephaestion, died of illness or poisoning. Hephaestion's death devastated Alexander, and he ordered the preparation of an expensive funeral pyre in Babylon, as well as a decree for public mourning. Back in Babylon, Alexander planned a series of new campaigns, beginning with an invasion of Arabia, but he would not have a chance to realize them, as he died shortly after Hephaestion.

Death and succession

On either 10 or 11 June 323 BC, Alexander died in the palace of Nebuchadnezzar II in Babylon, at age 32. There are two different versions of Alexander's death, and details of the death differ slightly in each.
Plutarch's account is that roughly 14 days before his death, Alexander entertained admiral Nearchus and spent the night and next day drinking with Medius of Larissa. Alexander developed a fever, which worsened until he was unable to speak. The common soldiers, anxious about his health, were granted the right to file past him as he silently waved at them. In the second account, Diodorus recounts that Alexander was struck with pain after downing a large bowl of unmixed wine in honour of Heracles, followed by 11 days of weakness; he did not develop a fever, instead dying after some agony. Arrian also mentioned this as an alternative, but Plutarch specifically denied this claim. Given the propensity of the Macedonian aristocracy to assassination, foul play featured in multiple accounts of his death. Diodorus, Plutarch, Arrian and Justin all mentioned the theory that Alexander was poisoned. Justin stated that Alexander was the victim of a poisoning conspiracy, Plutarch dismissed it as a fabrication, while both Diodorus and Arrian noted that they mentioned it only for the sake of completeness. The accounts were nevertheless fairly consistent in designating Antipater, recently removed as Macedonian viceroy, and at odds with Olympias, as the head of the alleged plot. Perhaps taking his summons to Babylon as a death sentence, and having seen the fate of Parmenion and Philotas, Antipater purportedly arranged for Alexander to be poisoned by his son Iollas, who was Alexander's wine-pourer. There was even a suggestion that Aristotle may have participated. The strongest argument against the poison theory is the fact that twelve days passed between the start of his illness and his death; such long-acting poisons were probably not available. 
However, in a 2003 BBC documentary investigating the death of Alexander, Leo Schep from the New Zealand National Poisons Centre proposed that the plant white hellebore (Veratrum album), which was known in antiquity, may have been used to poison Alexander. In a 2014 article in the journal Clinical Toxicology, Schep suggested Alexander's wine was spiked with Veratrum album, and that this would produce poisoning symptoms matching the course of events described in the Alexander Romance. Veratrum album poisoning can have a prolonged course, and it was suggested that, if Alexander was poisoned, Veratrum album offers the most plausible cause. Another poisoning explanation put forward in 2010 proposed that the circumstances of his death were compatible with poisoning by water of the river Styx (modern-day Mavroneri in Arcadia, Greece) that contained calicheamicin, a dangerous compound produced by bacteria. Several natural causes (diseases) have been suggested, including malaria and typhoid fever. A 1998 article in the New England Journal of Medicine attributed his death to typhoid fever complicated by bowel perforation and ascending paralysis. Another recent analysis suggested pyogenic (infectious) spondylitis or meningitis. Other illnesses fit the symptoms, including acute pancreatitis, West Nile virus, and Guillain-Barré syndrome. Natural-cause theories also tend to emphasize that Alexander's health may have been in general decline after years of heavy drinking and severe wounds. The anguish that Alexander felt after Hephaestion's death may also have contributed to his declining health.

After death

Alexander's body was laid in a gold anthropoid sarcophagus that was filled with honey, which was in turn placed in a gold casket. According to Aelian, a seer called Aristander foretold that the land where Alexander was laid to rest "would be happy and unvanquishable forever".
Perhaps more likely, the successors may have seen possession of the body as a symbol of legitimacy, since burying the prior king was a royal prerogative. While Alexander's funeral cortege was on its way to Macedon, Ptolemy seized it and took it temporarily to Memphis. His successor, Ptolemy II Philadelphus, transferred the sarcophagus to Alexandria, where it remained until at least late antiquity. Ptolemy IX Lathyros, one of Ptolemy's final successors, replaced Alexander's sarcophagus with a glass one so he could convert the original to coinage. The recent discovery of an enormous tomb in northern Greece, at Amphipolis, dating from the time of Alexander the Great has given rise to speculation that its original intent was to be the burial place of Alexander. This would fit with the intended destination of Alexander's funeral cortege. However, the memorial was found to be dedicated to the dearest friend of Alexander the Great, Hephaestion. Pompey, Julius Caesar and Augustus all visited the tomb in Alexandria, where Augustus allegedly knocked the nose off by accident. Caligula was said to have taken Alexander's breastplate from the tomb for his own use. Around AD 200, Emperor Septimius Severus closed Alexander's tomb to the public. His son and successor, Caracalla, a great admirer, visited the tomb during his own reign. After this, details on the fate of the tomb are hazy. The so-called "Alexander Sarcophagus", discovered near Sidon and now in the Istanbul Archaeology Museum, is so named not because it was thought to have contained Alexander's remains, but because its bas-reliefs depict Alexander and his companions fighting the Persians and hunting. It was originally thought to have been the sarcophagus of Abdalonymus (died 311 BC), the king of Sidon appointed by Alexander immediately following the battle of Issus in 333 BC. However, more recently, it has been suggested that it may date from earlier than Abdalonymus's death.
Demades likened the Macedonian army, after the death of Alexander, to the blinded Cyclops, due to the many random and disorderly movements that it made. Leosthenes likewise likened the anarchy among the generals after Alexander's death to the blinded Cyclops "who after he had lost his eye went feeling and groping about with his hands before him, not knowing where to lay them".

Division of the empire

Alexander's death was so sudden that when reports of his death reached Greece, they were not immediately believed. Alexander had no obvious or legitimate heir, his son Alexander IV by Roxane being born after Alexander's death. According to Diodorus, Alexander's companions asked him on his deathbed to whom he bequeathed his kingdom; his laconic reply was "tôi kratistôi"—"to the strongest". Another theory is that his successors wilfully or erroneously misheard "tôi Kraterôi"—"to Craterus", the general leading his Macedonian troops home and newly entrusted with the regency of Macedonia. Arrian and Plutarch claimed that Alexander was speechless by this point, implying that this was an apocryphal story. Diodorus, Curtius and Justin offered the more plausible story that Alexander passed his signet ring to Perdiccas, a bodyguard and leader of the companion cavalry, in front of witnesses, thereby nominating him. Perdiccas initially did not claim power, instead suggesting that Roxane's baby would be king, if male, with himself, Craterus, Leonnatus, and Antipater as guardians. However, the infantry, under the command of Meleager, rejected this arrangement since they had been excluded from the discussion. Instead, they supported Alexander's half-brother Philip Arrhidaeus. Eventually, the two sides reconciled, and after the birth of Alexander IV, he and Philip III were appointed joint kings, albeit in name only. Dissension and rivalry soon afflicted the Macedonians, however.
The satrapies handed out by Perdiccas at the Partition of Babylon became power bases each general used to bid for power. After the assassination of Perdiccas in 321 BC, Macedonian unity collapsed, and 40 years of war between "The Successors" (Diadochi) ensued before the Hellenistic world settled into four stable power blocs: Ptolemaic Egypt, Seleucid Mesopotamia and Central Asia, Attalid Anatolia, and Antigonid Macedon. In the process, both Alexander IV and Philip III were murdered.

Last plans

Diodorus stated that Alexander had given detailed written instructions to Craterus some time before his death, which are known as Alexander's "last plans". Craterus started to carry out Alexander's commands, but the successors chose not to implement them further, on the grounds that they were impractical and extravagant. Furthermore, Perdiccas had read the notebooks containing Alexander's last plans to the Macedonian troops in Babylon, who voted not to carry them out. According to Diodorus, Alexander's last plans called for military expansion into the southern and western Mediterranean, monumental constructions, and the intermixing of Eastern and Western populations.
It included:
- construction of 1,000 ships larger than triremes, along with harbours and a road running along the African coast all the way to the Pillars of Hercules, to be used for an invasion of Carthage and the western Mediterranean;
- erection of great temples in Delos, Delphi, Dodona, Dium and Amphipolis, all costing 1,500 talents, and a monumental temple to Athena at Troy;
- amalgamation of small settlements into larger cities ("synoecisms") and the "transplant of populations from Asia to Europe and in the opposite direction from Europe to Asia, in order to bring the largest continent to common unity and to friendship by means of intermarriage and family ties";
- construction of a monumental tomb for his father Philip, "to match the greatest of the pyramids of Egypt";
- conquest of Arabia;
- circumnavigation of Africa.

The enormous scale of these plans has led many scholars to doubt their historicity. Ernst Badian argued that they were exaggerated by Perdiccas in order to ensure that the Macedonian troops voted not to carry them out. Other scholars have proposed that they were invented by later authors within the tradition of the Alexander Romance.

Character

Generalship

Alexander perhaps earned the epithet "the Great" due to his unparalleled success as a military commander; he never lost a battle, despite typically being outnumbered. This was due to his use of terrain, phalanx and cavalry tactics, bold strategy, and the fierce loyalty of his troops. The Macedonian phalanx, armed with the sarissa, a long spear, had been developed and perfected by Philip II through rigorous training, and Alexander used its speed and manoeuvrability to great effect against larger but more disparate Persian forces. Alexander also recognized the potential for disunity among his diverse army, which employed various languages and weapons. He overcame this by being personally involved in battle, in the manner of a Macedonian king.
In his first battle in Asia, at Granicus, Alexander used only a small part of his forces, perhaps 13,000 infantry with 5,000 cavalry, against a much larger Persian force of 40,000. Alexander placed the phalanx at the center and cavalry and archers on the wings, so that his line matched the length of the Persian cavalry line. By contrast, the Persian infantry was stationed behind its cavalry. This ensured that Alexander would not be outflanked, while his phalanx, armed with long pikes, had a considerable advantage over the Persians' scimitars and javelins. Macedonian losses were negligible compared to those of the Persians. At Issus in 333 BC, his first confrontation with Darius, he used the same deployment, and again the central phalanx pushed through. Alexander personally led the charge in the center, routing the opposing army. At the decisive encounter with Darius at Gaugamela, Darius equipped his chariots with scythes on the wheels to break up the phalanx and equipped his cavalry with pikes. Alexander arranged a double phalanx, with the center advancing at an angle, parting when the chariots bore down and then reforming. The advance was successful and broke Darius's center, causing the latter to flee once again. When faced with opponents who used unfamiliar fighting techniques, such as in Central Asia and India, Alexander adapted his forces to his opponents' style. Thus, in Bactria and Sogdiana, Alexander successfully used his javelin throwers and archers to prevent outflanking movements, while massing his cavalry at the center. In India, confronted by Porus's elephant corps, the Macedonians opened their ranks to envelop the elephants and used their sarissas to strike upwards and dislodge the elephants' handlers.

Physical appearance

Historical sources frequently give conflicting accounts of Alexander's appearance, and the earliest sources are the most scant in their detail.
During his lifetime, Alexander carefully curated his image by commissioning works from famous artists of the time, including sculptures by Lysippos, paintings by Apelles and gem engravings by Pyrgoteles. Ancient authors recorded that Alexander was so pleased with portraits of himself created by Lysippos that he forbade other sculptors from crafting his image; scholars today, however, find the claim dubious. Nevertheless, Andrew Stewart highlights the fact that artistic portraits, not least because of who commissions them, are always partisan, and that artistic portrayals of Alexander "seek to legitimize him (or, by extension, his Successors), to interpret him to their audiences, to answer their critiques, and to persuade them of his greatness", and thus should be considered within a framework of "praise and blame", in the same way sources such as praise poetry are. Despite those caveats, Lysippos's sculpture, famous for its naturalism, as opposed to a stiffer, more static pose, is thought to be the most faithful depiction. Curtius Rufus, a Roman historian from the first century AD who wrote the Histories of Alexander the Great, describes Alexander sitting on the throne of Darius III. Both Curtius and Diodorus report a story that when Darius III's mother, Sisygambis, first met Alexander and Hephaestion, she assumed that the latter was Alexander because he was the taller and more handsome of the two. Details from the Alexander Sarcophagus show that he had a fair complexion with ruddy cheeks, in line with the description of him given by the Greek biographer Plutarch. Historians have understood the detail of the pleasant odour attributed to Alexander as stemming from a belief in ancient Greece that pleasant scents are characteristic of gods and heroes.
The Alexander Mosaic and contemporary coins portray Alexander with "a straight nose, a slightly protruding jaw, full lips and eyes deep set beneath a strongly pronounced forehead". Although the Alexander Mosaic depicts him with brown hair, the ancient historian Aelian (c. 175 – c. 235 AD), in his Varia Historia (12.14), describes Alexander as having fair, or golden, hair.

Personality

Both of Alexander's parents encouraged his ambitions. His father Philip was probably Alexander's most immediate and influential role model, as the young Alexander watched him campaign practically every year, winning victory after victory while ignoring severe wounds. Alexander's relationship with his father "forged" the competitive side of his personality; he had a need to outdo his father, illustrated by his reckless behavior in battle. While Alexander worried that his father would leave him "no great or brilliant achievement to be displayed to the world", he also downplayed his father's achievements to his companions. Alexander's mother Olympias similarly had huge ambitions, and encouraged her son to believe it was his destiny to conquer the Persian Empire. She instilled a sense of destiny in him, and Plutarch tells how his ambition "kept his spirit serious and lofty in advance of his years". According to Plutarch, Alexander also had a violent temper and rash, impulsive nature, and this could influence his decision-making. Although Alexander was stubborn and did not respond well to orders from his father, he was open to reasoned debate. He had a calmer side—perceptive, logical, and calculating. He had a great desire for knowledge, a love for philosophy, and was an avid reader. This was no doubt in part due to Aristotle's tutelage; Alexander was intelligent and quick to learn. His intelligent and rational side was amply demonstrated by his ability and success as a general. He had great self-restraint in "pleasures of the body", in contrast with his lack of self-control with alcohol.
Alexander was erudite and patronized both arts and sciences. However, he had little interest in sports or the Olympic Games (unlike his father), seeking only the Homeric ideals of honour (timê) and glory (kudos). He had great charisma and force of personality, characteristics which made him a great leader. His unique abilities were further demonstrated by the inability of any of his generals to unite Macedonia and retain the Empire after his death; only Alexander had the ability to do so. During his final years, and especially after the death of Hephaestion, Alexander began to exhibit signs of megalomania and paranoia. His extraordinary achievements, coupled with his own ineffable sense of destiny and the flattery of his companions, may have combined to produce this effect. His delusions of grandeur are readily visible in his will and in his desire to conquer the world, inasmuch as various sources describe him as having boundless ambition, an epithet whose meaning has descended into historical cliché. He appears to have believed himself a deity, or at least sought to deify himself. Olympias always insisted to him that he was the son of Zeus, a theory apparently confirmed to him by the oracle of Amun at Siwa. He began to identify himself as the son of Zeus-Ammon. Alexander adopted elements of Persian dress and customs at court, notably proskynesis, as one aspect of his broad strategy aimed at securing the aid and support of the Iranian upper classes; however, the practice of proskynesis was disapproved of by the Macedonians, who were unwilling to perform it. This behaviour cost him the sympathies of many of his countrymen. However, Alexander was also a pragmatic ruler who understood the difficulties of ruling culturally disparate peoples, many of whom lived in kingdoms where the king was divine.
Thus, rather than megalomania, his behaviour may simply have been a practical attempt at strengthening his rule and keeping his empire together.

Personal relationships

Alexander married three times: Roxana, daughter of the Sogdian nobleman Oxyartes of Bactria, out of love; and the Persian princesses Stateira and Parysatis, the former a daughter of Darius III and the latter a daughter of Artaxerxes III, for political reasons. He apparently had two sons, Alexander IV of Macedon by Roxana and, possibly, Heracles of Macedon by his mistress Barsine. He lost another child when Roxana miscarried at Babylon. Alexander also had a close relationship with his friend, general, and bodyguard Hephaestion, the son of a Macedonian noble. Hephaestion's death devastated Alexander, and may have contributed to Alexander's failing health and detached mental state during his final months. Alexander's sexuality has been the subject of speculation and controversy in modern times. The Roman-era writer Athenaeus says, based on the scholar Dicaearchus, who was Alexander's contemporary, that the king "was quite excessively keen on boys", and that Alexander kissed the eunuch Bagoas in public. This episode is also told by Plutarch, probably based on the same source. None of Alexander's contemporaries, however, are known to have explicitly described Alexander's relationship with Hephaestion as sexual, though the pair was often compared to Achilles and Patroclus, whom classical Greek culture painted as a couple. Aelian writes of Alexander's visit to Troy where "Alexander garlanded the tomb of Achilles, and Hephaestion that of Patroclus, the latter hinting that he was a beloved of Alexander, in just the same way as Patroclus was of Achilles."
Some modern historians (e.g., Robin Lane Fox) believe not only that Alexander's youthful relationship with Hephaestion was sexual, but that their sexual contacts may have continued into adulthood, which went against the social norms of at least some Greek cities, such as Athens, though some modern researchers have tentatively proposed that Macedonia (or at least the Macedonian court) may have been more tolerant of homosexuality between adults. Green argues that there is little evidence in ancient sources that Alexander had much carnal interest in women; he did not produce an heir until the very end of his life. However, Ogden calculates that Alexander, who impregnated his partners thrice in eight years, had a higher matrimonial record than his father at the same age. Two of these pregnancies, Stateira's and Barsine's, are of dubious legitimacy. According to Diodorus Siculus, Alexander accumulated a harem in the style of Persian kings, but he used it rather sparingly, "not wishing to offend the Macedonians", showing great self-control in "pleasures of the body". Nevertheless, Plutarch described how Alexander was infatuated by Roxana while complimenting him on not forcing himself on her. Green suggested that, in the context of the period, Alexander formed quite strong friendships with women, including Ada of Caria, who adopted him, and even Darius's mother Sisygambis, who supposedly died from grief upon hearing of Alexander's death.

Legacy

Alexander's legacy extended beyond his military conquests, and his reign marked a turning point in European and Asian history. His campaigns greatly increased contacts and trade between East and West, and vast areas to the east were significantly exposed to Greek civilization and influence. Some of the cities he founded became major cultural centers, many surviving into the 21st century.
His chroniclers recorded valuable information about the areas through which he marched, while the Greeks themselves got a sense of belonging to a world beyond the Mediterranean. Hellenistic kingdoms Alexander's most immediate legacy was the introduction of Macedonian rule to huge new swathes of Asia. At the time of his death, Alexander's empire covered some , and was the largest state of its time. Many of these areas remained in Macedonian hands or under Greek influence for the next 200–300 years. The successor states that emerged were, at least initially, dominant forces, and these 300 years are often referred to as the Hellenistic period. The eastern borders of Alexander's empire began to collapse even during his lifetime. However, the power vacuum he left in the northwest of the Indian subcontinent directly gave rise to one of the most powerful Indian dynasties in history, the Maurya Empire. Taking advantage of this power vacuum, Chandragupta Maurya (referred to in Greek sources as "Sandrokottos"), of relatively humble origin, took control of the Punjab, and with that power base proceeded to conquer the Nanda Empire. Founding of cities Over the course of his conquests, Alexander founded some twenty cities that bore his name, most of them east of the Tigris. The first, and greatest, was Alexandria in Egypt, which would become one of the leading Mediterranean cities. The cities' locations reflected trade routes as well as defensive positions. At first, the cities must have been inhospitable, little more than defensive garrisons. Following Alexander's death, many Greeks who had settled there tried to return to Greece. However, a century or so after Alexander's death, many of the Alexandrias were thriving, with elaborate public buildings and substantial populations that included both Greek and local peoples. The foundation of the "new" Smyrna was also associated with Alexander. 
According to legend, after Alexander hunted on Mount Pagus, he slept under a plane tree at the sanctuary of Nemesis. While he was sleeping, the goddess appeared and told him to found a city there and move into it the Smyrnaeans from the "old" city. The Smyrnaeans sent ambassadors to the oracle at Clarus to ask about this, and after the response from the oracle they decided to move to the "new" city. The city of Pella, in modern Jordan, was founded by veterans of Alexander's army and named after the city of Pella, in Greece, which was the birthplace of Alexander. Funding of temples In 334 BC, Alexander the Great donated funds for the completion of the new temple of Athena Polias in Priene, in modern-day western Turkey. An inscription from the temple, now housed in the British Museum, declares: "King Alexander dedicated [this temple] to Athena Polias." This inscription is one of the few independent archaeological discoveries confirming an episode from Alexander's life. The temple was designed by Pytheos, one of the architects of the Mausoleum at Halicarnassus. Libanius wrote that Alexander founded the temple of Zeus Bottiaios on the site where the city of Antioch was later built. The Suda records that Alexander built a great temple to Sarapis. Hellenization The term Hellenization was coined by the German historian Johann Gustav Droysen to denote the spread of Greek language, culture, and population into the former Persian empire after Alexander's conquest. This process can be seen in such great Hellenistic cities as Alexandria, Antioch and Seleucia (south of modern Baghdad). Alexander sought to insert Greek elements into Persian culture and to hybridize Greek and Persian culture, homogenizing the populations of Asia and Europe. Although his successors explicitly rejected such policies, Hellenization occurred throughout the region, accompanied by a distinct and opposite 'Orientalization' of the successor states. 
The core of the Hellenistic culture promulgated by the conquests was essentially Athenian. The close association of men from across Greece in Alexander's army directly led to the emergence of the largely Attic-based "koine", or "common" Greek dialect. Koine spread throughout the Hellenistic world, becoming the lingua franca of Hellenistic lands and eventually the ancestor of modern Greek. Furthermore, town planning, education, local government, and art current in the Hellenistic period were all based on Classical Greek ideals, evolving into distinct new forms commonly grouped as Hellenistic. Also, the New Testament was written in the Koine Greek language. Aspects of Hellenistic culture were still evident in the traditions of the Byzantine Empire in the mid-15th century. Hellenization in South and Central Asia Some of the most pronounced effects of Hellenization can be seen in Afghanistan and India, in the region of the relatively late-rising Greco-Bactrian Kingdom (250–125 BC) (in modern Afghanistan, Pakistan, and Tajikistan) and the Indo-Greek Kingdom (180 BC – 10 AD) in modern Afghanistan and India. On the Silk Road trade routes, Hellenistic culture hybridized with Iranian and Buddhist cultures. The cosmopolitan art and mythology of Gandhara (a region spanning the upper confluence of the Indus, Swat and Kabul rivers in modern Pakistan) of the ~3rd century BC to the ~5th century AD provide some of the clearest evidence of direct contact between Hellenistic civilization and South Asia, as do the Edicts of Ashoka, which directly mention the Greeks within Ashoka's dominion converting to Buddhism, and the reception of Buddhist emissaries by Ashoka's contemporaries in the Hellenistic world. The resulting syncretism known as Greco-Buddhism influenced the development of Buddhism and created a culture of Greco-Buddhist art. These Greco-Buddhist kingdoms sent some of the first Buddhist missionaries to China, Sri Lanka and Hellenistic Asia and Europe (Greco-Buddhist monasticism). 
Some of the first and most influential figurative portrayals of the Buddha appeared at this time, perhaps modelled on Greek statues of Apollo in the Greco-Buddhist style. Several Buddhist traditions may have been influenced by the ancient Greek religion: the concept of bodhisattvas is reminiscent of Greek divine heroes, and some Mahayana ceremonial practices (burning incense, gifts of flowers, and food placed on altars) are similar to those practised by the ancient Greeks; however, similar practices were also observed in native Indic culture. One Greek king, Menander I, probably became Buddhist, and was immortalized in Buddhist literature as 'Milinda'. The process of Hellenization also spurred trade between the east and west. For example, Greek astronomical instruments dating to the 3rd century BC were found in the Greco-Bactrian city of Ai Khanoum in modern-day Afghanistan, while the Greek concept of a spherical earth surrounded by the spheres of planets eventually supplanted the long-standing Indian cosmological belief of a disc consisting of four continents grouped around a central mountain (Mount Meru) like the petals of a flower. The Yavanajataka (lit. Greek astronomical treatise) and Paulisa Siddhanta texts depict the influence of Greek astronomical ideas on Indian astronomy. Following the conquests of Alexander the Great in the east, Hellenistic influence on Indian art was far-ranging. In the area of architecture, a few examples of the Ionic order can be found as far as Pakistan with the Jandial temple near Taxila. Several examples of capitals displaying Ionic influences can be seen as far as Patna, especially with the Pataliputra capital, dated to the 3rd century BC. The Corinthian order is also heavily represented in the art of Gandhara, especially through Indo-Corinthian capitals. Influence on Rome Alexander and his exploits were admired by many Romans, especially generals, who wanted to associate themselves with his achievements. 
Polybius began his Histories by reminding Romans of Alexander's achievements, and thereafter Roman leaders saw him as a role model. Pompey the Great adopted the epithet "Magnus" and even Alexander's anastole-type haircut, and searched the conquered lands of the east for Alexander's 260-year-old cloak, which he then wore as a sign of greatness. Julius Caesar dedicated a Lysippean equestrian bronze statue but replaced Alexander's head with his own, while Octavian visited Alexander's tomb in Alexandria and temporarily changed his seal from a sphinx to Alexander's profile. The emperor Trajan also admired Alexander, as did Nero and Caracalla. The Macriani, a Roman family that in the person of Macrinus briefly ascended to the imperial throne, kept images of Alexander on their persons, either on jewellery, or embroidered into their clothes. On the other hand, some Roman writers, particularly Republican figures, used Alexander as a cautionary tale of how autocratic tendencies can be kept in check by republican values. These writers used Alexander as an example of ruler values such as friendship and clemency, but also of anger and over-desire for glory. The emperor Julian, in his satire "The Caesars", describes a contest between the previous Roman emperors, with Alexander the Great called in as an extra contestant, in the presence of the assembled gods. The Itinerarium Alexandri is a 4th-century Latin itinerary which describes Alexander the Great's campaigns. Julius Caesar went to serve his quaestorship in Hispania after his wife's funeral, in the spring or early summer of 69 BC. While there, he encountered a statue of Alexander the Great, and realised with dissatisfaction that he was now at an age when Alexander had the world at his feet, while he had achieved comparatively little. Pompey posed as the "new Alexander" since he was his boyhood hero. 
After Caracalla concluded his campaign against the Alamanni, it became evident that he was inordinately preoccupied with Alexander the Great. He began openly mimicking Alexander in his personal style. In planning his invasion of the Parthian Empire, Caracalla decided to arrange 16,000 of his men in Macedonian-style phalanxes, despite the Roman army having made the phalanx an obsolete tactical formation. The historian Christopher Matthew mentions that the term Phalangarii has two possible meanings, both with military connotations. The first refers merely to the Roman battle line and does not specifically mean that the men were armed with pikes, and the second bears similarity to the 'Marian Mules' of the late Roman Republic who carried their equipment suspended from a long pole, which were in use until at least the 2nd century AD. As a consequence, the Phalangarii of Legio II Parthica may not have been pikemen, but rather standard battle line troops or possibly Triarii. Caracalla's mania for Alexander went so far that Caracalla visited Alexandria while preparing for his Persian invasion and persecuted philosophers of the Aristotelian school based on a legend that Aristotle had poisoned Alexander. This was a sign of Caracalla's increasingly erratic behaviour. But this mania for Alexander, strange as it was, was overshadowed by subsequent events in Alexandria. In 39, Caligula performed a spectacular stunt by ordering a temporary floating bridge to be built using ships as pontoons, stretching for over two miles from the resort of Baiae to the neighbouring port of Puteoli. It was said that the bridge was to rival the Persian king Xerxes' pontoon bridge crossing of the Hellespont. Caligula, who could not swim, then proceeded to ride his favourite horse Incitatus across, wearing the breastplate of Alexander the Great. 
This act was in defiance of a prediction by Tiberius's soothsayer Thrasyllus of Mendes that Caligula had "no more chance of becoming emperor than of riding a horse across the Bay of Baiae". The diffusion of Greek culture and language cemented by Alexander's conquests in West Asia and North Africa served as a "precondition" for the later Roman expansion into these territories and the entire basis for the Byzantine Empire, according to Errington. Unsuccessful plan to cut a canal through the isthmus Pausanias writes that Alexander wanted to dig through the Mimas mountain (in today's Karaburun area), but did not succeed. He says this was Alexander's only unsuccessful project. Pliny the Elder adds that the planned distance was , and the purpose was to cut a canal through the isthmus to connect the Caystrian and Hermaean bays. Naming of the Icarus island in the Persian Gulf Arrian wrote that Aristobulus said that Alexander named Icarus island (modern Failaka Island) in the Persian Gulf after Icarus island in the Aegean. Legend Many of the legends about Alexander derive from his own lifetime, probably encouraged by Alexander himself. His court historian Callisthenes portrayed the sea in Cilicia as drawing back from him in proskynesis. Writing shortly after Alexander's death, Onesicritus invented a tryst between Alexander and Thalestris, queen of the mythical Amazons. He reportedly read this passage to his patron King Lysimachus, who had been one of Alexander's generals and who quipped, "I wonder where I was at the time." In the first centuries after Alexander's death, probably in Alexandria, a quantity of the legendary material coalesced into a text known as the Alexander Romance, later falsely ascribed to Callisthenes and therefore known as Pseudo-Callisthenes. This text underwent numerous expansions and revisions throughout Antiquity and the Middle Ages, containing many dubious stories, and was translated into numerous languages. 
In ancient and modern culture Alexander the Great's accomplishments and legacy have been depicted in many cultures. Alexander has figured in both high and popular culture beginning in his own era to the present day. The Alexander Romance, in particular, has had a significant impact on portrayals of Alexander in later cultures, from Persian to medieval European to modern Greek. Alexander features prominently in modern Greek folklore, more so than any other ancient figure. The colloquial form of his name in modern Greek ("O Megalexandros") is a household name, and he is the only ancient hero to appear in the Karagiozis shadow play. One well-known fable among Greek seamen involves a solitary mermaid who would grasp a ship's prow during a storm and ask the captain "Is King Alexander alive?" The correct answer is "He is alive and well and rules the world!" causing the mermaid to vanish and the sea to calm. Any other answer would cause the mermaid to turn into a raging Gorgon who would drag the ship to the bottom of the sea, all hands aboard. In pre-Islamic Middle Persian (Zoroastrian) literature, Alexander is referred to by the epithet gujastak, meaning "accursed", and is accused of destroying temples and burning the sacred texts of Zoroastrianism. In Sunni Islamic Persia, under the influence of the Alexander Romance (in Iskandarnamah), a more positive portrayal of Alexander emerges. Firdausi's Shahnameh ("The Book of Kings") includes Alexander in a line of legitimate Persian shahs, a mythical figure who explored the far reaches of the world in search of the Fountain of Youth. In the Shahnameh, Alexander's first journey is to Mecca to pray at the Kaaba. Alexander was depicted as performing a Hajj (pilgrimage to Mecca) many times in subsequent Islamic art and literature. Later Persian writers associate him with philosophy, portraying him at a symposium with figures such as Socrates, Plato and Aristotle, in search of immortality. 
The figure of Dhul-Qarnayn (literally "the Two-Horned One") mentioned in the Quran is believed by scholars to be based on later legends of Alexander. In this tradition, he was a heroic figure who built a wall to defend against the nations of Gog and Magog. He then travelled the known world in search of the Water of Life and Immortality, eventually becoming a prophet. The Syriac version of the Alexander Romance portrays him as an ideal Christian world conqueror who prayed to "the one true God". In Egypt, Alexander was portrayed as the son of Nectanebo II, the last pharaoh before the Persian conquest. His defeat of Darius was depicted as Egypt's salvation, "proving" Egypt was still ruled by an Egyptian. According to Josephus, Alexander was shown the Book of Daniel when he entered Jerusalem, which described a mighty Greek king who would conquer the Persian Empire. This is cited as a reason for sparing Jerusalem. In Hindi and Urdu, the name "Sikandar", derived from the Persian name for Alexander, denotes a rising young talent, and the Delhi Sultanate ruler Aladdin Khalji stylized himself as "Sikandar-i-Sani" (the Second Alexander the Great). In medieval India, Turkic and Afghan sovereigns from the Iranian-cultured region of Central Asia brought positive cultural connotations of Alexander to the Indian subcontinent, resulting in the efflorescence of Sikandernameh (Alexander Romances) written by Indo-Persian poets such as Amir Khusrow and the prominence of Alexander the Great as a popular subject in Mughal-era Persian miniatures. In medieval Europe, Alexander the Great was revered as a member of the Nine Worthies, a group of heroes whose lives were believed to encapsulate all the ideal qualities of chivalry. 
During the first Italian campaign of the French Revolutionary Wars, when Bourrienne asked whether he preferred Alexander or Caesar, Napoleon said that he placed Alexander the Great in the first rank, mainly because of his campaign in Asia. In the Greek Anthology, there are poems referring to Alexander. Art objects related to Alexander have been created throughout history. In addition to works of literature, sculpture, and painting, in modern times Alexander remains the subject of musical and cinematic works. The song 'Alexander the Great' by the British heavy metal band Iron Maiden is indicative. Some films that have been shot with the theme of Alexander are: Sikandar (1941), an Indian production directed by Sohrab Modi about the conquest of India by Alexander Alexander the Great (1956), produced by MGM and starring Richard Burton Sikandar-e-Azam (1965), an Indian production directed by Kedar Kapoor Alexander (2004), directed by Oliver Stone, starring Colin Farrell There are also many references to other movies and TV series. Newer novels about Alexander are: the trilogy "Alexander the Great" by Valerio Massimo Manfredi, consisting of "The Son of the Dream", "The Sands of Amon", and "The Ends of the World"; the trilogy of Mary Renault, consisting of "Fire from Heaven", "The Persian Boy" and "Funeral Games"; and The Virtues of War (2004), about Alexander the Great, and The Afghan Campaign (2006), about Alexander's conquests in Afghanistan, both by Steven Pressfield. Irish playwright Aubrey Thomas de Vere wrote Alexander the Great, a Dramatic Poem. Historiography Apart from a few inscriptions and fragments, texts written by people who actually knew Alexander or who gathered information from men who served with Alexander were all lost. 
Contemporaries who wrote accounts of his life included Alexander's campaign historian Callisthenes; Alexander's generals Ptolemy and Nearchus; Aristobulus, a junior officer on the campaigns; and Onesicritus, Alexander's chief helmsman. Their works are lost, but later works based on these original sources have survived. The earliest of these is Diodorus Siculus (1st century BC), followed by Quintus Curtius Rufus (mid-to-late 1st century AD), Arrian (1st to 2nd century AD), the biographer Plutarch (1st to 2nd century AD), and finally Justin, whose work dated as late as the 4th century. Of these, Arrian is generally considered the most reliable, given that he used Ptolemy and Aristobulus as his sources, closely followed by Diodorus. See also Ancient Macedonian army Bucephalus Chronology of European exploration of Asia Theories about Alexander the Great in the Quran Ptolemaic cult of Alexander the Great Gates of Alexander List of biblical figures identified in extra-biblical sources List of people known as The Great External links Alexander the Great - By Kireet Joshi. In Our Time: Alexander the Great, BBC discussion with Paul Cartledge, Diana Spencer and Rachel Mairs hosted by Melvyn Bragg, first broadcast 1 October 2015.
784
https://en.wikipedia.org/wiki/Alfred%20Korzybski
Alfred Korzybski
Alfred Habdank Skarbek Korzybski (July 3, 1879 – March 1, 1950) was a Polish-American independent scholar who developed a field called general semantics, which he viewed as both distinct from, and more encompassing than, the field of semantics. He argued that human knowledge of the world is limited both by the human nervous system and the languages humans have developed, and thus no one can have direct access to reality, given that the most we can know is that which is filtered through the brain's responses to reality. His best known dictum is "The map is not the territory". Early life and career Born in Warsaw, Poland, then part of the Russian Empire, Korzybski belonged to an aristocratic Polish family whose members had worked as mathematicians, scientists, and engineers for generations. He learned Polish at home and Russian in school; and, having a French and a German governess, he became fluent in four languages as a child. Korzybski studied engineering at the Warsaw University of Technology. During the First World War (1914–1918) Korzybski served as an intelligence officer in the Russian Army. After being wounded in a leg and suffering other injuries, he moved to North America in 1916 (first to Canada, then to the United States) to coordinate the shipment of artillery to Russia. He also lectured to Polish-American audiences about the conflict, promoting the sale of war bonds. After the war he decided to remain in the United States, becoming a naturalized citizen in 1940. He met Mira Edgerly, a painter of portraits on ivory, shortly after the 1918 Armistice; they married in January 1919, and the marriage lasted until his death. E. P. Dutton published Korzybski's first book, Manhood of Humanity, in 1921. 
In this work he proposed and explained in detail a new theory of humankind: mankind as a "time-binding" class of life (humans perform time binding by the transmission of knowledge and abstractions through time, which become accreted in cultures). General semantics Korzybski's work culminated in the initiation of a discipline that he named general semantics (GS). This should not be confused with semantics. The basic principles of general semantics, which include time-binding, are described in Science and Sanity, published in 1933. In 1938, Korzybski founded the Institute of General Semantics in Chicago. The post-World War II housing shortage in Chicago cost him the institute's building lease, so in 1946 he moved the institute to Lakeville, Connecticut, U.S., where he directed it until his death in 1950. Korzybski maintained that humans are limited in what they know by (1) the structure of their nervous systems, and (2) the structure of their languages. Humans cannot experience the world directly, but only through their "abstractions" (nonverbal impressions or "gleanings" derived from the nervous system, and verbal indicators expressed and derived from language). These abstractions sometimes mislead us about what is actually the case. Our understanding sometimes lacks similarity of structure with what is actually happening. He sought to train our awareness of abstracting, using techniques he had derived from his study of mathematics and science. He called this awareness, this goal of his system, "consciousness of abstracting". His system included the promotion of attitudes such as "I don't know; let's see," in order that we may better discover or reflect on the world's realities as revealed by modern science. Another technique involved becoming inwardly and outwardly quiet, an experience he termed "silence on the objective levels". 
"To be" Many devotees and critics of Korzybski reduced his rather complex system to a simple matter of what he said about the verb form "is" of the general verb "to be." His system, however, is based primarily on such terminology as the different "orders of abstraction," and formulations such as "consciousness of abstracting." The contention that Korzybski opposed the use of the verb "to be" would be a profound exaggeration. He thought that certain uses of the verb "to be", called the "is of identity" and the "is of predication", were faulty in structure, e.g., a statement such as, "Elizabeth is a fool" (said of a person named "Elizabeth" who has done something that we regard as foolish). In Korzybski's system, one's assessment of Elizabeth belongs to a higher order of abstraction than Elizabeth herself. Korzybski's remedy was to deny identity; in this example, to be aware continually that "Elizabeth" is not what we call her. We find Elizabeth not in the verbal domain, the world of words, but the nonverbal domain (the two, he said, amount to different orders of abstraction). This was expressed by Korzybski's most famous premise, "the map is not the territory". Note that this premise uses the phrase "is not", a form of "to be"; this and many other examples show that he did not intend to abandon "to be" as such. In fact, he said explicitly that there were no structural problems with the verb "to be" when used as an auxiliary verb or when used to state existence or location. It was even acceptable at times to use the faulty forms of the verb "to be," as long as one was aware of their structural limitations. Anecdotes One day, Korzybski was giving a lecture to a group of students, and he interrupted the lesson suddenly in order to retrieve a packet of biscuits, wrapped in white paper, from his briefcase. He muttered that he just had to eat something, and he asked the students on the seats in the front row if they would also like a biscuit. 
A few students took a biscuit. "Nice biscuit, don't you think," said Korzybski, while he took a second one. The students were chewing vigorously. Then he tore the white paper from the biscuits, in order to reveal the original packaging. On it was a big picture of a dog's head and the words "Dog Cookies." The students looked at the package, and were shocked. Two of them wanted to vomit, put their hands in front of their mouths, and ran out of the lecture hall to the toilet. "You see," Korzybski remarked, "I have just demonstrated that people don't just eat food, but also words, and that the taste of the former is often outdone by the taste of the latter." William Burroughs went to a Korzybski workshop in the autumn of 1939. He was 25 years old, and paid $40. His fellow students—there were 38 in all—included the young Samuel I. Hayakawa (later to become a Republican member of the U.S. Senate) and Wendell Johnson (founder of the Monster Study). Influence Korzybski was well received in numerous disciplines, as evidenced by the positive reactions from leading figures in the sciences and humanities in the 1940s and 1950s. These include author Robert A. Heinlein naming a character after him in his 1940 short story "Blowups Happen", and science fiction writer A. E. van Vogt drawing on his ideas in the novel "The World of Null-A", published in 1948. Korzybski's ideas influenced philosopher Alan Watts who used his phrase "the map is not the territory" in lectures. Writer Robert Anton Wilson was also deeply influenced by Korzybski's ideas. As reported in the third edition of Science and Sanity, in World War II the US Army used Korzybski's system to treat battle fatigue in Europe, under the supervision of Dr. Douglas M. Kelley, who went on to become the psychiatrist in charge of the Nazi war criminals at Nuremberg. Some of the General Semantics tradition was continued by Samuel I. Hayakawa. 
See also Alfred Korzybski Memorial Lecture Concept and object E-Prime Institute of General Semantics Robert Pula Structural differential Further reading Kodish, Bruce. 2011. Korzybski: A Biography. Pasadena, CA: Extensional Publishing. Kodish, Bruce and Susan Presby Kodish. 2011. Drive Yourself Sane: Using the Uncommon Sense of General Semantics, Third Edition. Pasadena, CA: Extensional Publishing. Alfred Korzybski, Manhood of Humanity, foreword by Edward Kasner, notes by M. Kendig, Institute of General Semantics, 1950, 2nd edition, 391 pages. Alfred Korzybski, Science and Sanity: An Introduction to Non-Aristotelian Systems and General Semantics, preface by Robert P. Pula, Institute of General Semantics, 1994, 5th edition. Alfred Korzybski, Collected Writings 1920-1950, Institute of General Semantics, 1990. Montagu, M. F. A. (1953). Time-binding and the concept of culture. The Scientific Monthly, Vol. 77, No. 3 (Sep. 1953), pp. 148–155. Murray, E. (1950). In memoriam: Alfred H. Korzybski. Sociometry, Vol. 13, No. 1 (Feb. 1950), pp. 76–77. External links Alfred Korzybski and Gestalt Therapy Website Australian General Semantics Society Institute of General Semantics Finding aid to Alfred Korzybski papers at Columbia University Rare Book & Manuscript Library.
785
https://en.wikipedia.org/wiki/Asteroids%20%28video%20game%29
Asteroids (video game)
Asteroids is a space-themed multidirectional shooter arcade game designed by Lyle Rains and Ed Logg released in November 1979 by Atari, Inc. The player controls a single spaceship in an asteroid field which is periodically traversed by flying saucers. The object of the game is to shoot and destroy the asteroids and saucers, while not colliding with either, or being hit by the saucers' counter-fire. The game becomes harder as the number of asteroids increases. Asteroids was one of the first major hits of the golden age of arcade games; the game sold over 70,000 arcade cabinets and proved both popular with players and influential with developers. In the 1980s it was ported to Atari's home systems, and the Atari VCS version sold over three million copies. The game was widely imitated, and it directly influenced Defender, Gravitar, and many other video games. Asteroids was conceived during a meeting between Logg and Rains, who decided to use hardware developed by Howard Delman previously used for Lunar Lander. Asteroids was based on an unfinished game titled Cosmos; its physics model, control scheme, and gameplay elements were derived from Spacewar!, Computer Space, and Space Invaders and refined through trial and error. The game is rendered on a vector display in a two-dimensional view that wraps around both screen axes. Gameplay The objective of Asteroids is to destroy asteroids and saucers. The player controls a triangular ship that can rotate left and right, fire shots straight forward, and thrust forward. Once the ship begins moving in a direction, it will continue in that direction for a time without player intervention unless the player applies thrust in a different direction. The ship eventually comes to a stop when not thrusting. The player can also send the ship into hyperspace, causing it to disappear and reappear in a random location on the screen, at the risk of self-destructing or appearing on top of an asteroid. 
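The ship physics described above (rotation, thrust that leaves the ship drifting, gradual slowing when not thrusting, positions that wrap around both screen axes, and the random hyperspace jump) can be sketched in a few lines of Python. This is only an illustrative model; the constants and names here are invented for the example and are not Atari's original implementation.

```python
import math
import random

def update_ship(ship, thrusting, turn, dt, accel=120.0, drag=0.5):
    """Advance the ship one time step with Asteroids-style inertia.

    The ship keeps drifting after thrust stops and only gradually
    coasts to a halt; accel and drag are illustrative guesses.
    """
    ship["angle"] += turn * dt                      # rotate left/right
    if thrusting:                                   # accelerate along the nose
        ship["vx"] += math.cos(ship["angle"]) * accel * dt
        ship["vy"] += math.sin(ship["angle"]) * accel * dt
    decay = math.exp(-drag * dt)                    # coasting friction
    ship["vx"] *= decay
    ship["vy"] *= decay
    # positions wrap around both screen axes (a torus)
    ship["x"] = (ship["x"] + ship["vx"] * dt) % ship["w"]
    ship["y"] = (ship["y"] + ship["vy"] * dt) % ship["h"]
    return ship

def hyperspace(ship):
    """Reappear at a random location, as the hyperspace button does."""
    ship["x"] = random.uniform(0.0, ship["w"])
    ship["y"] = random.uniform(0.0, ship["h"])
```

An object that drifts past the right edge reappears on the left because the position is taken modulo the screen size; the same rule applied to asteroids and saucers produces the wrap-around field the game is known for.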
Each level starts with a few large asteroids drifting in various directions on the screen. Objects wrap around screen edges – for instance, an asteroid that drifts off the top edge of the screen reappears at the bottom and continues moving in the same direction. As the player shoots asteroids, they break into smaller asteroids that move faster and are more difficult to hit. Smaller asteroids are also worth more points. Two flying saucers appear periodically on the screen; the "big saucer" shoots randomly and poorly, while the "small saucer" fires frequently at the ship. After reaching a score of 40,000, only the small saucer appears. As the player's score increases, the angle range of the shots from the small saucer diminishes until the saucer fires extremely accurately. Once the screen has been cleared of all asteroids and flying saucers, a new set of large asteroids appears, thus starting the next level. The game gets harder as the number of asteroids increases until the score reaches a range between 40,000 and 60,000. The player starts with 3–5 lives and gains an extra life per 10,000 points. Play continues until the last ship is lost, which ends the game. The machine "turns over" at 99,990 points, which is the maximum high score that can be achieved. Lurking exploit In the original game design, saucers were supposed to begin shooting as soon as they appeared, but this was changed. Additionally, saucers can only aim at the player's ship on-screen; they are not capable of aiming across a screen boundary. These behaviors allow a "lurking" strategy, in which the player stays near the edge of the screen opposite the saucer. By keeping just one or two rocks in play, a player can shoot across the boundary and destroy saucers to accumulate points indefinitely with little risk of being destroyed. Arcade operators began to complain about losing revenue due to this exploit. 
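The wrap-around behavior described above – an object leaving one edge of the screen reappearing at the opposite edge – is the classic toroidal topology, which can be sketched as a simple modulo operation. The screen dimensions here are illustrative, not the arcade machine's actual resolution.

```python
# Minimal sketch of toroidal screen wrap-around: positions are taken
# modulo the screen size on each axis, so an object drifting off any
# edge reappears at the opposite edge with its velocity unchanged.
WIDTH, HEIGHT = 1024, 768   # hypothetical screen size for illustration

def wrap(x, y):
    """Wrap a position around both screen axes."""
    return x % WIDTH, y % HEIGHT

# An asteroid 10 units past the top edge reappears near the bottom;
# one past the right edge reappears at the left.
print(wrap(500, -10))    # -> (500, 758)
print(wrap(1030, 400))   # -> (6, 400)
```

Note that Python's `%` operator returns a non-negative result for a positive modulus, which makes the same expression handle exits from either side of an axis; it is also why the lurking exploit worked, since player shots wrapped across the boundary while saucers could not aim across it.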
In response, Atari issued a patched EPROM and, due to the impact of this exploit, Atari (and other companies) changed their development and testing policies to try to prevent future games from having such exploits. Development Concept Asteroids was conceived by Lyle Rains and programmed by Ed Logg with collaborations from other Atari staff. Logg was impressed with the Atari Video Computer System (later called the Atari 2600), and he joined Atari's coin-op division to work on Dirt Bike, which was never released due to an unsuccessful field test. Paul Mancuso joined the development team as Asteroids technician, and engineer Howard Delman contributed to the hardware. During a meeting in April 1979, Rains discussed Planet Grab, a multiplayer arcade game later renamed to Cosmos. Logg did not know the name of the game, but thought of Computer Space as "the inspiration for the two-dimensional approach". Rains conceived of Asteroids as a mixture of Computer Space and Space Invaders, combining the two-dimensional approach of Computer Space with Space Invaders' addictive gameplay of "completion" and "eliminate all threats". The unfinished game featured a giant, indestructible asteroid, so Rains asked Logg: "Well, why don't we have a game where you shoot the rocks and blow them up?" In response, Logg described a similar concept where the player selectively shoots at rocks that break into smaller pieces. Both agreed on the concept. Hardware Asteroids was implemented on hardware developed by Delman and is a vector game, in which the graphics are composed of lines drawn on a vector monitor. Rains initially wanted the game done in raster graphics, but Logg, experienced in vector graphics, suggested an XY monitor because the high image quality would permit precise aiming. 
The hardware is chiefly a MOS 6502 executing the game program, and QuadraScan, a high-resolution vector graphics processor developed by Atari and referred to as an "XY display system" and the "Digital Vector Generator (DVG)". The original design concepts for QuadraScan came out of Cyan Engineering, Atari's off-campus research lab in Grass Valley, California, in 1978. Cyan gave it to Delman, who finished the design and first used it for Lunar Lander. Logg received Delman's modified board with five buttons, 13 sound effects, and additional RAM, and he used it to develop Asteroids. The size of the board was 4 by 4 inches, and it was "linked up" to a monitor. Implementation Logg modeled the player's ship, the five-button control scheme, and the game physics after Spacewar!, which he had played as a student at the University of California, Berkeley, but made several changes to improve playability. The ship was programmed into the hardware and rendered by the monitor, and it was configured to move with thrust and inertia. Logg was dissatisfied that the hyperspace button was not placed near his right thumb, as he had a problem "tak[ing] his hand off the thrust button". Drawings of asteroids in various shapes were incorporated into the game. Logg copied the idea of a high score table with initials from Exidy's Star Fire. The two saucers were formulated to be different from each other. A steadily decreasing timer shortens the intervals between saucer attacks, discouraging the player from idling instead of shooting asteroids and saucers. A "heartbeat" soundtrack quickens as the game progresses. The game does not have a sound chip; Delman created a hardware circuit for the 13 sound effects by hand, which was wired onto the board. A prototype of Asteroids was well received by several Atari staff and engineers, who "wander[ed] between labs, passing comment and stopping to play as they went". 
Employees eager to play the prototype often asked Logg when he would be leaving, so he created a second prototype for staff to play. Atari tested the game in arcades in Sacramento, California, and also observed players during focus group sessions at Atari. Players used to Spacewar! struggled to maintain grip on the thrust button and requested a joystick; players accustomed to Space Invaders noted that they got no break in the game. Logg and other engineers observed the proceedings and documented comments in four pages. Asteroids slows down as the player gains 50–100 lives, because there is no limit to the number of lives displayed; the player can "lose" the game after more than 250 lives are collected. Ports Asteroids was released for the Atari VCS (later renamed the Atari 2600) and Atari 8-bit family in 1981, then the Atari 7800 in 1986. A port for the Atari 5200, identical to the Atari 8-bit computer version, was in development in 1982, but was not published. The Atari 7800 version was a launch title and includes cooperative play; the asteroids have colorful textures and the "heartbeat" sound effect remains intact. Programmers Brad Stewart and Bob Smith were unable to fit the Atari VCS port into a 4 KB cartridge, so it became the first game for the console to use bank switching, a technique that increases ROM size from 4 KB to 8 KB. Reception Asteroids was immediately successful upon release. It displaced Space Invaders by popularity in the United States and became Atari's best selling arcade game of all time, with over 70,000 units sold. Atari earned an estimated $150 million in sales from the game, and arcade operators earned a further $500 million from coin drops. Atari had been in the process of manufacturing another vector game, Lunar Lander, but demand for Asteroids was so high "that several hundred Asteroids games were shipped in Lunar Lander cabinets". 
Asteroids was so popular that some video arcade operators had to install large boxes to hold the number of coins spent by players. It replaced Space Invaders at the top of the US RePlay amusement arcade charts in April 1980, though Space Invaders remained the top game at street locations. Asteroids went on to become the highest-grossing arcade video game of 1980 in the United States, dethroning Space Invaders. It shipped 70,000 arcade units worldwide in 1980, including over 60,000 sold in the United States that year. The game remained at the top of the US RePlay charts through March 1981. However, the game did not perform as well overseas in Europe and Asia. It sold 30,000 arcade units overseas, for a total of 100,000 arcade units sold worldwide. Atari manufactured 76,312 units from its US and Ireland plants, including 21,394 Asteroids Deluxe units. It was a commercial failure in Japan when it was released there in 1980, partly due to its complex controls and partly due to the Japanese market beginning to lose interest in space shoot 'em ups at the time. Asteroids received positive reviews from video game critics and has been regarded as Logg's magnum opus. Richard A. Edwards reviewed the 1981 Asteroids home cartridge in The Space Gamer No. 46, commenting that "this home cartridge is a virtual duplicate of the ever-popular Atari arcade game. [...] If blasting asteroids is the thing you want to do then this is the game, but at this price I can't wholeheartedly recommend it". Video Games Player magazine reviewed the Atari VCS version, rating the graphics and sound a B, while giving the game an overall B+ rating. Electronic Fun with Computers & Games magazine gave the Atari VCS version an A rating. 
William Cassidy, writing for GameSpy's "Classic Gaming", noted its innovations, including being one of the first video games to let players enter their initials to appear in the top 10 high scores, and commented, "the vector graphics fit the futuristic outer space theme very well". In 1996, Next Generation listed it as number 39 on their "Top 100 Games of All Time", particularly lauding the control dynamics which require "the constant juggling of speed, positioning, and direction". In 1999, Next Generation listed Asteroids as number 29 on their "Top 50 Games of All Time", commenting that "Asteroid was a classic the day it was released, and it has never lost any of its appeal". Asteroids was ranked fourth on Retro Gamer's list of "Top 25 Arcade Games"; the Retro Gamer staff cited its simplicity and the lack of a proper ending as reasons for revisiting the game. In 2012, Asteroids was listed on Time's All-Time 100 greatest video games list. Entertainment Weekly named Asteroids one of the top ten games for the Atari 2600 in 2013. It was added to the Museum of Modern Art's collection of video games. In 2021, The Guardian listed Asteroids as the second greatest video game of the 1970s, just below Galaxian (1979). By contrast, in March 1983 the Atari 8-bit port of Asteroids won sixth place in Softline's Dog of the Year awards "for badness in computer games", Atari division, based on reader submissions. Usage of the names of Saturday Night Live characters "Mr. Bill" and "Sluggo" to refer to the saucers in an Esquire article about the game led to Logg receiving a cease and desist letter from a lawyer with the "Mr. Bill Trademark". Legacy Arcade sequels Released in 1981, Asteroids Deluxe was the first sequel to Asteroids. Dave Shepperd edited the code and made enhancements to the game without Logg's involvement. The onscreen objects are tinted blue, and hyperspace is replaced by a shield that depletes when used. 
The asteroids rotate, and new "killer satellite" enemies break into smaller ships that home in on the player's position. The arcade machine's monitor displays vector graphics overlaying a holographic backdrop. The game is more difficult than the original and enables saucers to shoot across the screen boundary, eliminating the lurking strategy for high scores in the original. It was followed by Owen Rubin's Space Duel in 1982, featuring colorful geometric shapes and co-op multiplayer gameplay. In 1987's Blasteroids, Ed Rotberg added "power-ups, ship morphing, branching levels, bosses, and the ability to dock your ships in multiplayer for added firepower". Blasteroids uses raster graphics instead of vectors. Re-releases The game is half of the Atari Lynx pairing Super Asteroids & Missile Command, and included in the 1993 Microsoft Arcade compilation. Activision published an enhanced version of Asteroids for the PlayStation (1998), Nintendo 64 (1999), Microsoft Windows (1998), Game Boy Color (1999), and Macintosh (2000). The Atari Flashback series of dedicated video game consoles has included both the 2600 and the arcade versions of Asteroids. Published by Crave Entertainment on December 14, 1999, Asteroids Hyper 64 made the ship and asteroids 3D and added new weapons and a multiplayer mode. A technical demo of Asteroids was developed by iThink for the Atari Jaguar but was never released; unofficially referred to as Asteroids 2000, it was demonstrated at E-JagFest 2000. In 2001, Infogrames released Atari Anniversary Edition for the Dreamcast, PlayStation, and Microsoft Windows. Developed by Digital Eclipse, it includes emulated versions of Asteroids and other games. The arcade and Atari 2600 versions of Asteroids were included in Atari Anthology for both Xbox and PlayStation 2. Released on November 28, 2007, the Xbox Live Arcade port of Asteroids has revamped HD graphics along with an added intense "throttle monkey" mode. 
The arcade and 2600 versions were made available through Microsoft's Game Room service in 2010. Glu Mobile released an enhanced mobile phone port. Asteroids is included on Atari Greatest Hits Volume 1 for the Nintendo DS. An updated version of the game was announced in 2018 for the Intellivision Amico. Both the Atari 2600 and Atari 7800 versions of the game were included on Atari Collection 1 and 2 in 2020 for the Evercade. Clones Quality Software's Asteroids in Space (1980) was one of the best selling games for the Apple II and was voted one of the most popular software titles of 1978–80 by Softalk magazine. In December 1981, Byte reviewed eight Asteroids clones for home computers. Three other Apple II Asteroids clones were reviewed together in the 1982 Creative Computing Software Buyers Guide: The Asteroid Field, Asteron, and Apple-Oids. In the last of these, the asteroids are in the shape of apples. Two independent clones, Asteroid for the Apple II and Fasteroids for the TRS-80, were renamed Planetoids and sold by Adventure International. Other clones include Acornsoft's Meteors, Moons of Jupiter for the VIC-20, and MineStorm for the Vectrex. The Mattel Intellivision game Meteor!, an Asteroids clone, was cancelled to avoid a lawsuit and was reworked as Astrosmash, which borrows elements from both Asteroids and Space Invaders. As a 12-year-old, Elon Musk programmed a space shoot 'em up game inspired by Space Invaders and Asteroids, called Blastar, which was published for the Commodore VIC-20 in 1984. World records On February 6, 1982, Leo Daniels of Carolina Beach, North Carolina, set a world record score of 40,101,910 points. On November 13 of the same year, 15-year-old Scott Safran of Cherry Hill, New Jersey, set a new record at 41,336,440 points. In 1998, to congratulate Safran on his accomplishment, the Twin Galaxies Intergalactic Scoreboard searched for him for four years until 2002, when it was discovered that he had died in an accident in 1989. 
In a ceremony in Philadelphia on April 27, 2002, Walter Day of Twin Galaxies presented an award to the surviving members of Safran's family, commemorating his achievement. On April 5, 2010, John McAllister broke Safran's record with a high score of 41,838,740 in a 58-hour Internet livestream. Some claim that the true world record for Asteroids was set in a laundromat in Hyde Park, New York, from June 30 to July 3, 1982, and that details of the score of over 48 million were published in the July 4th edition of the Poughkeepsie Journal. 
https://en.wikipedia.org/wiki/Asparagales
Asparagales
Asparagales (asparagoid lilies) is an order of plants in modern classification systems such as the Angiosperm Phylogeny Group (APG) and the Angiosperm Phylogeny Web. The order takes its name from the type family Asparagaceae and is placed in the monocots amongst the lilioid monocots. The order has only recently been recognized in classification systems. It was first put forward by Huber in 1977 and later taken up in the Dahlgren system of 1985 and then the APG in 1998, 2003 and 2009. Before this, many of its families were assigned to the old order Liliales, a very large order containing almost all monocots with colorful tepals and lacking starch in their endosperm. DNA sequence analysis indicated that many of the taxa previously included in Liliales should actually be redistributed over three orders, Liliales, Asparagales, and Dioscoreales. The boundaries of the Asparagales and of its families have undergone a series of changes in recent years; future research may lead to further changes and ultimately greater stability. In the APG circumscription, Asparagales is the largest order of monocots with 14 families, 1,122 genera, and about 36,000 species. The order is clearly circumscribed on the basis of molecular phylogenetics, but it is difficult to define morphologically since its members are structurally diverse. Most species of Asparagales are herbaceous perennials, although some are climbers and some are tree-like. The order also contains many geophytes (bulbs, corms, and various kinds of tuber). According to telomere sequences, at least two evolutionary switch-points happened within the order. The basal sequence is formed by TTTAGGG, as in the majority of higher plants. The basal motif was changed to the vertebrate-like TTAGGG and finally, the most divergent motif, CTCGGTTATGGG, appears in Allium. One of the defining characteristics (synapomorphies) of the order is the presence of phytomelanin, a black pigment present in the seed coat, creating a dark crust. 
Phytomelanin is found in most families of the Asparagales (although not in Orchidaceae, thought to be the sister-group of the rest of the order). The leaves of almost all species form a tight rosette, either at the base of the plant or at the end of the stem, but occasionally along the stem. The flowers are not particularly distinctive, being of a 'lily type', with six tepals and up to six stamens. The order is thought to have first diverged from other related monocots some 120–130 million years ago (early in the Cretaceous period), although given the difficulty in classifying the families involved, estimates are likely to be uncertain. From an economic point of view, the order Asparagales is second in importance within the monocots only to the order Poales (which includes grasses and cereals). Species are used as food and flavourings (e.g. onion, garlic, leek, asparagus, vanilla, saffron), in medicinal or cosmetic applications (Aloe), as cut flowers (e.g. freesia, gladiolus, iris, orchids), and as garden ornamentals (e.g. day lilies, lily of the valley, Agapanthus). Description Although most species in the order are herbaceous, some no more than 15 cm high, there are a number of climbers (e.g., some species of Asparagus), as well as several genera forming trees (e.g. Agave, Cordyline, Yucca, Dracaena, Aloe), which can exceed 10 m in height. Succulent genera occur in several families (e.g. Aloe). Almost all species have a tight cluster of leaves (a rosette), either at the base of the plant or at the end of a more-or-less woody stem as with Yucca. In some cases, the leaves are produced along the stem. The flowers are in the main not particularly distinctive, being of a general 'lily type', with six tepals, either free or fused from the base, and up to six stamens. They are frequently clustered at the end of the plant stem. 
The Asparagales are generally distinguished from the Liliales by the lack of markings on the tepals, the presence of septal nectaries in the ovaries rather than at the bases of the tepals or stamen filaments, and the presence of secondary growth. They are generally geophytes, but with linear leaves, and a lack of fine reticular venation. The seeds characteristically have the external epidermis either obliterated (in most species bearing fleshy fruit), or if present, have a layer of black carbonaceous phytomelanin in species with dry fruits (nuts). The inner part of the seed coat is generally collapsed, in contrast to Liliales, whose seeds have a well-developed outer epidermis, lack phytomelanin, and usually display a cellular inner layer. The orders which have been separated from the old Liliales are difficult to characterize. No single morphological character appears to be diagnostic of the order Asparagales. The flowers of Asparagales are of a general type among the lilioid monocots. Compared to Liliales, they usually have plain tepals without markings in the form of dots. If nectaries are present, they are in the septa of the ovaries rather than at the base of the tepals or stamens. Those species which have relatively large dry seeds have a dark, crust-like (crustose) outer layer containing the pigment phytomelan. However, some species with hairy seeds (e.g. Eriospermum, family Asparagaceae s.l.), berries (e.g. Maianthemum, family Asparagaceae s.l.), or highly reduced seeds (e.g. orchids) lack this dark pigment in their seed coats. Phytomelan is not unique to Asparagales (i.e. it is not a synapomorphy) but it is common within the order and rare outside it. The inner portion of the seed coat is usually completely collapsed. In contrast, the morphologically similar seeds of Liliales have no phytomelan, and usually retain a cellular structure in the inner portion of the seed coat. 
Most monocots are unable to thicken their stems once they have formed, since they lack the cylindrical meristem present in other angiosperm groups. Asparagales have a method of secondary thickening which is otherwise only found in Dioscorea (in the monocot order Dioscoreales). In a process called 'anomalous secondary growth', they are able to create new vascular bundles around which thickening growth occurs. Agave, Yucca, Aloe, Dracaena, Nolina and Cordyline can become massive trees, albeit not of the height of the tallest dicots, and with less branching. Other genera in the order, such as Lomandra and Aphyllanthes, have the same type of secondary growth but confined to their underground stems. Microsporogenesis (part of pollen formation) distinguishes some members of Asparagales from Liliales. Microsporogenesis involves a cell dividing twice (meiotically) to form four daughter cells. There are two kinds of microsporogenesis: successive and simultaneous (although intermediates exist). In successive microsporogenesis, walls are laid down separating the daughter cells after each division. In simultaneous microsporogenesis, there is no wall formation until all four cell nuclei are present. Liliales all have successive microsporogenesis, which is thought to be the primitive condition in monocots. It seems that when the Asparagales first diverged they developed simultaneous microsporogenesis, which the 'lower' Asparagale families retain. However, the 'core' Asparagales (see Phylogenetics) have reverted to successive microsporogenesis. The Asparagales appear to be unified by a mutation affecting their telomeres (a region of repetitive DNA at the end of a chromosome). The typical 'Arabidopsis-type' sequence of bases has been fully or partially replaced by other sequences, with the 'human-type' predominating. 
Other apomorphic characters of the order according to Stevens are: the presence of chelidonic acid, anthers longer than wide, tapetal cells bi- to tetra-nuclear, tegmen not persistent, endosperm helobial, and loss of mitochondrial gene sdh3. Taxonomy As circumscribed within the Angiosperm Phylogeny Group system Asparagales is the largest order within the monocotyledons, with 14 families, 1,122 genera and about 25,000–42,000 species, thus accounting for about 50% of all monocots and 10–15% of the flowering plants (angiosperms). The attribution of botanical authority for the name Asparagales belongs to Johann Heinrich Friedrich Link (1767–1851), who coined the word 'Asparaginae' in 1829 for a higher order taxon that included Asparagus, although Adanson and Jussieu had also done so earlier (see History). Earlier circumscriptions of Asparagales attributed the name to Bromhead (1838), who had been the first to use the term 'Asparagales'. History Pre-Darwinian The type genus, Asparagus, from which the name of the order is derived, was described by Carl Linnaeus in 1753, with ten species. He placed Asparagus within the Hexandria Monogynia (six stamens, one carpel) in his sexual classification in the Species Plantarum. The majority of taxa now considered to constitute Asparagales have historically been placed within the very large and diverse family, Liliaceae. The family Liliaceae was first described by Michel Adanson in 1763, and in his taxonomic scheme he created eight sections within it, including the Asparagi with Asparagus and three other genera. The system of organising genera into families is generally credited to Antoine Laurent de Jussieu, who formally described both the Liliaceae and the type family of Asparagales, the Asparagaceae, as Lilia and Asparagi, respectively, in 1789. 
Jussieu established the hierarchical system of taxonomy (phylogeny), placing Asparagus and related genera within a division of Monocotyledons, a class (III) of Stamina Perigynia and 'order' Asparagi, divided into three subfamilies. The use of the term Ordo (order) at that time was closer to what we now understand as Family, rather than Order. In creating his scheme he used a modified form of Linnaeus' sexual classification but using the respective topography of stamens to carpels rather than just their numbers. While De Jussieu's Stamina Perigynia also included a number of 'orders' that would eventually form families within the Asparagales such as the Asphodeli (Asphodelaceae), Narcissi (Amaryllidaceae) and Irides (Iridaceae), the remainder are now allocated to other orders. Jussieu's Asparagi soon came to be referred to as Asparagacées in the French literature (Latin: Asparagaceae). Meanwhile, the 'Narcissi' had been renamed as the 'Amaryllidées' (Amaryllideae) in 1805, by Jean Henri Jaume Saint-Hilaire, using Amaryllis as the type species rather than Narcissus, and thus has the authority attribution for Amaryllidaceae. In 1810, Brown proposed that a subgroup of Liliaceae be distinguished on the basis of the position of the ovaries and be referred to as Amaryllideae and in 1813 de Candolle described Liliacées Juss. and Amaryllidées Brown as two quite separate families. The literature on the organisation of genera into families and higher ranks became available in the English language with Samuel Frederick Gray's A natural arrangement of British plants (1821). Gray used a combination of Linnaeus' sexual classification and Jussieu's natural classification to group together a number of families having in common six equal stamens, a single style and a perianth that was simple and petaloid, but did not use formal names for these higher ranks. Within the grouping he separated families by the characteristics of their fruit and seed. 
He treated groups of genera with these characteristics as separate families, such as Amaryllideae, Liliaceae, Asphodeleae and Asparageae. The circumscription of Asparagales has been a source of difficulty for many botanists from the time of John Lindley (1846), the other important British taxonomist of the early nineteenth century. In his first taxonomic work, An Introduction to the Natural System of Botany (1830), he partly followed Jussieu by describing a subclass he called Endogenae, or Monocotyledonous Plants (preserving de Candolle's Endogenæ phanerogamæ) divided into two tribes, the Petaloidea and Glumaceae. He divided the former, often referred to as petaloid monocots, into 32 orders, including the Liliaceae (defined narrowly), but also most of the families considered to make up the Asparagales today, including the Amaryllideae. By 1846, in his final scheme, Lindley had greatly expanded and refined the treatment of the monocots, introducing both an intermediate ranking (Alliances) and tribes within orders (i.e. families). Lindley placed the Liliaceae within the Liliales, but saw it as a paraphyletic ("catch-all") family, being all Liliales not included in the other orders, but hoped that the future would reveal some characteristic that would group them better. The order Liliales was very large and had come to be used to include almost all monocotyledons with colourful tepals and without starch in their endosperm (the lilioid monocots). The Liliales was difficult to divide into families because morphological characters were not present in patterns that clearly demarcated groups. This kept the Liliaceae separate from the Amaryllidaceae (Narcissales). Of these, Liliaceae was divided into eleven tribes (with 133 genera) and Amaryllidaceae into four tribes (with 68 genera), yet both contained many genera that would eventually segregate to each other's contemporary orders (Liliales and Asparagales respectively). 
The Liliaceae would be reduced to a small 'core' represented by the tribe Tulipae, while large groups such as Scilleae and Asparagae would become part of Asparagales, either as part of the Amaryllidaceae or as separate families. Of the Amaryllidaceae, the Agaveae would become part of Asparagaceae, while the Alstroemeriae would become a family within the Liliales. The number of known genera (and species) continued to grow, and by the time of the next major British classification, that of the Bentham & Hooker system in 1883 (published in Latin), several of Lindley's other families had been absorbed into the Liliaceae. They used the term 'series' to indicate suprafamilial rank, with seven series of monocotyledons (including Glumaceae), but did not use Lindley's terms for these. However, they did place the Liliaceous and Amaryllidaceous genera into separate series. The Liliaceae were placed in series Coronariae, while the Amaryllideae were placed in series Epigynae. The Liliaceae now consisted of twenty tribes (including Tulipeae, Scilleae and Asparageae), and the Amaryllideae of five (including Agaveae and Alstroemerieae). An important addition to the treatment of the Liliaceae was the recognition of the Allieae as a distinct tribe that would eventually find its way to the Asparagales as the subfamily Allioideae of the Amaryllidaceae. Post-Darwinian The appearance of Charles Darwin's Origin of Species in 1859 changed the way that taxonomists considered plant classification, incorporating evolutionary information into their schemata. The Darwinian approach led to the concept of phylogeny (tree-like structure) in assembling classification systems, starting with Eichler. Eichler, having established a hierarchical system in which the flowering plants (angiosperms) were divided into monocotyledons and dicotyledons, further divided the former into seven orders. Within the Liliiflorae were seven families, including Liliaceae and Amaryllidaceae. 
Liliaceae included Allium and Ornithogalum (modern Allioideae) and Asparagus. Engler, in his system, developed Eichler's ideas into a much more elaborate scheme which he treated in a number of works including Die Natürlichen Pflanzenfamilien (Engler and Prantl 1888) and Syllabus der Pflanzenfamilien (1892–1924). In his treatment of Liliiflorae, the Liliineae were a suborder which included both families Liliaceae and Amaryllidaceae. The Liliaceae had eight subfamilies and the Amaryllidaceae four. In this rearrangement of Liliaceae, with fewer subdivisions, the core Liliales were represented as subfamily Lilioideae (with Tulipae and Scilleae as tribes), the Asparagae were represented as Asparagoideae, and the Allioideae was preserved, representing the alliaceous genera. Allieae, Agapantheae and Gilliesieae were the three tribes within this subfamily. In the Amaryllidaceae, there was little change from Bentham & Hooker. A similar approach was adopted by Wettstein. Twentieth century In the twentieth century the Wettstein system (1901–1935) placed many of the taxa in an order called 'Liliiflorae'. Next, Johannes Paulus Lotsy (1911) proposed dividing the Liliiflorae into a number of smaller families including Asparagaceae. Then Herbert Huber (1969, 1977), following Lotsy's example, proposed that the Liliiflorae be split into four groups including the 'Asparagoid' Liliiflorae. The widely used Cronquist system (1968–1988) used the very broadly defined order Liliales. These various proposals to separate small groups of genera into more homogeneous families made little impact until that of Dahlgren (1985), which incorporated new information including synapomorphies. Dahlgren developed Huber's ideas further and popularised them, with a major deconstruction of existing families into smaller units. He created a new order, calling it Asparagales. This was one of five orders within the superorder Liliiflorae. 
Where Cronquist saw one family, Dahlgren saw forty, distributed over three orders (predominantly Liliales and Asparagales). Over the 1980s, in the context of a more general review of the classification of angiosperms, the Liliaceae were subjected to more intense scrutiny. By the end of that decade, the Royal Botanic Gardens at Kew, the British Museum of Natural History and the Edinburgh Botanical Gardens formed a committee to examine the possibility of separating the family, at least for the organization of their herbaria. That committee finally recommended that 24 new families be created in the place of the original broad Liliaceae, largely by elevating subfamilies to the rank of separate families. The order Asparagales as currently circumscribed has only recently been recognized in classification systems, through the advent of phylogenetics. The 1990s saw considerable progress in plant phylogeny and phylogenetic theory, enabling a phylogenetic tree to be constructed for all of the flowering plants. The establishment of major new clades necessitated a departure from the older but widely used classifications such as Cronquist and Thorne, based largely on morphology rather than genetic data. This complicated discussion about plant evolution and necessitated a major restructuring. In 1995, rbcL gene sequencing and cladistic analysis of the monocots redefined the Liliales from four morphological orders sensu Dahlgren. The largest clade represented the Liliaceae, all previously included in the Liliales, but including both the Calochortaceae and Liliaceae sensu Tamura. This redefined family, which became referred to as the core Liliales, corresponded to the emerging circumscription of the Angiosperm Phylogeny Group (1998). Phylogeny and APG system The 2009 revision of the Angiosperm Phylogeny Group system, APG III, places the order in the clade monocots. 
From the Dahlgren system of 1985 onwards, studies based mainly on morphology had identified the Asparagales as a distinct group, but had also included groups now located in Liliales, Pandanales and Zingiberales. Research in the 21st century has supported the monophyly of Asparagales, based on morphology, 18S rDNA, and other DNA sequences, although some phylogenetic reconstructions based on molecular data have suggested that Asparagales may be paraphyletic, with Orchidaceae separated from the rest. Within the monocots, Asparagales is the sister group of the commelinid clade. This cladogram shows the placement of Asparagales within the orders of Lilianae sensu Chase & Reveal (monocots) based on molecular phylogenetic evidence. The lilioid monocot orders are bracketed, namely Petrosaviales, Dioscoreales, Pandanales, Liliales and Asparagales. These constitute a paraphyletic assemblage, that is groups with a common ancestor that do not include all direct descendants (in this case commelinids as the sister group to Asparagales); to form a clade, all the groups joined by thick lines would need to be included. While Acorales and Alismatales have been collectively referred to as "alismatid monocots" (basal or early branching monocots), the remaining clades (lilioid and commelinid monocots) have been referred to as the "core monocots". The relationship between the orders (with the exception of the two sister orders) is pectinate, that is diverging in succession from the line that leads to the commelinids. Numbers indicate crown group (most recent common ancestor of the sampled species of the clade of interest) divergence times in mya (million years ago). Subdivision A phylogenetic tree for the Asparagales, generally to family level, but including groups which were recently and widely treated as families but which are now reduced to subfamily rank, is shown below. 
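The pectinate branching sequence described above can be sketched as nested tuples (a minimal illustration of the topology stated in the text; the `tips` helper is hypothetical, not part of any phylogenetics library):

```python
# Nested-tuple sketch of the monocot order topology described above
# (Lilianae sensu Chase & Reveal): orders diverge in succession along
# the line leading to the commelinids, with Dioscoreales and Pandanales
# as the one sister pair and Asparagales sister to the commelinids.
monocots = (
    "Acorales",
    ("Alismatales",
     ("Petrosaviales",
      (("Dioscoreales", "Pandanales"),
       ("Liliales",
        ("Asparagales", "commelinids"))))),
)

def tips(tree):
    """Flatten the nested tuples into the list of terminal names."""
    if isinstance(tree, str):
        return [tree]
    out = []
    for child in tree:
        out.extend(tips(child))
    return out

# tips(monocots) lists the orders in divergence sequence, ending with
# Asparagales and its sister group, the commelinids.
```

Flattening the structure reproduces the divergence order given in the text, from the basal Acorales through to the Asparagales–commelinid sister pair.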
The tree shown above can be divided into a basal paraphyletic group, the 'lower Asparagales (asparagoids)', from Orchidaceae to Asphodelaceae, and a well-supported monophyletic group of 'core Asparagales' (higher asparagoids), comprising the two largest families, Amaryllidaceae sensu lato and Asparagaceae sensu lato. Two differences between these two groups (although with exceptions) are: the mode of microsporogenesis and the position of the ovary. The 'lower Asparagales' typically have simultaneous microsporogenesis (i.e. cell walls develop only after both meiotic divisions), which appears to be an apomorphy within the monocots, whereas the 'core Asparagales' have reverted to successive microsporogenesis (i.e. cell walls develop after each division). The 'lower Asparagales' typically have an inferior ovary, whereas the 'core Asparagales' have reverted to a superior ovary. A 2002 morphological study by Rudall treated possessing an inferior ovary as a synapomorphy of the Asparagales, stating that reversions to a superior ovary in the 'core Asparagales' could be associated with the presence of nectaries below the ovaries. However, Stevens notes that superior ovaries are distributed among the 'lower Asparagales' in such a way that it is not clear where to place the evolution of different ovary morphologies. The position of the ovary seems a much more flexible character (here and in other angiosperms) than previously thought. Changes to family structure in APG III The APG III system, when it was published in 2009, greatly expanded the families Xanthorrhoeaceae, Amaryllidaceae, and Asparagaceae. Thirteen of the families of the earlier APG II system were thereby reduced to subfamilies within these three families. The expanded Xanthorrhoeaceae is now called "Asphodelaceae". 
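The two distinguishing characters just described can be captured as a small lookup table (a sketch of the generalizations in the text; as noted above, both characters admit exceptions):

```python
# Typical character states distinguishing the two groups of Asparagales
# described above; individual families may deviate from these states.
ASPARAGALES_CHARACTERS = {
    "lower Asparagales": {
        # cell walls develop only after both meiotic divisions
        "microsporogenesis": "simultaneous",
        "ovary": "inferior",
    },
    "core Asparagales": {
        # reversal: cell walls develop after each division
        "microsporogenesis": "successive",
        # reversal to a superior ovary
        "ovary": "superior",
    },
}
```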
The APG II families (left) and their equivalent APG III subfamilies (right) are as follows: Structure of Asparagales Orchid clade Orchidaceae is possibly the largest family of all angiosperms (only Asteraceae might - or might not - be more speciose) and hence by far the largest in the order. The Dahlgren system recognized three families of orchids, but DNA sequence analysis later showed that these families are polyphyletic and so should be combined. Several studies suggest (with high bootstrap support) that Orchidaceae is the sister of the rest of the Asparagales. Other studies have placed the orchids differently in the phylogenetic tree, generally among the Boryaceae-Hypoxidaceae clade. The position of Orchidaceae shown above seems the best current hypothesis, but cannot be taken as confirmed. Orchids have simultaneous microsporogenesis and inferior ovaries, two characters that are typical of the 'lower Asparagales'. However, their nectaries are rarely in the septa of the ovaries, and most orchids have dust-like seeds, atypical of the rest of the order. (Some members of Vanilloideae and Cypripedioideae have crustose seeds, probably associated with dispersal by birds and mammals that are attracted by fermenting fleshy fruit releasing fragrant compounds, e.g. vanilla.) In terms of the number of species, Orchidaceae diversification is remarkable. However, although the other Asparagales may be less rich in species, they are more variable morphologically, including tree-like forms. Boryaceae to Hypoxidaceae The four families excluding Boryaceae form a well-supported clade in studies based on DNA sequence analysis. All four contain relatively few species, and it has been suggested that they be combined into one family under the name Hypoxidaceae sensu lato. The relationship between Boryaceae (which includes only two genera, Borya and Alania), and other Asparagales has remained unclear for a long time. The Boryaceae are mycorrhizal, but not in the same way as orchids. 
Morphological studies have suggested a close relationship between Boryaceae and Blandfordiaceae. There is relatively low support for the position of Boryaceae in the tree shown above. Ixioliriaceae to Xeronemataceae The relationship shown between Ixioliriaceae and Tecophilaeaceae is still unclear. Some studies have supported a clade of these two families, others have not. The position of Doryanthaceae has also varied, with support for the position shown above, but also support for other positions. The clade from Iridaceae upwards appears to have stronger support. All have some genetic characteristics in common, having lost Arabidopsis-type telomeres. Iridaceae is distinctive among the Asparagales in the unique structure of the inflorescence (a rhipidium), the combination of an inferior ovary and three stamens, and the common occurrence of unifacial leaves whereas bifacial leaves are the norm in other Asparagales. Members of the clade from Iridaceae upwards have infra-locular septal nectaries, which Rudall interpreted as a driver towards secondarily superior ovaries. Asphodelaceae + 'core Asparagales' The next node in the tree (Xanthorrhoeaceae sensu lato + the 'core Asparagales') has strong support. 'Anomalous' secondary thickening occurs among this clade, e.g. in Xanthorrhoea (family Asphodelaceae) and Dracaena (family Asparagaceae sensu lato), with species reaching tree-like proportions. The 'core Asparagales', comprising Amaryllidaceae sensu lato and Asparagaceae sensu lato, are a strongly supported clade, as are clades for each of the families. Relationships within these broadly defined families appear less clear, particularly within the Asparagaceae sensu lato. Stevens notes that most of its subfamilies are difficult to recognize, and that significantly different divisions have been used in the past, so that the use of a broadly defined family to refer to the entire clade is justified. 
Thus the relationships among subfamilies shown above, based on APWeb, are somewhat uncertain. Evolution Several studies have attempted to date the evolution of the Asparagales, based on phylogenetic evidence. Earlier studies generally give younger dates than more recent studies, which have been preferred in the table below. A 2009 study suggests that the Asparagales have the highest diversification rate in the monocots, about the same as the order Poales, although in both orders the rate is a little over half that of the eudicot order Lamiales, the clade with the highest rate. Comparison of family structures The taxonomic diversity of the monocotyledons is described in detail by Kubitzki. Up-to-date information on the Asparagales can be found on the Angiosperm Phylogeny Website. The APG III system's family circumscriptions are being used as the basis of the Kew-hosted World Checklist of Selected Plant Families. With this circumscription, the order consists of 14 families (Dahlgren had 31) with approximately 1,120 genera and 26,000 species.
Order Asparagales Link
Family Amaryllidaceae J.St.-Hil. (including Agapanthaceae F.Voigt, Alliaceae Borkh.)
Family Asparagaceae Juss. (including Agavaceae Dumort. [which includes Anemarrhenaceae, Anthericaceae, Behniaceae and Herreriaceae], Aphyllanthaceae Burnett, Hesperocallidaceae Traub, Hyacinthaceae Batsch ex Borkh., Laxmanniaceae Bubani, Ruscaceae M.Roem. [which includes Convallariaceae] and Themidaceae Salisb.)
Family Asteliaceae Dumort.
Family Blandfordiaceae R.Dahlgren & Clifford
Family Boryaceae M.W.Chase, Rudall & Conran
Family Doryanthaceae R.Dahlgren & Clifford
Family Hypoxidaceae R.Br.
Family Iridaceae Juss.
Family Ixioliriaceae Nakai
Family Lanariaceae R.Dahlgren & A.E.van Wyk
Family Orchidaceae Juss.
Family Tecophilaeaceae Leyb.
Family Xanthorrhoeaceae Dumort. (including Asphodelaceae Juss. and Hemerocallidaceae R.Br.), now Asphodelaceae Juss. 
Family Xeronemataceae M.W.Chase, Rudall & M.F.Fay
The earlier 2003 version, APG II, allowed 'bracketed' families, i.e. families which could either be segregated from more comprehensive families or could be included in them. These are the families given under "including" in the list above. APG III does not allow bracketed families, requiring the use of the more comprehensive family; otherwise the circumscription of the Asparagales is unchanged. A separate paper accompanying the publication of the 2009 APG III system provided subfamilies to accommodate the families which were discontinued. The first APG system of 1998 contained some extra families, included in square brackets in the list above. Two older systems which use the order Asparagales are the Dahlgren system and the Kubitzki system. The families included in the circumscriptions of the order in these two systems are shown in the first and second columns of the table below. The equivalent family in the modern APG III system (see below) is shown in the third column. Note that although these systems may use the same name for a family, the genera which it includes may be different, so the equivalence between systems is only approximate in some cases. Uses The Asparagales include many important crop plants and ornamental plants. Crops include Allium, Asparagus and Vanilla, while ornamentals include irises, hyacinths and orchids. See also Taxonomy of Liliaceae
787
https://en.wikipedia.org/wiki/Alismatales
Alismatales
The Alismatales (alismatids) are an order of flowering plants including about 4500 species. Plants assigned to this order are mostly tropical or aquatic. Some grow in fresh water, some in marine habitats. Description The Alismatales comprise herbaceous flowering plants of often aquatic and marshy habitats, and the only monocots known to have green embryos other than the Amaryllidaceae. They also include the only marine angiosperms growing completely submerged, the seagrasses. The flowers are usually arranged in inflorescences, and the mature seeds lack endosperm. Both marine and freshwater forms include those with staminate flowers that detach from the parent plant and float to the surface. There they can pollinate carpellate flowers floating on the surface via long pedicels. In others, pollination occurs underwater, where pollen may form elongated strands, increasing chance of success. Most aquatic species have a totally submerged juvenile phase, and flowers are either floating or emergent. Vegetation may be totally submersed, have floating leaves, or protrude from the water. Collectively, they are commonly known as "water plantain". Taxonomy The Alismatales contain about 165 genera in 13 families, with a cosmopolitan distribution. Phylogenetically, they are basal monocots, diverging early in evolution relative to the lilioid and commelinid monocot lineages. Together with the Acorales, the Alismatales are referred to informally as the alismatid monocots. Early systems The Cronquist system (1981) places the Alismatales in subclass Alismatidae, class Liliopsida [= monocotyledons] and includes only three families as shown: Alismataceae Butomaceae Limnocharitaceae Cronquist's subclass Alismatidae conformed fairly closely to the order Alismatales as defined by APG, minus the Araceae. 
The Dahlgren system places the Alismatales in the superorder Alismatanae in the subclass Liliidae [= monocotyledons] in the class Magnoliopsida [= angiosperms] with the following families included: Alismataceae Aponogetonaceae Butomaceae Hydrocharitaceae Limnocharitaceae In Takhtajan's classification (1997), the order Alismatales contains only the Alismataceae and Limnocharitaceae, making it equivalent to the Alismataceae as revised in APG III. Other families included in the Alismatales as currently defined are here distributed among 10 additional orders, all of which are assigned, with the following exception, to the subclass Alismatidae. Araceae in Takhtajan 1997 is assigned to the Arales and placed in the subclass Aridae; Tofieldiaceae to the Melanthiales and placed in the Liliidae. Angiosperm Phylogeny Group The Angiosperm Phylogeny Group system (APG) of 1998 and APG II (2003) assigned the Alismatales to the monocots, which may be thought of as an unranked clade containing the families listed below. The biggest departure from earlier systems (see below) is the inclusion of the family Araceae. By its inclusion, the order has grown enormously in number of species. The family Araceae alone accounts for about a hundred genera, totaling over two thousand species. The rest of the families together contain only about five hundred species, many of which are in very small families. The APG III system (2009) differs only in that the Limnocharitaceae are combined with the Alismataceae; it was also suggested that the genus Maundia (of the Juncaginaceae) could be separated into a monogeneric family, the Maundiaceae, but the authors noted that more study was necessary before the Maundiaceae could be recognized. 
order Alismatales sensu APG III
family Alismataceae (including Limnocharitaceae)
family Aponogetonaceae
family Araceae
family Butomaceae
family Cymodoceaceae
family Hydrocharitaceae
family Juncaginaceae
family Posidoniaceae
family Potamogetonaceae
family Ruppiaceae
family Scheuchzeriaceae
family Tofieldiaceae
family Zosteraceae
In APG IV (2016), it was decided that evidence was sufficient to elevate Maundia to family level as the monogeneric Maundiaceae. The authors considered including a number of the smaller families within the Juncaginaceae, but an online survey of botanists and other users found little support for this "lumping" approach. Consequently, the family structure for APG IV is:
family Alismataceae (including Limnocharitaceae)
family Aponogetonaceae
family Araceae
family Butomaceae
family Cymodoceaceae
family Hydrocharitaceae
family Juncaginaceae
family Maundiaceae
family Posidoniaceae
family Potamogetonaceae
family Ruppiaceae
family Scheuchzeriaceae
family Tofieldiaceae
family Zosteraceae
Phylogeny Cladogram showing the orders of monocots (Lilianae sensu Chase & Reveal) based on molecular phylogenetic evidence:
788
https://en.wikipedia.org/wiki/Apiales
Apiales
The Apiales are an order of flowering plants. The families are those recognized in the APG III system. This is typical of the newer classifications, though there is some slight variation and in particular, the Torricelliaceae may be divided. Under this definition, well-known members include carrots, celery, parsley, and Hedera helix (English ivy). The order Apiales is placed within the asterid group of eudicots as circumscribed by the APG III system. Within the asterids, Apiales belongs to an unranked group called the campanulids, and within the campanulids, it belongs to a clade known in phylogenetic nomenclature as Apiidae. In 2010, a subclade of Apiidae named Dipsapiidae was defined to consist of the three orders: Apiales, Paracryphiales, and Dipsacales. Taxonomy Under the Cronquist system, only the Apiaceae and Araliaceae were included here, and the restricted order was placed among the rosids rather than the asterids. The Pittosporaceae were placed within the Rosales, and many of the other forms within the family Cornaceae. Pennantia was in the family Icacinaceae. In the classification system of Dahlgren the families Apiaceae and Araliaceae were placed in the order Ariales, in the superorder Araliiflorae (also called Aralianae). The present understanding of the Apiales is fairly recent and is based upon comparison of DNA sequences by phylogenetic methods. The circumscriptions of some of the families have changed. In 2009, one of the subfamilies of Araliaceae was shown to be polyphyletic. Gynoecia The largest and obviously closely related families of Apiales are Araliaceae, Myodocarpaceae and Apiaceae, which resemble each other in the structure of their gynoecia. In this respect, however, the Pittosporaceae is notably distinct from them. Typical syncarpous gynoecia exhibit four vertical zones, determined by the extent of fusion of the carpels. In most plants the synascidiate (i.e. "united bottle-shaped") and symplicate zones are fertile and bear the ovules. 
Each of the first three families possesses mainly bi- or multilocular ovaries in a gynoecium with a long synascidiate, but very short symplicate zone, where the ovules are inserted at their transition, the so-called cross-zone (or "Querzone"). In gynoecia of the Pittosporaceae, the symplicate zone is much longer than the synascidiate zone, and the ovules are arranged along the former. Members of the latter family consequently have unilocular ovaries with a single cavity between adjacent carpels.
789
https://en.wikipedia.org/wiki/Asterales
Asterales
Asterales is an order of dicotyledonous flowering plants that includes the large family Asteraceae (or Compositae), known for composite flowers made of florets, and ten families related to the Asteraceae. While asterids in general are characterized by fused petals, composite flowers consisting of many florets create the false appearance of separate petals (as found in the rosids). The order is cosmopolitan (plants found throughout most of the world, including desert and frigid zones), and includes mostly herbaceous species, although a small number of trees (such as Lobelia deckenii, the giant lobelia, and Dendrosenecio, giant groundsels) and shrubs are also present. Asterales are organisms that seem to have evolved from one common ancestor. Asterales share characteristics on morphological and biochemical levels. Synapomorphies (characters shared by two or more groups through evolutionary development) include the presence in the plants of the oligosaccharide inulin, a nutrient storage molecule used instead of starch; and unique stamen morphology. The stamens are usually found around the style, either aggregated densely or fused into a tube, probably an adaptation in association with the plunger (brush, or secondary) pollination that is common among the families of the order, wherein pollen is collected and stored on the length of the pistil. Taxonomy The name and order Asterales is botanically venerable, dating back to at least 1926 in the Hutchinson system of plant taxonomy, when it contained only five families, of which only two are retained in the APG III classification. Under the Cronquist system of taxonomic classification of flowering plants, Asteraceae was the only family in the group, but newer systems (such as APG II and APG III) have expanded it to 11. In the classification system of Dahlgren the Asterales were in the superorder Asteriflorae (also called Asteranae). 
The order Asterales currently includes 11 families, the largest of which are the Asteraceae, with about 25,000 species, and the Campanulaceae ("bellflowers"), with about 2,000 species. The remaining families together account for fewer than 1,500 species. The two large families are cosmopolitan, with many of their species found in the Northern Hemisphere, and the smaller families are usually confined to Australia and the adjacent areas, or sometimes South America. Only the Asteraceae have composite flower heads; the other families do not, but share other characteristics such as storage of inulin that define the 11 families as more closely related to each other than to other plant families or orders such as the rosids. The phylogenetic tree according to APG III for the campanulid clade is as below. Biogeography The core Asterales are Stylidiaceae (six genera), the APA clade (Alseuosmiaceae, Phellinaceae and Argophyllaceae, together seven genera), the MGCA clade (Menyanthaceae, Goodeniaceae, Calyceraceae, in total twenty genera), and Asteraceae (about sixteen hundred genera). Other Asterales are Rousseaceae (four genera), Campanulaceae (eighty-four genera) and Pentaphragmataceae (one genus). All Asterales families are represented in the Southern Hemisphere; however, Asteraceae and Campanulaceae are cosmopolitan and Menyanthaceae nearly so. Evolution Although most extant species of Asteraceae are herbaceous, the examination of the basal members in the family suggests that the common ancestor of the family was an arborescent plant, a tree or shrub, perhaps adapted to dry conditions, radiating from South America. Less can be said about the Asterales themselves with certainty, although since several families in Asterales contain trees, the ancestral member is most likely to have been a tree or shrub. Because all clades are represented in the Southern Hemisphere but many not in the Northern Hemisphere, it is natural to conjecture that there is a common southern origin to them. 
Asterales are angiosperms, flowering plants that appeared about 140 million years ago. The Asterales order probably originated in the Cretaceous (145–66 Mya) on the supercontinent Gondwana, which broke up 184–80 Mya, forming the area that is now Australia, South America, Africa, India and Antarctica. Asterales contain about 14% of eudicot diversity. From an analysis of relationships and diversities within the Asterales and with their superorders, estimates of the age of the beginning of the Asterales have been made, which range from 116 Mya to 82 Mya. However, few fossils have been found; those of the Menyanthaceae-Asteraceae clade date to the Oligocene, about 29 Mya. Fossil evidence of the Asterales is rare and belongs to rather recent epochs, so the precise estimation of the order's age is quite difficult. Oligocene (34–23 Mya) pollen is known for Asteraceae and Goodeniaceae, and seeds from the Oligocene and Miocene (23–5.3 Mya) are known for Menyanthaceae and Campanulaceae respectively. Economic importance The Asterales, by dint of being a superset of the family Asteraceae, include some species grown for food, including the sunflower (Helianthus annuus), lettuce (Lactuca sativa) and chicory (Cichorium). Many are also used as spices and traditional medicines. Asterales are common plants and have many known uses. For example, pyrethrum (derived from Old World members of the genus Chrysanthemum) is a natural insecticide with minimal environmental impact. Wormwood, derived from a genus that includes the sagebrush, is used as a source of flavoring for absinthe, a bitter classical liquor of European origin.
791
https://en.wikipedia.org/wiki/Asteroid
Asteroid
An asteroid is a minor planet of the inner Solar System. Historically, these terms have been applied to any astronomical object orbiting the Sun that did not resolve into a disc in a telescope and was not observed to have characteristics of an active comet such as a tail. As minor planets in the outer Solar System were discovered that were found to have volatile-rich surfaces similar to comets, these came to be distinguished from the objects found in the main asteroid belt. Thus the term "asteroid" now generally refers to the minor planets of the inner Solar System, including those co-orbital with Jupiter. Larger asteroids are often called planetoids. Overview Millions of asteroids exist: many are shattered remnants of planetesimals, bodies within the young Sun's solar nebula that never grew large enough to become planets. The vast majority of known asteroids orbit within the main asteroid belt located between the orbits of Mars and Jupiter, or are co-orbital with Jupiter (the Jupiter trojans). However, other orbital families exist with significant populations, including the near-Earth objects. Individual asteroids are classified by their characteristic spectra, with the majority falling into three main groups: C-type, M-type, and S-type. These were named after and are generally identified with carbon-rich, metallic, and silicate (stony) compositions, respectively. The sizes of asteroids vary greatly; the largest, Ceres, is massive enough to qualify as a dwarf planet. Asteroids are somewhat arbitrarily differentiated from comets and meteoroids. In the case of comets, the difference is one of composition: while asteroids are mainly composed of mineral and rock, comets are primarily composed of dust and ice. Furthermore, asteroids formed closer to the Sun, preventing the development of cometary ice. 
The difference between asteroids and meteoroids is mainly one of size: meteoroids have a diameter of one meter or less, whereas asteroids have a diameter of greater than one meter. Finally, meteoroids can be composed of either cometary or asteroidal materials. Only one asteroid, 4 Vesta, which has a relatively reflective surface, is normally visible to the naked eye, and this is only in very dark skies when it is favorably positioned. Rarely, small asteroids passing close to Earth may be visible to the naked eye for a short time. The Minor Planet Center had data on 930,000 minor planets in the inner and outer Solar System, of which about 545,000 had enough information to be given numbered designations. The United Nations declared 30 June as International Asteroid Day to educate the public about asteroids. The date of International Asteroid Day commemorates the anniversary of the Tunguska asteroid impact over Siberia, Russian Federation, on 30 June 1908. In April 2018, the B612 Foundation reported "It is 100 percent certain we'll be hit [by a devastating asteroid], but we're not 100 percent sure when." Also in 2018, physicist Stephen Hawking, in his final book Brief Answers to the Big Questions, considered an asteroid collision to be the biggest threat to the planet. In June 2018, the US National Science and Technology Council warned that America is unprepared for an asteroid impact event, and has developed and released the "National Near-Earth Object Preparedness Strategy Action Plan" to better prepare. According to expert testimony in the United States Congress in 2013, NASA would require at least five years of preparation before a mission to intercept an asteroid could be launched. Discovery The first asteroid to be discovered, Ceres, was originally considered to be a new planet. 
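The size criterion described above is sharp enough to express directly (a minimal sketch; the one-meter cutoff is the conventional threshold stated in the text, and the function name is illustrative):

```python
def classify_by_size(diameter_m):
    """Apply the size criterion described above: bodies one meter or
    less in diameter are meteoroids; larger bodies are asteroids."""
    return "meteoroid" if diameter_m <= 1.0 else "asteroid"
```

For example, `classify_by_size(0.5)` returns `"meteoroid"`, while `classify_by_size(10.0)` returns `"asteroid"`; composition (cometary versus asteroidal material) is a separate axis that this size test does not capture.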
This was followed by the discovery of other similar bodies, which, with the equipment of the time, appeared to be points of light, like stars, showing little or no planetary disc, though readily distinguishable from stars due to their apparent motions. This prompted the astronomer Sir William Herschel to propose the term "asteroid", coined in Greek as ἀστεροειδής, or asteroeidēs, meaning 'star-like, star-shaped', and derived from the Ancient Greek astēr 'star, planet'. In the early second half of the nineteenth century, the terms "asteroid" and "planet" (not always qualified as "minor") were still used interchangeably. Discovery timeline:
10 by 1849:
1 Ceres, 1801
2 Pallas, 1802
3 Juno, 1804
4 Vesta, 1807
5 Astraea, 1845
(in 1846, the planet Neptune was discovered)
6 Hebe, July 1847
7 Iris, August 1847
8 Flora, October 1847
9 Metis, 25 April 1848
10 Hygiea, 12 April 1849 (the tenth asteroid discovered)
100 asteroids by 1868
1,000 by 1921
10,000 by 1989
100,000 by 2005
1,000,000 by 2020
Historical methods Asteroid discovery methods have dramatically improved over the past two centuries. In the last years of the 18th century, Baron Franz Xaver von Zach organized a group of 24 astronomers to search the sky for the missing planet predicted at about 2.8 AU from the Sun by the Titius-Bode law, partly because of the discovery, by Sir William Herschel in 1781, of the planet Uranus at the distance predicted by the law. This task required that hand-drawn sky charts be prepared for all stars in the zodiacal band down to an agreed-upon limit of faintness. On subsequent nights, the sky would be charted again and any moving object would, hopefully, be spotted. The expected motion of the missing planet was about 30 seconds of arc per hour, readily discernible by observers. The first object, Ceres, was not discovered by a member of the group, but rather by accident in 1801 by Giuseppe Piazzi, director of the observatory of Palermo in Sicily. 
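The Titius-Bode law mentioned above can be written in its modern algebraic form, a = 0.4 + 0.3 × 2^k AU, with Mercury handled as a special case; a minimal sketch showing the 2.8 AU gap that motivated von Zach's search:

```python
def titius_bode(k):
    """Predicted semi-major axis in AU under the Titius-Bode law.
    k = None is Mercury's special case (0.4 AU); k = 0 Venus, 1 Earth,
    2 Mars, 3 the 'missing planet' slot at 2.8 AU, 4 Jupiter, ..."""
    return 0.4 if k is None else 0.4 + 0.3 * 2 ** k
```

`titius_bode(3)` gives 2.8 AU, the gap between Mars (k = 2, 1.6 AU) and Jupiter (k = 4, 5.2 AU) where Ceres and the main belt were eventually found.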
He discovered a new star-like object in Taurus and followed the displacement of this object during several nights. Later that year, Carl Friedrich Gauss used these observations to calculate the orbit of this unknown object, which was found to be between the planets Mars and Jupiter. Piazzi named it after Ceres, the Roman goddess of agriculture. Three other asteroids (2 Pallas, 3 Juno, and 4 Vesta) were discovered over the next few years, with Vesta found in 1807. After eight more years of fruitless searches, most astronomers assumed that there were no more and abandoned any further searches. However, Karl Ludwig Hencke persisted, and began searching for more asteroids in 1830. Fifteen years later, he found 5 Astraea, the first new asteroid in 38 years. He also found 6 Hebe less than two years later. After this, other astronomers joined in the search and at least one new asteroid was discovered every year thereafter (except the wartime year 1945). Notable asteroid hunters of this early era were J.R. Hind, A. de Gasparis, R. Luther, H.M.S. Goldschmidt, J. Chacornac, J. Ferguson, N.R. Pogson, E.W. Tempel, J.C. Watson, C.H.F. Peters, A. Borrelly, J. Palisa, the Henry brothers, and A. Charlois. In 1891, Max Wolf pioneered the use of astrophotography to detect asteroids, which appeared as short streaks on long-exposure photographic plates. This dramatically increased the rate of detection compared with earlier visual methods: Wolf alone discovered 248 asteroids, beginning with 323 Brucia, whereas only slightly more than 300 had been discovered up to that point. It was known that there were many more, but most astronomers did not bother with them, some calling them "vermin of the skies", a phrase variously attributed to E. Suess and E. Weiss. Even a century later, only a few thousand asteroids had been identified, numbered, and named.

Manual methods of the 1900s and modern reporting

Until 1998, asteroids were discovered by a four-step process.
First, a region of the sky was photographed by a wide-field telescope, or astrograph. Pairs of photographs were taken, typically one hour apart. Multiple pairs could be taken over a series of days. Second, the two films or plates of the same region were viewed under a stereoscope. Any body in orbit around the Sun would move slightly between the pair of films. Under the stereoscope, the image of the body would seem to float slightly above the background of stars. Third, once a moving body was identified, its location would be measured precisely using a digitizing microscope. The location would be measured relative to known star locations. These first three steps do not constitute asteroid discovery: the observer has only found an apparition, which receives a provisional designation made up of the year of discovery, a letter representing the half-month of discovery, and finally a letter and a number indicating the discovery's sequential number. The last step is to send the locations and times of observation to the Minor Planet Center, where computer programs determine whether an apparition ties together earlier apparitions into a single orbit. If so, the object receives a catalogue number, and the observer of the first apparition with a calculated orbit is declared the discoverer and granted the honor of naming the object, subject to the approval of the International Astronomical Union.

Computerized methods

There is increasing interest in identifying asteroids whose orbits cross Earth's, and that could, given enough time, collide with Earth (see Earth-crosser asteroids). The three most important groups of near-Earth asteroids are the Apollos, Amors, and Atens. Various asteroid deflection strategies have been proposed, as early as the 1960s. The near-Earth asteroid 433 Eros had been discovered as long ago as 1898, and the 1930s brought a flurry of similar objects.
In order of discovery, these were: 1221 Amor, 1862 Apollo, 2101 Adonis, and finally 69230 Hermes, which approached within 0.005 AU of Earth in 1937. Astronomers began to realize the possibilities of Earth impact. Two events in later decades increased the alarm: the growing acceptance of the Alvarez hypothesis that an impact event caused the Cretaceous-Paleogene extinction, and the 1994 observation of Comet Shoemaker-Levy 9 crashing into Jupiter. The U.S. military also declassified the information that its military satellites, built to detect nuclear explosions, had detected hundreds of upper-atmosphere impacts by objects ranging from one to ten meters across. All these considerations helped spur the launch of highly efficient surveys consisting of charge-coupled device (CCD) cameras and computers directly connected to telescopes. It has been estimated that 89% to 96% of near-Earth asteroids one kilometer or larger in diameter have been discovered. Teams using such systems include:
- Lincoln Near-Earth Asteroid Research (LINEAR)
- Near-Earth Asteroid Tracking (NEAT)
- Spacewatch
- Lowell Observatory Near-Earth-Object Search (LONEOS)
- Catalina Sky Survey (CSS)
- Pan-STARRS
- NEOWISE
- Asteroid Terrestrial-impact Last Alert System (ATLAS)
- Campo Imperatore Near-Earth Object Survey (CINEOS)
- Japanese Spaceguard Association
- Asiago-DLR Asteroid Survey (ADAS)

The LINEAR system alone has discovered 147,132 asteroids. Among all the surveys, 19,266 near-Earth asteroids have been discovered, including almost 900 larger than one kilometer in diameter.

Terminology

Traditionally, small bodies orbiting the Sun were classified as comets, asteroids, or meteoroids, with anything smaller than one meter across being called a meteoroid. Beech and Steel's 1995 paper proposed a meteoroid definition including size limits. The term "asteroid", from the Greek word for "star-like", never had a formal definition, with the broader term minor planet being preferred by the International Astronomical Union.
However, following the discovery of asteroids below ten meters in size, Rubin and Grossman's 2010 paper revised the previous definition of meteoroid to objects between 10 µm and 1 meter in size, in order to maintain the distinction between asteroids and meteoroids. The smallest asteroids discovered (based on absolute magnitude H) have estimated sizes of about 1 meter. In 2006, the term "small Solar System body" was introduced to cover most minor planets and comets alike. Other languages prefer "planetoid" (Greek for "planet-like"), and this term is occasionally used in English, especially for larger minor planets such as the dwarf planets, as well as an alternative for asteroids since they are not star-like. The word "planetesimal" has a similar meaning, but refers specifically to the small building blocks of the planets that existed when the Solar System was forming. The term "planetule" was coined by the geologist William Daniel Conybeare to describe minor planets, but is not in common use. The three largest objects in the asteroid belt, Ceres, Pallas, and Vesta, grew to the stage of protoplanets. Ceres is a dwarf planet, the only one in the inner Solar System. When found, asteroids were seen as a class of objects distinct from comets, and there was no unified term for the two until "small Solar System body" was coined in 2006. The main difference between an asteroid and a comet is that a comet shows a coma due to sublimation of near-surface ices by solar radiation. A few objects have ended up being dual-listed because they were first classified as minor planets but later showed evidence of cometary activity. Conversely, some (perhaps all) comets are eventually depleted of their surface volatile ices and become asteroid-like. A further distinction is that comets typically have more eccentric orbits than most asteroids; most "asteroids" with notably eccentric orbits are probably dormant or extinct comets.
For almost two centuries, from the discovery of Ceres in 1801 until the discovery of the first centaur, Chiron, in 1977, all known asteroids spent most of their time at or within the orbit of Jupiter, though a few, such as Hidalgo, ventured far beyond Jupiter for part of their orbit. Those located between the orbits of Mars and Jupiter were known for many years simply as The Asteroids. When astronomers started finding more small bodies that permanently resided further out than Jupiter, now called centaurs, they numbered them among the traditional asteroids, though there was debate over whether they should be considered asteroids or a new type of object. Then, when the first trans-Neptunian object (other than Pluto), Albion, was discovered in 1992, and especially when large numbers of similar objects started turning up, new terms were invented to sidestep the issue: Kuiper-belt object, trans-Neptunian object, scattered-disc object, and so on. These inhabit the cold outer reaches of the Solar System where ices remain solid and comet-like bodies are not expected to exhibit much cometary activity; if centaurs or trans-Neptunian objects were to venture close to the Sun, their volatile ices would sublimate, and traditional approaches would classify them as comets and not asteroids. The innermost of these are the Kuiper-belt objects, called "objects" partly to avoid the need to classify them as asteroids or comets. They are thought to be predominantly comet-like in composition, though some may be more akin to asteroids. Furthermore, most do not have the highly eccentric orbits associated with comets, and the ones so far discovered are larger than traditional comet nuclei. (The much more distant Oort cloud is hypothesized to be the main reservoir of dormant comets.)
Other recent observations, such as the analysis of the cometary dust collected by the Stardust probe, are increasingly blurring the distinction between comets and asteroids, suggesting "a continuum between asteroids and comets" rather than a sharp dividing line. The minor planets beyond Jupiter's orbit are sometimes also called "asteroids", especially in popular presentations. However, it is becoming increasingly common for the term "asteroid" to be restricted to minor planets of the inner Solar System. Therefore, this article will restrict itself for the most part to the classical asteroids: objects of the asteroid belt, Jupiter trojans, and near-Earth objects. When the IAU introduced the class small Solar System bodies in 2006 to include most objects previously classified as minor planets and comets, they created the class of dwarf planets for the largest minor planets: those that have enough mass to have become ellipsoidal under their own gravity. According to the IAU, "the term 'minor planet' may still be used, but generally, the term 'Small Solar System Body' will be preferred." Currently only the largest object in the asteroid belt, Ceres, has been placed in the dwarf planet category.

Formation

It is thought that planetesimals in the asteroid belt evolved much like the rest of the solar nebula until Jupiter neared its current mass, at which point excitation from orbital resonances with Jupiter ejected over 99% of the planetesimals in the belt. Simulations, together with a discontinuity in spin rates and spectral properties, suggest that asteroids above a certain size accreted during that early era, whereas smaller bodies are fragments from collisions between asteroids during or after the Jovian disruption. Ceres and Vesta grew large enough to melt and differentiate, with heavy metallic elements sinking to the core, leaving rocky minerals in the crust.
In the Nice model, many Kuiper-belt objects are captured in the outer asteroid belt, at distances greater than 2.6 AU. Most were later ejected by Jupiter, but those that remained may be the D-type asteroids, possibly including Ceres.

Distribution within the Solar System

Various dynamical groups of asteroids have been discovered orbiting in the inner Solar System. Their orbits are perturbed by the gravity of other bodies in the Solar System and by the Yarkovsky effect. Significant populations include:

Asteroid belt

The majority of known asteroids orbit within the asteroid belt between the orbits of Mars and Jupiter, generally in relatively low-eccentricity (i.e. not very elongated) orbits. This belt is now estimated to contain between 1.1 and 1.9 million asteroids larger than one kilometer in diameter, and millions of smaller ones. These asteroids may be remnants of the protoplanetary disk; in this region the accretion of planetesimals into planets during the formative period of the Solar System was prevented by large gravitational perturbations from Jupiter.

Trojans

Trojans are populations that share an orbit with a larger planet or moon, but do not collide with it because they orbit in one of the two Lagrangian points of stability, L4 and L5, which lie 60° ahead of and behind the larger body. The most significant population of trojans are the Jupiter trojans. Although fewer Jupiter trojans have been discovered so far, it is thought that they are as numerous as the asteroids in the asteroid belt. Trojans have been found in the orbits of other planets, including Venus, Earth, Mars, Uranus, and Neptune.

Near-Earth asteroids

Near-Earth asteroids, or NEAs, are asteroids whose orbits pass close to that of Earth. Asteroids that actually cross Earth's orbital path are known as Earth-crossers. At least 14,464 near-Earth asteroids are known, and approximately 900–1,000 have a diameter of over one kilometer.
Characteristics

Size distribution

Asteroids vary greatly in size, from nearly 1,000 km for the largest down to rocks just 1 meter across. The three largest are very much like miniature planets: they are roughly spherical, have at least partly differentiated interiors, and are thought to be surviving protoplanets. The vast majority, however, are much smaller and irregularly shaped; they are thought to be either battered planetesimals or fragments of larger bodies. The dwarf planet Ceres is by far the largest asteroid, with a diameter of about 940 km. The next largest are 4 Vesta and 2 Pallas, both with diameters of just over 500 km. Vesta is the only main-belt asteroid that can, on occasion, be visible to the naked eye. On some rare occasions, a near-Earth asteroid may briefly become visible without technical aid; see 99942 Apophis. The mass of all the objects of the asteroid belt, lying between the orbits of Mars and Jupiter, is estimated at about 4% of the mass of the Moon. Of this, Ceres comprises about a third of the total. Adding in the next three most massive objects, Vesta (9%), Pallas (7%), and Hygiea (3%), brings this figure up to half, whereas the three most-massive asteroids after that, 511 Davida (1.2%), 704 Interamnia (1.0%), and 52 Europa (0.9%), constitute only another 3%. The number of asteroids increases rapidly as their individual masses decrease. Although this size distribution generally follows a power law, there are 'bumps' at certain diameters where more asteroids than expected from a logarithmic distribution are found.

Largest asteroids

Although their location in the asteroid belt excludes them from planet status, the three largest objects, Ceres, Vesta, and Pallas, are intact protoplanets that share many characteristics common to planets, and are atypical compared to the majority of irregularly shaped asteroids.
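The mass bookkeeping in the size-distribution paragraph can be checked with a few lines of arithmetic. The percentages below are simply the approximate figures quoted above (with "about a third" taken as 33%); this is an illustrative tally, not new data:

```python
# Approximate mass fractions of the asteroid belt, as quoted in the text.
fractions = {
    "Ceres": 33.0,       # "about a third of the total"
    "Vesta": 9.0,
    "Pallas": 7.0,
    "Hygiea": 3.0,       # the four largest together come to about half
    "Davida": 1.2,
    "Interamnia": 1.0,
    "Europa": 0.9,       # the next three add only another ~3%
}

values = list(fractions.values())
top_four = sum(values[:4])     # Ceres + Vesta + Pallas + Hygiea
next_three = sum(values[4:])   # Davida + Interamnia + Europa

print(f"Top four: ~{top_four:.0f}%")      # ~52%, i.e. "about half"
print(f"Next three: ~{next_three:.1f}%")  # ~3.1%
```

The remaining ~45% of the belt's mass is spread across the millions of smaller bodies.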
The fourth-largest asteroid, Hygiea, appears nearly spherical although it may have an undifferentiated interior, like the majority of asteroids. Between them, the four largest asteroids constitute half the mass of the asteroid belt. Ceres is the only asteroid whose shape appears to be plastically relaxed under its own gravity, and hence the only one that is a likely dwarf planet. It is much brighter in absolute magnitude than the other asteroids, at around 3.32, and may possess a surface layer of ice. Like the planets, Ceres is differentiated: it has a crust, a mantle, and a core. No meteorites from Ceres have been found on Earth. Vesta, too, has a differentiated interior, though it formed inside the Solar System's frost line, and so is devoid of water; its composition is mainly basaltic rock with minerals such as olivine. Aside from the large crater at its southern pole, Rheasilvia, Vesta also has an ellipsoidal shape. Vesta is the parent body of the Vestian family and other V-type asteroids, and is the source of the HED meteorites, which constitute 5% of all meteorites on Earth. Pallas is unusual in that, like Uranus, it rotates on its side, with its axis of rotation tilted at a high angle to its orbital plane. Its composition is similar to that of Ceres: high in carbon and silicon, and perhaps partially differentiated. Pallas is the parent body of the Palladian family of asteroids. Hygiea is the largest carbonaceous asteroid and, unlike the other largest asteroids, lies relatively close to the plane of the ecliptic. It is the largest member and presumed parent body of the Hygiean family of asteroids. Because there is no sufficiently large crater on the surface to be the source of that family, as there is on Vesta, it is thought that Hygiea may have been completely disrupted in the collision that formed the Hygiean family and recoalesced after losing a bit less than 2% of its mass.
Observations taken with the Very Large Telescope's SPHERE imager in 2017 and 2018, and announced in late 2019, revealed that Hygiea has a nearly spherical shape, which is consistent with it being in hydrostatic equilibrium (and thus a dwarf planet), with it formerly being in hydrostatic equilibrium, or with it being disrupted and recoalescing.

Rotation

Measurements of the rotation rates of large asteroids in the asteroid belt show that there is an upper limit. Very few asteroids with a diameter larger than 100 meters have a rotation period shorter than 2.2 hours. For asteroids rotating faster than approximately this rate, the inertial force at the surface is greater than the gravitational force, so any loose surface material would be flung out. However, a solid object should be able to rotate much more rapidly. This suggests that most asteroids with a diameter over 100 meters are rubble piles formed through the accumulation of debris after collisions between asteroids.

Composition

The physical composition of asteroids is varied and in most cases poorly understood. Ceres appears to be composed of a rocky core covered by an icy mantle, whereas Vesta is thought to have a nickel-iron core, olivine mantle, and basaltic crust. 10 Hygiea, however, which appears to have a uniformly primitive composition of carbonaceous chondrite, is thought to be the largest undifferentiated asteroid, though it may be a differentiated asteroid that was globally disrupted by an impact and then reassembled. Other asteroids appear to be the remnant cores or mantles of protoplanets, high in rock and metal. Most small asteroids are thought to be piles of rubble held together loosely by gravity, though the largest are probably solid. Some asteroids have moons or are co-orbiting binaries. Rubble piles, moons, binaries, and scattered asteroid families are thought to be the results of collisions that disrupted a parent asteroid, or, possibly, a planet.
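The 2.2-hour spin barrier described under Rotation follows from equating surface gravity with centrifugal acceleration at the equator of a strengthless (rubble-pile) sphere, which gives a critical period independent of the body's size. The bulk density used below (2 g/cm³, typical of a rubble pile) is an assumed value for this sketch:

```python
import math

G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
rho = 2000.0     # assumed rubble-pile bulk density, kg/m^3

# A strengthless sphere sheds loose material when centrifugal acceleration
# at the equator equals gravity: omega^2 * R = G*M/R^2. With M = (4/3)*pi*R^3*rho,
# the radius cancels, leaving a critical period that depends only on density:
#   P_crit = sqrt(3*pi / (G*rho))
P_crit = math.sqrt(3 * math.pi / (G * rho))

print(f"Critical rotation period: {P_crit / 3600:.1f} h")  # about 2.3 h
```

Denser bodies can spin slightly faster before shedding material, which is why the observed cutoff sits near, rather than exactly at, this value.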
In the main asteroid belt, there appear to be two primary populations of asteroid: a dark, volatile-rich population, consisting of the C-type and P-type asteroids, with albedos less than 0.10 and low densities, and a dense, volatile-poor population, consisting of the S-type and M-type asteroids, with albedos over 0.15 and densities greater than 2.7 g/cm³. Within these populations, larger asteroids are denser, presumably due to compression. There appears to be minimal macro-porosity (interstitial vacuum) in the score of most massive asteroids. Asteroids contain traces of amino acids and other organic compounds, and some speculate that asteroid impacts may have seeded the early Earth with the chemicals necessary to initiate life, or may have even brought life itself to Earth (see also panspermia). In August 2011, a report based on NASA studies of meteorites found on Earth was published suggesting that DNA and RNA components (adenine, guanine, and related organic molecules) may have been formed on asteroids and comets in outer space. Composition is inferred from three primary sources: albedo, surface spectrum, and density. The last can only be determined accurately by observing the orbits of moons the asteroid might have. So far, every asteroid with moons has turned out to be a rubble pile, a loose conglomeration of rock and metal that may be half empty space by volume. The investigated asteroids are as large as 280 km in diameter, and include 121 Hermione (268×186×183 km) and 87 Sylvia (384×262×232 km). Only half a dozen asteroids are larger than 87 Sylvia, though none of them have moons. The fact that asteroids as large as Sylvia may be rubble piles, presumably due to disruptive impacts, has important consequences for the formation of the Solar System: computer simulations of collisions involving solid bodies show them destroying each other as often as merging, but colliding rubble piles are more likely to merge.
This means that the cores of the planets could have formed relatively quickly. On 7 October 2009, the presence of water ice was confirmed on the surface of 24 Themis using NASA's Infrared Telescope Facility. The surface of the asteroid appears completely covered in ice. As this ice layer sublimates, it may be replenished by a reservoir of ice under the surface. Organic compounds were also detected on the surface. Scientists hypothesize that some of the first water brought to Earth was delivered by asteroid impacts after the collision that produced the Moon. The presence of ice on 24 Themis supports this theory. In October 2013, water was detected on an extrasolar body for the first time, on an asteroid orbiting the white dwarf GD 61. On 22 January 2014, European Space Agency (ESA) scientists reported the detection, for the first definitive time, of water vapor on Ceres, the largest object in the asteroid belt. The detection was made using the far-infrared abilities of the Herschel Space Observatory. The finding was unexpected because comets, not asteroids, are typically considered to "sprout jets and plumes". According to one of the scientists, "The lines are becoming more and more blurred between comets and asteroids." In May 2016, significant asteroid data arising from the Wide-field Infrared Survey Explorer and NEOWISE missions were questioned. Although the original criticism had not undergone peer review, a more recent peer-reviewed study was subsequently published. In November 2019, scientists reported detecting, for the first time, sugar molecules, including ribose, in meteorites, suggesting that chemical processes on asteroids can produce some fundamentally essential bio-ingredients important to life, and supporting the notion of an RNA world prior to a DNA-based origin of life on Earth, and possibly, as well, the notion of panspermia.
Acfer 049, a meteorite discovered in Algeria in 1990, was shown in 2019 to have ice fossils inside it, the first direct evidence of water ice in the composition of asteroids. Findings have shown that solar winds can react with the oxygen in the upper layer of asteroids and create water; it has been estimated that every cubic metre of irradiated rock could contain up to 20 litres.

Surface features

Most asteroids outside the "big four" (Ceres, Pallas, Vesta, and Hygiea) are likely to be broadly similar in appearance, if irregular in shape. The 50 km (31 mi) 253 Mathilde is a rubble pile saturated with craters with diameters the size of the asteroid's radius, and Earth-based observations of the 300 km (186 mi) 511 Davida, one of the largest asteroids after the big four, reveal a similarly angular profile, suggesting it is also saturated with radius-size craters. Medium-sized asteroids such as Mathilde and 243 Ida that have been observed up close also reveal a deep regolith covering the surface. Of the big four, Pallas and Hygiea are practically unknown. Vesta has compression fractures encircling a radius-size crater at its south pole but is otherwise a spheroid. Ceres seemed quite different in the glimpses Hubble provided, with surface features unlikely to be due to simple craters and impact basins; details were greatly expanded by the Dawn spacecraft, which entered Ceres orbit on 6 March 2015.

Color

Asteroids become darker and redder with age due to space weathering. However, evidence suggests most of the color change occurs rapidly, in the first hundred thousand years, limiting the usefulness of spectral measurement for determining the age of asteroids.

Classification

Asteroids are commonly categorized according to two criteria: the characteristics of their orbits, and features of their reflectance spectrum.

Orbital classification

Many asteroids have been placed in groups and families based on their orbital characteristics.
Apart from the broadest divisions, it is customary to name a group of asteroids after the first member of that group to be discovered. Groups are relatively loose dynamical associations, whereas families are tighter and result from the catastrophic break-up of a large parent asteroid sometime in the past. Families are more common and easier to identify within the main asteroid belt, but several small families have been reported among the Jupiter trojans. Main-belt families were first recognized by Kiyotsugu Hirayama in 1918 and are often called Hirayama families in his honor. About 30–35% of the bodies in the asteroid belt belong to dynamical families, each thought to have a common origin in a past collision between asteroids. A family has also been associated with the plutoid dwarf planet Haumea.

Quasi-satellites and horseshoe objects

Some asteroids have unusual horseshoe orbits that are co-orbital with Earth or some other planet. One example is 3753 Cruithne. The first instance of this type of orbital arrangement was discovered between Saturn's moons Epimetheus and Janus. Sometimes these horseshoe objects temporarily become quasi-satellites for a few decades or a few hundred years, before returning to their earlier status. Both Earth and Venus are known to have quasi-satellites. Such objects, if associated with Earth or Venus or even hypothetically Mercury, are a special class of Aten asteroids. However, such objects could be associated with outer planets as well.

Spectral classification

In 1975, an asteroid taxonomic system based on color, albedo, and spectral shape was developed by Chapman, Morrison, and Zellner. These properties are thought to correspond to the composition of the asteroid's surface material. The original classification system had three categories: C-types for dark carbonaceous objects (75% of known asteroids), S-types for stony (silicaceous) objects (17% of known asteroids), and U for those that did not fit into either C or S.
This classification has since been expanded to include many other asteroid types. The number of types continues to grow as more asteroids are studied. The two taxonomies most widely used today are the Tholen classification and the SMASS classification. The former was proposed in 1984 by David J. Tholen, and was based on data collected from an eight-color asteroid survey performed in the 1980s. This resulted in 14 asteroid categories. In 2002, the Small Main-Belt Asteroid Spectroscopic Survey resulted in a modified version of the Tholen taxonomy with 24 different types. Both systems have three broad categories of C, S, and X asteroids, where X consists mostly of metallic asteroids, such as the M-type. There are also several smaller classes. The proportion of known asteroids falling into the various spectral types does not necessarily reflect the proportion of all asteroids that are of that type; some types are easier to detect than others, biasing the totals.

Problems

Originally, spectral designations were based on inferences about an asteroid's composition. However, the correspondence between spectral class and composition is not always very good, and a variety of classifications are in use. This has led to significant confusion. Although asteroids of different spectral classifications are likely to be composed of different materials, there are no assurances that asteroids within the same taxonomic class are composed of the same (or similar) materials.

Naming

A newly discovered asteroid is given a provisional designation consisting of the year of discovery and an alphanumeric code indicating the half-month of discovery and the sequence within that half-month. Once an asteroid's orbit has been confirmed, it is given a number, and later may also be given a name. The formal naming convention uses parentheses around the number, e.g. (433) Eros, but dropping the parentheses is quite common.
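The provisional-designation scheme described above can be sketched in a few lines: both the half-month letter and the order letter skip "I", and a trailing number counts complete 25-letter cycles within the half-month. The helper name `decode_provisional` is hypothetical, not an official tool, and "1998 FJ74" is used purely as a worked example:

```python
LETTERS = "ABCDEFGHJKLMNOPQRSTUVWXYZ"  # 25 letters; "I" is never used

def decode_provisional(designation: str):
    """Decode a provisional designation such as '1998 FJ74'.

    Returns (year, half_month_index, order_within_half_month), where
    half_month_index 1 = first half of January, 2 = second half, and so on.
    """
    year_str, code = designation.split()
    half_month = LETTERS.index(code[0]) + 1         # A..Y (I skipped)
    order_letter = LETTERS.index(code[1]) + 1       # A..Z (I skipped)
    cycles = int(code[2:]) if len(code) > 2 else 0  # full 25-letter cycles
    return int(year_str), half_month, order_letter + 25 * cycles

# '1998 FJ74' decodes to the 6th half-month of 1998 (F = second half of
# March) and order 1859 (J = 9, plus 74 full cycles of 25).
print(decode_provisional("1998 FJ74"))
```

Real designations also carry historical quirks (pre-1925 survey designations, for instance), which this sketch ignores.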
Informally, it is common to drop the number altogether, or to drop it after the first mention when a name is repeated in running text. Names can be proposed by the asteroid's discoverer, within guidelines established by the International Astronomical Union.

Symbols

The first asteroids to be discovered were assigned iconic symbols like the ones traditionally used to designate the planets. By 1855 there were two dozen asteroid symbols, which often occurred in multiple variants. In 1851, after the fifteenth asteroid (Eunomia) had been discovered, Johann Franz Encke made a major change in the upcoming 1854 edition of the Berliner Astronomisches Jahrbuch (BAJ, Berlin Astronomical Yearbook). He introduced a disk (circle), a traditional symbol for a star, as the generic symbol for an asteroid. The circle was then numbered in order of discovery to indicate a specific asteroid (although he assigned ① to the fifth, Astraea, while continuing to designate the first four only with their existing iconic symbols). The numbered-circle convention was quickly adopted by astronomers, and the next asteroid to be discovered (16 Psyche, in 1852) was the first to be designated in that way at the time of its discovery. However, Psyche was given an iconic symbol as well, as were a few other asteroids discovered over the next few years. 20 Massalia was the first asteroid that was not assigned an iconic symbol, and no iconic symbols were created after the 1855 discovery of 37 Fides. That year Astraea's number was increased to ⑤, but the first four asteroids, Ceres to Vesta, were not listed by their numbers until the 1867 edition. The circle was soon abbreviated to a pair of parentheses, which were easier to typeset and sometimes omitted altogether over the next few decades, leading to the modern convention.
Exploration

Until the age of space travel, objects in the asteroid belt were merely pinpricks of light in even the largest telescopes, and their shapes and terrain remained a mystery. The best modern ground-based telescopes and the Earth-orbiting Hubble Space Telescope can resolve a small amount of detail on the surfaces of the largest asteroids, but even these mostly remain little more than fuzzy blobs. Limited information about the shapes and compositions of asteroids can be inferred from their light curves (their variation in brightness as they rotate) and their spectral properties, and asteroid sizes can be estimated by timing the lengths of star occultations (when an asteroid passes directly in front of a star). Radar imaging can yield good information about asteroid shapes and orbital and rotational parameters, especially for near-Earth asteroids. In terms of delta-v and propellant requirements, NEOs are more easily accessible than the Moon. The first close-up photographs of asteroid-like objects were taken in 1971, when the Mariner 9 probe imaged Phobos and Deimos, the two small moons of Mars, which are probably captured asteroids. These images revealed the irregular, potato-like shapes of most asteroids, as did later images from the Voyager probes of the small moons of the gas giants. The first true asteroid to be photographed in close-up was 951 Gaspra in 1991, followed in 1993 by 243 Ida and its moon Dactyl, all of which were imaged by the Galileo probe en route to Jupiter. The first dedicated asteroid probe was NEAR Shoemaker, which photographed 253 Mathilde in 1997, before entering into orbit around 433 Eros, finally landing on its surface in 2001. Other asteroids briefly visited by spacecraft en route to other destinations include 9969 Braille (by Deep Space 1 in 1999) and 5535 Annefrank (by Stardust in 2002).
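The occultation timing mentioned above gives a direct size measurement: each observer's timed disappearance of the star yields one chord across the asteroid's silhouette. A minimal sketch, with an assumed sky-plane shadow speed of 15 km/s (a typical value for a main-belt object; both numbers below are illustrative, not from the text):

```python
# Illustrative chord measurement from a single occultation timing.
shadow_speed_km_s = 15.0       # assumed relative sky-plane speed of the shadow
occultation_duration_s = 10.0  # assumed time the star stayed hidden

# The star's shadow sweeps across the observer at the shadow speed, so the
# chord length is simply speed x duration.
chord_km = shadow_speed_km_s * occultation_duration_s
print(f"Measured chord: {chord_km:.0f} km")  # 150 km
```

Combining chords timed by several observers at different sites traces out the asteroid's full silhouette, which is how many main-belt diameters were first pinned down.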
From September to November 2005, the Japanese Hayabusa probe studied 25143 Itokawa in detail and was plagued with difficulties, but returned samples of its surface to Earth on 13 June 2010. The European Rosetta probe (launched in 2004) flew by 2867 Šteins in 2008 and 21 Lutetia, the third-largest asteroid visited to date, in 2010. In September 2007, NASA launched the Dawn spacecraft, which orbited 4 Vesta from July 2011 to September 2012, and has been orbiting the dwarf planet 1 Ceres since 2015. 4 Vesta is the second-largest asteroid visited to date. On 13 December 2012, China's lunar orbiter Chang'e 2 flew close by the asteroid 4179 Toutatis on an extended mission. The Japan Aerospace Exploration Agency (JAXA) launched the Hayabusa2 probe in December 2014, and plans to return samples from 162173 Ryugu in December 2020. In June 2018, the US National Science and Technology Council warned that America is unprepared for an asteroid impact event, and has developed and released the "National Near-Earth Object Preparedness Strategy Action Plan" to better prepare. In September 2016, NASA launched the OSIRIS-REx sample return mission to asteroid 101955 Bennu, which it reached in December 2018. On 10 May 2021, the probe departed the asteroid with a sample from its surface, and is expected to return to Earth on 24 September 2023. Planned and future missions In early 2013, NASA announced the planning stages of a mission to capture a near-Earth asteroid and move it into lunar orbit where it could possibly be visited by astronauts and later impacted into the Moon. On 19 June 2014, NASA reported that asteroid 2011 MD was a prime candidate for capture by a robotic mission, perhaps in the early 2020s. It has been suggested that asteroids might be used as a source of materials that may be rare or exhausted on Earth (asteroid mining), or materials for constructing space habitats (see Colonization of the asteroids).
Materials that are heavy and expensive to launch from Earth may someday be mined from asteroids and used for space manufacturing and construction. In the U.S. Discovery program, the proposals for the Psyche spacecraft to 16 Psyche and the Lucy spacecraft to the Jupiter trojans made it to the semifinalist stage of mission selection. In January 2017, the Lucy and Psyche missions were both selected as NASA's Discovery Program missions 13 and 14, respectively. In November 2021, NASA launched its Double Asteroid Redirection Test (DART), a mission to test technology for defending Earth against potential asteroids or comets. Fiction Asteroids and the asteroid belt are a staple of science fiction stories. Asteroids play several potential roles in science fiction: as places human beings might colonize, as resources for extracting minerals, as hazards encountered by spacecraft traveling between two other points, and as a threat to life on Earth or other inhabited planets, dwarf planets, and natural satellites by potential impact.
Gallery See also ʻOumuamua Active asteroid Amor asteroid Apollo asteroid Asteroid Day Asteroid impact avoidance Asteroids in astrology Aten asteroid Atira asteroid BOOTES (Burst Observer and Optical Transient Exploring System) Category:Asteroid groups and families Category:Asteroids Category:Binary asteroids Centaur (minor planet) Chang'e 2 lunar orbiter Constellation program Dawn (spacecraft) Dwarf planet Impact event List of asteroid close approaches to Earth List of exceptional asteroids List of impact craters on Earth List of minor planets List of minor planets named after people List of minor planets named after places List of possible impact structures on Earth Lost minor planet Marco Polo (spacecraft) Meanings of minor planet names Mesoplanet Meteoroid Minor planet Near-Earth object NEOShield NEOSSat (Near Earth Object Surveillance Satellite) Canada's new satellite Pioneer 10 Rosetta (spacecraft) Explanatory notes References Further reading Further information about asteroids (see Logarithmic scale) External links Minor planets
794
https://en.wikipedia.org/wiki/Allocution
Allocution
An allocution, or allocutus, is a formal statement made to the court by the defendant who has been found guilty prior to being sentenced. It is part of the criminal procedure in some jurisdictions using common law. Concept An allocution allows the defendant to explain why the sentence should be lenient. In plea bargains, an allocution may be required of the defendant. The defendant explicitly admits specifically and in detail the actions and their reasons in exchange for a reduced sentence. In principle, that removes any doubt as to the exact nature of the defendant's guilt in the matter. The term "allocution" is used generally only in jurisdictions in the United States, but there are vaguely similar processes in other common law countries. In many other jurisdictions, it is for the defense lawyer to mitigate on their client's behalf, and the defendant rarely has the opportunity to speak. The right of victims to speak at sentencing is also sometimes referred to as allocution. Australia In Australia, the term allocutus is used by the Clerk of Arraigns or another formal associate of the Court. It is generally phrased as, "Prisoner at the Bar, you have been found Guilty by a jury of your peers of the offense of XYZ. Do you have anything to say as to why the sentence of this Court should not now be passed upon you?" The defense counsel will then make a plea in mitigation (also called submissions on penalty) in an attempt to mitigate the relative seriousness of the offense and heavily refer to and rely upon the defendant's previous good character and good works, if any. The right to make a plea in mitigation is absolute. If a judge or magistrate refuses to hear such a plea or does not properly consider it, the sentence can be overturned on appeal. United States In most of the United States, defendants are allowed the opportunity to allocute before a sentence is passed. Some jurisdictions hold that as an absolute right. 
In its absence, a sentence but not the conviction may be overturned, resulting in the need for a new sentencing hearing. In the federal system, Federal Rule of Criminal Procedure 32(i)(4) provides that the court must "address the defendant personally in order to permit the defendant to speak or present any information to mitigate the sentence." The Federal Public Defender recommends that defendants speak in terms of how a lenient sentence will be sufficient but not greater than necessary to comply with the applicable statutory directives. See also Confession (law) References Criminal procedure Evidence law
795
https://en.wikipedia.org/wiki/Affidavit
Affidavit
An affidavit (Medieval Latin for "he has declared under oath") is a written statement voluntarily made by an affiant or deponent under an oath or affirmation which is administered by a person who is authorized to do so by law. Such a statement is witnessed as to the authenticity of the affiant's signature by a taker of oaths, such as a notary public or commissioner of oaths. An affidavit is a type of verified statement or showing, or in other words, it contains a verification, which means that it is made under oath on penalty of perjury, and this serves as evidence for its veracity and is required in court proceedings. Definition An affidavit is typically defined as a written declaration or statement that is sworn or affirmed before a person who has authority to administer an oath. There is no general defined form for an affidavit, although for some proceedings an affidavit must satisfy legal or statutory requirements in order to be considered. An affidavit may include: a commencement which identifies the affiant; an attestation clause, usually a jurat, at the end certifying that the affiant made the statement under oath on the specified date; and the signatures of the affiant and the person who administered the oath. In some cases, an introductory clause, called a preamble, is added attesting that the affiant personally appeared before the authenticating authority. An affidavit may also recite that the statement it records was made under penalty of perjury. An affidavit that is prepared for use within the context of litigation may also include a caption that identifies the venue and parties to the relevant judicial proceedings. Worldwide Australia On 2 March 2016, the High Court of Australia held that the ACT Uniform Evidence Legislation is neutral in the way sworn and unsworn evidence is treated, as being of equal weight.
India In Indian law, although an affidavit may be taken as proof of the facts stated therein, the courts have no jurisdiction to admit evidence by way of affidavit. An affidavit is not treated as "evidence" within the meaning of Section 3 of the Evidence Act. However, it was held by the Supreme Court that an affidavit can be used as evidence only if the court so orders for sufficient reasons, namely, the right of the opposite party to have the deponent produced for cross-examination. Therefore, an affidavit cannot ordinarily be used as evidence in the absence of a specific order of the court. Sri Lanka In Sri Lanka, under the Oaths Ordinance, with the exception of a court-martial, a person may submit an affidavit signed in the presence of a commissioner for oaths or a justice of the peace. Ireland Affidavits are made in a similar way to those in England and Wales, although "make oath" is sometimes omitted. An affirmed affidavit may be substituted for a sworn affidavit in most cases for those opposed to swearing oaths. The person making the affidavit is known as the deponent and signs the affidavit. The affidavit concludes in the standard format "sworn/affirmed (declared) before me, [name of commissioner for oaths/solicitor], a commissioner for oaths (solicitor), on the [date] at [location] in the county/city of [county/city], and I know the deponent", and it is signed and stamped by the commissioner for oaths. It is important that the commissioner states his or her name clearly, as documents are sometimes rejected when the name cannot be ascertained. In August 2020, a new method of filing affidavits came into force. Under Section 21 of the Civil Law and Criminal Law (Miscellaneous Provisions) Act 2020, witnesses are no longer required to swear before God or make an affirmation when filing an affidavit.
Instead, witnesses will make a non-religious “statement of truth” and, if it is breached, will be liable for up to one year in prison if convicted summarily or, upon conviction on indictment, to a maximum fine of €250,000 or imprisonment for a term not exceeding 5 years, or both. This is designed to replace affidavits and statutory declarations in situations where the electronic means of lodgement or filing of documents with the Court provided for in Section 20 is utilised. As of January 2022, it has yet to be adopted widely, and it is expected it will not be used for some time by lay litigants who will still lodge papers in person. United States In American jurisprudence, under the rules for hearsay, admission of an unsupported affidavit as evidence is unusual (especially if the affiant is not available for cross-examination) with regard to material facts which may be dispositive of the matter at bar. Affidavits from persons who are dead or otherwise incapacitated, or who cannot be located or made to appear, may be accepted by the court, but usually only in the presence of corroborating evidence. An affidavit which reflected a better grasp of the facts close in time to the actual events may be used to refresh a witness's recollection. Materials used to refresh recollection are admissible as evidence. If the affiant is a party in the case, the affiant's opponent may be successful in having the affidavit admitted as evidence, as statements by a party-opponent are admissible through an exception to the hearsay rule. Affidavits are typically included in the response to interrogatories. Requests for admissions under Federal Rule of Civil Procedure 36, however, are not required to be sworn. When a person signs an affidavit, that person is eligible to take the stand at a trial or evidentiary hearing. One party may wish to summon the affiant to verify the contents of the affidavit, while the other party may want to cross-examine the affiant about the affidavit. 
Some types of motions will not be accepted by the court unless accompanied by an independent sworn statement or other evidence in support of the need for the motion. In such a case, a court will accept an affidavit from the filing attorney in support of the motion, as certain assumptions are made, to wit: The affidavit in place of sworn testimony promotes judicial economy. The lawyer is an officer of the court and knows that a false swearing by them, if found out, could be grounds for severe penalty up to and including disbarment. The lawyer, if called upon, would be able to present independent and more detailed evidence to prove the facts set forth in his affidavit. The acceptance of an affidavit by one society does not confirm its acceptance as a legal document in other jurisdictions. Equally, the acceptance that a lawyer is an officer of the court (for swearing the affidavit) is not a given. This matter is addressed by the use of the apostille, a means of certifying the legalization of a document for international use under the terms of the 1961 Hague Convention Abolishing the Requirement of Legalization for Foreign Public Documents. Documents which have been notarized by a notary public, and certain other documents, and then certified with a conformant apostille, are accepted for legal use in all the nations that have signed the Hague Convention. Thus most affidavits must now be apostilled if used for cross-border issues. See also Declaration (law) Deposition (law) Fishman Affidavit, a well-known example of an affidavit Performativity Statutory declaration Sworn declaration References Evidence law Legal documents Notary
798
https://en.wikipedia.org/wiki/Aries%20%28constellation%29
Aries (constellation)
Aries is one of the constellations of the zodiac. It is located in the Northern celestial hemisphere between Pisces to the west and Taurus to the east. The name Aries is Latin for ram. Its old astronomical symbol is (♈︎). It is one of the 48 constellations described by the 2nd century astronomer Ptolemy, and remains one of the 88 modern constellations. It is a mid-sized constellation, ranking 39th in overall size, with an area of 441 square degrees (1.1% of the celestial sphere). Aries has represented a ram since late Babylonian times. Before that, the stars of Aries formed a farmhand. Different cultures have incorporated the stars of Aries into different constellations, including twin inspectors in China and a porpoise in the Marshall Islands. Aries is a relatively dim constellation, possessing only four bright stars: Hamal (Alpha Arietis, second magnitude), Sheratan (Beta Arietis, third magnitude), Mesarthim (Gamma Arietis, fourth magnitude), and 41 Arietis (also fourth magnitude). The few deep-sky objects within the constellation are quite faint and include several pairs of interacting galaxies. Several meteor showers appear to radiate from Aries, including the Daytime Arietids and the Epsilon Arietids. History and mythology Aries is recognized as an official constellation now, albeit as a specific region of the sky, by the International Astronomical Union. It was originally defined in ancient texts as a specific pattern of stars, and has remained a constellation since ancient times; it now includes the ancient pattern as well as the surrounding stars. In the description of the Babylonian zodiac given in the clay tablets known as the MUL.APIN, the constellation, now known as Aries, was the final station along the ecliptic. The MUL.APIN was a comprehensive table of the risings and settings of stars, which likely served as an agricultural calendar. Modern-day Aries was known as "The Agrarian Worker" or "The Hired Man".
Although likely compiled in the 12th or 11th century BC, the MUL.APIN reflects a tradition which marks the Pleiades as the vernal equinox, which was the case with some precision at the beginning of the Middle Bronze Age. The earliest identifiable reference to Aries as a distinct constellation comes from the boundary stones that date from 1350 to 1000 BC. On several boundary stones, a zodiacal ram figure is distinct from the other characters present. The shift in identification from the constellation as the Agrarian Worker to the Ram likely occurred in later Babylonian tradition because of its growing association with Dumuzi the Shepherd. By the time the MUL.APIN was created—by 1000 BC—modern Aries was identified with both Dumuzi's ram and a hired laborer. The exact timing of this shift is difficult to determine due to the lack of images of Aries or other ram figures. In ancient Egyptian astronomy, Aries was associated with the god Amon-Ra, who was depicted as a man with a ram's head and represented fertility and creativity. Because it was the location of the vernal equinox, it was called the "Indicator of the Reborn Sun". During the times of the year when Aries was prominent, priests would process statues of Amon-Ra to temples, a practice that was modified by Persian astronomers centuries later. Aries acquired the title of "Lord of the Head" in Egypt, referring to its symbolic and mythological importance. Aries was not fully accepted as a constellation until classical times. In Hellenistic astrology, the constellation of Aries is associated with the golden ram of Greek mythology that rescued Phrixus and Helle on orders from Hermes, taking Phrixus to the land of Colchis. Phrixus and Helle were the son and daughter of King Athamas and his first wife Nephele. The king's second wife, Ino, was jealous and wished to kill his children.
To accomplish this, she induced a famine in Boeotia, then falsified a message from the Oracle of Delphi that said Phrixus must be sacrificed to end the famine. Athamas was about to sacrifice his son atop Mount Laphystium when Aries, sent by Nephele, arrived. Helle fell off of Aries's back in flight and drowned in the Dardanelles, also called the Hellespont in her honor. Historically, Aries has been depicted as a crouched, wingless ram with its head turned towards Taurus. Ptolemy asserted in his Almagest that Hipparchus depicted Alpha Arietis as the ram's muzzle, though Ptolemy did not include it in his constellation figure. Instead, it was listed as an "unformed star", and denoted as "the star over the head". John Flamsteed, in his Atlas Coelestis, followed Ptolemy's description by mapping it above the figure's head. Flamsteed followed the general convention of maps by depicting Aries lying down. Astrologically, Aries has been associated with the head and its humors. It was strongly associated with Mars, both the planet and the god. It was considered to govern Western Europe and Syria, and to indicate a strong temper in a person. The First Point of Aries, the location of the vernal equinox, is named for the constellation. This is because the Sun crossed the celestial equator from south to north in Aries more than two millennia ago. Hipparchus defined it in 130 BC as a point south of Gamma Arietis. Because of the precession of the equinoxes, the First Point of Aries has since moved into Pisces and will move into Aquarius by around 2600 AD. The Sun now appears in Aries from late April through mid-May, though the constellation is still associated with the beginning of spring. Medieval Muslim astronomers depicted Aries in various ways. Astronomers like al-Sufi saw the constellation as a ram, modeled on the precedent of Ptolemy. However, some Islamic celestial globes depicted Aries as a nondescript four-legged animal with what may be antlers instead of horns.
Some early Bedouin observers saw a ram elsewhere in the sky; this constellation featured the Pleiades as the ram's tail. The generally accepted Arabic formation of Aries consisted of thirteen stars in a figure along with five "unformed" stars, four of which were over the animal's hindquarters and one of which was the disputed star over Aries's head. Al-Sufi's depiction differed from both other Arab astronomers' and Flamsteed's, in that his Aries was running and looking behind itself. The obsolete constellations of Aries (Apes/Vespa/Lilium/Musca (Borealis)) all centred on the same northern stars. In 1612, Petrus Plancius introduced Apes, a constellation representing a bee. In 1624, the same stars were used by Jakob Bartsch for Vespa, representing a wasp. In 1679, Augustin Royer used these stars for his constellation Lilium, representing the fleur-de-lis. None of these constellations became widely accepted. Johann Hevelius renamed the constellation "Musca" in 1690 in his Firmamentum Sobiescianum. To differentiate it from Musca, the southern fly, it was later renamed Musca Borealis, but it did not gain acceptance and its stars were ultimately officially reabsorbed into Aries. The asterism comprised 33, 35, 39, and 41 Arietis. In 1922, the International Astronomical Union defined its recommended three-letter abbreviation, "Ari". The official boundaries of Aries were defined in 1930 by Eugène Delporte as a polygon of 12 segments. Its right ascension is between 1h 46.4m and 3h 29.4m and its declination is between 10.36° and 31.22° in the equatorial coordinate system. In non-Western astronomy In traditional Chinese astronomy, stars from Aries were used in several constellations. The brightest stars—Alpha, Beta, and Gamma Arietis—formed a constellation called Lou (婁), variously translated as "bond", "lasso", and "sickle", which was associated with the ritual sacrifice of cattle.
This name was shared by the 16th lunar mansion, the location of the full moon closest to the autumnal equinox. This constellation has also been associated with harvest-time as it could represent a woman carrying a basket of food on her head. 35, 39, and 41 Arietis were part of a constellation called Wei (胃), which represented a fat abdomen and was the namesake of the 17th lunar mansion, which represented granaries. Delta and Zeta Arietis were a part of the constellation Tianyin (天陰), thought to represent the Emperor's hunting partner. Zuogeng (左更), a constellation depicting a marsh and pond inspector, was composed of Mu, Nu, Omicron, Pi, and Sigma Arietis. He was accompanied by Yeou-kang, a constellation depicting an official in charge of pasture distribution. In a similar system to the Chinese, the first lunar mansion in Hindu astronomy was called "Aswini", after the traditional names for Beta and Gamma Arietis, the Aswins. Because the Hindu new year began with the vernal equinox, the Rig Veda contains over 50 new-year's related hymns to the twins, making them some of the most prominent characters in the work. Aries itself was known as "Aja" and "Mesha". In Hebrew astronomy Aries was named "Taleh"; it signified either Simeon or Gad, and generally symbolizes the "Lamb of the World". The neighboring Syrians named the constellation "Amru", and the bordering Turks named it "Kuzi". Half a world away, in the Marshall Islands, several stars from Aries were incorporated into a constellation depicting a porpoise, along with stars from Cassiopeia, Andromeda, and Triangulum. Alpha, Beta, and Gamma Arietis formed the head of the porpoise, while stars from Andromeda formed the body and the bright stars of Cassiopeia formed the tail. Other Polynesian peoples recognized Aries as a constellation. The Marquesas islanders called it Na-pai-ka; the Māori constellation Pipiri may correspond to modern Aries as well. 
In indigenous Peruvian astronomy, a constellation with most of the same stars as Aries existed. It was called the "Market Moon" and the "Kneeling Terrace", as a reminder for when to hold the annual harvest festival, Ayri Huay. Features Stars Aries has three prominent stars forming an asterism, designated Alpha, Beta, and Gamma Arietis by Johann Bayer. Alpha (Hamal) and Beta (Sheratan) are commonly used for navigation. There is also one other star above the fourth magnitude, 41 Arietis (Bharani). α Arietis, called Hamal, is the brightest star in Aries. Its traditional name is derived from the Arabic word for "lamb" or "head of the ram" (ras al-hamal), which references Aries's mythological background. With a spectral class of K2 and a luminosity class of III, it is an orange giant with an apparent visual magnitude of 2.00, which lies 66 light-years from Earth. Hamal has a luminosity of and its absolute magnitude is −0.1. β Arietis, also known as Sheratan, is a blue-white star with an apparent visual magnitude of 2.64. Its traditional name is derived from "sharatayn", the Arabic word for "the two signs", referring to both Beta and Gamma Arietis in their position as heralds of the vernal equinox. The two stars were known to the Bedouin as "qarna al-hamal", "horns of the ram". It is 59 light-years from Earth. It has a luminosity of and its absolute magnitude is 2.1. It is a spectroscopic binary star, one in which the companion star is only known through analysis of the spectra. The spectral class of the primary is A5. Hermann Carl Vogel determined that Sheratan was a spectroscopic binary in 1903; its orbit was determined by Hans Ludendorff in 1907. It has since been studied for its eccentric orbit. γ Arietis, with a common name of Mesarthim, is a binary star with two white-hued components, located in a rich field of magnitude 8–12 stars. Its traditional name has conflicting derivations. 
It may be derived from a corruption of "al-sharatan", the Arabic word meaning "pair" or a word for "fat ram". However, it may also come from the Sanskrit for "first star of Aries" or the Hebrew for "ministerial servants", both of which are unusual languages of origin for star names. Along with Beta Arietis, it was known to the Bedouin as "qarna al-hamal". The primary is of magnitude 4.59 and the secondary is of magnitude 4.68. The system is 164 light-years from Earth. The two components are separated by 7.8 arcseconds, and the system as a whole has an apparent magnitude of 3.9. The primary has a luminosity of and the secondary has a luminosity of ; the primary is an A-type star with an absolute magnitude of 0.2 and the secondary is a B9-type star with an absolute magnitude of 0.4. The angle between the two components is 1°. Mesarthim was discovered to be a double star by Robert Hooke in 1664, one of the earliest such telescopic discoveries. The primary, γ1 Arietis, is an Alpha² Canum Venaticorum variable star that has a range of 0.02 magnitudes and a period of 2.607 days. It is unusual because of its strong silicon emission lines. The constellation is home to several double stars, including Epsilon, Lambda, and Pi Arietis. ε Arietis is a binary star with two white components. The primary is of magnitude 5.2 and the secondary is of magnitude 5.5. The system is 290 light-years from Earth. Its overall magnitude is 4.63, and the primary has an absolute magnitude of 1.4. Its spectral class is A2. The two components are separated by 1.5 arcseconds. λ Arietis is a wide double star with a white-hued primary and a yellow-hued secondary. The primary is of magnitude 4.8 and the secondary is of magnitude 7.3. The primary is 129 light-years from Earth. It has an absolute magnitude of 1.7 and a spectral class of F0. The two components are separated by 36 arcseconds at an angle of 50°; the two stars are located 0.5° east of 7 Arietis. 
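The apparent magnitudes, distances, and absolute magnitudes quoted throughout this section are tied together by the distance modulus. The following is a minimal sketch of that relation (the function and constant names are mine; it ignores interstellar extinction, so published absolute magnitudes can differ somewhat from this ideal calculation):

```python
import math

LY_PER_PARSEC = 3.2616  # light-years per parsec

def absolute_magnitude(apparent_mag: float, distance_ly: float) -> float:
    """Distance modulus: M = m - 5*log10(d_pc / 10), extinction ignored."""
    d_pc = distance_ly / LY_PER_PARSEC
    return apparent_mag - 5.0 * math.log10(d_pc / 10.0)

# Hamal's quoted figures (apparent magnitude 2.00 at 66 light-years)
# give roughly +0.5 by this ideal relation; quoted absolute magnitudes
# may reflect different distance estimates or corrections.
print(round(absolute_magnitude(2.00, 66.0), 2))
```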
π Arietis is a close binary star with a blue-white primary and a white secondary. The primary is of magnitude 5.3 and the secondary is of magnitude 8.5. The primary is 776 light-years from Earth. The primary itself is a wide double star with a separation of 25.2 arcseconds; the tertiary has a magnitude of 10.8. The primary and secondary are separated by 3.2 arcseconds. Most of the other stars in Aries visible to the naked eye have magnitudes between 3 and 5. δ Ari, called Boteïn, is a star of magnitude 4.35, 170 light-years away. It has an absolute magnitude of −0.1 and a spectral class of K2. ζ Arietis is a star of magnitude 4.89, 263 light-years away. Its spectral class is A0 and its absolute magnitude is 0.0. 14 Arietis is a star of magnitude 4.98, 288 light-years away. Its spectral class is F2 and its absolute magnitude is 0.6. 39 Arietis (Lilii Borea) is a similar star of magnitude 4.51, 172 light-years away. Its spectral class is K1 and its absolute magnitude is 0.0. 35 Arietis is a dim star of magnitude 4.55, 343 light-years away. Its spectral class is B3 and its absolute magnitude is −1.7. 41 Arietis, known both as c Arietis and Nair al Butain, is a brighter star of magnitude 3.63, 165 light-years away. Its spectral class is B8 and it has a luminosity of . Its absolute magnitude is −0.2. 53 Arietis is a runaway star of magnitude 6.09, 815 light-years away. Its spectral class is B2. It was likely ejected from the Orion Nebula approximately five million years ago, possibly due to supernovae. Finally, Teegarden's Star is the closest star to Earth in Aries. It is a brown dwarf of magnitude 15.14 and spectral class M6.5V. With a proper motion of 5.1 arcseconds per year, it is the 24th closest star to Earth overall. Aries has its share of variable stars, including R and U Arietis, Mira-type variable stars, and T Arietis, a semi-regular variable star. 
R Arietis is a Mira variable star that ranges in magnitude from a minimum of 13.7 to a maximum of 7.4 with a period of 186.8 days. It is 4,080 light-years away. U Arietis is another Mira variable star that ranges in magnitude from a minimum of 15.2 to a maximum of 7.2 with a period of 371.1 days. T Arietis is a semiregular variable star that ranges in magnitude from a minimum of 11.3 to a maximum of 7.5 with a period of 317 days. It is 1,630 light-years away. One particularly interesting variable in Aries is SX Arietis, a rotating variable star considered to be the prototype of its class, helium variable stars. SX Arietis stars have very prominent emission lines of Helium I and Silicon III. They are normally main-sequence B0p—B9p stars, and their variations are not usually visible to the naked eye. Therefore, they are observed photometrically, usually having periods that fit in the course of one night. Similar to Alpha² Canum Venaticorum variables, SX Arietis stars have periodic changes in their light and magnetic field, which correspond to the periodic rotation; they differ from the Alpha² Canum Venaticorum variables in their higher temperature. There are between 39 and 49 SX Arietis variable stars currently known; ten are noted as being "uncertain" in the General Catalog of Variable Stars. Sky objects NGC 772 is a spiral galaxy with an integrated magnitude of 10.3, located southeast of β Arietis and 15 arcminutes west of 15 Arietis. It is a relatively bright galaxy and shows obvious nebulosity and ellipticity in an amateur telescope. It is 7.2 by 4.2 arcminutes, meaning that its surface brightness, magnitude 13.6, is significantly lower than its integrated magnitude. NGC 772 is a class SA(s)b galaxy, which means that it is an unbarred spiral galaxy without a ring that possesses a somewhat prominent bulge and spiral arms that are wound somewhat tightly. 
The main arm, on the northwest side of the galaxy, is home to many star forming regions; this is due to previous gravitational interactions with other galaxies. NGC 772 has a small companion galaxy, NGC 770, that is about 113,000 light-years away from the larger galaxy. The two galaxies together are also classified as Arp 78 in the Arp peculiar galaxy catalog. NGC 772 has a diameter of 240,000 light-years and the system is 114 million light-years from Earth. Another spiral galaxy in Aries is NGC 673, a face-on class SAB(s)c galaxy. It is a weakly barred spiral galaxy with loosely wound arms. It has no ring and a faint bulge and is 2.5 by 1.9 arcminutes. It has two primary arms with fragments located farther from the core. 171,000 light-years in diameter, NGC 673 is 235 million light-years from Earth. NGC 678 and NGC 680 are a pair of galaxies in Aries that are only about 200,000 light-years apart. Part of the NGC 691 group of galaxies, both are at a distance of approximately 130 million light-years. NGC 678 is an edge-on spiral galaxy that is 4.5 by 0.8 arcminutes. NGC 680, an elliptical galaxy with an asymmetrical boundary, is the brighter of the two at magnitude 12.9; NGC 678 has a magnitude of 13.35. Both galaxies have bright cores, but NGC 678 is the larger galaxy at a diameter of 171,000 light-years; NGC 680 has a diameter of 72,000 light-years. NGC 678 is further distinguished by its prominent dust lane. NGC 691 itself is a spiral galaxy slightly inclined to our line of sight. It has multiple spiral arms and a bright core. Because it is so diffuse, it has a low surface brightness. It has a diameter of 126,000 light-years and is 124 million light-years away. NGC 877 is the brightest member of an 8-galaxy group that also includes NGC 870, NGC 871, and NGC 876, with a magnitude of 12.53. It is 2.4 by 1.8 arcminutes and is 178 million light-years away with a diameter of 124,000 light-years. 
Its companion is NGC 876, which is about 103,000 light-years from the core of NGC 877. They are interacting gravitationally, as they are connected by a faint stream of gas and dust. Arp 276 is a different pair of interacting galaxies in Aries, consisting of NGC 935 and IC 1801. NGC 821 is an E6 elliptical galaxy. It is unusual because it has hints of an early spiral structure, which is normally only found in lenticular and spiral galaxies. NGC 821 is 2.6 by 2.0 arcminutes and has a visual magnitude of 11.3. Its diameter is 61,000 light-years and it is 80 million light-years away. Another unusual galaxy in Aries is Segue 2, a dwarf and satellite galaxy of the Milky Way, recently discovered to be a potential relic of the epoch of reionization. Meteor showers Aries is home to several meteor showers. The Daytime Arietid meteor shower is one of the strongest meteor showers that occurs during the day, lasting from 22 May to 2 July. It is an annual shower associated with the Marsden group of comets that peaks on 7 June with a maximum zenithal hourly rate of 54 meteors. Its parent body may be the asteroid Icarus. The meteors are sometimes visible before dawn, because the radiant is 32 degrees away from the Sun. They usually appear at a rate of 1–2 per hour as "earthgrazers", meteors that last several seconds and often begin at the horizon. Because most of the Daytime Arietids are not visible to the naked eye, they are observed in the radio spectrum. This is possible because of the ionized gas they leave in their wake. Other meteor showers radiate from Aries during the day; these include the Daytime Epsilon Arietids and the Northern and Southern Daytime May Arietids. The Jodrell Bank Observatory discovered the Daytime Arietids in 1947 when James Hey and G. S. Stewart adapted the World War II-era radar systems for meteor observations. The Delta Arietids are another meteor shower radiating from Aries. 
Peaking on 9 December with a low peak rate, the shower lasts from 8 December to 14 January, with the highest rates visible from 8 to 14 December. The average Delta Arietid meteor is very slow, with an average velocity of per second. However, this shower sometimes produces bright fireballs. This meteor shower has northern and southern components, both of which are likely associated with 1990 HA, a near-Earth asteroid. The Autumn Arietids also radiate from Aries. The shower lasts from 7 September to 27 October and peaks on 9 October. Its peak rate is low. The Epsilon Arietids appear from 12 to 23 October. Other meteor showers radiating from Aries include the October Delta Arietids, Daytime Epsilon Arietids, Daytime May Arietids, Sigma Arietids, Nu Arietids, and Beta Arietids. The Sigma Arietids, a class IV meteor shower, are visible from 12 to 19 October, with a maximum zenithal hourly rate of less than two meteors per hour on 19 October. Planetary systems Aries contains several stars with extrasolar planets. HIP 14810, a G5 type star, is orbited by three giant planets (those more than ten times the mass of Earth). HD 12661, like HIP 14810, is a G-type main sequence star, slightly larger than the Sun, with two orbiting planets. One planet is 2.3 times the mass of Jupiter, and the other is 1.57 times the mass of Jupiter. HD 20367 is a G0 type star, approximately the size of the Sun, with one orbiting planet. The planet, discovered in 2002, has a mass 1.07 times that of Jupiter and orbits every 500 days. In 2019, scientists conducting the CARMENES survey at the Calar Alto Observatory announced evidence of two Earth-mass exoplanets orbiting Teegarden's Star within its habitable zone.
See also Aries (Chinese astronomy) References Explanatory notes Citations Bibliography Online sources SIMBAD External links The Deep Photographic Guide to the Constellations: Aries The clickable Aries Star Tales – Aries Warburg Institute Iconographic Database (medieval and early modern images of Aries) Constellations Constellations listed by Ptolemy Northern constellations
799
https://en.wikipedia.org/wiki/Aquarius%20%28constellation%29
Aquarius (constellation)
Aquarius is a constellation of the zodiac, between Capricornus and Pisces. Its name is Latin for "water-carrier" or "cup-carrier", and its old astronomical symbol is (♒︎), a representation of water. Aquarius is one of the oldest of the recognized constellations along the zodiac (the Sun's apparent path). It was one of the 48 constellations listed by the 2nd century astronomer Ptolemy, and it remains one of the 88 modern constellations. It is found in a region often called the Sea due to its profusion of constellations with watery associations such as Cetus the whale, Pisces the fish, and Eridanus the river. At apparent magnitude 2.9, Beta Aquarii is the brightest star in the constellation. History and mythology Aquarius is identified as "The Great One" in the Babylonian star catalogues and represents the god Ea himself, who is commonly depicted holding an overflowing vase. The Babylonian star-figure appears on entitlement stones and cylinder seals from the second millennium BC. It contained the winter solstice in the Early Bronze Age. In Old Babylonian astronomy, Ea was the ruler of the southernmost quarter of the Sun's path, the "Way of Ea", corresponding to the period of 45 days on either side of winter solstice. Aquarius was also associated with the destructive floods that the Babylonians regularly experienced, and thus was negatively connoted. In ancient Egyptian astronomy, Aquarius was associated with the annual flood of the Nile; the banks were said to flood when Aquarius put his jar into the river, beginning spring. In the Greek tradition, the constellation came to be represented simply as a single vase from which a stream poured down to Piscis Austrinus. The name in the Hindu zodiac is likewise kumbha "water-pitcher". In Greek mythology, Aquarius is sometimes associated with Deucalion, the son of Prometheus who built a ship with his wife Pyrrha to survive an imminent flood. They sailed for nine days before washing ashore on Mount Parnassus.
Aquarius is also sometimes identified with beautiful Ganymede, a youth in Greek mythology and the son of Trojan king Tros, who was taken to Mount Olympus by Zeus to act as cup-carrier to the gods. Neighboring Aquila represents the eagle, under Zeus' command, that snatched the young boy; some versions of the myth indicate that the eagle was in fact Zeus transformed. An alternative version of the tale recounts Ganymede's kidnapping by the goddess of the dawn, Eos, motivated by her affection for young men; Zeus then stole him from Eos and employed him as cup-bearer. Yet another figure associated with the water bearer is Cecrops I, a king of Athens who sacrificed water instead of wine to the gods. Depictions In the second century, Ptolemy's Almagest established the common Western depiction of Aquarius. His water jar, an asterism itself, consists of Gamma, Pi, Eta, and Zeta Aquarii; it pours water in a stream of more than 20 stars terminating with Fomalhaut, now assigned solely to Piscis Austrinus. The water bearer's head is represented by 5th magnitude 25 Aquarii while his left shoulder is Beta Aquarii; his right shoulder and forearm are represented by Alpha and Gamma Aquarii respectively. In Eastern astronomy In Chinese astronomy, the stream of water flowing from the Water Jar was depicted as the "Army of Yu-Lin" (Yu-lim-kiun or Yulinjun, Hanzi: 羽林君). The name "Yu-lin" means "feathers and forests", referring to the numerous light-footed soldiers from the northern reaches of the empire represented by these faint stars. The constellation's stars were the most numerous of any Chinese constellation, numbering 45, the majority of which were located in modern Aquarius. The celestial army was protected by the wall Leibizhen (垒壁阵), which counted Iota, Lambda, Phi, and Sigma Aquarii among its 12 stars. 88, 89, and 98 Aquarii represent Fou-youe, the axes used as weapons and for hostage executions.
Also in Aquarius is Loui-pi-tchin, the ramparts that stretch from 29 and 27 Piscium and 33 and 30 Aquarii through Phi, Lambda, Sigma, and Iota Aquarii to Delta, Gamma, Kappa, and Epsilon Capricorni. Near the border with Cetus, the axe Fuyue was represented by three stars; its position is disputed and may have instead been located in Sculptor. Tienliecheng also has a disputed position; the 13-star castle replete with ramparts may have possessed Nu and Xi Aquarii but may instead have been located south in Piscis Austrinus. The Water Jar asterism was seen by the ancient Chinese as the tomb, Fenmu. Nearby, the emperors' mausoleum Xiuliang stood, demarcated by Kappa Aquarii and three other collinear stars. Ku ("crying") and Qi ("weeping"), each composed of two stars, were located in the same region. Three of the Chinese lunar mansions shared their name with constellations. Nu, also the name for the 10th lunar mansion, was a handmaiden represented by Epsilon, Mu, 3, and 4 Aquarii. The 11th lunar mansion shared its name with the constellation Xu ("emptiness"), formed by Beta Aquarii and Alpha Equulei; it represented a bleak place associated with death and funerals. Wei, the rooftop and 12th lunar mansion, was a V-shaped constellation formed by Alpha Aquarii, Theta Pegasi, and Epsilon Pegasi; it shared its name with two other Chinese constellations, in modern-day Scorpius and Aries. Features Stars Despite both its prominent position on the zodiac and its large size, Aquarius has no particularly bright stars, its four brightest stars all being fainter than magnitude 2. However, recent research has shown that there are several stars lying within its borders that possess planetary systems. The two brightest stars, Alpha and Beta Aquarii, are luminous yellow supergiants, of spectral types G0Ib and G2Ib respectively, that were once hot blue-white B-class main sequence stars 5 to 9 times as massive as the Sun.
The two are also moving through space perpendicular to the plane of the Milky Way. Just shading Alpha, Beta Aquarii is the brightest star in Aquarius with an apparent magnitude of 2.91. It also has the proper name of Sadalsuud. Having cooled and swollen to around 50 times the Sun's diameter, it is around 2200 times as luminous as the Sun. It is around 6.4 times as massive as the Sun and around 56 million years old. Sadalsuud is 540 ± 20 light-years from Earth. Alpha Aquarii, also known as Sadalmelik, has an apparent magnitude of 2.94. It is 520 ± 20 light-years distant from Earth, and is around 6.5 times as massive as the Sun and 3000 times as luminous. It is 53 million years old. γ Aquarii, also called Sadachbia, is a white main sequence star of spectral type A0V that is between 158 and 315 million years old and is around two and a half times the Sun's mass, and double its radius. Of magnitude 3.85, it is 164 ± 9 light-years away. It has a luminosity of . The name Sadachbia comes from the Arabic for "lucky stars of the tents", sa'd al-akhbiya. δ Aquarii, also known as Skat or Scheat, is a blue-white A2 spectral type star of apparent magnitude 3.27 and luminosity of . ε Aquarii, also known as Albali, is a blue-white A1 spectral type star with an apparent magnitude of 3.77, an absolute magnitude of 1.2, and a luminosity of . ζ Aquarii is an F2 spectral type double star; both stars are white. Overall, it appears to be of magnitude 3.6 and luminosity of . The primary has a magnitude of 4.53 and the secondary a magnitude of 4.31, but both have an absolute magnitude of 0.6. Its orbital period is 760 years; the two components are currently moving farther apart. θ Aquarii, sometimes called Ancha, is a G8 spectral type star with an apparent magnitude of 4.16 and an absolute magnitude of 1.4. κ Aquarii, also called Situla, has an apparent magnitude of 5.03.
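The overall magnitude quoted for the ζ Aquarii pair can be checked by adding the components' fluxes rather than their magnitudes, since the magnitude scale is logarithmic. A minimal sketch of that standard calculation:

```python
import math

def combined_magnitude(mags):
    """Apparent magnitude of unresolved components: convert each magnitude
    to a relative flux (a difference of 1 magnitude is a flux ratio of
    10**0.4), sum the fluxes, and convert back to a magnitude."""
    total_flux = sum(10 ** (-0.4 * m) for m in mags)
    return -2.5 * math.log10(total_flux)

# zeta Aquarii: components of magnitude 4.53 and 4.31
print(round(combined_magnitude([4.53, 4.31]), 2))  # ≈ 3.66
```

This agrees with the combined magnitude of about 3.6 given above.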
λ Aquarii, also called Hudoor or Ekchusis, is an M2 spectral type star of magnitude 3.74 and luminosity of . ξ Aquarii, also called Bunda, is an A7 spectral type star with an apparent magnitude of 4.69 and an absolute magnitude of 2.4. π Aquarii, also called Seat, is a B0 spectral type star with an apparent magnitude of 4.66 and an absolute magnitude of −4.1. Planetary systems Twelve exoplanet systems have been found in Aquarius as of 2013. Gliese 876, one of the nearest stars to Earth at a distance of 15 light-years, was the first red dwarf star to be found to possess a planetary system. It is orbited by four planets, including one terrestrial planet 6.6 times the mass of Earth. The planets vary in orbital period from 2 days to 124 days. 91 Aquarii is an orange giant star orbited by one planet, 91 Aquarii b. The planet's mass is 2.9 times the mass of Jupiter, and its orbital period is 182 days. Gliese 849 is a red dwarf star orbited by the first known long-period Jupiter-like planet, Gliese 849 b. The planet's mass is 0.99 times that of Jupiter and its orbital period is 1,852 days. There are also less-prominent systems in Aquarius. WASP-6, a type G8 star of magnitude 12.4, is host to one exoplanet, WASP-6 b. The star is 307 parsecs from Earth and has a mass of 0.888 solar masses and a radius of 0.87 solar radii. WASP-6 b was discovered in 2008 by the transit method. It orbits its parent star every 3.36 days at a distance of 0.042 astronomical units (AU). It is 0.503 Jupiter masses but has a proportionally larger radius of 1.224 Jupiter radii. HD 206610, a K0 star located 194 parsecs from Earth, is host to one planet, HD 206610 b. The host star is larger than the Sun; more massive at 1.56 solar masses and larger at 6.1 solar radii. The planet was discovered by the radial velocity method in 2010 and has a mass of 2.2 Jupiter masses. It orbits every 610 days at a distance of 1.68 AU. 
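The quoted orbital periods and distances of these planets are tied together by Kepler's third law, which takes a particularly simple form in solar units. As a rough consistency check on the WASP-6 b figures above (ignoring orbital eccentricity):

```python
def orbital_period_days(a_au, star_mass_solar):
    """Kepler's third law in solar units:
    P[years]**2 = a[AU]**3 / M[solar masses]."""
    period_years = (a_au ** 3 / star_mass_solar) ** 0.5
    return period_years * 365.25

# WASP-6 b: semi-major axis 0.042 AU, host star mass 0.888 solar masses
print(round(orbital_period_days(0.042, 0.888), 2))  # ≈ 3.34 days
```

This is close to the quoted period of 3.36 days; the small residual reflects rounding in the quoted parameters.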
Much closer to its sun is WASP-47 b, which orbits every 4.15 days only 0.052 AU from its sun, the yellow dwarf (G9V) WASP-47. WASP-47 is close in size to the Sun, having a radius of 1.15 solar radii and a mass even closer at 1.08 solar masses. WASP-47 b was discovered in 2011 by the transit method, like WASP-6 b. It is slightly larger than Jupiter with a mass of 1.14 Jupiter masses and a radius of 1.15 Jupiter radii. There are several more single-planet systems in Aquarius. HD 210277, a magnitude 6.63 yellow star located 21.29 parsecs from Earth, is host to one known planet: HD 210277 b. The 1.23 Jupiter mass planet orbits at nearly the same distance as Earth orbits the Sun (1.1 AU), though its orbital period is significantly longer at around 442 days. HD 210277 b was discovered earlier than most of the other planets in Aquarius, detected by the radial velocity method in 1998. The star it orbits resembles the Sun beyond their similar spectral class; it has a radius of 1.1 solar radii and a mass of 1.09 solar masses. HD 212771 b, a larger planet at 2.3 Jupiter masses, orbits host star HD 212771 at a distance of 1.22 AU. The star itself, barely below the threshold of naked-eye visibility at magnitude 7.6, is a G8IV (yellow subgiant) star located 131 parsecs from Earth. Though it has a similar mass to the Sun (1.15 solar masses), it is significantly less dense with its radius of 5 solar radii. Its lone planet was discovered in 2010 by the radial velocity method, like several other exoplanets in the constellation. As of 2013, there were only two known multiple-planet systems within the bounds of Aquarius: the Gliese 876 and HD 215152 systems. The former is quite prominent; the latter has only two planets and has a host star farther away at 21.5 parsecs. The HD 215152 system consists of the planets HD 215152 b and HD 215152 c orbiting their K0-type, magnitude 8.13 sun. Both discovered in 2011 by the radial velocity method, the two tiny planets orbit very close to their host star.
HD 215152 c is the larger at 0.0097 Jupiter masses (still significantly larger than the Earth, which weighs in at 0.00315 Jupiter masses); its smaller sibling is barely smaller at 0.0087 Jupiter masses. The error in the mass measurements (0.0032 and respectively) is large enough to make this discrepancy statistically insignificant. HD 215152 c also orbits further from the star than HD 215152 b, 0.0852 AU compared to 0.0652. On 23 February 2017, NASA announced that ultracool dwarf star TRAPPIST-1 in Aquarius has seven Earth-like rocky planets. Of these, three are in the system's habitable zone, and may contain water. The discovery of the TRAPPIST-1 system is seen by astronomers as a significant step toward finding life beyond Earth. Deep sky objects Because of its position away from the galactic plane, the majority of deep-sky objects in Aquarius are galaxies, globular clusters, and planetary nebulae. Aquarius contains three deep sky objects that are in the Messier catalog: the globular clusters Messier 2, Messier 72, and the asterism Messier 73. While M73 was originally catalogued as a sparsely populated open cluster, modern analysis indicates the 6 main stars are not close enough together to fit this definition, reclassifying M73 as an asterism. Two well-known planetary nebulae are also located in Aquarius: the Saturn Nebula (NGC 7009), to the southeast of μ Aquarii; and the famous Helix Nebula (NGC 7293), southwest of δ Aquarii. M2, also catalogued as NGC 7089, is a rich globular cluster located approximately 37,000 light-years from Earth. At magnitude 6.5, it is viewable in small-aperture instruments, but a 100 mm aperture telescope is needed to resolve any stars. M72, also catalogued as NGC 6981, is a small 9th magnitude globular cluster located approximately 56,000 light-years from Earth. M73, also catalogued as NGC 6994, is an open cluster with highly disputed status. Aquarius is also home to several planetary nebulae. 
NGC 7009, also known as the Saturn Nebula, is an 8th magnitude planetary nebula located 3,000 light-years from Earth. It was given its moniker by the 19th century astronomer Lord Rosse for its resemblance to the planet Saturn in a telescope; it has faint protrusions on either side that resemble Saturn's rings. It appears blue-green in a telescope and has a central star of magnitude 11.3. Compared to the Helix Nebula, another planetary nebula in Aquarius, it is quite small. NGC 7293, also known as the Helix Nebula, is the closest planetary nebula to Earth at a distance of 650 light-years. It covers 0.25 square degrees, making it also the largest planetary nebula as seen from Earth. However, because it is so large, it is only viewable as a very faint object, though it has a fairly high integrated magnitude of 6.0. One of the visible galaxies in Aquarius is NGC 7727, of particular interest for amateur astronomers who wish to discover or observe supernovae. A spiral galaxy (type S), it has an integrated magnitude of 10.7 and is 3 by 3 arcseconds. NGC 7252 is a tangle of stars resulting from the collision of two large galaxies and is known as the Atoms-for-Peace galaxy because of its resemblance to a cartoon atom. Meteor showers There are three major meteor showers with radiants in Aquarius: the Eta Aquariids, the Delta Aquariids, and the Iota Aquariids. The Eta Aquariids are the strongest meteor shower radiating from Aquarius. It peaks between 5 and 6 May with a rate of approximately 35 meteors per hour. Originally discovered by Chinese astronomers in 401, Eta Aquariids can be seen coming from the Water Jar beginning on 21 April and as late as 12 May. The parent body of the shower is Halley's Comet, a periodic comet. Fireballs are common shortly after the peak, approximately between 9 May and 11 May. The normal meteors appear to have yellow trails. The Delta Aquariids is a double radiant meteor shower that peaks first on 29 July and second on 6 August. 
The first radiant is located in the south of the constellation, while the second radiant is located in the Circlet asterism in northern Pisces. The southern radiant's peak rate is about 20 meteors per hour, while the northern radiant's peak rate is about 10 meteors per hour. The Iota Aquariids is a fairly weak meteor shower that peaks on 6 August, with a rate of approximately 8 meteors per hour. Astrology Currently, the Sun appears in the constellation Aquarius from 16 February to 12 March. In tropical astrology, the Sun is considered to be in the sign Aquarius from 20 January to 19 February, and in sidereal astrology, from 15 February to 14 March. Aquarius is also associated with the Age of Aquarius, a concept popular in 1960s counterculture. Despite this prominence, the Age of Aquarius will not dawn until the year 2597, as an astrological age does not begin until the Sun is in a particular constellation on the vernal equinox. Notes See also Aquarius (Chinese astronomy) References External links The Deep Photographic Guide to the Constellations: Aquarius The clickable Aquarius Warburg Institute Iconographic Database (medieval and early modern images of Aquarius) Constellations Equatorial constellations Constellations listed by Ptolemy
800
https://en.wikipedia.org/wiki/Anime
Anime
Anime is hand-drawn and computer animation originating from Japan. In Japan and in Japanese, anime (a term derived from the English word animation) describes all animated works, regardless of style or origin. However, outside of Japan and in English, anime is colloquial for Japanese animation and refers specifically to animation produced in Japan. Animation produced outside of Japan with similar style to Japanese animation is referred to as anime-influenced animation. The earliest commercial Japanese animations date to 1917. A characteristic art style emerged in the 1960s with the works of cartoonist Osamu Tezuka and spread in following decades, developing a large domestic audience. Anime is distributed theatrically, through television broadcasts, directly to home media, and over the Internet. In addition to original works, anime are often adaptations of Japanese comics (manga), light novels, or video games. It is classified into numerous genres targeting various broad and niche audiences. Anime is a diverse medium with distinctive production methods that have adapted in response to emergent technologies. It combines graphic art, characterization, cinematography, and other forms of imaginative and individualistic techniques. Compared to Western animation, anime production generally focuses less on movement, and more on the detail of settings and use of "camera effects", such as panning, zooming, and angle shots. Diverse art styles are used, and character proportions and features can be quite varied, with a common characteristic feature being large and emotive eyes. The anime industry consists of over 430 production companies, including major studios like Studio Ghibli, Sunrise, Ufotable, CoMix Wave Films and Toei Animation. Since the 1980s, the medium has also seen international success with the rise of foreign dubbed, subtitled programming and its increasing distribution through streaming services.
As of 2016, Japanese anime accounted for 60% of the world's animated television shows. In 2019, the annual overseas exports of Japanese animation exceeded $10 billion for the first time in history. Etymology As a type of animation, anime is an art form that comprises many genres found in other mediums; it is sometimes mistakenly classified as a genre itself. In Japanese, the term anime is used to refer to all animated works, regardless of style or origin. English-language dictionaries typically define anime () as "a style of Japanese animation" or as "a style of animation originating in Japan". Other definitions are based on origin, making production in Japan a requisite for a work to be considered "anime". The etymology of the term anime is disputed. The English word "animation" is written in Japanese katakana as () and as (, ) in its shortened form. Some sources claim that the term is derived from the French term for animation, dessin animé ("cartoon", literally 'animated design'), but others believe this to be a myth derived from the popularity of anime in France in the late 1970s and 1980s. In English, anime—when used as a common noun—normally functions as a mass noun. (For example: "Do you watch anime?" or "How much anime have you collected?") As with a few other Japanese words, such as saké and Pokémon, English texts sometimes spell anime as animé (as in French), with an acute accent over the final e, to cue the reader to pronounce the letter, not to leave it silent as English orthography may suggest. Prior to the widespread use of anime, the term Japanimation was prevalent throughout the 1970s and 1980s. In the mid-1980s, the term anime began to supplant Japanimation; in general, the latter term now only appears in period works where it is used to distinguish and identify Japanese animation.
Traveling storytellers narrated legends and anecdotes while the emakimono was unrolled from right to left in chronological order, as a moving panorama. Kagee was popular during the Edo period and originated from the shadow play of China. Magic lanterns from the Netherlands were also popular in the eighteenth century. The paper play called Kamishibai surged in the twelfth century and remained popular in street theater until the 1930s. Puppets of the bunraku theater and ukiyo-e prints are considered ancestors of the characters of most Japanese animation. Finally, manga was a heavy inspiration for Japanese anime. Cartoonists Kitazawa Rakuten and Okamoto Ippei used film elements in their strips. Pioneers Animation in Japan began in the early 20th century, when filmmakers started to experiment with techniques pioneered in France, Germany, the United States, and Russia. A claim for the earliest Japanese animation is Katsudō Shashin (), a private work by an unknown creator. In 1917, the first professional and publicly displayed works began to appear; animators such as Ōten Shimokawa, Seitarō Kitayama, and Jun'ichi Kōuchi (considered the "fathers of anime") produced numerous films, the oldest surviving of which is Kōuchi's Namakura Gatana. Many early works were lost with the destruction of Shimokawa's warehouse in the 1923 Great Kantō earthquake. By the mid-1930s, animation was well-established in Japan as an alternative format to the live-action industry. It suffered competition from foreign producers, such as Disney, and many animators, including Noburō Ōfuji and Yasuji Murata, continued to work with cheaper cutout animation rather than cel animation. Other creators, including Kenzō Masaoka and Mitsuyo Seo, nevertheless made great strides in technique, benefiting from the patronage of the government, which employed animators to produce educational shorts and propaganda.
In 1940, the government dissolved several artists' organizations, consolidating them into a single association. The first talkie anime was Chikara to Onna no Yo no Naka (1933), a short film produced by Masaoka. The first feature-length anime film was Momotaro: Sacred Sailors (1945), produced by Seo with sponsorship from the Imperial Japanese Navy. The 1950s saw a proliferation of short, animated advertisements created for television. Modern era In the 1960s, manga artist and animator Osamu Tezuka adapted and simplified Disney animation techniques to reduce costs and limit frame counts in his productions. Originally intended as temporary measures to allow him to produce material on a tight schedule with an inexperienced staff, many of his limited animation practices came to define the medium's style. Three Tales (1960) was the first anime film broadcast on television; the first anime television series was Instant History (1961–64). An early and influential success was Astro Boy (1963–66), a television series directed by Tezuka based on his manga of the same name. Many animators at Tezuka's Mushi Production later established major anime studios (including Madhouse, Sunrise, and Pierrot). The 1970s saw growth in the popularity of manga, many of which were later animated. Tezuka's work—and that of other pioneers in the field—inspired characteristics and genres that remain fundamental elements of anime today. The giant robot genre (also known as "mecha"), for instance, took shape under Tezuka, developed into the super robot genre under Go Nagai and others, and was revolutionized at the end of the decade by Yoshiyuki Tomino, who developed the real robot genre. Robot anime series such as Gundam and Super Dimension Fortress Macross became instant classics in the 1980s, and the genre remained one of the most popular in the following decades.
The bubble economy of the 1980s spurred a new era of high-budget and experimental anime films, including Nausicaä of the Valley of the Wind (1984), Royal Space Force: The Wings of Honnêamise (1987), and Akira (1988). Neon Genesis Evangelion (1995), a television series produced by Gainax and directed by Hideaki Anno, began another era of experimental anime titles, such as Ghost in the Shell (1995) and Cowboy Bebop (1998). In the 1990s, anime also began attracting greater interest in Western countries; major international successes include Sailor Moon and Dragon Ball Z, both of which were dubbed into more than a dozen languages worldwide. In 2003, Spirited Away, a Studio Ghibli feature film directed by Hayao Miyazaki, won the Academy Award for Best Animated Feature at the 75th Academy Awards. It later became the highest-grossing anime film, earning more than $355 million. Since the 2000s, an increased number of anime works have been adaptations of light novels and visual novels; successful examples include The Melancholy of Haruhi Suzumiya and Fate/stay night (both 2006). Demon Slayer: Kimetsu no Yaiba the Movie: Mugen Train became the highest-grossing Japanese film and one of the world's highest-grossing films of 2020. It also became the fastest-grossing film in Japanese cinema, earning 10 billion yen ($95.3m; £72m) in just ten days and beating the previous record of Spirited Away, which took 25 days to reach the same total.
Since the 1990s, animators have increasingly used computer animation to improve the efficiency of the production process. Early anime works were experimental, and consisted of images drawn on blackboards, stop motion animation of paper cutouts, and silhouette animation. Cel animation grew in popularity until it came to dominate the medium. In the 21st century, the use of other animation techniques is mostly limited to independent short films, including the stop motion puppet animation work produced by Tadahito Mochinaga, Kihachirō Kawamoto and Tomoyasu Murata. Computers were integrated into the animation process in the 1990s, with works such as Ghost in the Shell and Princess Mononoke mixing cel animation with computer-generated images. Fujifilm, a major cel production company, announced it would stop cel production, prompting an industry panic to procure cel imports and hastening the switch to digital processes. Prior to the digital era, anime was produced with traditional animation methods using a pose-to-pose approach. The majority of mainstream anime uses fewer expressive key frames and more in-between animation. Japanese animation studios were pioneers of many limited animation techniques, and have given anime a distinct set of conventions. Unlike Disney animation, where the emphasis is on the movement, anime emphasizes the art quality and lets limited animation techniques make up for the lack of time spent on movement. Such techniques are often used not only to meet deadlines but also as artistic devices. Anime scenes place emphasis on achieving three-dimensional views, and backgrounds are instrumental in creating the atmosphere of the work. The backgrounds are not always invented and are occasionally based on real locations, as exemplified in Howl's Moving Castle and The Melancholy of Haruhi Suzumiya. Oppliger stated that anime is one of the rare mediums where putting together an all-star cast usually comes out looking "tremendously impressive".
The cinematic effects of anime differentiate it from the stage-play style found in American animation. Anime is cinematically shot as if by camera, ranging from panning, zooming, and distance and angle shots to more complex dynamic shots that would be difficult to produce in reality. In anime, the animation is produced before the voice acting, contrary to American animation, which does the voice acting first.

Characters

The body proportions of human anime characters tend to accurately reflect the proportions of the human body in reality. The height of the head is considered by the artist as the base unit of proportion. Head heights can vary, but most anime characters are about seven to eight heads tall. Anime artists occasionally make deliberate modifications to body proportions to produce super deformed characters that feature a disproportionately small body compared to the head; many super deformed characters are two to four heads tall. Some anime works like Crayon Shin-chan completely disregard these proportions, in such a way that they resemble caricatured Western cartoons. A common anime character design convention is exaggerated eye size. The animation of characters with large eyes in anime can be traced back to Osamu Tezuka, who was deeply influenced by such early animation characters as Betty Boop, who was drawn with disproportionately large eyes. Tezuka is a central figure in anime and manga history, whose iconic art style and character designs allowed for the entire range of human emotions to be depicted solely through the eyes. The artist adds variable color shading to the eyes and particularly to the cornea to give them greater depth. Generally, a mixture of a light shade, the tone color, and a dark shade is used. Cultural anthropologist Matt Thorn argues that Japanese animators and audiences do not perceive such stylized eyes as inherently more or less foreign. However, not all anime characters have large eyes.
For example, the works of Hayao Miyazaki are known for having realistically proportioned eyes, as well as realistic hair colors on their characters. Hair in anime is often unnaturally lively and colorful or uniquely styled. The movement of hair in anime is exaggerated and "hair action" is used to emphasize the action and emotions of characters for added visual effect. Poitras traces hairstyle color to cover illustrations on manga, where eye-catching artwork and colorful tones are attractive for children's manga. Despite being produced for a domestic market, anime features characters whose race or nationality is not always defined, and this is often a deliberate decision, such as in the Pokémon animated series. Anime and manga artists often draw from a common canon of iconic facial expression illustrations to denote particular moods and thoughts. These techniques are often different in form than their counterparts in Western animation, and they include a fixed iconography that is used as shorthand for certain emotions and moods. For example, a male character may develop a nosebleed when aroused. A variety of visual symbols are employed, including sweat drops to depict nervousness, visible blushing for embarrassment, or glowing eyes for an intense glare. Another recurring sight gag is the use of chibi (deformed, simplified character designs) figures to comedically punctuate emotions like confusion or embarrassment.

Music

The opening and credits sequences of most anime television series are accompanied by J-pop or J-rock songs, often by reputed bands. They may be written with the series in mind, but are also aimed at the general music market, and therefore often allude only vaguely, or not at all, to the thematic settings or plot of the series. Such songs are also often used as incidental music ("insert songs") in an episode, in order to highlight particularly important scenes.
Genres

Anime are often classified by target demographic, including shōnen (boys), shōjo (girls), and a diverse range of genres targeting an adult audience. Shōjo and shōnen anime sometimes contain elements popular with children of both sexes in an attempt to gain crossover appeal. Adult anime may feature a slower pace or greater plot complexity that younger audiences may typically find unappealing, as well as adult themes and situations. A subset of adult anime works featuring pornographic elements are labeled "R18" in Japan, and are internationally known as hentai. By contrast, some anime subgenres incorporate ecchi, sexual themes or undertones without depictions of sexual intercourse, as typified in the comedic or harem genres; due to its popularity among adolescent and adult anime enthusiasts, the inclusion of such elements is considered a form of fan service. Some genres explore homosexual romances, such as yaoi (male homosexuality) and yuri (female homosexuality). While often used in a pornographic context, the terms yaoi and yuri can also be used broadly in a wider context to describe or focus on the themes or the development of the relationships themselves. Anime's genre classification differs from other types of animation and does not lend itself to simple classification. Gilles Poitras compared labeling Gundam 0080, with its complex depiction of war, a "giant robot" anime to simply labeling War and Peace a "war novel". Science fiction is a major anime genre and includes important historical works like Tezuka's Astro Boy and Yokoyama's Tetsujin 28-go. A major subgenre of science fiction is mecha, with the Gundam metaseries being iconic. The diverse fantasy genre includes works based on Asian and Western traditions and folklore; examples include the Japanese feudal fairytale InuYasha, and the depiction of Scandinavian goddesses who move to Japan to maintain a computer called Yggdrasil in Ah! My Goddess.
Genre crossing in anime is also prevalent, such as the blend of fantasy and comedy in Dragon Half, and the incorporation of slapstick humor in the crime anime film Castle of Cagliostro. Other subgenres found in anime include magical girl, harem, sports, martial arts, literary adaptations, medievalism, and war.

Formats

Early anime works were made for theatrical viewing, and relied on live musical accompaniment before sound and vocal components were added to the production. In 1958, Nippon Television aired Mogura no Abanchūru ("Mole's Adventure"), both the first televised and first color anime to debut. It was not until the 1960s that the first televised series were broadcast, and television has remained a popular medium for anime since. Works released in a direct to video format are called "original video animation" (OVA) or "original animation video" (OAV); they are typically not released theatrically or televised prior to home media release. The emergence of the Internet has led some animators to distribute works online in a format called "original net anime" (ONA). The home distribution of anime releases was popularized in the 1980s with the VHS and LaserDisc formats. The VHS NTSC video format used in both Japan and the United States is credited as aiding the rising popularity of anime in the 1990s. The LaserDisc and VHS formats were superseded by the DVD format, which offered unique advantages, including multiple subtitling and dubbing tracks on the same disc. The DVD format also has drawbacks in its use of region coding, adopted by the industry to address licensing, piracy, and export problems by restricting playback to the region indicated on the DVD player. The Video CD (VCD) format was popular in Hong Kong and Taiwan, but became only a minor format in the United States that was closely associated with bootleg copies. A key characteristic of many anime television shows is serialization, where a continuous story arc stretches over multiple episodes or seasons.
Traditional American television had an episodic format, with each episode typically consisting of a self-contained story. In contrast, anime shows such as Dragon Ball Z had a serialization format, where continuous story arcs stretch over multiple episodes or seasons, which distinguished them from traditional American television shows; serialization has since also become a common characteristic of American streaming television shows during the "Peak TV" era.

Industry

The animation industry consists of more than 430 production companies with some of the major studios including Toei Animation, Gainax, Madhouse, Gonzo, Sunrise, Bones, TMS Entertainment, Nippon Animation, P.A.Works, Studio Pierrot and Studio Ghibli. Many of the studios are organized into a trade association, The Association of Japanese Animations. There is also a labor union for workers in the industry, the Japanese Animation Creators Association. Studios will often work together to produce more complex and costly projects, as done with Studio Ghibli's Spirited Away. An anime episode can cost between US$100,000 and US$300,000 to produce. In 2001, animation accounted for 7% of the Japanese film market, above the 4.6% market share for live-action works. The popularity and success of anime is seen through the profitability of the DVD market, contributing nearly 70% of total sales. According to a 2016 article on Nikkei Asian Review, Japanese television stations have bought over worth of anime from production companies "over the past few years", compared with under from overseas. There has been a rise in sales of shows to television stations in Japan, caused by late night anime with adults as the target demographic. This type of anime is less popular outside Japan, being considered "more of a niche product". Spirited Away (2001) is the all-time highest-grossing film in Japan. It was also the highest-grossing anime film worldwide until it was overtaken by Makoto Shinkai's 2016 film Your Name.
Anime films represent a large part of the highest-grossing Japanese films yearly in Japan, with 6 out of the top 10 in 2014, in 2015 and also in 2016. Anime has to be licensed by companies in other countries in order to be legally released. While anime has been licensed by its Japanese owners for use outside Japan since at least the 1960s, the practice became well-established in the United States in the late 1970s to early 1980s, when such TV series as Gatchaman and Captain Harlock were licensed from their Japanese parent companies for distribution in the US market. The trend towards American distribution of anime continued into the 1980s with the licensing of titles such as Voltron and the 'creation' of new series such as Robotech through use of source material from several original series. In the early 1990s, several companies began to experiment with the licensing of less children-oriented material. Some, such as A.D. Vision, and Central Park Media and its imprints, achieved fairly substantial commercial success and went on to become major players in the now very lucrative American anime market. Others, such as AnimEigo, achieved limited success. Many companies created directly by Japanese parent companies did not do as well, most releasing only one or two titles before ceasing their American operations. Licenses are expensive, often hundreds of thousands of dollars for one series and tens of thousands for one movie. The prices vary widely; for example, Jinki: Extend cost only $91,000 to license while Kurau Phantom Memory cost $960,000. Simulcast Internet streaming rights can be cheaper, with prices around $1,000-$2,000 an episode, but can also be more expensive, with some series costing more than per episode. The anime market in the United States was worth approximately $2.74 billion in 2009; by 2022, it had grown to approximately $25 billion.
Dubbed animation began airing in the United States in 2000 on networks like The WB and Cartoon Network's Adult Swim. In 2005, this resulted in five of the top ten anime titles having previously aired on Cartoon Network. As a part of localization, some editing of cultural references may occur to better follow the references of the non-Japanese culture. The cost of English localization averages US$10,000 per episode. The industry has been subject to both praise and condemnation for fansubs, the addition of unlicensed and unauthorized subtitled translations of anime series or films. Fansubs, which were originally distributed on VHS bootlegged cassettes in the 1980s, have been freely available and disseminated online since the 1990s. Since this practice raises concerns for copyright and piracy issues, fansubbers tend to adhere to an unwritten moral code to destroy or no longer distribute an anime once an official translated or subtitled version becomes licensed. They also try to encourage viewers to buy an official copy of the release once it comes out in English, although fansubs typically continue to circulate through file-sharing networks. Even so, the Japanese animation industry's lax enforcement tends to overlook these issues, allowing fansubbing to grow underground and increase a title's popularity until there is demand for official high-quality releases from animation companies. This has contributed to an increase in the global popularity of Japanese animation, which reached $40 million in sales in 2004. Since the 2010s, anime has become a global multibillion-dollar industry, setting a sales record in 2017 of ¥2.15 trillion ($19.8 billion), driven largely by demand from overseas audiences. In 2019, Japan's anime industry was valued at $24 billion a year, with 48% of that revenue coming from overseas, which is now its largest market segment. By 2025, the anime industry is expected to reach a value of $30 billion, with over 60% of that revenue coming from overseas.
Markets

The Japan External Trade Organization (JETRO) valued the domestic anime market in Japan at (), including from licensed products, in 2005. JETRO reported sales of overseas anime exports in 2004 to be (). JETRO valued the anime market in the United States at (), including in home video sales and over from licensed products, in 2005. JETRO projected in 2005 that the worldwide anime market, including sales of licensed products, would grow to (). The anime market in China was valued at in 2017, and is projected to reach by 2020. By 2030, the global anime market is expected to reach a value of $48.3 billion, with the largest contributors to this growth being North America, Europe, China, and the Middle East.

Awards

The anime industry has several annual awards that honor the year's best works. Major annual awards in Japan include the Ōfuji Noburō Award, the Mainichi Film Award for Best Animation Film, the Animation Kobe Awards, the Japan Media Arts Festival animation awards, the Tokyo Anime Award and the Japan Academy Prize for Animation of the Year. In the United States, anime films compete in the Crunchyroll Anime Awards. There were also the American Anime Awards, which were designed to recognize excellence in anime titles nominated by the industry, and were held only once in 2006. Anime productions have also been nominated for and won awards not exclusively for anime, like the Academy Award for Best Animated Feature or the Golden Bear.

Working conditions

In recent years, the anime industry has been accused by both Japanese and foreign media of underpaying and overworking its animators. In response, Japanese Prime Minister Fumio Kishida promised to improve the working conditions and salary of all animators and creators working in the industry. A few anime studios, such as MAPPA, have taken action to improve the working conditions of their employees.
Globalization and Cultural Impact

Anime has become commercially profitable in Western countries, as demonstrated by early commercially successful Western adaptations of anime, such as Astro Boy and Speed Racer. Early American adaptations in the 1960s led Japan to expand into the continental European market, first with productions aimed at European and Japanese children, such as Heidi, Vicky the Viking and Barbapapa, which aired in various countries. Italy, Spain, and France developed a particular interest in Japan's output, due to its cheap selling price and prolific production. In fact, Italy imported the most anime outside of Japan. These mass imports influenced anime popularity in South American, Arabic and German markets. The beginning of the 1980s saw the introduction of Japanese anime series into American culture. In the 1990s, Japanese animation slowly gained popularity in America. Media companies such as Viz and Mixx began publishing and releasing animation into the American market. The 1988 film Akira is largely credited with popularizing anime in the Western world during the early 1990s, before anime was further popularized by television shows such as Pokémon and Dragon Ball Z in the late 1990s. By 1997, Japanese anime was the fastest-growing genre in the American video industry. The growth of the Internet later provided international audiences an easy way to access Japanese content. Early on, online piracy played a major role in this, though over time many legal alternatives appeared. Since the 2010s, various streaming services have become increasingly involved in the production and licensing of anime for international markets. This is especially the case with net services such as Netflix and Crunchyroll, which have large catalogs in Western countries, although as of 2020 anime fans in many developing non-Western countries, such as India and the Philippines, have fewer options for obtaining access to legal content, and therefore still turn to online piracy.
Beginning in the early 2020s, however, anime has experienced yet another boom in global popularity and demand, driven by the COVID-19 pandemic and by streaming services like Netflix, Prime Video, and Hulu, as well as anime-only services like Crunchyroll and Funimation, which increased the international availability of new licensed anime shows as well as the size of their catalogs. Netflix reported that, between October 2019 and September 2020, more than member households worldwide had watched at least one anime title on the platform. Anime titles appeared on the streaming platform's top-10 lists in almost 100 countries within the one-year period. As of 2021, Japanese anime is the most in-demand foreign-language programming in the United States, accounting for 30.5% of the market share (in comparison, Spanish-language and Korean-language shows account for 21% and 11%, respectively). In 2022, the anime series Attack on Titan won the award of "Most in-demand TV series in the world 2021" in the Global TV Demand Awards. Attack on Titan became the first-ever non-English-language series to earn the title of World's Most In-Demand TV Show, previously held by only The Walking Dead and Game of Thrones. Rising interest in anime, as well as in Japanese video games, has led to an increase in university students in the United Kingdom wanting to pursue a degree in the Japanese language. Various anime and manga series have influenced Hollywood in the making of numerous famous movies and characters. Hollywood itself has produced live-action adaptations of various anime series, such as Ghost in the Shell, Death Note, Dragon Ball Evolution and Cowboy Bebop. However, most of these adaptations have been reviewed negatively by both critics and audiences and have become box-office flops.
The main reasons for the failure of Hollywood's anime adaptations are the frequent changes to the plot and characters of the original source material and the limited capabilities of a live-action movie or series in comparison to an animated counterpart. One notable exception, however, is Alita: Battle Angel, which became a moderate commercial success, receiving generally positive reviews from both critics and audiences for its visual effects and for following the source material. The movie grossed $404 million worldwide, making it director Robert Rodriguez's highest-grossing film. Anime, alongside many other parts of Japanese pop culture, has helped Japan gain a positive worldwide image and improve its relations with other countries. In 2015, during remarks welcoming Japanese Prime Minister Shinzo Abe to the White House, President Barack Obama thanked Japan for its cultural contributions to the United States by saying: This visit is a celebration of the ties of friendship and family that bind our peoples. I first felt it when I was 6 years old when my mother took me to Japan. I felt it growing up in Hawaii, like communities across our country, home to so many proud Japanese Americans. Today is also a chance for Americans, especially our young people, to say thank you for all the things we love from Japan. Like karate and karaoke. Manga and anime. And, of course, emojis. In July 2020, after the approval of a Chilean government project in which citizens of Chile would be allowed to withdraw up to 10% of their privately held retirement savings, journalist Pamela Jiles celebrated by running through congress with her arms spread out behind her, imitating the move of many characters of the anime and manga series Naruto. In April 2021, Peruvian politicians Jorge Hugo Romero of the PPC and Milagros Juárez of the UPP cosplayed as anime characters to get the otaku vote.
A 2018 survey held by Dentsu, conducted in 20 countries and territories with a sample of 6,600 respondents, revealed that 34% of those surveyed found excellence in anime and manga, more than in most other Japanese cultural or technological aspects, making this mass media the third most-liked "Japanese thing", below Japanese cuisine (34.6%) and Japanese robotics (35.1%). The advertising company views anime as a profitable tool for marketing campaigns in foreign countries due to its popularity and high reception. Anime plays a role in driving tourism to Japan. In surveys held by Statista between 2019 and 2020, 24.2% of tourists from the United States, 7.7% of tourists from China and 6.1% of tourists from South Korea said they were motivated to visit Japan because of Japanese popular culture. In a 2021 survey held by Crunchyroll market research, 94% of Gen Z respondents and 73% of the general population said that they were familiar with anime.

Fan response

Anime clubs gave rise to anime conventions in the 1990s with the "anime boom", a period marked by anime's increased global popularity. These conventions are dedicated to anime and manga and include elements like cosplay contests and industry talk panels. Cosplay, a portmanteau of "costume play", is not unique to anime and has become popular in contests and masquerades at anime conventions. Japanese culture and words have entered English usage through the popularity of the medium, including otaku, an unflattering Japanese term commonly used in English to denote an obsessive fan of anime and/or manga. Another word that has arisen to describe obsessive fans in the United States is wapanese, meaning 'white individuals who want to be Japanese', later known as weeaboo or weeb: individuals who demonstrate an obsession with Japanese anime subculture, a term that originated from abusive content posted on the website 4chan.org.
While originally derogatory, the terms "otaku" and "weeb" have been reappropriated by some in the anime fandom over time, and today are used by some fans to refer to themselves in a comedic and more positive way. Anime enthusiasts have produced fan fiction and fan art, including computer wallpapers and anime music videos (AMVs). As of the 2020s, many anime fans use social media platforms like YouTube, Facebook, Reddit and Twitter (which has added an entire "anime and manga" category of topics), along with online communities and databases such as MyAnimeList, to discuss anime and manga and track their progress watching respective series, as well as news outlets such as Anime News Network. Due to anime's increased popularity in recent years, a large number of celebrities such as Elon Musk, BTS and Ariana Grande have come out as anime fans.

Anime style

One of the key points that made anime different from most Western cartoons is its potential for visceral content. Once the expectation that visual intrigue and animation are just for children is set aside, audiences can realize that themes involving violence, suffering, sexuality, pain, and death can all be storytelling elements utilized in anime just as much as in other media. However, as anime itself became increasingly popular, its styling has inevitably become the subject of both satire and serious creative productions. South Park's "Chinpokomon" and "Good Times with Weapons" episodes, Adult Swim's Perfect Hair Forever, and Nickelodeon's Kappa Mikey are examples of Western satirical depictions of Japanese culture and anime, but anime tropes have also been satirized by some anime, such as KonoSuba. Traditionally only Japanese works have been considered anime, but some works have sparked debate for blurring the lines between anime and cartoons, such as the American anime-style production Avatar: The Last Airbender.
These anime-styled works have become defined as anime-influenced animation, in an attempt to classify all anime-styled works of non-Japanese origin. Some creators of these works cite anime as a source of inspiration, for example the French production team for Ōban Star-Racers, which moved to Tokyo to collaborate with a Japanese production team. When anime is defined as a "style" rather than as a national product, it leaves open the possibility of anime being produced in other countries, but this has been contentious amongst fans, with John Oppliger stating, "The insistence on referring to original American art as Japanese "anime" or "manga" robs the work of its cultural identity." A U.A.E.-Filipino produced TV series called Torkaizer is dubbed as the "Middle East's First Anime Show", and is currently in production and looking for funding. Netflix has produced multiple anime series in collaboration with Japanese animation studios, and in doing so, has offered a more accessible channel for distribution to Western markets. The web-based series RWBY, produced by Texas-based company Rooster Teeth, is produced using an anime art style, and the series has been described as "anime" by multiple sources. For example, Adweek, in the headline to one of its articles, described the series as "American-made anime", and in another headline, The Huffington Post described it as simply "anime", without referencing its country of origin. In 2013, Monty Oum, the creator of RWBY, said "Some believe just like Scotch needs to be made in Scotland, an American company can't make anime. I think that's a narrow way of seeing it. Anime is an art form, and to say only one country can make this art is wrong." RWBY has been released in Japan with a Japanese language dub; the CEO of Rooster Teeth, Matt Hullum, commented "This is the first time any American-made anime has been marketed to Japan. It definitely usually works the other way around, and we're really pleased about that."
Media franchises

In Japanese culture and entertainment, media mix is a strategy to disperse content across multiple representations: different broadcast media, gaming technologies, cell phones, toys, amusement parks, and other methods. It is the Japanese term for a transmedia franchise. The term gained circulation in the late 1980s, but the origins of the strategy can be traced back to the 1960s with the proliferation of anime, with its interconnection of media and commodity goods. A number of anime and manga media franchises such as Demon Slayer: Kimetsu no Yaiba, Dragon Ball and Gundam have gained considerable global popularity, and are among the world's highest-grossing media franchises. Pokémon in particular is estimated to be the highest-grossing media franchise of all time.

See also

Animation director
Chinese animation
Cinema of Japan
Cool Japan
Culture of Japan
History of anime
Japanophilia
Japanese language
Japanese popular culture
Korean animation
Lists of anime
Manga
Mechademia
Otaku
Vtuber
Voice acting in Japan
801
https://en.wikipedia.org/wiki/Asterism
Asterism
Asterism may refer to:

Asterism (astronomy), a pattern of stars
Asterism (gemology), an optical phenomenon in gemstones
Asterism (typography) (⁂), a moderately rare typographical symbol denoting a break in passages

See also

Aster (disambiguation)
802
https://en.wikipedia.org/wiki/Ankara
Ankara
Ankara, historically known as Ancyra and Angora, is the capital of Turkey. Located in the central part of Anatolia, the city has a population of 5.1 million in its urban center and over 5.7 million in Ankara Province, making it Turkey's second-largest city after Istanbul. Serving as the capital of the ancient Celtic state of Galatia (280–64 BC), and later of the Roman province with the same name (25 BC–7th century), the city is very old, with various Hattian, Hittite, Lydian, Phrygian, Galatian, Greek, Persian, Roman, Byzantine, and Ottoman archeological sites. The Ottomans made the city the capital first of the Anatolia Eyalet (1393 – late 15th century) and then the Angora Vilayet (1867–1922). The historical center of Ankara is a rocky hill rising over the left bank of the Ankara River, a tributary of the Sakarya River. The hill remains crowned by the ruins of Ankara Castle. Although few of its outworks have survived, there are well-preserved examples of Roman and Ottoman architecture throughout the city, the most remarkable being the 20 BC Temple of Augustus and Rome that boasts the Monumentum Ancyranum, the inscription recording the Res Gestae Divi Augusti. On 23 April 1920, the Grand National Assembly of Turkey was established in Ankara, which became the headquarters of the Turkish National Movement during the Turkish War of Independence. Ankara became the new Turkish capital upon the establishment of the Republic on 29 October 1923, succeeding the former Turkish capital Istanbul in this role following the fall of the Ottoman Empire. The government is a prominent employer, but Ankara is also an important commercial and industrial city located at the center of Turkey's road and railway networks. The city gave its name to the Angora wool shorn from Angora rabbits, the long-haired Angora goat (the source of mohair), and the Angora cat. The area is also known for its pears, honey and muscat grapes.
Although situated in one of the driest regions of Turkey and surrounded mostly by steppe vegetation (except for the forested areas on the southern periphery), Ankara can be considered a green city in terms of green areas per inhabitant, at per head.

Etymology

The orthography of the name Ankara has varied over the ages. It has been identified with the Hittite cult center Ankuwaš, although this remains a matter of debate. In classical antiquity and during the medieval period, the city was known as Ánkyra ("anchor") in Greek and Ancyra in Latin; the Galatian Celtic name was probably a similar variant. Following its annexation by the Seljuk Turks in 1073, the city became known in many European languages as Angora; it was also known in Ottoman Turkish as Engürü. The form "Angora" is preserved in the names of breeds of many different kinds of animals, and in the names of several locations in the US (see Angora).

History

The region's history can be traced back to the Bronze Age Hattic civilization, which was succeeded in the 2nd millennium BC by the Hittites, in the 10th century BC by the Phrygians, and later by the Lydians, Persians, Greeks, Galatians, Romans, Byzantines, and Turks (the Seljuk Sultanate of Rûm, the Ottoman Empire and finally republican Turkey).

Ancient history

The oldest settlements in and around the city center of Ankara belonged to the Hattic civilization which existed during the Bronze Age and was gradually absorbed c. 2000 – 1700 BC by the Indo-European Hittites. The city grew significantly in size and importance under the Phrygians starting around 1000 BC, and experienced a large expansion following the mass migration from Gordion (the capital of Phrygia) after an earthquake which severely damaged that city around that time. In Phrygian tradition, King Midas was venerated as the founder of Ancyra, but Pausanias mentions that the city was actually far older, which accords with present archeological knowledge.
Phrygian rule was succeeded first by Lydian and later by Persian rule, though the strongly Phrygian character of the peasantry remained, as evidenced by the gravestones of the much later Roman period. Persian sovereignty lasted until the Persians' defeat at the hands of Alexander the Great, who conquered the city in 333 BC. Alexander came from Gordion to Ankara and stayed in the city for a short period. After his death at Babylon in 323 BC and the subsequent division of his empire among his generals, Ankara and its environs fell to Antigonus. Another important expansion took place under the Greeks of Pontos, who came there around 300 BC and developed the city as a trading center for the commerce of goods between the Black Sea ports and Crimea to the north; Assyria, Cyprus, and Lebanon to the south; and Georgia, Armenia and Persia to the east. By that time the city had also taken the name Ἄγκυρα (Ánkyra, meaning anchor in Greek) which, in slightly modified form, provides the modern name of Ankara. Celtic history In 278 BC, the city, along with the rest of central Anatolia, was occupied by a Celtic group, the Galatians, who were the first to make Ankara one of their main tribal centers, the headquarters of the Tectosages tribe. Other centers were Pessinus, today's Ballıhisar, for the Trocmi tribe, and Tavium, to the east of Ankara, for the Tolistobogii tribe. The city was then known as Ancyra. The Celtic element was probably relatively small in numbers: a warrior aristocracy which ruled over Phrygian-speaking peasants. However, the Celtic language continued to be spoken in Galatia for many centuries. At the end of the 4th century, St. Jerome, a native of Dalmatia, observed that the language spoken around Ankara was very similar to that spoken in the northwest of the Roman world near Trier. Roman history The city subsequently passed under the control of the Roman Empire. 
In 25 BC, Emperor Augustus raised it to the status of a polis and made it the capital city of the Roman province of Galatia. Ankara is famous for the Monumentum Ancyranum (Temple of Augustus and Rome), which contains the official record of the Acts of Augustus, known as the Res Gestae Divi Augusti, an inscription cut in marble on the walls of this temple. The ruins of Ancyra still furnish valuable bas-reliefs, inscriptions and other architectural fragments today. Two other Galatian tribal centers, Tavium near Yozgat, and Pessinus (Balhisar) to the west, near Sivrihisar, continued to be reasonably important settlements in the Roman period, but it was Ancyra that grew into a grand metropolis. An estimated 200,000 people lived in Ancyra in good times during the Roman Empire, a far greater number than at any time between the fall of the Roman Empire and the early 20th century. The small Ankara River ran through the center of the Roman town. It has now been covered and diverted, but it formed the northern boundary of the old town during the Roman, Byzantine and Ottoman periods. Çankaya, the rim of the majestic hill to the south of the present city center, stood well outside the Roman city, but may have been a summer resort. In the 19th century, the remains of at least one Roman villa or large house were still standing not far from where the Çankaya Presidential Residence stands today. To the west, the Roman city extended to the area of Gençlik Park and the railway station, while on the southern side of the hill, it may have extended downwards as far as the site presently occupied by Hacettepe University. It was thus a sizeable city by any standards and much larger than the Roman towns of Gaul or Britannia. Ancyra's importance rested on the fact that it was the junction point where the roads in northern Anatolia running north–south and east–west intersected, giving it major strategic importance for Rome's eastern frontier. 
The great imperial road running east passed through Ankara, and a succession of emperors and their armies came this way. They were not the only ones to use the Roman highway network, which was equally convenient for invaders. In the second half of the 3rd century, Ancyra was invaded in rapid succession by the Goths coming from the west (who rode far into the heart of Cappadocia, taking slaves and pillaging) and later by the Arabs. For about a decade, the town was one of the western outposts of the Palmyrene empress Zenobia, who took advantage of a period of weakness and disorder in the Roman Empire to set up a short-lived state of her own based in the Syrian Desert. The town was reincorporated into the Roman Empire under Emperor Aurelian in 272. The tetrarchy, a system of multiple (up to four) emperors introduced by Diocletian (284–305), seems to have engaged in a substantial program of rebuilding and of road construction from Ancyra westwards to Germe and Dorylaeum (now Eskişehir). In its heyday, Roman Ancyra was a large market and trading center, but it also functioned as a major administrative capital, where a high official ruled from the city's Praetorium, a large administrative palace or office. During the 3rd century, life in Ancyra, as in other Anatolian towns, seems to have become somewhat militarized in response to the invasions and instability of the town. Byzantine history The city was well known during the 4th century as a center of Christian activity (see also below), due to frequent imperial visits and through the letters of the pagan scholar Libanius. Bishop Marcellus of Ancyra and Basil of Ancyra were active in the theological controversies of their day, and the city was the site of no fewer than three church synods, in 314, 358 and 375, the latter two in favor of Arianism. The city was visited by Emperor Constans I (r. 337–350) in 347 and 350, Julian (r. 361–363) during his Persian campaign in 362, and Julian's successor Jovian (r. 
363–364) in winter 363/364 (he entered his consulship while in the city). After Jovian's death soon afterwards, Valentinian I (r. 364–375) was acclaimed emperor at Ancyra, and in the next year his brother Valens (r. 364–378) used Ancyra as his base against the usurper Procopius. When the province of Galatia was divided sometime in 396/99, Ancyra remained the civil capital of Galatia I, as well as its ecclesiastical center (metropolitan see). Emperor Arcadius (r. 383–408) frequently used the city as his summer residence, and some information about the ecclesiastical affairs of the city during the early 5th century is found in the works of Palladius of Galatia and Nilus of Galatia. In 479, the rebel Marcian attacked the city, without being able to capture it. In 610/11, Comentiolus, brother of Emperor Phocas (r. 602–610), launched his own unsuccessful rebellion in the city against Heraclius (r. 610–641). Ten years later, in 620 or more likely 622, it was captured by the Sassanid Persians during the Byzantine–Sassanid War of 602–628. Although the city returned to Byzantine hands after the end of the war, the Persian presence left traces in the city's archeology, and likely began the process of its transformation from a late antique city into a medieval fortified settlement. In 654, the city was captured for the first time by the Arabs of the Rashidun Caliphate, under Muawiyah, the future founder of the Umayyad Caliphate. At about the same time, the themes were established in Anatolia, and Ancyra became the capital of the Opsician Theme, which was the largest and most important theme until it was split up under Emperor Constantine V (r. 741–775); Ancyra then became the capital of the new Bucellarian Theme. The city was captured at least temporarily by the Umayyad prince Maslama ibn Hisham in 739/40, the last of the Umayyads' territorial gains from the Byzantine Empire. Ancyra was attacked without success by Abbasid forces in 776 and in 798/99. In 805, Emperor Nikephoros I (r. 
802–811) strengthened its fortifications, a fact which probably saved it from sack during the large-scale invasion of Anatolia by Caliph Harun al-Rashid in the next year. Arab sources report that Harun and his successor al-Ma'mun (r. 813–833) took the city, but this information is a later invention. In 838, however, during the Amorium campaign, the armies of Caliph al-Mu'tasim (r. 833–842) converged and met at the city; abandoned by its inhabitants, Ancyra was razed to the ground, before the Arab armies went on to besiege and destroy Amorium. In 859, Emperor Michael III (r. 842–867) came to the city during a campaign against the Arabs, and ordered its fortifications restored. In 872, the city was menaced, but not taken, by the Paulicians under Chrysocheir. The last Arab raid to reach the city was undertaken in 931, by the Abbasid governor of Tarsus, Thamal al-Dulafi, but again the city was not captured. Ecclesiastical history Early Christian martyrs of Ancyra, about whom little is known, included Proklos and Hilarios, natives of the otherwise unknown nearby village of Kallippi, who suffered repression under the emperor Trajan (98–117). In the 280s, Philumenos, a Christian corn merchant from southern Anatolia, is recorded as having been captured and martyred in Ankara, along with Eustathius. As in other Roman towns, the reign of Diocletian marked the culmination of the persecution of the Christians. In 303, Ancyra was one of the towns where the co-emperors Diocletian and his deputy Galerius launched their anti-Christian persecution. In Ancyra, their first target was the 38-year-old bishop of the town, whose name was Clement. Clement's life describes how he was taken to Rome, then sent back, and forced to undergo many interrogations and hardships before he, his brother, and various companions were put to death. The remains of the church of St. Clement can be found today in a building just off Işıklar Caddesi in the Ulus district. 
Quite possibly this marks the site where Clement was originally buried. Four years later, a doctor of the town named Plato and his brother Antiochus also became celebrated martyrs under Galerius. Theodotus of Ancyra is also venerated as a saint. However, the persecution proved unsuccessful and in 314 Ancyra was the center of an important council of the early church; its 25 disciplinary canons constitute one of the most important documents in the early history of the administration of the Sacrament of Penance. The synod also considered ecclesiastical policy for the reconstruction of the Christian Church after the persecutions, and in particular the treatment of lapsi—Christians who had given in to forced paganism (sacrifices) to avoid martyrdom during these persecutions. Though paganism was probably tottering in Ancyra in Clement's day, it may still have been the majority religion. Twenty years later, Christianity and monotheism had taken its place. Ancyra quickly turned into a Christian city, with a life dominated by monks and priests and theological disputes. The town council or senate gave way to the bishop as the main local figurehead. During the middle of the 4th century, Ancyra was involved in the complex theological disputes over the nature of Christ, and a form of Arianism seems to have originated there. In 362–363, Emperor Julian passed through Ancyra on his way to an ill-fated campaign against the Persians, and according to Christian sources, engaged in a persecution of various holy men. The stone base for a statue, with an inscription describing Julian as "Lord of the whole world from the British Ocean to the barbarian nations", can still be seen, built into the eastern side of the inner circuit of the walls of Ankara Castle. The Column of Julian which was erected in honor of the emperor's visit to the city in 362 still stands today. In 375, Arian bishops met at Ancyra and deposed several bishops, among them St. Gregory of Nyssa. 
In the late 4th century, Ancyra became something of an imperial holiday resort. After Constantinople became the East Roman capital, emperors in the 4th and 5th centuries would retire from the humid summer weather on the Bosporus to the drier mountain atmosphere of Ankara. Theodosius II (408–450) kept his court in Ancyra in the summers. Laws issued in Ancyra testify to the time they spent there. The Metropolis of Ancyra continued to be a residential see of the Eastern Orthodox Church until the 20th century, with about 40,000 faithful, mostly Turkish-speaking, but that situation ended as a result of the 1923 Convention Concerning the Exchange of Greek and Turkish Populations. The earlier Armenian genocide put an end to the residential eparchy of Ancyra of the Armenian Catholic Church, which had been established in 1850. It is also a titular metropolis of the Ecumenical Patriarchate of Constantinople. Both the ancient Byzantine metropolitan archbishopric and the 'modern' Armenian eparchy are now listed by the Catholic Church as titular sees, with separate apostolic successions. Seljuk and Ottoman history After the Battle of Manzikert in 1071, the Seljuk Turks overran much of Anatolia. By 1073, Turkish settlers had reached the vicinity of Ancyra, and the city was captured shortly after, at the latest by the time of the rebellion of Nikephoros Melissenos in 1081. In 1101, when the Crusade under Raymond IV of Toulouse arrived, the city had been under Danishmend control for some time. The Crusaders captured the city and handed it over to the Byzantine emperor Alexios I Komnenos (r. 1081–1118). Byzantine rule did not last long, and the city was captured by the Seljuk Sultanate of Rum at some unknown point; in 1127, it returned to Danishmend control until 1143, when the Seljuks of Rum retook it. After the Battle of Köse Dağ in 1243, in which the Mongols defeated the Seljuks, most of Anatolia became part of the dominion of the Mongols. 
Taking advantage of Seljuk decline, a semi-religious caste of craftsmen and trade people named Ahiler chose Angora as their independent city-state in 1290. Orhan I, the second Bey of the Ottoman Empire, captured the city in 1356. Timur defeated Bayezid I at the Battle of Ankara in 1402 and took the city, but in 1403 Angora was again under Ottoman control. The Levant Company maintained a factory in the town from 1639 to 1768. In the 19th century, its population was estimated at 20,000 to 60,000. It was sacked by Egyptians under Ibrahim Pasha in 1832. From 1867 to 1922, the city served as the capital of the Angora Vilayet, which included most of ancient Galatia. Prior to World War I, the town had a British consulate and a population of around 28,000, roughly of whom were Christian. Turkish republican capital Following the Ottoman defeat in World War I, the Ottoman capital Constantinople (modern Istanbul) and much of Anatolia were occupied by the Allies, who planned to share these lands between Armenia, France, Greece, Italy and the United Kingdom, leaving for the Turks the core piece of land in central Anatolia. In response, the leader of the Turkish nationalist movement, Mustafa Kemal Atatürk, established the headquarters of his resistance movement in Angora in 1920. After the Turkish War of Independence was won and the Treaty of Sèvres was superseded by the Treaty of Lausanne (1923), the Turkish nationalists replaced the Ottoman Empire with the Republic of Turkey on 29 October 1923. A few days earlier, on 13 October 1923, Angora had officially replaced Constantinople as the new Turkish capital city, and Republican officials declared the city's name to be Ankara. After Ankara became the capital of the newly founded Republic of Turkey, new development divided the city into an old section, called Ulus, and a new section, called Yenişehir. Ancient buildings reflecting Roman, Byzantine, and Ottoman history and narrow winding streets mark the old section. 
The new section, now centered on Kızılay Square, has the trappings of a more modern city: wide streets, hotels, theaters, shopping malls, and high-rises. Government offices and foreign embassies are also located in the new section. Ankara has experienced phenomenal growth since it was made Turkey's capital in 1923, when it was "a small town of no importance". In 1924, the year after the government had moved there, Ankara had about 35,000 residents. By 1927 there were 44,553 residents, and by 1950 the population had grown to 286,781. Ankara continued to grow rapidly during the latter half of the 20th century and eventually outranked Izmir as Turkey's second-largest city, after Istanbul. Ankara's urban population reached 4,587,558 in 2014, while the population of Ankara Province reached 5,150,072 in 2015. After 1930, the city became known officially in Western languages as Ankara, and after the late 1930s the public stopped using the name "Angora". The Presidential Palace of Turkey is situated in Ankara and serves as the main residence of the president. Economy and infrastructure The city has exported mohair (from the Angora goat) and Angora wool (from the Angora rabbit) internationally for centuries. In the 19th century, the city also exported substantial amounts of goat and cat skins, gum, wax, honey, berries, and madder root. It was connected to Istanbul by railway before the First World War, continuing to export mohair, wool, berries, and grain. The Central Anatolia Region is one of the primary locations of grape and wine production in Turkey, and Ankara is particularly famous for its Kalecik Karası and Muscat grapes, and for its Kavaklıdere wine, which is produced in the Kavaklıdere neighborhood within the Çankaya district of the city. Ankara is also famous for its pears. 
Another renowned natural product of Ankara is its indigenous type of honey (Ankara Balı), which is known for its light color and is mostly produced by the Atatürk Forest Farm and Zoo in the Gazi district, and by other facilities in the Elmadağ, Çubuk and Beypazarı districts. The Çubuk-1 and Çubuk-2 dams on the Çubuk Brook in Ankara were among the first dams constructed in the Turkish Republic. Ankara is the center of Turkey's state-owned and private defence and aerospace companies, hosting the industrial plants and headquarters of Turkish Aerospace Industries, MKE, ASELSAN, HAVELSAN, ROKETSAN, FNSS, Nurol Makina, and numerous other firms. Exports from these defense and aerospace firms have steadily increased in the past decades. The IDEF in Ankara is one of the largest international expositions of the global arms industry. A number of global automotive companies also have production facilities in Ankara, such as the German bus and truck manufacturer MAN SE. Ankara hosts the OSTIM Industrial Zone, Turkey's largest industrial park. A large share of employment in Ankara is provided by state institutions, such as the ministries, sub-ministries, and other administrative bodies of the Turkish government. There are also many foreign citizens working as diplomats or clerks in the embassies of their respective countries. Geography Ankara and its province are located in the Central Anatolia Region of Turkey. The Çubuk Brook flows through the city center of Ankara. It is connected in the western suburbs of the city to the Ankara River, which is a tributary of the Sakarya River. Climate Ankara has a cold semi-arid climate (Köppen climate classification: BSk). Under the Trewartha climate classification, Ankara has a temperate continental climate (Dc). Due to its elevation and inland location, Ankara has cold and snowy winters, and hot and dry summers. Rainfall occurs mostly during the spring and autumn. 
The city lies in USDA Hardiness zone 7b, and its annual average precipitation is fairly low at ; nevertheless, precipitation can be observed throughout the year. Monthly mean temperatures range from in January to in July, with an annual mean of . Demographics Ankara had a population of 75,000 in 1927. As of 2019, Ankara Province has a population of 5,639,076. When Ankara became the capital of the Republic of Turkey in 1923, it was designated as a planned city for 500,000 future inhabitants. During the 1920s, 1930s and 1940s, the city grew at a planned and orderly pace. However, from the 1950s onward, the city grew much faster than envisioned, because unemployment and poverty forced people to migrate from the countryside into the city in order to seek a better standard of living. As a result, many illegal houses called gecekondu were built around the city, causing the unplanned and uncontrolled urban landscape of Ankara, as not enough planned housing could be built fast enough. Although precariously built, the vast majority of them have electricity, running water and modern household amenities. Nevertheless, many of these gecekondus have been replaced by huge public housing projects in the form of tower blocks such as Elvankent, Eryaman and Güzelkent, and also by mass housing compounds for military and civil service accommodation. Although many gecekondus still remain, they too are gradually being replaced by mass housing compounds, as empty land plots in the city of Ankara for new construction projects are becoming impossible to find. Çorum and Yozgat, which are located in Central Anatolia and whose populations are decreasing, are the provinces with the highest net migration to Ankara. About one third of the Central Anatolia population of 15,608,868 people resides in Ankara. The population of Ankara has a higher education level than the country average. 
According to 2008 data, the literacy rate among those aged 15 and over was 88% of the provincial population (91% for men and 86% for women). This ratio was 83% for Turkey as a whole (88% for men, 79% for women). This difference is particularly evident in the university-educated segment of the population. The ratio of university and high school graduates to total population is 10.6% in Ankara, compared with 5.4% in Turkey. Transportation The Electricity, Gas, Bus General Directorate (EGO) operates the Ankara Metro and other forms of public transportation. Ankara is served by a suburban rail line named Ankaray (A1) and three subway lines (M1, M2, M3) of the Ankara Metro, with about 300,000 total daily commuters, while an additional subway line (M4) is under construction. A long gondola lift with four stations connects the district of Şentepe to the Yenimahalle metro station. The Ankara Central Station is a major rail hub in Turkey. The Turkish State Railways operates passenger train service from Ankara to other major cities, such as Istanbul, Eskişehir, Balıkesir, Kütahya, İzmir, Kayseri, Adana, Kars, Elâzığ, Malatya, Diyarbakır, Karabük, Zonguldak and Sivas. Commuter rail also runs between the stations of Sincan and Kayaş. On 13 March 2009, the new Yüksek Hızlı Tren (YHT) high-speed rail service began operation between Ankara and Eskişehir. On 23 August 2011, another YHT high-speed line commercially started its service between Ankara and Konya. On 25 July 2014, the Ankara–Istanbul high-speed line of YHT entered service. Esenboğa International Airport, located in the north-east of the city, is Ankara's main airport. Ankara public transportation statistics The average amount of time people spend commuting on public transit in Ankara on a weekday is 71 minutes. 17% of public transit passengers ride for more than two hours every day. The average amount of time people wait at a stop or station for public transit is sixteen minutes, while 28% of users wait for over twenty minutes on average every day. 
The average distance people usually ride in a single trip with public transit is , while 27% travel for over in a single direction. Politics Since 8 April 2019, the Mayor of Ankara has been Mansur Yavaş of the Republican People's Party (CHP), who won the mayoral election that year. Ankara is politically a triple battleground between the ruling conservative Justice and Development Party (AKP), the opposition Kemalist center-left Republican People's Party (CHP) and the nationalist far-right Nationalist Movement Party (MHP). The province of Ankara is divided into 25 districts. The CHP's key and almost only political stronghold in Ankara lies within the central area of Çankaya, which is the city's most populous district. While the CHP has always gained between 60 and 70% of the vote in Çankaya since 2002, its political support elsewhere throughout Ankara is minimal. The high population within Çankaya, as well as Yenimahalle to an extent, has allowed the CHP to take overall second place behind the AKP in both local and general elections, with the MHP a close third, despite the fact that the MHP is politically stronger than the CHP in almost every other district. Overall, the AKP enjoys the most support throughout the city. The electorate of Ankara thus tends to vote in favor of the political right, far more so than in the other main cities of Istanbul and İzmir. The 2013–14 protests against the AKP government were particularly strong in Ankara, proving fatal on multiple occasions. The city suffered a series of terrorist attacks in 2015 and 2016, most notably on 10 October 2015; 17 February 2016; 13 March 2016; and 15 July 2016. Melih Gökçek was the Metropolitan Mayor of Ankara between 1994 and 2017. Initially elected in the 1994 local elections, he was re-elected in 1999, 2004 and 2009. In the 2014 local elections, Gökçek stood for a fifth term. 
The MHP's metropolitan mayoral candidate for the 2009 local elections, Mansur Yavaş, stood as the CHP's candidate against Gökçek in 2014. In a heavily controversial election, Gökçek was declared the winner by a margin of just 1% over Yavaş amid allegations of systematic electoral fraud. With the Supreme Electoral Council and courts rejecting his appeals, Yavaş declared his intention to take the irregularities to the European Court of Human Rights. Although Gökçek was inaugurated for a fifth term, most election observers believe that Yavaş was the winner of the election. Gökçek resigned on 28 October 2017 and was replaced by the former mayor of Sincan district, Mustafa Tuna, who was succeeded by Mansur Yavaş of the CHP, the current Mayor of Ankara, elected in 2019. Main sights Ancient/archeological sites Ankara Citadel The foundations of the Ankara castle and citadel were laid by the Galatians on a prominent lava outcrop, and the rest was completed by the Romans. The Byzantines and Seljuks made further restorations and additions. The area around and inside the citadel, being the oldest part of Ankara, contains many fine examples of traditional architecture, as well as recreational areas in which to relax. Many restored traditional Turkish houses inside the citadel area have found new life as restaurants serving local cuisine. The citadel was depicted on various Turkish banknotes during 1927–1952 and 1983–1989. Roman Theater The remains, the stage, and the backstage of the Roman theater can be seen outside the castle. Roman statues that were found here are exhibited in the Museum of Anatolian Civilizations. The seating area is still under excavation. Temple of Augustus and Rome The Augusteum, now known as the Temple of Augustus and Rome, was built between 25 and 20 BC following the conquest of Central Anatolia by the Roman Empire. Ancyra then formed the capital of the new province of Galatia. 
After the death of Augustus in AD 14, a copy of the text of the Res Gestae Divi Augusti (the Monumentum Ancyranum) was inscribed on the interior of the temple's in Latin and a Greek translation on an exterior wall of the . The temple on the ancient acropolis of Ancyra was enlarged in the 2nd century and converted into a church in the 5th century. It is located in the Ulus quarter of the city. It was subsequently publicized by the Austrian ambassador Ogier Ghiselin de Busbecq in the 16th century. Roman Baths The Roman Baths of Ankara have all the typical features of a classical Roman bath complex: a frigidarium (cold room), a tepidarium (warm room) and a caldarium (hot room). The baths were built during the reign of the Roman emperor Caracalla in the early 3rd century to honor Asclepios, the god of medicine. Today, only the basement and first floors remain. It is situated in the Ulus quarter. Roman Road The Roman Road of Ankara, or Cardo Maximus, was found in 1995 by the Turkish archeologist Cevdet Bayburtluoğlu. It is long and wide. Many ancient artifacts were discovered during the excavations along the road, and most of them are displayed at the Museum of Anatolian Civilizations. Column of Julian The Column of Julian or Julianus, now in the Ulus district, was erected in honor of the Roman emperor Julian the Apostate's visit to Ancyra in 362. Mosques Kocatepe Mosque Kocatepe Mosque is the largest mosque in the city. Located in the Kocatepe quarter, it was constructed between 1967 and 1987 in classical Ottoman style with four minarets. Its size and prominent location have made it a landmark for the city. Ahmet Hamdi Akseki Mosque Ahmet Hamdi Akseki Mosque is located near the Presidency of Religious Affairs on the Eskişehir Road. Built in the Turkish neoclassical style, it is one of the largest new mosques in the city, completed and opened in 2013. It can accommodate 6,000 people during general prayers, and up to 30,000 people during funeral prayers. 
The mosque was decorated with Anatolian Seljuk style patterns. Yeni (Cenab Ahmet) Mosque It is the largest Ottoman mosque in Ankara and was built by the famous architect Sinan in the 16th century. The mimber (pulpit) and mihrap (prayer niche) are of white marble, and the mosque itself is of Ankara stone, an example of very fine workmanship. Hacı Bayram Mosque This mosque, in the Ulus quarter next to the Temple of Augustus, was built in the early 15th century in Seljuk style by an unknown architect. It was subsequently restored by the architect Mimar Sinan in the 16th century, with Kütahya tiles being added in the 18th century. The mosque was built in honor of Hacı Bayram-ı Veli two years before his death (1427–28); his tomb is next to the mosque. The usable space inside this mosque is on the first floor and on the second floor. Ahi Elvan Mosque It was founded in the Ulus quarter near the Ankara Citadel and was constructed by the Ahi fraternity during the late 14th and early 15th centuries. The finely carved walnut mimber (pulpit) is of particular interest. Alâeddin Mosque The Alâeddin Mosque is the oldest mosque in Ankara. It has a carved walnut mimber, the inscription on which records that the mosque was completed in early AH 574 (which corresponds to the summer of 1178 AD) and was built by the Seljuk prince Muhiddin Mesud Şah (died 1204), the Bey of Ankara, who was the son of the Anatolian Seljuk sultan Kılıç Arslan II (reigned 1156–1192). Modern monuments Victory Monument The Victory Monument (Turkish: Zafer Anıtı) was crafted by the Austrian sculptor Heinrich Krippel in 1925 and was erected in 1927 at Ulus Square. The monument is made of marble and bronze and features an equestrian statue of Mustafa Kemal Atatürk, who wears a Republic-era modern military uniform, with the rank of Field Marshal. 
Statue of Atatürk Located at Zafer (Victory) Square (Turkish: Zafer Meydanı), the marble and bronze statue was crafted by the renowned Italian sculptor Pietro Canonica in 1927 and depicts a standing Atatürk wearing a Republic-era modern military uniform, with the rank of Field Marshal. Monument to a Secure, Confident Future This monument, located in Güven Park near Kızılay Square, was erected in 1935 and bears Atatürk's advice to his people: "Turk! Be proud, work hard, and believe in yourself." The monument was depicted on the reverse of the Turkish 5 lira banknotes of 1937–1952 and of the 1000 lira banknotes of 1939–1946. Hatti Monument Erected in 1978 at Sıhhiye Square, this impressive monument symbolizes the Hatti Sun Disc (which was later adopted by the Hittites) and commemorates Anatolia's earliest known civilization. The Hatti Sun Disc was used in the previous logo of the Ankara Metropolitan Municipality, and also in the previous logo of the Ministry of Culture & Tourism. Inns Suluhan Suluhan is a historical inn in Ankara, also called the Hasanpaşa Han. It is about southeast of Ulus Square, situated in the Hacıdoğan neighborhood. According to the vakfiye (foundation deed) of the building, the Ottoman-era han was commissioned by Hasan Pasha, a regional beylerbey, and was constructed between 1508 and 1511, during the final years of the reign of Sultan Bayezid II. There are 102 rooms (now shops) which face the two courtyards. In each room there is a window, a niche and a chimney. Çengelhan Rahmi Koç Museum Çengelhan Rahmi Koç Museum is a museum of industrial technology situated in Çengel Han, an Ottoman-era inn which was completed in 1523, during the early years of the reign of Sultan Suleiman the Magnificent. The exhibits include industrial/technological artifacts from the 1850s onwards. 
There are also sections about Mustafa Kemal Atatürk, the founder of modern Turkey; Vehbi Koç, Rahmi Koç's father and one of the first industrialists of Turkey; and the city of Ankara. Shopping Foreign visitors to Ankara usually like to visit the old shops in Çıkrıkçılar Yokuşu (Weavers' Road) near Ulus, where a myriad of goods, ranging from traditional fabrics and hand-woven carpets to leather products, can be found at bargain prices. Bakırcılar Çarşısı (Bazaar of Coppersmiths) is particularly popular, and many interesting items, not just of copper, can be found here, such as jewelry, carpets, costumes, antiques and embroidery. Up the hill toward the castle gate, there are many shops selling a wide and fresh selection of spices, dried fruits, nuts, and other produce. Modern shopping areas are mostly found in Kızılay, or on Tunalı Hilmi Avenue, including the modern Karum mall (named after the ancient Assyrian merchant colonies called kârum that were established in central Anatolia at the beginning of the 2nd millennium BC), which is located towards the end of the avenue; and in Çankaya, the quarter with the highest elevation in the city. Atakule Tower, next to Atrium Mall in Çankaya, has views over Ankara and a revolving restaurant at the top. The symbol of the Armada Shopping Mall is an anchor, and there is a large anchor monument at its entrance, a reference to the ancient Greek name of the city, Ἄγκυρα (Ánkyra), which means anchor. Likewise, the anchor monument is also related to the Spanish name of the mall, Armada, which means naval fleet. As Ankara started expanding westward in the 1970s, several modern, suburbia-style developments and mini-cities began to rise along the western highway, also known as the Eskişehir Road.
The Armada, CEPA and Kentpark malls on the highway, the Galleria, Arcadium and Gordion in Ümitköy, and a huge mall, Real, in Bilkent Center offer North American and European style shopping opportunities (these places can be reached via the Eskişehir Highway). There is also the newly expanded ANKAmall on the outskirts, on the Istanbul Highway, which houses most of the well-known international brands and is the largest mall in the Ankara region. In 2014, a few more shopping malls were opened in Ankara: Next Level and Taurus on Mevlana Boulevard (also known as Konya Road). Culture The arts The Turkish State Opera and Ballet, the national directorate of opera and ballet companies of Turkey, has its headquarters in Ankara and serves the city with three venues: Ankara Opera House (Opera Sahnesi, also known as Büyük Tiyatro) is the largest of the three venues for opera and ballet in Ankara. Music Ankara is host to several classical music orchestras: Presidential Symphony Orchestra (Turkish Presidential Symphony Orchestra) Bilkent Symphony Orchestra (BSO) is a major symphony orchestra of Turkey. Hacettepe Symphony Orchestra was founded in 2003 and is conducted by Erol Erdinç. Başkent Oda Orkestrası (Chamber Orchestra of the Capital) There are four concert halls in the city: CSO Concert Hall Bilkent Concert Hall is a performing arts center in Ankara, located on the Bilkent University campus. MEB Şura Salonu (also known as the Festival Hall), which is noted for its tango performances. Çankaya Çağdaş Sanatlar Merkezi Concert Hall was founded in 1994. The city has been host to several well-established annual theater, music and film festivals: Ankara International Music Festival, a music festival organized in the Turkish capital presenting classical music and ballet programs.
Ankara also has a number of concert venues such as Eskiyeni, IF Performance Hall, Jolly Joker, Kite, Nefes Bar, Noxus Pub, Passage Pub and Route, which host the live performances and events of popular musicians. Theater The Turkish State Theatres also has its head office in Ankara and runs the following stages in the city: 125. Yıl Çayyolu Sahnesi, Büyük Tiyatro, Küçük Tiyatro, Şinasi Sahnesi, Akün Sahnesi, Altındağ Tiyatrosu, İrfan Şahinbaş Atölye Sahnesi, Oda Tiyatrosu, Mahir Canova Sahnesi, Muhsin Ertuğrul Sahnesi. In addition, the city is served by several private theater companies, among which Ankara Sanat Tiyatrosu, which has its own stage in the city center, is a notable example. Museums There are about 50 museums in the city. Museum of Anatolian Civilizations The Museum of Anatolian Civilizations (Anadolu Medeniyetleri Müzesi) is situated at the entrance of the Ankara Castle. It is a restored 15th-century bedesten (covered bazaar) that now houses a collection of Paleolithic, Neolithic, Hatti, Hittite, Phrygian, Urartian and Roman works, as well as a major section dedicated to Lydian treasures. Anıtkabir Anıtkabir is located on an imposing hill, which forms the Anıttepe quarter of the city, where the mausoleum of Mustafa Kemal Atatürk, founder of the Republic of Turkey, stands. Completed in 1953, it is an impressive fusion of ancient and modern architectural styles. An adjacent museum houses a wax statue of Atatürk, his writings, letters and personal items, as well as an exhibition of photographs recording important moments in his life and during the establishment of the Republic. Anıtkabir is open every day, while the adjacent museum is open every day except Mondays. Ankara Ethnography Museum The Ankara Ethnography Museum (Etnoğrafya Müzesi) is located opposite the Ankara Opera House on Talat Paşa Boulevard, in the Ulus district. It holds a fine collection of folkloric items, as well as artifacts from the Seljuk and Ottoman periods.
In front of the museum building there is a marble and bronze equestrian statue of Mustafa Kemal Atatürk (wearing a Republic era modern military uniform, with the rank of Field Marshal), which was crafted in 1927 by the renowned Italian sculptor Pietro Canonica. State Art and Sculpture Museum The State Art and Sculpture Museum (Resim-Heykel Müzesi), which opened to the public in 1980, is close to the Ethnography Museum and houses a rich collection of Turkish art from the late 19th century to the present day. There are also galleries which host guest exhibitions. Cer Modern Cer Modern is the modern-arts museum of Ankara, inaugurated on 1 April 2010. It is situated in the renovated building of the historic TCDD Cer Atölyeleri, formerly a workshop of the Turkish State Railways. The museum incorporates the largest exhibition hall in Turkey and holds periodic exhibitions of modern and contemporary art, as well as hosting other contemporary arts events. War of Independence Museum The War of Independence Museum (Kurtuluş Savaşı Müzesi) is located on Ulus Square. It was originally the first Parliament building (TBMM) of the Republic of Turkey. The War of Independence was planned and directed here, as recorded in various photographs and items presently on exhibition. In another display, wax figures of former presidents of the Republic of Turkey are on exhibit. Mehmet Akif Literature Museum Library The Mehmet Akif Literature Museum Library is an important literary museum and archive, opened in 2011 and dedicated to Mehmet Akif Ersoy (1873–1936), the poet of the Turkish National Anthem. TCDD Open Air Steam Locomotive Museum The TCDD Open Air Steam Locomotive Museum is an open-air museum which traces the history of steam locomotives. Ankara Aviation Museum The Ankara Aviation Museum (Hava Kuvvetleri Müzesi Komutanlığı) is located near the Istanbul Road in Etimesgut. The museum opened to the public in September 1998.
It is home to various missiles, avionics, aviation materials and aircraft that have served in the Turkish Air Force (e.g. combat aircraft such as the F-86 Sabre, F-100 Super Sabre, F-102 Delta Dagger, F-104 Starfighter, F-5 Freedom Fighter and F-4 Phantom, and cargo planes such as the Transall C-160). A Hungarian MiG-21, a Pakistani MiG-19 and a Bulgarian MiG-17 are also on display at the museum. METU Science and Technology Museum The METU Science and Technology Museum (ODTÜ Bilim ve Teknoloji Müzesi) is located inside the Middle East Technical University campus. Sports As in all other cities of Turkey, football is the most popular sport in Ankara. The city has two football clubs competing in the Turkish Süper Lig: Ankaragücü, founded in 1910, is the oldest club in Ankara and is associated with Ankara's military arsenal manufacturing company MKE. They were the Turkish Cup winners in 1972 and 1981. Gençlerbirliği, founded in 1923, are known as the Ankara Gale or the Poppies because of their colors: red and black. They were the Turkish Cup winners in 1987 and 2001. Gençlerbirliği's B team, Hacettepe S.K. (formerly known as Gençlerbirliği OFTAŞ), played in the Süper Lig but currently plays in the TFF Second League. A fourth team, Büyükşehir Belediye Ankaraspor, played in the Süper Lig until 2010, when they were expelled. The club was reconstituted in 2014 as Osmanlıspor but has since returned to its old identity as Ankaraspor. Ankaraspor currently play in the TFF First League at the Osmanlı Stadium in the Sincan district of Yenikent, outside the city center. Keçiörengücü also currently play in the TFF First League. Ankara has a large number of minor teams playing at regional levels.
In the TFF Second League: Mamak FK in Mamak, Ankara Demirspor in Çankaya, Etimesgut Belediyespor in Etimesgut; in the TFF Third League: Çankaya FK in Keçiören and Altındağspor in Altındağ; in the Amateur League: Turanspor in Etimesgut, Türk Telekomspor (owned by the phone company) in Yenimahalle, Çubukspor in Çubuk, and Bağlumspor in Keçiören. In the Turkish Basketball League, Ankara is represented by Türk Telekom, whose home is the Ankara Arena, and CASA TED Kolejliler, whose home is the TOBB Sports Hall. Halkbank Ankara is the leading domestic powerhouse in men's volleyball, having won many championships and cups in the Turkish Men's Volleyball League and even the CEV Cup in 2013. Ankara Buz Pateni Sarayı is where the ice skating and ice hockey competitions take place in the city. Skateboarding, which has been active in the city since the 1980s, has many popular spots; skaters in Ankara usually meet in the park near the Grand National Assembly of Turkey. The THF Sport Hall, built in 2012, hosts the Handball Super League and Women's Handball Super League matches scheduled in Ankara. Parks Ankara has many parks and open spaces, mainly established in the early years of the Republic and well maintained and expanded thereafter. The most important of these parks are: Gençlik Parkı (houses an amusement park with a large pond for rowing), the Botanical Garden, Seğmenler Park, Anayasa Park, Kuğulu Park (famous for the swans received as a gift from the Chinese government), Abdi İpekçi Park, Esertepe Parkı, Güven Park (see above for the monument), Kurtuluş Park (which has an ice-skating rink), Altınpark (also a prominent exposition/fair area), Harikalar Diyarı (claimed to be the biggest park in Europe within city borders) and Göksu Park. Dikmen Vadisi (Dikmen Valley) is a park and recreation area situated in Çankaya district. Gençlik Park was depicted on the reverse of the Turkish 100 lira banknotes of 1952–1976.
Atatürk Forest Farm and Zoo (Atatürk Orman Çiftliği) is an expansive recreational farming area which houses a zoo, several small agricultural farms, greenhouses, restaurants, a dairy farm and a brewery. It is a pleasant place to spend a day with family, whether for having picnics, hiking, biking or simply enjoying good food and nature. There is also an exact replica of the house where Atatürk was born in 1881, in Thessaloniki, Greece. Visitors to the "Çiftlik" (farm), as it is affectionately called by Ankarans, can sample famous products of the farm such as old-fashioned beer and ice cream, fresh dairy products and meat rolls/kebaps made on charcoal, at a traditional restaurant (Merkez Lokantası, Central Restaurant), cafés and other establishments scattered around the farm. Education Universities Ankara is noted, within Turkey, for the multitude of universities it is home to. These include the following, several of them being among the most reputable in the country: Ankara University, Atılım University, Başkent University, Bilkent University, Çankaya University, Gazi University, Gülhane Military Medical Academy, Hacettepe University, İpek University, Middle East Technical University, TED University, TOBB University of Economics and Technology, Turkish Aeronautical Association University, Turkish Military Academy, Turkish National Police Academy, Ufuk University, Yıldırım Beyazıt University. Fauna Angora cat Ankara is home to a world-famous domestic cat breed – the Turkish Angora, called Ankara kedisi (Ankara cat) in Turkish. Turkish Angoras are one of the ancient, naturally occurring cat breeds, having originated in Ankara and its surrounding region in central Anatolia. They mostly have a white, silky, medium to long length coat, no undercoat and a fine bone structure. There seems to be a connection between Angora cats and Persians, and the Turkish Angora is also a distant cousin of the Turkish Van.
Although they are known for their shimmery white coat, there are more than twenty varieties, including black, blue and reddish fur. They come in tabby and tabby-white, along with smoke varieties, and are in every color other than pointed, lavender, and cinnamon (all of which would indicate breeding to an outcross). Eyes may be blue, green, or amber, or even one blue and one amber or green. The W gene, which is responsible for the white coat and blue eyes, is closely related to hearing ability, and the presence of a blue eye can indicate that the cat is deaf on the side where the blue eye is located. However, a great many blue- and odd-eyed white cats have normal hearing, and even deaf cats lead a very normal life if kept indoors. The ears are pointed and large, the eyes are almond shaped, and the head is massive with a two plane profile. Another characteristic is the tail, which is often carried parallel to the back. Angora goat The Angora goat is a breed of domestic goat that originated in Ankara and its surrounding region in central Anatolia. The breed was first mentioned in the time of Moses, roughly 1500 BC. The first Angora goats were brought to Europe by Charles V, Holy Roman Emperor, about 1554, but, like later imports, were not very successful. Angora goats were first introduced in the United States in 1849 by Dr. James P. Davis. Seven adult goats were a gift from Sultan Abdülmecid I in appreciation for his services and advice on the raising of cotton. The fleece taken from an Angora goat is called mohair. A single goat produces between of hair per year. Angoras are shorn twice a year, unlike sheep, which are shorn only once. Angoras have high nutritional requirements due to their rapid hair growth, and a poor quality diet will curtail mohair development. The United States, Turkey, and South Africa are the top producers of mohair. For a long period of time, Angora goats were bred for their white coat.
In 1998, the Colored Angora Goat Breeders Association was set up to promote the breeding of colored Angoras. Today, Angora goats produce white, black (deep black to greys and silver), red (the color fades significantly as the goat gets older), and brownish fiber. Angora goats were depicted on the reverse of the Turkish 50 lira banknotes of 1938–1952. Angora rabbit The Angora rabbit is a variety of domestic rabbit bred for its long, soft hair. The Angora is one of the oldest types of domestic rabbit, originating in Ankara and its surrounding region in central Anatolia, along with the Angora cat and Angora goat. The rabbits were popular pets with French royalty in the mid-18th century, and spread to other parts of Europe by the end of the century. They first appeared in the United States in the early 20th century. They are bred largely for their long Angora wool, which may be removed by shearing, combing, or plucking (gently pulling loose wool). Angoras are bred mainly for their wool because it is silky and soft. They have a humorous appearance, as they oddly resemble a fur ball. Most are calm and docile but should be handled carefully. Grooming is necessary to prevent the fiber from matting and felting on the rabbit. A condition called "wool block" is common in Angora rabbits and should be treated quickly. Sometimes they are shorn in the summer, as the long fur can cause the rabbits to overheat.
International relations Twin towns and sister cities Ankara is twinned with: Seoul, South Korea (since 1971) Islamabad, Pakistan (since 1982) Kuala Lumpur, Malaysia (since 1984) Beijing, China (since 1990) Amman, Jordan (since 1992) Bishkek, Kyrgyzstan (since 1992) Budapest, Hungary (since 1992) Khartoum, Sudan (since 1992) Moscow, Russia (since 1992) Sofia, Bulgaria (since 1992) Havana, Cuba (since 1993) Kyiv, Ukraine (since 1993) Ashgabat, Turkmenistan (since 1994) Kuwait City, Kuwait (since 1994) Sarajevo, Bosnia and Herzegovina (since 1994) Tirana, Albania (since 1995) Tbilisi, Georgia (since 1996) Ufa, Bashkortostan, Russia (since 1997) Alanya, Turkey Bucharest, Romania (since 1998) Hanoi, Vietnam (since 1998) Manama, Bahrain (since 2000) Mogadishu, Somalia (since 2000) Santiago, Chile (since 2000) Nur-Sultan, Kazakhstan (since 2001) Dushanbe, Tajikistan (since 2003) Kabul, Afghanistan (since 2003) Ulan Bator, Mongolia (since 2003) Cairo, Egypt (since 2004) Chișinău, Moldova (since 2004) Sana'a, Yemen (since 2004) Tashkent, Uzbekistan (since 2004) Pristina, Kosovo (since 2005) Kazan, Tatarstan, Russia (since 2005) Kinshasa, Democratic Republic of the Congo (since 2005) Addis Ababa, Ethiopia (since 2006) Minsk, Belarus (since 2007) Zagreb, Croatia (since 2008) Damascus, Syria (since 2010) Bissau, Guinea-Bissau (since 2011) Washington, D.C., USA (since 2011) Bangkok, Thailand (since 2012) Tehran, Iran (since 2013) Doha, Qatar (since 2016) Podgorica, Montenegro (since 7 March 2019) North Nicosia, Northern Cyprus Djibouti City, Djibouti (since 2017) Partner cities Skopje, North Macedonia (since 1995) Vienna, Austria See also Angora cat Angora goat Angora rabbit Ankara Agreement Ankara Arena Ankara Central Station Ankara Esenboğa International Airport Ankara Metro Ankara Province Ankara University ATO Congresium Basil of Ancyra Battle of Ancyra Battle of Ankara Clement of Ancyra Gemellus of Ancyra History of Ankara List of hospitals in Ankara Province List of 
mayors of Ankara List of municipalities in Ankara Province List of districts of Ankara List of people from Ankara List of tallest buildings in Ankara Marcellus of Ancyra Monumentum Ancyranum Nilus of Ancyra Roman Baths of Ankara Synod of Ancyra Theodotus of Ancyra (bishop) Theodotus of Ancyra (martyr) Timeline of Ankara Treaty of Ankara (disambiguation) Victory Monument (Ankara) Notes References Attribution Further reading External links Governorate of Ankara Municipality of Ankara GCatholic – (former and) Latin titular see GCatholic – former and titular Armenian Catholic see Ankara Development Agency Esenboğa International Airport Capitals in Asia Populated places in Ankara Province
https://en.wikipedia.org/wiki/Arabic
Arabic
Arabic is a Semitic language that first emerged in the 1st to 4th centuries CE. It is the lingua franca of the Arab world and the liturgical language of Islam. It is named after the Arabs, a term initially used to describe peoples living in the Arabian Peninsula bounded by eastern Egypt in the west, Mesopotamia in the east, and the Anti-Lebanon mountains and northern Syria in the north, as perceived by ancient Greek geographers. The ISO assigns language codes to 32 varieties of Arabic, including its standard form, Modern Standard Arabic, also referred to as Literary Arabic, which is modernized Classical Arabic. This distinction exists primarily among Western linguists; Arabic speakers themselves generally do not distinguish between Modern Standard Arabic and Classical Arabic, but rather refer to both as "the eloquent Arabic". Arabic is widely taught in schools and universities around the world and is used to varying degrees in workplaces, governments and the media. Arabic, in its Modern Standard Arabic form, is an official language of 26 states and 1 disputed territory, the third most after English and French; it is also the liturgical language of the religion of Islam, since the Quran and the Hadiths were written in Classical Arabic. During the early Middle Ages, Arabic was a major vehicle of culture in the Mediterranean region, especially in science, mathematics and philosophy. As a result, many European languages have borrowed many words from it. Arabic influence, mainly in vocabulary, is seen in European languages—mainly Spanish and to a lesser extent Portuguese, Catalan, and Sicilian—owing to both the proximity of Christian European and Muslim Arabized civilizations and the long-lasting Muslim culture and Arabic language presence, mainly in Southern Iberia, during the Al-Andalus era. The Maltese language is a Semitic language developed from a dialect of Arabic and written in the Latin alphabet.
The Balkan languages, including Greek and Bulgarian, have also acquired a significant number of words of Arabic origin through contact with Ottoman Turkish. Arabic has influenced many other languages around the globe throughout its history, especially the languages of Muslim cultures and of countries that were conquered by Muslims. Some of the most influenced languages are Persian, Turkish, Hindustani (Hindi and Urdu), Kashmiri, Kurdish, Bosnian, Kazakh, Bengali, Malay (Indonesian and Malaysian), Maldivian, Pashto, Punjabi, Albanian, Armenian, Azerbaijani, Sicilian, Spanish, Greek, Bulgarian, Tagalog, Sindhi, Odia, Hebrew and Hausa, as well as some languages in parts of Africa. Conversely, Arabic has borrowed words from other languages, including Aramaic as well as Hebrew, Latin, Greek, Persian and, to a lesser extent, Turkish (due to the Ottoman Empire), English and French (due to their colonization of the Levant), and other Semitic languages such as Abyssinian. Arabic is the liturgical language of 1.9 billion Muslims, and Arabic is one of six official languages of the United Nations. All varieties of Arabic combined are spoken by perhaps as many as 422 million speakers (native and non-native) in the Arab world, making it the fifth most spoken language in the world, and the fourth most used language on the internet in terms of users. In 2011, Bloomberg Businessweek ranked Arabic the fourth most useful language for business, after English, Standard Mandarin Chinese, and French. Arabic is written with the Arabic alphabet, which is an abjad script and is written from right to left, although the spoken varieties are sometimes written in ASCII Latin from left to right with no standardized orthography. Classification Arabic is usually, but not universally, classified as a Central Semitic language.
It is related to languages in other subgroups of the Semitic language group (Northwest Semitic, South Semitic, East Semitic, West Semitic), such as Aramaic, Syriac, Hebrew, Ugaritic, Phoenician, Canaanite, Amorite, Ammonite, Eblaite, epigraphic Ancient North Arabian, epigraphic Ancient South Arabian, Ethiopic, Modern South Arabian, and numerous other dead and modern languages. Linguists still differ as to the best classification of Semitic language sub-groups. The Semitic languages changed a great deal between Proto-Semitic and the emergence of the Central Semitic languages, particularly in grammar. Innovations of the Central Semitic languages—all maintained in Arabic—include: The conversion of the suffix-conjugated stative formation (jalas-) into a past tense. The conversion of the prefix-conjugated preterite-tense formation (yajlis-) into a present tense. The elimination of other prefix-conjugated mood/aspect forms (e.g., a present tense formed by doubling the middle root, a perfect formed by infixing a t after the first root consonant, probably a jussive formed by a stress shift) in favor of new moods formed by endings attached to the prefix-conjugation forms (e.g., -u for indicative, -a for subjunctive, no ending for jussive, -an or -anna for energetic). The development of an internal passive. There are several features which Classical Arabic, the modern Arabic varieties, and the Safaitic and Hismaic inscriptions share which are unattested in any other Central Semitic language variety, including the Dadanitic and Taymanitic languages of the northern Hejaz. These features are evidence of common descent from a hypothetical ancestor, Proto-Arabic.
The following features can be reconstructed with confidence for Proto-Arabic: negative particles; a G-passive participle; a distinctive set of prepositions and adverbs; a subjunctive in -a; t-demonstratives; leveling of the -at allomorph of the feminine ending; a complementizer and subordinator; the use of f- to introduce modal clauses; an independent object pronoun; and vestiges of nunation. History Old Arabic Arabia boasted a wide variety of Semitic languages in antiquity. In the southwest, various Central Semitic languages both belonging to and outside of the Ancient South Arabian family (e.g. Southern Thamudic) were spoken. It is also believed that the ancestors of the Modern South Arabian languages (non-Central Semitic languages) were also spoken in southern Arabia at this time. To the north, in the oases of northern Hejaz, Dadanitic and Taymanitic held some prestige as inscriptional languages. In Najd and parts of western Arabia, a language known to scholars as Thamudic C is attested. In eastern Arabia, inscriptions in a script derived from ASA attest to a language known as Hasaitic. Finally, on the northwestern frontier of Arabia, various languages known to scholars as Thamudic B, Thamudic D, Safaitic, and Hismaic are attested. The last two share important isoglosses with later forms of Arabic, leading scholars to theorize that Safaitic and Hismaic are in fact early forms of Arabic and that they should be considered Old Arabic. Linguists generally believe that "Old Arabic" (a collection of related dialects that constitute the precursor of Arabic) first emerged around the 1st century CE. Previously, the earliest attestation of Old Arabic was thought to be a single 1st century CE inscription in Sabaic script at Qaryat Al-Faw, in southern present-day Saudi Arabia. However, this inscription does not participate in several of the key innovations of the Arabic language group, such as the conversion of Semitic mimation to nunation in the singular.
It is best reassessed as a separate language on the Central Semitic dialect continuum. It was also thought that Old Arabic coexisted alongside, and then gradually displaced, epigraphic Ancient North Arabian (ANA), which was theorized to have been the regional tongue for many centuries. ANA, despite its name, was considered a very distinct language, and mutually unintelligible, from "Arabic". Scholars named its variant dialects after the towns where the inscriptions were discovered (Dadanitic, Taymanitic, Hismaic, Safaitic). However, most arguments for a single ANA language or language family were based on the shape of the definite article, a prefixed h-. It has been argued that the h- is an archaism and not a shared innovation, and thus unsuitable for language classification, rendering the hypothesis of an ANA language family untenable. Safaitic and Hismaic, previously considered ANA, should be considered Old Arabic because they participate in the innovations common to all forms of Arabic. The earliest attestation of continuous Arabic text in an ancestor of the modern Arabic script is three lines of poetry by a man named Garm(')allāhe found in En Avdat, Israel, and dated to around 125 CE. This is followed by the Namara inscription, an epitaph of the Lakhmid king Imru' al-Qays bar 'Amro, dating to 328 CE, found at Namaraa, Syria. From the 4th to the 6th centuries, the Nabataean script evolved into the Arabic script recognizable from the early Islamic era. There are inscriptions in an undotted, 17-letter Arabic script dating to the 6th century CE, found at four locations in Syria (Zabad, Jabal 'Usays, Harran, Umm al-Jimaal). The oldest surviving papyrus in Arabic dates to 643 CE, and it uses dots to produce the modern 28-letter Arabic alphabet. The language of that papyrus and of the Qur'an are referred to by linguists as "Quranic Arabic", as distinct from its codification soon thereafter into "Classical Arabic".
Old Hejazi and Classical Arabic In late pre-Islamic times, a transdialectal and transcommunal variety of Arabic emerged in the Hejaz which continued living its parallel life after literary Arabic had been institutionally standardized in the 2nd and 3rd centuries of the Hijra, most strongly in Judeo-Christian texts, keeping alive ancient features eliminated from the "learned" tradition (Classical Arabic). This variety and both its classicizing and "lay" iterations have been termed Middle Arabic in the past, but they are thought to continue an Old Higazi register. It is clear that the orthography of the Qur'an was not developed for the standardized form of Classical Arabic; rather, it shows the attempt on the part of writers to record an archaic form of Old Higazi. In the late 6th century AD, a relatively uniform intertribal "poetic koine" distinct from the spoken vernaculars developed based on the Bedouin dialects of Najd, probably in connection with the court of al-Ḥīra. During the first Islamic century, the majority of Arabic poets and Arabic-writing persons spoke Arabic as their mother tongue. Their texts, although mainly preserved in far later manuscripts, contain traces of non-standardized Classical Arabic elements in morphology and syntax. Standardization Abu al-Aswad al-Du'ali (c. 603–689) is credited with standardizing Arabic grammar, or an-naḥw ("the way"), and pioneering a system of diacritics to differentiate consonants (nuqat l-i'jām, "pointing for non-Arabs") and indicate vocalization (at-tashkil). Al-Khalil ibn Ahmad al-Farahidi (718–786) compiled the first Arabic dictionary, Kitāb al-'Ayn ("The Book of the Letter ع"), and is credited with establishing the rules of Arabic prosody. Al-Jahiz (776–868) proposed to Al-Akhfash al-Akbar an overhaul of the grammar of Arabic, but it would not come to pass for two centuries. The standardization of Arabic reached completion around the end of the 8th century.
The first comprehensive description of the ʿarabiyya "Arabic", Sībawayhi's al-Kitāb, is based first of all upon a corpus of poetic texts, in addition to Qur'an usage and Bedouin informants whom he considered to be reliable speakers of the ʿarabiyya. Spread Arabic spread with the spread of Islam. Following the early Muslim conquests, Arabic gained vocabulary from Middle Persian and Turkish. In the early Abbasid period, many Classical Greek terms entered Arabic through translations carried out at Baghdad's House of Wisdom. By the 8th century, knowledge of Classical Arabic had become an essential prerequisite for rising into the higher classes throughout the Islamic world, both for Muslims and non-Muslims. For example, Maimonides, the Andalusi Jewish philosopher, authored works in Judeo-Arabic—Arabic written in Hebrew script—including his famous The Guide for the Perplexed (Dalālat al-ḥāʾirīn). Development Ibn Jinni of Mosul, a pioneer in phonology, wrote prolifically in the 10th century on Arabic morphology and phonology in works such as Kitāb Al-Munṣif and Kitāb Al-Muḥtasab. Ibn Mada' of Cordoba (1116–1196) realized the overhaul of Arabic grammar first proposed by Al-Jahiz 200 years prior. The Maghrebi lexicographer Ibn Manzur compiled Lisān al-ʿArab (لسان العرب, "Tongue of Arabs"), a major reference dictionary of Arabic, in 1290. Neo-Arabic Charles Ferguson's koine theory (Ferguson 1959) claims that the modern Arabic dialects collectively descend from a single military koine that sprang up during the Islamic conquests; this view has been challenged in recent times. Ahmad al-Jallad proposes that there were at least two considerably distinct types of Arabic on the eve of the conquests: Northern and Central (Al-Jallad 2009). The modern dialects emerged from a new contact situation produced following the conquests.
Instead of the emergence of a single or multiple koines, the dialects contain several sedimentary layers of borrowed and areal features, which they absorbed at different points in their linguistic histories. According to Versteegh and Bickerton, colloquial Arabic dialects arose from pidginized Arabic formed from contact between Arabs and conquered peoples. Pidginization and subsequent creolization among Arabs and Arabized peoples could explain the relative morphological and phonological simplicity of vernacular Arabic compared to Classical Arabic and MSA. In around the 11th and 12th centuries in al-Andalus, the zajal and muwashah poetry forms developed in the dialectal Arabic of Cordoba and the Maghreb. Nahda The Nahda was a cultural and especially literary renaissance of the 19th century in which writers sought "to fuse Arabic and European forms of expression." According to James L. Gelvin, "Nahda writers attempted to simplify the Arabic language and script so that it might be accessible to a wider audience." In the wake of the industrial revolution and European hegemony and colonialism, pioneering Arabic presses, such as the Amiri Press established by Muhammad Ali (1819), dramatically changed the diffusion and consumption of Arabic literature and publications. Rifa'a al-Tahtawi proposed the establishment of Madrasat al-Alsun (the School of Languages) in 1836 and led a translation campaign that highlighted the need for a lexical injection in Arabic, to suit concepts of the industrial and post-industrial age. In response, a number of Arabic academies modeled after the Académie française were established with the aim of developing standardized additions to the Arabic lexicon to suit these transformations, first in Damascus (1919), then in Cairo (1932), Baghdad (1948), Rabat (1960), Amman (1977), Khartoum (1993), and Tunis (1993). In 1997, a bureau of Arabization standardization was added to the Educational, Cultural, and Scientific Organization of the Arab League.
These academies and organizations have worked toward the Arabization of the sciences, creating terms in Arabic to describe new concepts, toward the standardization of these new terms throughout the Arabic-speaking world, and toward the development of Arabic as a world language. This gave rise to what Western scholars call Modern Standard Arabic. From the 1950s, Arabization became a postcolonial nationalist policy in countries such as Tunisia, Algeria, Morocco, and Sudan. Classical, Modern Standard and spoken Arabic Arabic usually refers to Standard Arabic, which Western linguists divide into Classical Arabic and Modern Standard Arabic. It could also refer to any of a variety of regional vernacular Arabic dialects, which are not necessarily mutually intelligible. Classical Arabic is the language found in the Quran, used from the period of Pre-Islamic Arabia to that of the Abbasid Caliphate. Classical Arabic is prescriptive, according to the syntactic and grammatical norms laid down by classical grammarians (such as Sibawayh) and the vocabulary defined in classical dictionaries (such as the Lisān al-ʻArab). Modern Standard Arabic (MSA) largely follows the grammatical standards of Classical Arabic and uses much of the same vocabulary. However, it has discarded some grammatical constructions and vocabulary that no longer have any counterpart in the spoken varieties and has adopted certain new constructions and vocabulary from the spoken varieties. Much of the new vocabulary is used to denote concepts that have arisen in the industrial and post-industrial era, especially in modern times. Due to its grounding in Classical Arabic, Modern Standard Arabic is removed over a millennium from everyday speech, which is construed as a multitude of dialects of this language. These dialects and Modern Standard Arabic are described by some scholars as not mutually comprehensible. The former are usually acquired in families, while the latter is taught in formal education settings. 
However, there have been studies reporting some degree of comprehension of stories told in the standard variety among preschool-aged children. The relation between Modern Standard Arabic and these dialects is sometimes compared to that of Classical Latin and Vulgar Latin vernaculars (which became Romance languages) in medieval and early modern Europe. This view though does not take into account the widespread use of Modern Standard Arabic as a medium of audiovisual communication in today's mass media—a function Latin has never performed. MSA is the variety used in most current, printed Arabic publications, spoken by some of the Arabic media across North Africa and the Middle East, and understood by most educated Arabic speakers. "Literary Arabic" and "Standard Arabic" ( ) are less strictly defined terms that may refer to Modern Standard Arabic or Classical Arabic. Some of the differences between Classical Arabic (CA) and Modern Standard Arabic (MSA) are as follows: Certain grammatical constructions of CA that have no counterpart in any modern vernacular dialect (e.g., the energetic mood) are almost never used in Modern Standard Arabic. Case distinctions are very rare in Arabic vernaculars. As a result, MSA is generally composed without case distinctions in mind, and the proper cases are added after the fact, when necessary. Because most case endings are noted using final short vowels, which are normally left unwritten in the Arabic script, it is unnecessary to determine the proper case of most words. The practical result of this is that MSA, like English and Standard Chinese, is written in a strongly determined word order and alternative orders that were used in CA for emphasis are rare. In addition, because of the lack of case marking in the spoken varieties, most speakers cannot consistently use the correct endings in extemporaneous speech. As a result, spoken MSA tends to drop or regularize the endings except when reading from a prepared text. 
The numeral system in CA is complex and heavily tied in with the case system. This system is never used in MSA, even in the most formal of circumstances; instead, a significantly simplified system is used, approximating the system of the conservative spoken varieties. MSA uses much Classical vocabulary (e.g., 'to go') that is not present in the spoken varieties, but deletes Classical words that sound obsolete in MSA. In addition, MSA has borrowed or coined many terms for concepts that did not exist in Quranic times, and MSA continues to evolve. Some words have been borrowed from other languages—notice that transliteration mainly indicates spelling and not real pronunciation (e.g., 'film' or 'democracy'). However, the current preference is to avoid direct borrowings, preferring to either use loan translations (e.g., 'branch', also used for the branch of a company or organization; 'wing', is also used for the wing of an airplane, building, air force, etc.), or to coin new words using forms within existing roots ( 'apoptosis', using the root m/w/t 'death' put into the Xth form, or 'university', based on 'to gather, unite'; 'republic', based on 'multitude'). An earlier tendency was to redefine an older word although this has fallen into disuse (e.g., 'telephone' < 'invisible caller (in Sufism)'; 'newspaper' < 'palm-leaf stalk'). Colloquial or dialectal Arabic refers to the many national or regional varieties which constitute the everyday spoken language and evolved from Classical Arabic. Colloquial Arabic has many regional variants; geographically distant varieties usually differ enough to be mutually unintelligible, and some linguists consider them distinct languages. However, research indicates a high degree of mutual intelligibility between closely related Arabic variants for native speakers listening to words, sentences, and texts; and between more distantly related dialects in interactional situations. The varieties are typically unwritten. 
They are often used in informal spoken media, such as soap operas and talk shows, as well as occasionally in certain forms of written media such as poetry and printed advertising. The only variety of modern Arabic to have acquired official language status is Maltese, which is spoken in (predominantly Catholic) Malta and written with the Latin script. It is descended from Classical Arabic through Siculo-Arabic, but is not mutually intelligible with any other variety of Arabic. Most linguists list it as a separate language rather than as a dialect of Arabic. Even during Muhammad's lifetime, there were dialects of spoken Arabic. Muhammad spoke in the dialect of Mecca, in the western Arabian peninsula, and it was in this dialect that the Quran was written down. However, the dialects of the eastern Arabian peninsula were considered the most prestigious at the time, so the language of the Quran was ultimately converted to follow the eastern phonology. It is this phonology that underlies the modern pronunciation of Classical Arabic. The phonological differences between these two dialects account for some of the complexities of Arabic writing, most notably the writing of the glottal stop or hamzah (which was preserved in the eastern dialects but lost in western speech) and the use of (representing a sound preserved in the western dialects but merged with in eastern speech). Language and dialect The sociolinguistic situation of Arabic in modern times provides a prime example of the linguistic phenomenon of diglossia, which is the normal use of two separate varieties of the same language, usually in different social situations. Tawleed is the process of giving a new shade of meaning to an old classical word. For example, al-hatif, lexicographically, means the one whose sound is heard but whose person remains unseen. Now the term al-hatif is used for a telephone.
Therefore, the process of tawleed can express the needs of modern civilization in a manner that would appear to be originally Arabic. In the case of Arabic, educated Arabs of any nationality can be assumed to speak both their school-taught Standard Arabic as well as their native dialects, which depending on the region may be mutually unintelligible. Some of these dialects can be considered to constitute separate languages which may have “sub-dialects” of their own. When educated Arabs of different dialects engage in conversation (for example, a Moroccan speaking with a Lebanese), many speakers code-switch back and forth between the dialectal and standard varieties of the language, sometimes even within the same sentence. Arabic speakers often improve their familiarity with other dialects via music or film. The issue of whether Arabic is one language or many languages is politically charged, in the same way it is for the varieties of Chinese, Hindi and Urdu, Serbian and Croatian, Scots and English, etc. In contrast to speakers of Hindi and Urdu who claim they cannot understand each other even when they can, speakers of the varieties of Arabic will claim they can all understand each other even when they cannot. While there is a minimum level of comprehension between all Arabic dialects, this level can increase or decrease based on geographic proximity: for example, Levantine and Gulf speakers understand each other much better than they do speakers from the Maghreb. The issue of diglossia between spoken and written language is a significant complicating factor: A single written form, significantly different from any of the spoken varieties learned natively, unites a number of sometimes divergent spoken forms. For political reasons, Arabs mostly assert that they all speak a single language, despite significant issues of mutual incomprehensibility among differing spoken versions. 
From a linguistic standpoint, it is often said that the various spoken varieties of Arabic differ among each other collectively about as much as the Romance languages. This is an apt comparison in a number of ways. The period of divergence from a single spoken form is similar—perhaps 1500 years for Arabic, 2000 years for the Romance languages. Also, while it is comprehensible to people from the Maghreb, a linguistically innovative variety such as Moroccan Arabic is essentially incomprehensible to Arabs from the Mashriq, much as French is incomprehensible to Spanish or Italian speakers but relatively easily learned by them. This suggests that the spoken varieties may linguistically be considered separate languages. Influence of Arabic on other languages The influence of Arabic has been most important in Islamic countries, because it is the language of the Islamic sacred book, the Quran. Arabic is also an important source of vocabulary for languages such as Amharic, Azerbaijani, Baluchi, Bengali, Berber, Bosnian, Chaldean, Chechen, Chittagonian, Croatian, Dagestani, Dhivehi, English, German, Gujarati, Hausa, Hindi, Kazakh, Kurdish, Kutchi, Kyrgyz, Malay (Malaysian and Indonesian), Pashto, Persian, Punjabi, Rohingya, Romance languages (French, Catalan, Italian, Portuguese, Sicilian, Spanish, etc.), Saraiki, Sindhi, Somali, Sylheti, Swahili, Tagalog, Tigrinya, Turkish, Turkmen, Urdu, Uyghur, Uzbek, Visayan and Wolof, as well as other languages in countries where these languages are spoken. Modern Hebrew has also been influenced by Arabic, especially during the process of revival, as MSA was used as a source for modern Hebrew vocabulary and roots, as well as much of Modern Hebrew's slang. The Education Minister of France Jean-Michel Blanquer has emphasized the learning and usage of Arabic in French schools. In addition, English has many Arabic loanwords, some directly, but most via other Mediterranean languages.
Examples of such words include admiral, adobe, alchemy, alcohol, algebra, algorithm, alkaline, almanac, amber, arsenal, assassin, candy, carat, cipher, coffee, cotton, ghoul, hazard, jar, kismet, lemon, loofah, magazine, mattress, sherbet, sofa, sumac, tariff, and zenith. Other languages such as Maltese and Kinubi derive ultimately from Arabic, rather than merely borrowing vocabulary or grammatical rules. Terms borrowed range from religious terminology (like Berber taẓallit, "prayer", from salat), academic terms (like Uyghur mentiq, "logic"), and economic items (like English coffee) to placeholders (like Spanish fulano, "so-and-so"), everyday terms (like Hindustani lekin, "but", or Spanish taza and French tasse, meaning "cup"), and expressions (like Catalan a betzef, "galore, in quantity"). Most Berber varieties (such as Kabyle), along with Swahili, borrow some numbers from Arabic. Most Islamic religious terms are direct borrowings from Arabic, such as salat, "prayer", and imam, "prayer leader." In languages not directly in contact with the Arab world, Arabic loanwords are often transferred indirectly via other languages rather than being transferred directly from Arabic. For example, most Arabic loanwords in Hindustani and Turkish entered through Persian. Older Arabic loanwords in Hausa were borrowed from Kanuri. Most Arabic loanwords in Yoruba entered through Hausa. Arabic words also made their way into several West African languages as Islam spread across the Sahara. Variants of Arabic words such as kitāb ("book") have spread to the languages of African groups who had no direct contact with Arab traders. Since, throughout the Islamic world, Arabic occupied a position similar to that of Latin in Europe, many of the Arabic concepts in the fields of science, philosophy, commerce, etc. were coined from Arabic roots by non-native Arabic speakers, notably by Aramaic and Persian translators, and then found their way into other languages.
This process of using Arabic roots, especially in Kurdish and Persian, to translate foreign concepts continued through to the 18th and 19th centuries, when swaths of Arab-inhabited lands were under Ottoman rule. Influence of other languages on Arabic The most important sources of borrowings into (pre-Islamic) Arabic are from the related (Semitic) languages Aramaic, which used to be the principal, international language of communication throughout the ancient Near and Middle East, and Ethiopic. In addition, many cultural, religious and political terms have entered Arabic from Iranian languages, notably Middle Persian, Parthian, and (Classical) Persian, and Hellenistic Greek (kīmiyāʼ has as its origin the Greek khymia, meaning in that language the melting of metals; see Roger Dachez, Histoire de la Médecine de l'Antiquité au XXe siècle, Tallandier, 2008, p. 251), alembic (distiller) from ambix (cup), almanac (climate) from almenichiakon (calendar). (For the origin of the last three borrowed words, see Alfred-Louis de Prémare, Foundations of Islam, Seuil, L'Univers Historique, 2002.) Some Arabic borrowings from Semitic or Persian languages are, as presented in De Prémare's above-cited book: madīnah/medina (مدينة, city or city square), a word of Aramaic origin “madenta” (in which it means "a state"). jazīrah (جزيرة), as in the well-known form الجزيرة "Al-Jazeera," means "island" and has its origin in the Syriac ܓܙܝܪܗ gazarta. lāzaward (لازورد) is taken from Persian لاژورد lājvard, the name of a blue stone, lapis lazuli. This word was borrowed into several European languages to mean (light) blue – azure in English, azur in French and azul in Portuguese and Spanish. A comprehensive overview of the influence of other languages on Arabic is found in Lucas & Manfredi (2020). Arabic alphabet and nationalism There have been many instances of national movements to convert Arabic script into Latin script or to Romanize the language.
Currently, the only language derived from Classical Arabic to use Latin script is Maltese. Lebanon The Beirut newspaper La Syrie pushed for the change from Arabic script to Latin letters in 1922. The major head of this movement was Louis Massignon, a French Orientalist, who brought his concern before the Arabic Language Academy in Damascus in 1928. Massignon's attempt at Romanization failed as the Academy and population viewed the proposal as an attempt from the Western world to take over their country. Sa'id Afghani, a member of the Academy, mentioned that the movement to Romanize the script was a Zionist plan to dominate Lebanon. Said Akl created a Latin-based alphabet for Lebanese and used it in a newspaper he founded, Lebnaan, as well as in some books he wrote. Egypt After the period of colonialism in Egypt, Egyptians were looking for a way to reclaim and re-emphasize Egyptian culture. As a result, some Egyptians pushed for an Egyptianization of the Arabic language in which the formal Arabic and the colloquial Arabic would be combined into one language and the Latin alphabet would be used. There was also the idea of finding a way to use Hieroglyphics instead of the Latin alphabet, but this was seen as too complicated to use. A scholar, Salama Musa agreed with the idea of applying a Latin alphabet to Arabic, as he believed that would allow Egypt to have a closer relationship with the West. He also believed that Latin script was key to the success of Egypt as it would allow for more advances in science and technology. This change in alphabet, he believed, would solve the problems inherent with Arabic, such as a lack of written vowels and difficulties writing foreign words that made it difficult for non-native speakers to learn. Ahmad Lutfi As Sayid and Muhammad Azmi, two Egyptian intellectuals, agreed with Musa and supported the push for Romanization. 
The idea that Romanization was necessary for modernization and growth in Egypt continued with Abd Al-Aziz Fahmi in 1944. He was the chairman of the Writing and Grammar Committee for the Arabic Language Academy of Cairo. However, this effort failed as the Egyptian people felt a strong cultural tie to the Arabic alphabet. In particular, the older Egyptian generations believed that the Arabic alphabet had strong connections to Arab values and history, due to the long history of the Arabic alphabet (Shrivtiel, 189) in Muslim societies. The language of the Quran and its influence on poetry The Quran introduced a new way of writing to the world. People began studying and applying the unique styles they learned from the Quran not only to their own writing, but also to their culture. Writers studied the unique structure and format of the Quran in order to identify and apply the figurative devices and their impact on the reader. Quran's figurative devices The Quran inspired musicality in poetry through the internal rhythm of the verses. The arrangement of words, how certain sounds create harmony, and the agreement of rhymes create the sense of rhythm within each verse. At times, the chapters of the Quran only have the rhythm in common. The repetition in the Quran introduced the true power and impact repetition can have in poetry. The repetition of certain words and phrases made them appear more firm and explicit in the Quran. The Quran uses constant metaphors of blindness and deafness to imply unbelief. Metaphors were not a new concept to poetry; however, the strength of extended metaphors was. The explicit imagery in the Quran inspired many poets to include and focus on the feature in their own work. The poet ibn al-Mu'tazz wrote a book regarding the figures of speech inspired by his study of the Quran. The poet Badr Shakir al-Sayyab expresses his political opinion in his work through imagery inspired by the harsher imagery used in the Quran.
The Quran uses figurative devices in order to express the meaning in the most beautiful form possible. The study of the pauses in the Quran, as well as other rhetoric, allows it to be approached in multiple ways. Structure Although the Quran is known for its fluency and harmony, its structure can best be described as not always inherently chronological; it can also flow thematically (the chapters of the Quran have segments that flow in chronological order, but segments can transition into other segments related not in chronology but in topic). The suras, also known as chapters of the Quran, are not placed in chronological order. The only constant in their structure is that the longest are placed first and shorter ones follow. The topics discussed in the chapters can also have no direct relation to each other (as seen in many suras) and can share in their sense of rhyme. The Quran introduces to poetry the idea of abandoning order and scattering narratives throughout the text. Harmony is also present in the sound of the Quran. The elongations and accents present in the Quran create a harmonious flow within the writing. The unique sound of the Quran when recited, due to the accents, creates a deeper level of understanding through a deeper emotional connection. The Quran is written in a language that is simple and understandable by people. The simplicity of the writing inspired later poets to write in a clearer and more direct style. The words of the Quran, although unchanged, are to this day understandable and frequently used in both formal and informal Arabic. The simplicity of the language makes memorizing and reciting the Quran a slightly easier task. Culture and the Quran The writer al-Khattabi explains how culture is a required element to create a sense of art in work as well as understand it.
He believes that the fluency and harmony which the Quran possesses are not the only elements that make it beautiful and create a bond between the reader and the text. While a lot of poetry was deemed comparable to the Quran, in that it was equal to or better than the composition of the Quran, a debate arose that such statements are not possible because humans are incapable of composing work comparable to the Quran. Because the structure of the Quran made it difficult for a clear timeline to be seen, Hadith were the main source of chronological order. The Hadith were passed down from generation to generation and this tradition became a large resource for understanding the context. Poetry after the Quran began possessing this element of tradition by including ambiguity and background information required to understand the meaning. After the Quran came down to the people, the tradition of memorizing the verses emerged. It is believed that the greater the amount of the Quran memorized, the greater the faith. As technology improved over time, hearing recitations of the Quran became more available, as did more tools to help memorize the verses. The tradition of love poetry served as a symbolic representation of a Muslim's desire for a closer contact with their Lord. While the influence of the Quran on Arabic poetry is explained and defended by numerous writers, some writers such as Al-Baqillani believe that poetry and the Quran are in no conceivable way related due to the uniqueness of the Quran. Poetry's imperfections prove his point that it cannot be compared with the fluency the Quran holds. Arabic and Islam Classical Arabic is the language of poetry and literature (including news); it is also mainly the language of the Quran. Classical Arabic is closely associated with the religion of Islam because the Quran was written in it.
Most of the world's Muslims do not speak Classical Arabic as their native language, but many can read the Quranic script and recite the Quran. Among non-Arab Muslims, translations of the Quran are most often accompanied by the original text. At present, Modern Standard Arabic (MSA) is also used in modernized versions of literary forms of the Quran. Some Muslims present a monogenesis of languages and claim that the Arabic language was the language revealed by God for the benefit of mankind and the original language as a prototype system of symbolic communication, based upon its system of triconsonantal roots, spoken by man from which all other languages were derived, having first been corrupted. Judaism has a similar account with the Tower of Babel. Dialects and descendants Colloquial Arabic is a collective term for the spoken dialects of Arabic used throughout the Arab world, which differ radically from the literary language. The main dialectal division is between the varieties within and outside of the Arabian peninsula, followed by that between sedentary varieties and the much more conservative Bedouin varieties. All the varieties outside of the Arabian peninsula (which include the large majority of speakers) have many features in common with each other that are not found in Classical Arabic. This has led researchers to postulate the existence of a prestige koine dialect in the one or two centuries immediately following the Arab conquest, whose features eventually spread to all newly conquered areas. These features are present to varying degrees inside the Arabian peninsula. Generally, the Arabian peninsula varieties have much more diversity than the non-peninsula varieties, but these have been understudied. Within the non-peninsula varieties, the largest difference is between the non-Egyptian North African dialects (especially Moroccan Arabic) and the others. 
Moroccan Arabic in particular is hardly comprehensible to Arabic speakers east of Libya (although the converse is not true, in part due to the popularity of Egyptian films and other media). One factor in the differentiation of the dialects is influence from the languages previously spoken in the areas, which have typically provided a significant number of new words and have sometimes also influenced pronunciation or word order; however, a much more significant factor for most dialects is, as among Romance languages, retention (or change of meaning) of different classical forms. Thus Iraqi aku, Levantine fīh and North African kayən all mean 'there is', and all come from Classical Arabic forms (yakūn, fīhi, kā'in respectively), but now sound very different. Examples Transcription is a broad IPA transcription, so minor differences were ignored for easier comparison. Also, the pronunciation of Modern Standard Arabic differs significantly from region to region. Koiné According to Charles A. Ferguson, the following are some of the characteristic features of the koiné that underlies all the modern dialects outside the Arabian peninsula. Although many other features are common to most or all of these varieties, Ferguson believes that these features in particular are unlikely to have evolved independently more than once or twice and together suggest the existence of the koine: Loss of the dual number except on nouns, with consistent plural agreement (cf. feminine singular agreement in plural inanimates). Change of a to i in many affixes (e.g., non-past-tense prefixes ti- yi- ni-; wi- 'and'; il- 'the'; feminine -it in the construct state). Loss of third-weak verbs ending in w (which merge with verbs ending in y). Reformation of geminate verbs, e.g., 'I untied' → . Conversion of separate words lī 'to me', laka 'to you', etc. into indirect-object clitic suffixes. 
Certain changes in the cardinal number system, e.g., 'five days' → , where certain words have a special plural with prefixed t. Loss of the feminine elative (comparative). Adjective plurals of the form 'big' → . Change of nisba suffix > . Certain lexical items, e.g., 'bring' < 'come with'; 'see'; 'what' (or similar) < 'which thing'; (relative pronoun). Merger of and . Dialect groups Egyptian Arabic is spoken by around 53 million people in Egypt (55 million worldwide). It is one of the most understood varieties of Arabic, due in large part to the widespread distribution of Egyptian films and television shows throughout the Arabic-speaking world. Levantine Arabic includes North Levantine Arabic, South Levantine Arabic and Cypriot Arabic. It is spoken by about 21 million people in Lebanon, Syria, Jordan, Palestine, Israel, Cyprus and Turkey. Lebanese Arabic is a variety of Levantine Arabic spoken primarily in Lebanon. Jordanian Arabic is a continuum of mutually intelligible varieties of Levantine Arabic spoken by the population of the Kingdom of Jordan. Palestinian Arabic is the name of several dialects of the subgroup of Levantine Arabic spoken by the Palestinians in Palestine, by Arab citizens of Israel and in most Palestinian populations around the world. Samaritan Arabic is spoken by only several hundred people in the Nablus region. Cypriot Maronite Arabic is spoken in Cyprus. Maghrebi Arabic, also called "Darija", is spoken by about 70 million people in Morocco, Algeria, Tunisia and Libya. It also forms the basis of Maltese via the extinct Sicilian Arabic dialect. Maghrebi Arabic is very hard to understand for Arabic speakers from the Mashriq or Mesopotamia, the most comprehensible being Libyan Arabic and the most difficult Moroccan Arabic. Others, such as Algerian Arabic, can be considered in between the two in terms of difficulty. Libyan Arabic is spoken in Libya and neighboring countries.
Tunisian Arabic is spoken in Tunisia and north-eastern Algeria. Algerian Arabic is spoken in Algeria. Judeo-Algerian Arabic was spoken by Jews in Algeria until 1962. Moroccan Arabic is spoken in Morocco. Hassaniya Arabic (3 million speakers) is spoken in Mauritania, Western Sahara, some parts of the Azawad in northern Mali, southern Morocco and south-western Algeria. Andalusian Arabic was spoken in Spain until the 16th century. Siculo-Arabic (Sicilian Arabic) was spoken in Sicily and Malta between the end of the 9th century and the end of the 12th century and eventually evolved into the Maltese language. Maltese, spoken on the island of Malta, is the only fully separate standardized language to have originated from an Arabic dialect (the extinct Siculo-Arabic dialect), with independent literary norms. Maltese has evolved independently of Modern Standard Arabic and its varieties into a standardized language over the past 800 years in a gradual process of Latinisation. Maltese is therefore considered an exceptional descendant of Arabic that has no diglossic relationship with Standard Arabic or Classical Arabic. Maltese also differs from Arabic and other Semitic languages in that its morphology has been deeply influenced by the Romance languages Italian and Sicilian. It is also the only Semitic language written in the Latin script. In terms of basic everyday language, speakers of Maltese are reported to be able to understand less than a third of what is said to them in Tunisian Arabic, which is related to Siculo-Arabic, whereas speakers of Tunisian are able to understand about 40% of what is said to them in Maltese. This asymmetric intelligibility is considerably lower than the mutual intelligibility found between Maghrebi Arabic dialects. Maltese has its own dialects, with urban varieties of Maltese being closer to Standard Maltese than rural varieties.
Mesopotamian Arabic, spoken by about 41.2 million people in Iraq (where it is called "Aamiyah"), eastern Syria, southwestern Iran (Khuzestan) and in southeastern Turkey (in the eastern Mediterranean, Southeastern Anatolia Region). North Mesopotamian Arabic is spoken north of the Hamrin Mountains in Iraq, in western Iran, northern Syria, and in southeastern Turkey (in the eastern Mediterranean Region, Southeastern Anatolia Region, and southern Eastern Anatolia Region). Judeo-Mesopotamian Arabic, also known as Iraqi Judeo-Arabic and Yahudic, is a variety of Arabic spoken by Iraqi Jews of Mosul. Baghdad Arabic is the Arabic dialect spoken in Baghdad and the surrounding cities, and is a subvariety of Mesopotamian Arabic. Baghdad Jewish Arabic is the dialect spoken by the Iraqi Jews of Baghdad. South Mesopotamian Arabic (Basrawi dialect) is the dialect spoken in southern Iraq, in cities such as Basra, Dhi Qar and Najaf. Khuzestani Arabic is the dialect spoken in the Iranian province of Khuzestan. This dialect is a mix of Southern Mesopotamian Arabic and Gulf Arabic. Khorasani Arabic spoken in the Iranian province of Khorasan. Kuwaiti Arabic is a Gulf Arabic dialect spoken in Kuwait. Sudanese Arabic is spoken by 17 million people in Sudan and some parts of southern Egypt. Sudanese Arabic is quite distinct from the dialect of its neighbor to the north; rather, the Sudanese have a dialect similar to the Hejazi dialect. Juba Arabic spoken in South Sudan and southern Sudan. Gulf Arabic, spoken by around four million people, predominantly in Kuwait, Bahrain, some parts of Oman, eastern Saudi Arabia coastal areas and some parts of the UAE and Qatar. Also spoken in Iran's Bushehr and Hormozgan provinces. Although Gulf Arabic is spoken in Qatar, most Qatari citizens speak Najdi Arabic (Bedawi). Omani Arabic, distinct from the Gulf Arabic of Eastern Arabia and Bahrain, spoken in Central Oman. With recent oil wealth and mobility, it has spread to other parts of the Sultanate.
Hadhrami Arabic, spoken by around 8 million people, predominantly in Hadhramaut, and in parts of the Arabian Peninsula, South and Southeast Asia, and East Africa by Hadhrami descendants. Yemeni Arabic spoken in Yemen and southern Saudi Arabia by 15 million people. Similar to Gulf Arabic. Najdi Arabic, spoken by around 10 million people, mainly in Najd, central and northern Saudi Arabia. Most Qatari citizens speak Najdi Arabic (Bedawi). Hejazi Arabic (6 million speakers), spoken in Hejaz, western Saudi Arabia. Saharan Arabic spoken in some parts of Algeria, Niger and Mali. Baharna Arabic (600,000 speakers), spoken by Bahrani Shiʻah in Bahrain and Qatif; the dialect exhibits many significant differences from Gulf Arabic. It is also spoken to a lesser extent in Oman. Judeo-Arabic dialects – these are the dialects spoken by the Jews that had lived or continue to live in the Arab World. As Jewish migration to Israel took hold, the language did not thrive and is now considered endangered. So-called Qəltu Arabic. Chadian Arabic, spoken in Chad, Sudan, some parts of South Sudan, the Central African Republic, Niger, Nigeria and Cameroon. Central Asian Arabic, spoken in Uzbekistan, Tajikistan and Afghanistan, is highly endangered. Shirvani Arabic, spoken in Azerbaijan and Dagestan until the 1930s, now extinct. Phonology History Of the 29 Proto-Semitic consonants, only one has been lost: , which merged with , while became (see Semitic languages). Various other consonants have changed their sound too, but have remained distinct. An original lenited to , and – consistently attested in pre-Islamic Greek transcription of Arabic languages – became palatalized to or by the time of the Quran and , , or after the early Muslim conquests and in MSA (see Arabic phonology#Local variations for more detail). An original voiceless alveolar lateral fricative became .
Its emphatic counterpart was considered by Arabs to be the most unusual sound in Arabic (hence Classical Arabic's appellation or "language of the "); for most modern dialects, it has become an emphatic stop with loss of the laterality or with complete loss of any pharyngealization or velarization, . (The classical pronunciation of pharyngealization still occurs in the Mehri language, and the similar sound without velarization, , exists in other Modern South Arabian languages.) Other changes may also have happened. Classical Arabic pronunciation is not thoroughly recorded, and different reconstructions of the sound system of Proto-Semitic propose different phonetic values. One example is the emphatic consonants, which are pharyngealized in modern pronunciations but may have been velarized in the eighth century and glottalized in Proto-Semitic. Reduction of and between vowels occurs in a number of circumstances and is responsible for much of the complexity of third-weak ("defective") verbs. Early Akkadian transcriptions of Arabic names show that this reduction had not yet occurred as of the early part of the 1st millennium BC. The Classical Arabic language as recorded was a poetic koine that reflected a consciously archaizing dialect, chosen based on the tribes of the western part of the Arabian Peninsula, who spoke the most conservative variants of Arabic. Even at the time of Muhammad and before, other dialects existed with many more changes, including the loss of most glottal stops, the loss of case endings, the reduction of the diphthongs and into monophthongs , etc. Most of these changes are present in most or all modern varieties of Arabic. An interesting feature of the writing system of the Quran (and hence of Classical Arabic) is that it contains certain features of Muhammad's native dialect of Mecca, corrected through diacritics into the forms of standard Classical Arabic.
Among these features visible under the corrections are the loss of the glottal stop and a differing development of the reduction of certain final sequences containing : Evidently, final became as in the Classical language, but final became a different sound, possibly (rather than again in the Classical language). This is the apparent source of the alif maqṣūrah 'restricted alif' where a final is reconstructed: a letter that would normally indicate or some similar high-vowel sound, but is taken in this context to be a logical variant of alif and represent the sound . Although Classical Arabic was a unitary language and is now used in the Quran, its pronunciation varies somewhat from country to country and from region to region within a country. It is influenced by colloquial dialects. Literary Arabic The "colloquial" spoken dialects of Arabic are learned at home and constitute the native languages of Arabic speakers. "Formal" Modern Standard Arabic is learned at school; although many speakers have a native-like command of the language, it is technically not the native language of any speakers. Both varieties can be written and spoken, although the colloquial varieties are rarely written down and the formal variety is spoken mostly in formal circumstances, e.g., in radio and TV broadcasts, formal lectures, parliamentary discussions and to some extent between speakers of different colloquial dialects. Even when the literary language is spoken, however, it is normally only spoken in its pure form when reading a prepared text aloud or in communication between speakers of different colloquial dialects. When speaking extemporaneously (i.e. making up the language on the spot, as in a normal discussion among people), speakers tend to deviate somewhat from the strict literary language in the direction of the colloquial varieties.
In fact, there is a continuous range of "in-between" spoken varieties: from nearly pure Modern Standard Arabic (MSA), to a form that still uses MSA grammar and vocabulary but with significant colloquial influence, to a form of the colloquial language that imports a number of words and grammatical constructions from MSA, to a form that is close to pure colloquial but with the "rough edges" (the most noticeably "vulgar" or non-Classical aspects) smoothed out, to pure colloquial. The particular variant (or register) used depends on the social class and education level of the speakers involved and the level of formality of the speech situation. Often it will vary within a single encounter, e.g., moving from nearly pure MSA to a more mixed language in the process of a radio interview, as the interviewee becomes more comfortable with the interviewer. This type of variation is characteristic of the diglossia that exists throughout the Arabic-speaking world. Although Modern Standard Arabic (MSA) is a unitary language, its pronunciation varies somewhat from country to country and from region to region within a country. The variation in individual "accents" of MSA speakers tends to mirror corresponding variations in the colloquial speech of the speakers in question, but with the distinguishing characteristics moderated somewhat. It is important in descriptions of "Arabic" phonology to distinguish between the pronunciation of a given colloquial (spoken) dialect and the pronunciation of MSA by the same speakers. Although they are related, they are not the same. For example, the phoneme that derives from Classical Arabic has many different pronunciations in the modern spoken varieties, e.g., including the proposed original . Speakers whose native variety has either or will use the same pronunciation when speaking MSA. Even speakers from Cairo, whose native Egyptian Arabic has , normally use when speaking MSA.
The of Persian Gulf speakers is the only variant pronunciation not found in MSA; is used instead, though speakers may use [j] in MSA for ease of pronunciation. Another reason for differing pronunciations is the influence of colloquial dialects. The differentiation of pronunciation among colloquial dialects reflects the influence of other languages previously spoken, and some still spoken, in the regions, such as Coptic in Egypt; Berber, Punic, or Phoenician in North Africa; Himyaritic, Modern South Arabian, and Old South Arabian in Yemen and Oman; and Aramaic and Canaanite languages (including Phoenician) in the Levant and Mesopotamia. Another example: many colloquial varieties are known for a type of vowel harmony in which the presence of an "emphatic consonant" triggers backed allophones of nearby vowels (especially of the low vowels , which are backed to in these circumstances and very often fronted to in all other circumstances). In many spoken varieties, the backed or "emphatic" vowel allophones spread a fair distance in both directions from the triggering consonant; in some varieties (most notably Egyptian Arabic), the "emphatic" allophones spread throughout the entire word, usually including prefixes and suffixes, even at a distance of several syllables from the triggering consonant. Speakers of colloquial varieties with this vowel harmony tend to introduce it into their MSA pronunciation as well, but usually with a lesser degree of spreading than in the colloquial varieties. (For example, speakers of colloquial varieties with extremely long-distance harmony may allow a moderate, but not extreme, amount of spreading of the harmonic allophones in their MSA speech, while speakers of colloquial varieties with moderate-distance harmony may only harmonize immediately adjacent vowels in MSA.) Vowels Modern Standard Arabic has six pure vowels (while most modern dialects have eight pure vowels, including the long vowels ), with short and corresponding long vowels .
There are also two diphthongs: and . The pronunciation of the vowels differs from speaker to speaker, in a way that tends to reflect the pronunciation of the corresponding colloquial variety. Nonetheless, there are some common trends. Most noticeable is the differing pronunciation of and , which tend towards fronted , or in most situations, but a back in the neighborhood of emphatic consonants. Some accents and dialects, such as those of the Hejaz region, have an open or a central in all situations. The vowel varies towards too. In short, Arabic has only three short vowel phonemes, so those phonemes can have a very wide range of allophones. The vowels and are often affected somewhat in emphatic neighborhoods as well, with generally more back or centralized allophones, but the differences are smaller than for the low vowels. The pronunciation of short and tends towards and , respectively, in many dialects. The definition of both "emphatic" and "neighborhood" varies in ways that reflect (to some extent) corresponding variations in the spoken dialects. Generally, the consonants triggering "emphatic" allophones are the pharyngealized consonants ; ; and , if not followed immediately by . Frequently, the fricatives also trigger emphatic allophones; occasionally also the pharyngeal consonants (the former more than the latter). Many dialects have multiple emphatic allophones of each vowel, depending on the particular nearby consonants. In most MSA accents, emphatic coloring of vowels is limited to vowels immediately adjacent to a triggering consonant, although in some it spreads a bit farther: e.g., 'time'; 'homeland'; 'downtown' (sometimes or similar). In a non-emphatic environment, the vowel in the diphthong is pronounced or : hence 'sword' but 'summer'. However, in accents with no emphatic allophones of (e.g., in the Hejaz), the pronunciation or occurs in all situations.
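The narrow, adjacency-only emphatic coloring described for most MSA accents can be sketched as a simple rule. The notation here is an illustrative assumption, not a standard transliteration: emphatic consonants are written as capital letters and the backed allophone of the low vowel is written "A".

```python
# Sketch of adjacency-limited emphatic vowel coloring, as in most MSA accents.
# Assumed notation: emphatic consonants as capitals (T, D, S, Z); the backed
# allophone of "a" written as "A".

EMPHATICS = set("TDSZ")

def color_vowels(word):
    """Replace "a" with its backed allophone "A" when it is immediately
    adjacent to an emphatic consonant; other segments are left alone."""
    chars = list(word)
    for i, ch in enumerate(chars):
        if ch != "a":
            continue
        left = chars[i - 1] if i > 0 else ""
        right = chars[i + 1] if i + 1 < len(chars) else ""
        if left in EMPHATICS or right in EMPHATICS:
            chars[i] = "A"
    return "".join(chars)

print(color_vowels("waTan"))   # both vowels flank the emphatic T -> "wATAn"
print(color_vowels("kataba"))  # no emphatic consonant -> unchanged
```

A dialect with word-level spreading (such as Egyptian Arabic, as described above) would instead back every low vowel in any word containing an emphatic consonant, rather than only the adjacent ones.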
Consonants The phoneme is represented by the Arabic letter () and has many standard pronunciations. is characteristic of northern Algeria, Iraq, and most of the Arabian Peninsula, but with an allophonic in some positions; occurs in most of the Levant and most of North Africa; and is used in most of Egypt and some regions in Yemen and Oman. Generally this corresponds with the pronunciation in the colloquial dialects. In some regions in Sudan and Yemen, as well as in some Sudanese and Yemeni dialects, it may be either or , representing the original pronunciation of Classical Arabic. Foreign words containing may be transcribed with , , , , , or , mainly depending on the regional spoken variety of Arabic or the commonly diacriticized Arabic letter. In northern Egypt, where the Arabic letter () is normally pronounced , a separate phoneme , which may be transcribed with , occurs in a small number of mostly non-Arabic loanwords, e.g., 'jacket'. () can be pronounced as . In some parts of the Maghreb it can also be pronounced as . and () are velar, post-velar, or uvular. In many varieties, () are epiglottal in Western Asia. is pronounced as velarized in الله , the name of God, i.e. Allah, when the word follows a, ā, u or ū (after i or ī it is unvelarized: bismi l–lāh ). Some speakers velarize other occurrences of in MSA, in imitation of their spoken dialects. The emphatic consonant was actually pronounced , or possibly —either way, a highly unusual sound. The medieval Arabs actually termed their language 'the language of the Ḍād' (the name of the letter used for this sound), since they thought the sound was unique to their language. (In fact, it also exists in a few other minority Semitic languages, e.g., Mehri.) Arabic has consonants traditionally termed "emphatic" (), which exhibit simultaneous pharyngealization as well as varying degrees of velarization (depending on the region), so they may be written with the "Velarized or pharyngealized" diacritic () as: .
This simultaneous articulation is described as "Retracted Tongue Root" by phonologists. In some transcription systems, emphasis is shown by capitalizing the letter, for example, is written ; in others the letter is underlined or has a dot below it, for example, . Vowels and consonants can be phonologically short or long. Long (geminate) consonants are normally written doubled in Latin transcription (i.e. bb, dd, etc.), reflecting the presence of the Arabic diacritic mark , which indicates doubled consonants. In actual pronunciation, doubled consonants are held twice as long as short consonants. This consonant lengthening is phonemically contrastive: 'he accepted' vs. 'he kissed'. Syllable structure Arabic has two kinds of syllables: open syllables (CV and CVV) and closed syllables (CVC, CVVC and CVCC). The syllable types with two morae (units of time), i.e. CVC and CVV, are termed heavy syllables, while those with three morae, i.e. CVVC and CVCC, are superheavy syllables. Superheavy syllables in Classical Arabic occur in only two places: at the end of the sentence (due to pausal pronunciation) and in words such as 'hot', 'stuff, substance', 'they disputed with each other', where a long occurs before two identical consonants (a former short vowel between the consonants has been lost). (In less formal pronunciations of Modern Standard Arabic, superheavy syllables are common at the end of words or before clitic suffixes such as 'us, our', due to the deletion of final short vowels.) In surface pronunciation, every vowel must be preceded by a consonant (which may include the glottal stop ). There are no cases of hiatus within a word (where two vowels occur next to each other without an intervening consonant). Some words do have an underlying vowel at the beginning, such as the definite article al- or words such as 'he bought', 'meeting'.
When actually pronounced, one of three things happens: If the word occurs after another word ending in a consonant, there is a smooth transition from final consonant to initial vowel, e.g., 'meeting' . If the word occurs after another word ending in a vowel, the initial vowel of the word is elided, e.g., 'house of the director' . If the word occurs at the beginning of an utterance, a glottal stop is added onto the beginning, e.g., 'The house is ...' . Stress Word stress is not phonemically contrastive in Standard Arabic. It bears a strong relationship to vowel length. The basic rules for Modern Standard Arabic are: A final vowel, long or short, may not be stressed. Only one of the last three syllables may be stressed. Given this restriction, the last heavy syllable (containing a long vowel or ending in a consonant) is stressed, if it is not the final syllable. If the final syllable is superheavy and closed (of the form CVVC or CVCC) it receives stress. If no syllable is heavy or superheavy, the first possible syllable (i.e. third from the end) is stressed. As a special exception, in Form VII and VIII verb forms stress may not be on the first syllable, despite the above rules: Hence 'he subscribed' (whether or not the final short vowel is pronounced), 'he subscribes' (whether or not the final short vowel is pronounced), 'he should subscribe (juss.)'. Likewise Form VIII 'he bought', 'he buys'. Examples: 'book', 'writer', 'desk', 'desks', 'library' (but 'library' in short pronunciation), (Modern Standard Arabic) 'they wrote' = (dialect), (Modern Standard Arabic) 'they wrote it' = (dialect), (Modern Standard Arabic) 'they (dual, fem) wrote', (Modern Standard Arabic) 'I wrote' = (short form or dialect). Doubled consonants count as two consonants: 'magazine', 'place'. These rules may result in differently stressed syllables when final case endings are pronounced, vs.
the normal situation where they are not pronounced, as in the above example of 'library' in full pronunciation, but 'library' in short pronunciation. The restriction on final long vowels does not apply to the spoken dialects, where original final long vowels have been shortened and secondary final long vowels have arisen from the loss of original final -hu/hi. Some dialects have different stress rules. In the Cairo (Egyptian Arabic) dialect a heavy syllable may not carry stress more than two syllables from the end of a word, hence 'school', 'Cairo'. This also affects the way that Modern Standard Arabic is pronounced in Egypt. In the Arabic of Sanaa, stress is often retracted: 'two houses', 'their table', 'desks', 'sometimes', 'their school'. (In this dialect, only syllables with long vowels or diphthongs are considered heavy; in a two-syllable word, the final syllable can be stressed only if the preceding syllable is light; and in longer words, the final syllable cannot be stressed.) Levels of pronunciation The final short vowels (e.g., the case endings -a -i -u and mood endings -u -a) are often not pronounced, despite forming part of the formal paradigm of nouns and verbs. The following levels of pronunciation exist: Full pronunciation with pausa This is the most formal level actually used in speech. All endings are pronounced as written, except at the end of an utterance, where the following changes occur: Final short vowels are not pronounced. (But possibly an exception is made for feminine plural -na and shortened vowels in the jussive/imperative of defective verbs, e.g., irmi! 'throw!'.) The entire indefinite noun endings -in and -un (with nunation) are left off. The ending -an is left off of nouns preceded by a tāʾ marbūṭah ة (i.e. the -t in the ending -at- that typically marks feminine nouns), but pronounced as -ā in other nouns (hence its writing in this fashion in the Arabic script).
The tāʼ marbūṭah itself (typically of feminine nouns) is pronounced as h. (At least, this is the case in extremely formal pronunciation, e.g., some Quranic recitations. In practice, this h is usually omitted.) Formal short pronunciation This is a formal level of pronunciation sometimes seen. It is somewhat like pronouncing all words as if they were in pausal position (with influence from the colloquial varieties). The following changes occur: Most final short vowels are not pronounced. However, the following short vowels are pronounced: feminine plural -na shortened vowels in the jussive/imperative of defective verbs, e.g., irmi! 'throw!' second-person singular feminine past-tense -ti and likewise anti 'you (fem. sg.)' sometimes, first-person singular past-tense -tu sometimes, second-person masculine past-tense -ta and likewise anta 'you (masc. sg.)' final -a in certain short words, e.g., laysa 'is not', sawfa (future-tense marker) The nunation endings -an -in -un are not pronounced. However, they are pronounced in adverbial accusative formations, e.g., تَقْرِيبًا 'almost, approximately', عَادَةً 'usually'. The tāʾ marbūṭah ending ة is unpronounced, except in construct state nouns, where it sounds as t (and in adverbial accusative constructions, e.g., عَادَةً 'usually', where the entire -tan is pronounced). The masculine singular nisbah ending is actually pronounced and is unstressed (but plural and feminine singular forms, i.e. when followed by a suffix, still sound as ). Full endings (including case endings) occur when a clitic object or possessive suffix is added (e.g., 'us/our'). Informal short pronunciation This is the pronunciation used by speakers of Modern Standard Arabic in extemporaneous speech, i.e. when producing new sentences rather than simply reading a prepared text. It is similar to formal short pronunciation except that the rules for dropping final vowels apply even when a clitic suffix is added. 
Basically, short-vowel case and mood endings are never pronounced and certain other changes occur that echo the corresponding colloquial pronunciations. Specifically: All the rules for formal short pronunciation apply, except as follows. The past tense singular endings written formally as -tu -ta -ti are pronounced -t -t -ti. But masculine is pronounced in full. Unlike in formal short pronunciation, the rules for dropping or modifying final endings are also applied when a clitic object or possessive suffix is added (e.g., 'us/our'). If this produces a sequence of three consonants, then one of the following happens, depending on the speaker's native colloquial variety: A short vowel (e.g., -i- or -ǝ-) is consistently added, either between the second and third or the first and second consonants. Or, a short vowel is added only if an otherwise unpronounceable sequence occurs, typically due to a violation of the sonority hierarchy (e.g., -rtn- is pronounced as a three-consonant cluster, but -trn- needs to be broken up). Or, a short vowel is never added, but consonants like r l m n occurring between two other consonants will be pronounced as a syllabic consonant (as in the English words "butter bottle bottom button"). When a doubled consonant occurs before another consonant (or finally), it is often shortened to a single consonant rather than a vowel added. (However, Moroccan Arabic never shortens doubled consonants or inserts short vowels to break up clusters, instead tolerating arbitrary-length series of arbitrary consonants and hence Moroccan Arabic speakers are likely to follow the same rules in their pronunciation of Modern Standard Arabic.) The clitic suffixes themselves tend also to be changed, in a way that avoids many possible occurrences of three-consonant clusters. In particular, -ka -ki -hu generally sound as -ak -ik -uh. Final long vowels are often shortened, merging with any short vowels that remain. 
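The first cluster-repair strategy described above (a short vowel consistently inserted to break up a three-consonant sequence) can be sketched as follows. The transliteration, the choice of insertion point (between the second and third consonants) and the example word are illustrative assumptions, since dialects differ on both the vowel used and its placement.

```python
# Sketch of epenthesis into three-consonant clusters, one of the strategies
# described above for informal short pronunciation. Assumed notation: a, i, u
# are the short vowels of a Latin transliteration.

VOWELS = set("aiu")

def break_clusters(word, epenthetic="i"):
    """Insert an epenthetic vowel between the second and third consonants
    of every three-consonant sequence."""
    out = []
    run = 0  # length of the current consonant run
    for ch in word:
        if ch in VOWELS:
            run = 0
        else:
            run += 1
            if run == 3:
                out.append(epenthetic)  # break the cluster before this consonant
                run = 1
        out.append(ch)
    return "".join(out)

# Hypothetical example: katab(t) + clitic -ha yields a -bth- cluster.
print(break_clusters("katabtha"))  # -> "katabtiha"
print(break_clusters("kataba"))    # no cluster -> unchanged
```

A dialect of the second type described above would apply the insertion only when the cluster violates the sonority hierarchy, and one of the third type would instead syllabify a sonorant consonant.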
Depending on the level of formality, the speaker's education level, etc., various grammatical changes may occur in ways that echo the colloquial variants: Any remaining case endings (e.g. masculine plural nominative -ūn vs. oblique -īn) will be leveled, with the oblique form used everywhere. (However, in words like 'father' and 'brother' with special long-vowel case endings in the construct state, the nominative is used everywhere, hence 'father of', 'brother of'.) Feminine plural endings in verbs and clitic suffixes will often drop out, with the masculine plural endings used instead. If the speaker's native variety has feminine plural endings, they may be preserved, but will often be modified in the direction of the forms used in the speaker's native variety, e.g. -an instead of -na. Dual endings will often drop out except on nouns, where they are then used only for emphasis (similar to their use in the colloquial varieties); elsewhere, the plural endings are used (or feminine singular, if appropriate). Colloquial varieties Vowels As mentioned above, many spoken dialects have a process of emphasis spreading, where the "emphasis" (pharyngealization) of emphatic consonants spreads forward and back through adjacent syllables, pharyngealizing all nearby consonants and triggering the back allophone in all nearby low vowels. The extent of emphasis spreading varies. For example, in Moroccan Arabic, it spreads as far as the first full vowel (i.e. sound derived from a long vowel or diphthong) on either side; in many Levantine dialects, it spreads indefinitely, but is blocked by any or ; while in Egyptian Arabic, it usually spreads throughout the entire word, including prefixes and suffixes. In Moroccan Arabic, also have emphatic allophones and , respectively. Unstressed short vowels, especially , are deleted in many contexts. Many sporadic examples of short vowel change have occurred (especially → and interchange ↔).
Most Levantine dialects merge short /i u/ into in most contexts (all except directly before a single final consonant). In Moroccan Arabic, on the other hand, short triggers labialization of nearby consonants (especially velar consonants and uvular consonants), and then short /a i u/ all merge into , which is deleted in many contexts. (The labialization plus is sometimes interpreted as an underlying phoneme .) This essentially causes the wholesale loss of the short-long vowel distinction, with the original long vowels remaining as half-long , phonemically , which are used to represent both short and long vowels in borrowings from Literary Arabic. Most spoken dialects have monophthongized original to in most circumstances, including adjacent to emphatic consonants, while keeping them as the original diphthongs in others, e.g. . In most of the Moroccan, Algerian and Tunisian (except Sahel and Southeastern) Arabic dialects, they have subsequently merged into original . Consonants In most dialects, there may be more or fewer phonemes than those listed in the chart above. For example, is considered a native phoneme in most Arabic dialects except in Levantine dialects like Syrian or Lebanese, where is pronounced and is pronounced . or () is considered a native phoneme in most dialects except in Egyptian and a number of Yemeni and Omani dialects, where is pronounced . or and are distinguished in the dialects of Egypt, Sudan, the Levant and the Hejaz, but they have merged as in most dialects of the Arabian Peninsula, Iraq and Tunisia, and have merged as in Morocco and Algeria. The usage of non-native and depends on the usage of each speaker, but they might be more prevalent in some dialects than others. Iraqi and Gulf Arabic also have the sound and write it and with the Persian letters and , as in "plum"; "truffle". Early in the expansion of Arabic, the separate emphatic phonemes and coalesced into a single phoneme .
Many dialects (such as Egyptian, Levantine, and much of the Maghreb) subsequently lost fricatives, converting into . Most dialects borrow "learned" words from the Standard language using the same pronunciation as for inherited words, but some dialects without interdental fricatives (particularly in Egypt and the Levant) render original in borrowed words as . Another key distinguishing mark of Arabic dialects is how they render the original velar and uvular plosives , (Proto-Semitic ), and : retains its original pronunciation in widely scattered regions such as Yemen, Morocco, and urban areas of the Maghreb. It is pronounced as a glottal stop in several prestige dialects, such as those spoken in Cairo, Beirut and Damascus. But it is rendered as a voiced velar plosive in the Persian Gulf, Upper Egypt, parts of the Maghreb, and less urban parts of the Levant (e.g. Jordan). In Iraqi Arabic it sometimes retains its original pronunciation and is sometimes rendered as a voiced velar plosive, depending on the word. Some traditionally Christian villages in rural areas of the Levant render the sound as , as do Shiʻi Bahrainis. In some Gulf dialects, it is palatalized to or . It is pronounced as a voiced uvular constrictive in Sudanese Arabic. Many dialects with a modified pronunciation for maintain the pronunciation in certain words (often with religious or educational overtones) borrowed from the Classical language. is pronounced as an affricate in Iraq and much of the Arabian Peninsula, but is pronounced in most of northern Egypt and parts of Yemen and Oman, in Morocco, Tunisia, and the Levant, and , in most words, in much of the Persian Gulf. usually retains its original pronunciation but is palatalized to in many words in Israel and the Palestinian Territories, Iraq, and countries in the eastern part of the Arabian Peninsula. Often a distinction is made between the suffixes ('you', masc.) and ('you', fem.), which become and , respectively.
In Sanaa, Omani, and Bahrani Arabic, is pronounced . Pharyngealization of the emphatic consonants tends to weaken in many of the spoken varieties, and to spread from emphatic consonants to nearby sounds. In addition, the "emphatic" allophone automatically triggers pharyngealization of adjacent sounds in many dialects. As a result, it may be difficult or impossible to determine whether a given coronal consonant is phonemically emphatic or not, especially in dialects with long-distance emphasis spreading. (A notable exception is the sounds vs. in Moroccan Arabic, because the former is pronounced as an affricate but the latter is not.) Grammar Literary Arabic As in other Semitic languages, Arabic has a complex and unusual morphology (i.e. method of constructing words from a basic root). Arabic has a nonconcatenative "root-and-pattern" morphology: a root consists of a set of bare consonants (usually three), which are fitted into a discontinuous pattern to form words. For example, the word for 'I wrote' is constructed by combining the root 'write' with the pattern 'I Xed' to form 'I wrote'. Other verbs meaning 'I Xed' will typically have the same pattern but with different consonants, e.g. 'I read', 'I ate', 'I went', although other patterns are possible (e.g. 'I drank', 'I said', 'I spoke', where the subpattern used to signal the past tense may change but the suffix is always used).
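The root-and-pattern interleaving described above can be sketched in a few lines. The digit notation (1, 2, 3 marking the slots of the three root consonants) and the Latin transliterations of the patterns are illustrative assumptions, not a standard convention.

```python
# Sketch of nonconcatenative root-and-pattern word formation.
# Assumed notation: digits 1-3 in a pattern mark root-consonant slots;
# all other characters are the pattern's own vowels and affixes.

def apply_pattern(root, pattern):
    """Interleave the root consonants into the slots of a discontinuous
    pattern to form a word."""
    return "".join(root[int(ch) - 1] if ch.isdigit() else ch
                   for ch in pattern)

root = ("k", "t", "b")  # the root k-t-b 'write'
print(apply_pattern(root, "1a2a3tu"))  # "katabtu" 'I wrote'
print(apply_pattern(root, "ma12a3"))   # "maktab" 'desk, office'
print(apply_pattern(root, "1aa2i3"))   # "kaatib" 'writer'
```

Swapping in another root, e.g. ("d", "r", "s"), with the same patterns illustrates how one pattern carries the same grammatical meaning across different roots.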
From a single root , numerous words can be formed by applying different patterns:

كَتَبْتُ 'I wrote'
كَتَّبْتُ 'I had (something) written'
كَاتَبْتُ 'I corresponded (with someone)'
أَكْتَبْتُ 'I dictated'
اِكْتَتَبْتُ 'I subscribed'
تَكَاتَبْنَا 'we corresponded with each other'
أَكْتُبُ 'I write'
أُكَتِّبُ 'I have (something) written'
أُكَاتِبُ 'I correspond (with someone)'
أُكْتِبُ 'I dictate'
أَكْتَتِبُ 'I subscribe'
نَتَكَتِبُ 'we correspond with each other'
كُتِبَ 'it was written'
أُكْتِبَ 'it was dictated'
مَكْتُوبٌ 'written'
مُكْتَبٌ 'dictated'
كِتَابٌ 'book'
كُتُبٌ 'books'
كَاتِبٌ 'writer'
كُتَّابٌ 'writers'
مَكْتَبٌ 'desk, office'
مَكْتَبَةٌ 'library, bookshop'
etc.

Nouns and adjectives

Nouns in Literary Arabic have three grammatical cases (nominative, accusative, and genitive [also used when the noun is governed by a preposition]); three numbers (singular, dual and plural); two genders (masculine and feminine); and three "states" (indefinite, definite, and construct). The cases of singular nouns (other than those that end in long ā) are indicated by suffixed short vowels (/-u/ for nominative, /-a/ for accusative, /-i/ for genitive). The feminine singular is often marked by ـَة /-at/, which is pronounced as /-ah/ before a pause. Plural is indicated either through endings (the sound plural) or internal modification (the broken plural). Definite nouns include all proper nouns, all nouns in "construct state" and all nouns which are prefixed by the definite article اَلْـ /al-/. Indefinite singular nouns (other than those that end in long ā) add a final /-n/ to the case-marking vowels, giving /-un/, /-an/ or /-in/ (which is also referred to as nunation or tanwīn). Adjectives in Literary Arabic are marked for case, number, gender and state, as for nouns. However, the plural of all non-human nouns is always combined with a singular feminine adjective, which takes the ـَة /-at/ suffix.

Pronouns

Pronouns in Literary Arabic are marked for person, number and gender.
There are two varieties, independent pronouns and enclitics. Enclitic pronouns are attached to the end of a verb, noun or preposition and indicate verbal and prepositional objects or possession of nouns. The first-person singular pronoun has a different enclitic form used for verbs (ـنِي /-nī/) and for nouns or prepositions (ـِي /-ī/ after consonants, ـيَ /-ya/ after vowels). Nouns, verbs, pronouns and adjectives agree with each other in all respects. However, non-human plural nouns are grammatically considered to be feminine singular. Furthermore, a verb in a verb-initial sentence is marked as singular regardless of its semantic number when the subject of the verb is explicitly mentioned as a noun. Numerals between three and ten show "chiasmic" agreement, in that grammatically masculine numerals have feminine marking and vice versa.

Verbs

Verbs in Literary Arabic are marked for person (first, second, or third), gender, and number. They are conjugated in two major paradigms (past and non-past); two voices (active and passive); and six moods (indicative, imperative, subjunctive, jussive, shorter energetic and longer energetic); the fifth and sixth moods, the energetics, exist only in Classical Arabic but not in MSA. There are also two participles (active and passive) and a verbal noun, but no infinitive. The past and non-past paradigms are sometimes also termed perfective and imperfective, indicating the fact that they actually represent a combination of tense and aspect. The moods other than the indicative occur only in the non-past, and the future tense is signaled by prefixing سَـ or سَوْفَ onto the non-past. The past and non-past differ in the form of the stem (e.g., past كَتَبـ vs.
non-past ـكْتُبـ ), and also use completely different sets of affixes for indicating person, number and gender: In the past, the person, number and gender are fused into a single suffixal morpheme, while in the non-past, a combination of prefixes (primarily encoding person) and suffixes (primarily encoding gender and number) are used. The passive voice uses the same person/number/gender affixes but changes the vowels of the stem. The following shows a paradigm of a regular Arabic verb, كَتَبَ 'to write'. In Modern Standard, the energetic mood (in either long or short form, which have the same meaning) is almost never used.

Derivation

Like other Semitic languages, and unlike most other languages, Arabic makes much more use of nonconcatenative morphology (applying many templates to roots) to derive words than of adding prefixes or suffixes to words. For verbs, a given root can occur in many different derived verb stems (of which there are about fifteen), each with one or more characteristic meanings and each with its own templates for the past and non-past stems, active and passive participles, and verbal noun. These are referred to by Western scholars as "Form I", "Form II", and so on through "Form XV" (although Forms XI to XV are rare). These stems encode grammatical functions such as the causative, intensive and reflexive. Stems sharing the same root consonants represent separate verbs, albeit often semantically related, and each is the basis for its own conjugational paradigm. As a result, these derived stems are part of the system of derivational morphology, not part of the inflectional system. Examples of the different verbs formed from the root كتب 'write' (using حمر 'red' for Form IX, which is limited to colors and physical defects): Form II is sometimes used to create transitive denominative verbs (verbs built from nouns); Form V is the equivalent used for intransitive denominatives.
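The derived-stem templates can be illustrated with a similar sketch. The digit-placeholder patterns and the `apply_pattern` helper below are an invented notation for this example; the transliterated past-tense (3rd person masculine singular) templates for Forms I–IV are shown with the root k-t-b 'write', glossed as in the word list above.

```python
def apply_pattern(root, pattern):
    """Replace digit placeholders 1-3 in a pattern with the root consonants."""
    return "".join(root[int(ch) - 1] if ch.isdigit() else ch
                   for ch in pattern)

# Past-tense (3rd masc. sing.) templates for the first four derived stems,
# transliterated (ā = long a, ʼ = glottal stop):
FORMS = [
    ("Form I",   "1a2a3a"),   # kataba  'he wrote'
    ("Form II",  "1a22a3a"),  # kattaba 'he had (something) written'
    ("Form III", "1ā2a3a"),   # kātaba  'he corresponded (with someone)'
    ("Form IV",  "ʼa12a3a"),  # ʼaktaba 'he dictated'
]

root = ("k", "t", "b")  # the root 'write'
for name, pattern in FORMS:
    print(f"{name}: {apply_pattern(root, pattern)}")
```

Note how each Form is a distinct template over the same three radicals — gemination of the middle radical in Form II, a long vowel after the first radical in Form III, a prefixed glottal stop in Form IV — which is why each derived stem anchors its own conjugational paradigm.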
The associated participles and verbal nouns of a verb are the primary means of forming new lexical nouns in Arabic. This is similar to the process by which, for example, the English gerund "meeting" (similar to a verbal noun) has turned into a noun referring to a particular type of social, often work-related event where people gather together to have a "discussion" (another lexicalized verbal noun). Another fairly common means of forming nouns is through one of a limited number of patterns that can be applied directly to roots, such as the "nouns of location" in ma- (e.g. 'desk, office' < 'write', 'kitchen' < 'cook').

The only three genuine suffixes are as follows:

The feminine suffix -ah, which variously derives terms for women from related terms for men, or more generally terms along the same lines as the corresponding masculine, e.g. 'library' (also a writing-related place, but different from , as above).
The nisbah suffix -iyy-. This suffix is extremely productive, and forms adjectives meaning "related to X". It corresponds to English adjectives in -ic, -al, -an, -y, -ist, etc.
The feminine nisbah suffix -iyyah. This is formed by adding the feminine suffix -ah onto nisbah adjectives to form abstract nouns.

For example, from the basic root 'share' can be derived the Form VIII verb 'to cooperate, participate', and in turn its verbal noun 'cooperation, participation' can be formed. This in turn can be made into a nisbah adjective 'socialist', from which an abstract noun 'socialism' can be derived. Other recent formations are 'republic' (lit. "public-ness", < 'multitude, general public'), and the Gaddafi-specific variation 'people's republic' (lit. "masses-ness", < 'the masses', pl. of , as above).

Colloquial varieties

The spoken dialects have lost the case distinctions and make only limited use of the dual (it occurs only on nouns and its use is no longer required in all circumstances).
They have lost the mood distinctions other than imperative, but many have since gained new moods through the use of prefixes (most often /bi-/ for indicative vs. unmarked subjunctive). They have also mostly lost the indefinite "nunation" and the internal passive. The following is an example of a regular verb paradigm in Egyptian Arabic.

Writing system

The Arabic alphabet derives from the Aramaic through Nabatean, to which it bears a loose resemblance like that of Coptic or Cyrillic scripts to Greek script. Traditionally, there were several differences between the Western (North African) and Middle Eastern versions of the alphabet—in particular, the faʼ had a dot underneath and qaf a single dot above in the Maghreb, and the order of the letters was slightly different (at least when they were used as numerals). However, the old Maghrebi variant has been abandoned except for calligraphic purposes in the Maghreb itself, and remains in use mainly in the Quranic schools (zaouias) of West Africa. Arabic, like all other Semitic languages (except for the Latin-written Maltese, and the languages with the Ge'ez script), is written from right to left. There are several styles of scripts such as thuluth, muhaqqaq, tawqi, rayhan and notably naskh, which is used in print and by computers, and ruqʻah, which is commonly used for correspondence. Originally Arabic was made up of only rasm, without diacritical marks. Later, diacritical points (which in Arabic are referred to as nuqaṯ) were added (which allowed readers to distinguish between letters such as b, t, th, n and y). Finally, signs known as Tashkil were used for short vowels known as harakat and other uses such as final postnasalized or long vowels.

Calligraphy

After Khalil ibn Ahmad al Farahidi finally fixed the Arabic script around 786, many styles were developed, both for the writing down of the Quran and other books, and for inscriptions on monuments as decoration.
Arabic calligraphy has not fallen out of use as calligraphy has in the Western world, and is still considered by Arabs as a major art form; calligraphers are held in great esteem. Being cursive by nature, unlike the Latin script, Arabic script is used to write down a verse of the Quran, a hadith, or simply a proverb. The composition is often abstract, but sometimes the writing is shaped into an actual form such as that of an animal. One of the current masters of the genre is Hassan Massoudy. In modern times the intrinsically calligraphic nature of the written Arabic form is haunted by the thought that a typographic approach to the language, necessary for digitized unification, will not always accurately maintain meanings conveyed through calligraphy.

Romanization

There are a number of different standards for the romanization of Arabic, i.e. methods of accurately and efficiently representing Arabic with the Latin script. There are various conflicting motivations involved, which leads to multiple systems. Some are interested in transliteration, i.e. representing the spelling of Arabic, while others focus on transcription, i.e. representing the pronunciation of Arabic. (They differ in that, for example, the same letter is used to represent both a consonant, as in "you" or "yet", and a vowel, as in "me" or "eat".) Some systems, e.g. for scholarly use, are intended to accurately and unambiguously represent the phonemes of Arabic, generally making the phonetics more explicit than the original word in the Arabic script. These systems are heavily reliant on diacritical marks such as "š" for the sound equivalently written sh in English. Other systems (e.g. the Bahá'í orthography) are intended to help readers who are neither Arabic speakers nor linguists with intuitive pronunciation of Arabic names and phrases. These less "scientific" systems tend to avoid diacritics and use digraphs (like sh and kh).
These are usually simpler to read, but sacrifice the definiteness of the scientific systems, and may lead to ambiguities, e.g. whether to interpret sh as a single sound, as in gash, or a combination of two sounds, as in gashouse. The ALA-LC romanization solves this problem by separating the two sounds with a prime symbol ( ′ ); e.g., as′hal 'easier'. During the last few decades and especially since the 1990s, Western-invented text communication technologies have become prevalent in the Arab world, such as personal computers, the World Wide Web, email, bulletin board systems, IRC, instant messaging and mobile phone text messaging. Most of these technologies originally had the ability to communicate using the Latin script only, and some of them still do not have the Arabic script as an optional feature. As a result, Arabic-speaking users communicated in these technologies by transliterating the Arabic text using the Latin script, sometimes known as IM Arabic. To handle those Arabic letters that cannot be accurately represented using the Latin script, numerals and other characters were appropriated. For example, the numeral "3" may be used to represent the Arabic letter . There is no universal name for this type of transliteration, but some have named it Arabic Chat Alphabet. Other systems of transliteration exist, such as using dots or capitalization to represent the "emphatic" counterparts of certain consonants. For instance, using capitalization, the letter , may be represented by d. Its emphatic counterpart, , may be written as D.

Numerals

In most of present-day North Africa, the Western Arabic numerals (0, 1, 2, 3, 4, 5, 6, 7, 8, 9) are used. However, in Egypt and Arabic-speaking countries to the east of it, the Eastern Arabic numerals ( – – – – – – – – – ) are in use. When representing a number in Arabic, the lowest-valued position is placed on the right, so the order of positions is the same as in left-to-right scripts.
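Because the positional order matches left-to-right scripts, converting between the two digit sets is a character-for-character substitution. A minimal sketch in Python (the variable names are illustrative; the Eastern Arabic digits occupy Unicode code points U+0660 through U+0669):

```python
# Map each Western Arabic digit to its Eastern Arabic counterpart and back.
EASTERN_DIGITS = "٠١٢٣٤٥٦٧٨٩"  # U+0660 through U+0669
to_eastern = str.maketrans("0123456789", EASTERN_DIGITS)
to_western = str.maketrans(EASTERN_DIGITS, "0123456789")

# Positional order is unchanged, so a plain substitution suffices.
print("1975".translate(to_eastern))   # ١٩٧٥
print("٢٤".translate(to_western))     # 24
```

Note that only the digit glyphs change; no reversal is needed, which is exactly the point made above about the lowest-valued position sitting on the right in both systems.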
Sequences of digits such as telephone numbers are read from left to right, but numbers are spoken in the traditional Arabic fashion, with units and tens reversed from the modern English usage. For example, 24 is said "four and twenty" just like in the German language (vierundzwanzig) and Classical Hebrew, and 1975 is said "a thousand and nine-hundred and five and seventy" or, more eloquently, "a thousand and nine-hundred five seventy".

Language-standards regulators

Academy of the Arabic Language is the name of a number of language-regulation bodies formed in the Arab League. The most active are in Damascus and Cairo. They review language development, monitor new words and approve inclusion of new words into their published standard dictionaries. They also publish old and historical Arabic manuscripts.

As a foreign language

Arabic has been taught worldwide in many elementary and secondary schools, especially Muslim schools. Universities around the world have classes that teach Arabic as part of their foreign languages, Middle Eastern studies, and religious studies courses. Arabic language schools exist to assist students to learn Arabic outside the academic world. There are many Arabic language schools in the Arab world and other Muslim countries. Because the Quran is written in Arabic and all Islamic terms are in Arabic, millions of Muslims (both Arab and non-Arab) study the language. Software and books with tapes are also an important part of Arabic learning, as many Arabic learners may live in places where there are no academic or Arabic language school classes available. Radio series of Arabic language classes are also provided from some radio stations. A number of websites on the Internet provide online classes for all levels as a means of distance education; most teach Modern Standard Arabic, but some teach regional varieties from numerous countries.

Status in the Arab world vs.
other languages

With the sole exception of medieval linguist Abu Hayyan al-Gharnati – who, while a scholar of the Arabic language, was not ethnically Arab – medieval scholars of the Arabic language made no efforts at studying comparative linguistics, considering all other languages inferior. In modern times, the educated upper classes in the Arab world have taken a nearly opposite view. Yasir Suleiman wrote in 2011 that "studying and knowing English or French in most of the Middle East and North Africa have become a badge of sophistication and modernity and ... feigning, or asserting, weakness or lack of facility in Arabic is sometimes paraded as a sign of status, class, and perversely, even education through a mélange of code-switching practises."

See also

Arabic Ontology
Arabic diglossia
Arabic influence on the Spanish language
Arabic Language International Council
Arabic literature
Arabic–English Lexicon
Arabist
Dictionary of Modern Written Arabic
Glossary of Islam
International Association of Arabic Dialectology
List of Arab newspapers
List of Arabic-language television channels
List of Arabic given names
List of arabophones
List of countries where Arabic is an official language
List of French words of Arabic origin
List of replaced loanwords in Turkish

References

Citations

Sources

Suleiman, Yasir. Arabic, Self and Identity: A Study in Conflict and Displacement. Oxford University Press, 2011. .

External links

Dr. Nizar Habash's, Columbia University, Introduction to Arabic Natural Language Processing
Google Ta3reeb – Google Transliteration
Transliteration Arabic language pronunciation applet
Alexis Neme (2011), A lexicon of Arabic verbs constructed on the basis of Semitic taxonomy and using finite-state transducers
Alexis Neme and Eric Laporte (2013), Pattern-and-root inflectional morphology: the Arabic broken plural
Alexis Neme and Eric Laporte (2015), Do computer scientists deeply understand Arabic morphology?
– , available also in Arabic, Indonesian, French
Arabic manuscripts, UA 5572 at L. Tom Perry Special Collections, Brigham Young University
Online Arabic Keyboard (Bilingual dictionary)
Arabic Student's Dictionary
Alfred Hitchcock
Sir Alfred Joseph Hitchcock (13 August 1899 – 29 April 1980) was an English filmmaker who was one of the most influential figures in the history of cinema. In a career spanning six decades, he directed over 50 feature films, many of which are still widely watched and studied today. Known as the "Master of Suspense", he became as well known as any of his actors thanks to his many interviews, his cameo roles in most of his films, and his hosting and producing the television anthology Alfred Hitchcock Presents (1955–65). His films garnered 46 Academy Award nominations, including six wins, although he never won the award for Best Director despite five nominations. Hitchcock initially trained as a technical clerk and copy writer before entering the film industry in 1919 as a title card designer. His directorial debut was the British-German silent film The Pleasure Garden (1925). His first successful film, The Lodger: A Story of the London Fog (1927), helped to shape the thriller genre, and Blackmail (1929) was the first British "talkie". His thrillers The 39 Steps (1935) and The Lady Vanishes (1938) are ranked among the greatest British films of the 20th century. By 1939, he had international recognition and producer David O. Selznick persuaded him to move to Hollywood. A string of successful films followed, including Rebecca (1940), Foreign Correspondent (1940), Suspicion (1941), Shadow of a Doubt (1943), and Notorious (1946). Rebecca won the Academy Award for Best Picture, with Hitchcock nominated as Best Director; he was also nominated for Lifeboat (1944) and Spellbound (1945). After a brief commercial lull, he returned to form with Strangers on a Train (1951) and Dial M for Murder (1954); he then went on to direct four films often ranked among the greatest of all time: Rear Window (1954), Vertigo (1958), North by Northwest (1959) and Psycho (1960), the first and last of these garnering him Best Director nominations. 
The Birds (1963) and Marnie (1964) were also financially successful and are highly regarded by film historians. The "Hitchcockian" style includes the use of camera movement to mimic a person's gaze, thereby turning viewers into voyeurs, and framing shots to maximise anxiety and fear. The film critic Robin Wood wrote that the meaning of a Hitchcock film "is there in the method, in the progression from shot to shot. A Hitchcock film is an organism, with the whole implied in every detail and every detail related to the whole." Hitchcock made multiple films with some of the biggest stars in Hollywood, including four with Cary Grant in the 1940s and 1950s, three with Ingrid Bergman in the last half of the 1940s, four with James Stewart over a ten-year span commencing in 1948, and three with Grace Kelly in the mid-1950s. Hitchcock became an American citizen in 1955. In 2012, Hitchcock's psychological thriller Vertigo, starring Stewart, displaced Orson Welles' Citizen Kane (1941) as the British Film Institute's greatest film ever made based on its world-wide poll of hundreds of film critics. Nine of his films have been selected for preservation in the United States National Film Registry, including his personal favourite, Shadow of a Doubt (1943). He received the BAFTA Fellowship in 1971, the AFI Life Achievement Award in 1979 and was knighted in December that year, four months before his death on 29 April 1980.

Biography

Early life: 1899–1919

Early childhood and education

Hitchcock was born on 13 August 1899 in the flat above his parents' leased grocer's shop at 517 High Road, Leytonstone, on the outskirts of East London (then part of Essex), the youngest of three children: William Daniel (1890–1943), Ellen Kathleen ("Nellie") (1892–1979), and Alfred Joseph (1899–1980).
His parents, Emma Jane Hitchcock ( Whelan; 1863–1942), and William Edgar Hitchcock (1862–1914), were both Roman Catholics, with partial roots in Ireland. His father was a greengrocer, as his grandfather had been. There was a large extended family, including uncle John Hitchcock with his five-bedroom Victorian house on Campion Road, Putney, complete with maid, cook, chauffeur and gardener. Every summer, his uncle rented a seaside house for the family in Cliftonville, Kent. Hitchcock said that he first became class-conscious there, noticing the differences between tourists and locals. Describing himself as a well-behaved boy—his father called him his "little lamb without a spot"—Hitchcock said he could not remember ever having had a playmate. One of his favourite stories for interviewers was about his father sending him to the local police station with a note when he was five; the policeman looked at the note and locked him in a cell for a few minutes, saying, "This is what we do to naughty boys." The experience left him, he said, with a lifelong fear of policemen; in 1973 he told Tom Snyder that he was "scared stiff of anything ... to do with the law" and wouldn't even drive a car in case he got a parking ticket. When he was six, the family moved to Limehouse and leased two stores at 130 and 175 Salmon Lane, which they ran as a fish-and-chips shop and fishmongers' respectively; they lived above the former. Hitchcock attended his first school, the Howrah House Convent in Poplar, which he entered in 1907, at age 7. According to biographer Patrick McGilligan, he stayed at Howrah House for at most two years. He also attended a convent school, the Wode Street School "for the daughters of gentlemen and little boys", run by the Faithful Companions of Jesus. He then attended a primary school near his home and was for a short time a boarder at Salesian College in Battersea.
The family moved again when he was 11, this time to Stepney, and on 5 October 1910 Hitchcock was sent to St Ignatius College in Stamford Hill, Tottenham (now in the London Borough of Haringey), a Jesuit grammar school with a reputation for discipline. The priests used a hard rubber cane on the boys, always at the end of the day, so the boys had to sit through classes anticipating the punishment if they had been written up for it. He later said that this is where he developed his sense of fear. The school register lists his year of birth as 1900 rather than 1899; biographer Donald Spoto says he was deliberately enrolled as a 10-year-old because he was a year behind with his schooling. While biographer Gene Adair reports that Hitchcock was "an average, or slightly above-average, pupil", Hitchcock said that he was "usually among the four or five at the top of the class"; at the end of his first year, his work in Latin, English, French and religious education was noted. He told Peter Bogdanovich: "The Jesuits taught me organisation, control and, to some degree, analysis." His favourite subject was geography, and he became interested in maps, and railway, tram and bus timetables; according to John Russell Taylor, he could recite all the stops on the Orient Express. He also had a particular interest in London trams. An overwhelming majority of his films include rail or tram scenes, in particular The Lady Vanishes, Strangers on a Train and Number Seventeen. A clapperboard shows the number of the scene and the number of takes, and Hitchcock would often take the two numbers on the clapperboard and whisper the London tram route names. For example, if the clapperboard showed Scene 23, Take 3, Hitchcock would whisper "Woodford, Hampstead" – Woodford being the terminus of the route 23 tram, and Hampstead the end of route 3.
Henley's

Hitchcock told his parents that he wanted to be an engineer, and on 25 July 1913, he left St Ignatius and enrolled in night classes at the London County Council School of Engineering and Navigation in Poplar. In a book-length interview in 1962, he told François Truffaut that he had studied "mechanics, electricity, acoustics, and navigation". Then on 12 December 1914 his father, who had been suffering from emphysema and kidney disease, died at the age of 52. To support himself and his mother—his older siblings had left home by then—Hitchcock took a job, for 15 shillings a week (£ in ), as a technical clerk at the Henley Telegraph and Cable Company in Blomfield Street near London Wall. He continued night classes, this time in art history, painting, economics, and political science. His older brother ran the family shops, while he and his mother continued to live in Salmon Lane. Hitchcock was too young to enlist when the First World War started in July 1914, and when he reached the required age of 18 in 1917, he received a C3 classification ("free from serious organic disease, able to stand service conditions in garrisons at home ... only suitable for sedentary work"). He joined a cadet regiment of the Royal Engineers and took part in theoretical briefings, weekend drills, and exercises. John Russell Taylor wrote that, in one session of practical exercises in Hyde Park, Hitchcock was required to wear puttees. He could never master wrapping them around his legs, and they repeatedly fell down around his ankles. After the war, Hitchcock took an interest in creative writing. In June 1919, he became a founding editor and business manager of Henley's in-house publication, The Henley Telegraph (sixpence a copy), to which he submitted several short stories. Henley's promoted him to the advertising department, where he wrote copy and drew graphics for electric cable advertisements.
He enjoyed the job and would stay late at the office to examine the proofs; he told Truffaut that this was his "first step toward cinema". He enjoyed watching films, especially American cinema, and from the age of 16 read the trade papers; he watched Charlie Chaplin, D. W. Griffith and Buster Keaton, and particularly liked Fritz Lang's Der müde Tod (1921).

Inter-war career: 1919–1939

Famous Players-Lasky

While still at Henley's, he read in a trade paper that Famous Players-Lasky, the production arm of Paramount Pictures, was opening a studio in London. They were planning to film The Sorrows of Satan by Marie Corelli, so he produced some drawings for the title cards and sent his work to the studio. They hired him, and in 1919 he began working for Islington Studios in Poole Street, Hoxton, as a title-card designer. Donald Spoto wrote that most of the staff were Americans with strict job specifications, but the English workers were encouraged to try their hand at anything, which meant that Hitchcock gained experience as a co-writer, art director and production manager on at least 18 silent films. The Times wrote in February 1922 about the studio's "special art title department under the supervision of Mr. A. J. Hitchcock". His work included Number 13 (1922), also known as Mrs. Peabody; it was cancelled because of financial problems—the few finished scenes are lost—and Always Tell Your Wife (1923), which he and Seymour Hicks finished together when Hicks was about to give up on it. Hicks wrote later about being helped by "a fat youth who was in charge of the property room ... [n]one other than Alfred Hitchcock".

Gainsborough Pictures and work in Germany

When Paramount pulled out of London in 1922, Hitchcock was hired as an assistant director by a new firm run in the same location by Michael Balcon, later known as Gainsborough Pictures. Hitchcock worked on Woman to Woman (1923) with the director Graham Cutts, designing the set, writing the script and producing.
He said: "It was the first film that I had really got my hands onto." The editor and "script girl" on Woman to Woman was Alma Reville, his future wife. He also worked as an assistant to Cutts on The White Shadow (1924), The Passionate Adventure (1924), The Blackguard (1925), and The Prude's Fall (1925). The Blackguard was produced at the Babelsberg Studios in Potsdam, where Hitchcock watched part of the making of F. W. Murnau's film The Last Laugh (1924). He was impressed with Murnau's work and later used many of his techniques for the set design in his own productions. In the summer of 1925, Balcon asked Hitchcock to direct The Pleasure Garden (1925), starring Virginia Valli, a co-production of Gainsborough and the German firm Emelka at the Geiselgasteig studio near Munich. Reville, by then Hitchcock's fiancée, was assistant director-editor. Although the film was a commercial flop, Balcon liked Hitchcock's work; a Daily Express headline called him the "Young man with a master mind". Production of The Pleasure Garden encountered obstacles which Hitchcock would later learn from: on arrival to Brenner Pass, he failed to declare his film stock to customs and it was confiscated; one actress could not enter the water for a scene because she was on her period; budget overruns meant that he had to borrow money from the actors. Hitchcock also needed a translator to give instructions to the cast and crew. In Germany, Hitchcock observed the nuances of German cinema and filmmaking which had a big influence on him. When he was not working, he would visit Berlin's art galleries, concerts and museums. He would also meet with actors, writers, and producers to build connections. Balcon asked him to direct a second film in Munich, The Mountain Eagle (1926), based on an original story titled Fear o' God. The film is lost, and Hitchcock called it "a very bad movie". 
A year later, Hitchcock wrote and directed The Ring; although the screenplay was credited solely to his name, Elliot Stannard assisted him with the writing. The Ring garnered positive reviews; the Bioscope magazine critic called it "the most magnificent British film ever made". When he returned to England, Hitchcock was one of the early members of the London Film Society, newly formed in 1925. Through the Society, he became fascinated by the work of Soviet filmmakers: Dziga Vertov, Lev Kuleshov, Sergei Eisenstein, and Vsevolod Pudovkin. He would also socialise with fellow English filmmakers Ivor Montagu and Adrian Brunel, and Walter C. Mycroft. Hitchcock's luck came with his first thriller, The Lodger: A Story of the London Fog (1927), about the hunt for a serial killer who, wearing a black cloak and carrying a black bag, is murdering young blonde women in London, and only on Tuesdays. A landlady suspects that her lodger is the killer, but he turns out to be innocent. To convey the impression that footsteps were being heard from an upper floor, Hitchcock had a glass floor made so that the viewer could see the lodger pacing up and down in his room above the landlady. Hitchcock had wanted the leading man to be guilty, or for the film at least to end ambiguously, but the star was Ivor Novello, a matinée idol, and the "star system" meant that Novello could not be the villain. Hitchcock told Truffaut: "You have to clearly spell it out in big letters: 'He is innocent.'" (He had the same problem years later with Cary Grant in Suspicion (1941).) Released in January 1927, The Lodger was a commercial and critical success in the UK. Hitchcock told Truffaut that the film was the first of his to be influenced by German Expressionism: "In truth, you might almost say that The Lodger was my first picture." He made his first two cameo appearances in the film: in the first, he is sitting in a newsroom; in the second, standing in a crowd as the leading man is arrested.
Marriage On 2 December 1926, Hitchcock married the English screenwriter Alma Reville at the Brompton Oratory in South Kensington. The couple honeymooned in Paris, Lake Como and St. Moritz, before returning to London to live in a leased flat on the top two floors of 153 Cromwell Road, Kensington. Reville, who was born just hours after Hitchcock, converted from Protestantism to Catholicism, apparently at the insistence of Hitchcock's mother; she was baptised on 31 May 1927 and confirmed at Westminster Cathedral by Cardinal Francis Bourne on 5 June. In 1928, when they learned that Reville was pregnant, the Hitchcocks purchased "Winter's Grace", a Tudor farmhouse set in 11 acres on Stroud Lane, Shamley Green, Surrey, for £2,500. Their daughter and only child, Patricia Alma Hitchcock, was born on 7 July that year. Patricia died on 9 August 2021 at the age of 93. Reville became her husband's closest collaborator; Charles Champlin wrote in 1982: "The Hitchcock touch had four hands, and two were Alma's." When Hitchcock accepted the AFI Life Achievement Award in 1979, he said that he wanted to mention "four people who have given me the most affection, appreciation and encouragement, and constant collaboration. The first of the four is a film editor, the second is a scriptwriter, the third is the mother of my daughter, Pat, and the fourth is as fine a cook as ever performed miracles in a domestic kitchen. And their names are Alma Reville." Reville wrote or co-wrote many of Hitchcock's films, including Shadow of a Doubt, Suspicion and The 39 Steps. Early sound films Hitchcock began work on his tenth film, Blackmail (1929), when its production company, British International Pictures (BIP), converted its Elstree studios to sound. The film was the first British "talkie"; this followed the rapid development of sound films in the United States, from the use of brief sound segments in The Jazz Singer (1927) to the first full sound feature Lights of New York (1928).
Blackmail began the Hitchcock tradition of using famous landmarks as a backdrop for suspense sequences, with the climax taking place on the dome of the British Museum. It also features one of his longest cameo appearances, which shows him being bothered by a small boy as he reads a book on the London Underground. In the PBS series The Men Who Made The Movies, Hitchcock explained how he used early sound recording as a special element of the film, stressing the word "knife" in a conversation with the woman suspected of murder. During this period, Hitchcock directed segments for a BIP revue, Elstree Calling (1930), and directed a short film, An Elastic Affair (1930), featuring two Film Weekly scholarship winners. An Elastic Affair is one of the lost films. In 1933, Hitchcock signed a multi-film contract with Gaumont-British, once again working for Michael Balcon. His first film for the company, The Man Who Knew Too Much (1934), was a success; his second, The 39 Steps (1935), was acclaimed in the UK and gained him recognition in the United States. It also established the quintessential English "Hitchcock blonde" (Madeleine Carroll) as the template for his succession of ice-cold, elegant leading ladies. Screenwriter Robert Towne remarked, "It's not much of an exaggeration to say that all contemporary escapist entertainment begins with The 39 Steps". This film was one of the first to introduce the "MacGuffin" plot device, a term coined by the English screenwriter Angus MacPhail. The MacGuffin is an item or goal the protagonist is pursuing, one that otherwise has no narrative value; in The 39 Steps, the MacGuffin is a stolen set of design plans. Hitchcock released two spy thrillers in 1936. Sabotage was loosely based on Joseph Conrad's novel The Secret Agent (1907), about a woman who discovers that her husband is a terrorist, while Secret Agent was based on two stories in Ashenden: Or the British Agent (1928) by W. Somerset Maugham.
At this time, Hitchcock also became notorious for pranks against the cast and crew. These jokes ranged from the simple and innocent to the elaborate and bizarre. For instance, he hosted a dinner party where he dyed all the food blue because he claimed there weren't enough blue foods. He also had a horse delivered to the dressing room of his friend, actor Gerald du Maurier. Hitchcock followed up with Young and Innocent in 1937, a crime thriller based on the 1936 novel A Shilling for Candles by Josephine Tey. Starring Nova Pilbeam and Derrick De Marney, the film was relatively enjoyable for the cast and crew to make. For distribution in America, the film's runtime was cut; the cuts included the removal of one of Hitchcock's favourite scenes: a children's tea party which becomes menacing to the protagonists. Hitchcock's next major success was The Lady Vanishes (1938), "one of the greatest train movies from the genre's golden era", according to Philip French, in which Miss Froy (May Whitty), a British spy posing as a governess, disappears on a train journey through the fictional European country of Bandrika. The film saw Hitchcock receive the 1938 New York Film Critics Circle Award for Best Director. Benjamin Crisler of the New York Times wrote in June 1938: "Three unique and valuable institutions the British have that we in America have not: Magna Carta, the Tower Bridge and Alfred Hitchcock, the greatest director of screen melodramas in the world." The film was based on the novel The Wheel Spins (1936) written by Ethel Lina White. By 1938 Hitchcock was aware that he had reached his peak in Britain. He had received numerous offers from producers in the United States, but he turned them all down because he disliked the contractual obligations or thought the projects were repellent. However, producer David O. Selznick offered him a concrete proposal to make a film based on the sinking of the RMS Titanic, which was eventually shelved, but Selznick persuaded Hitchcock to come to Hollywood.
In July 1938, Hitchcock flew to New York, and found that he was already a celebrity; he was featured in magazines and gave interviews to radio stations. In Hollywood, Hitchcock met Selznick for the first time. Selznick offered him a four-film contract, approximately $40,000 for each picture. Early Hollywood years: 1939–1945 Selznick contract Selznick signed Hitchcock to a seven-year contract beginning in April 1939, and the Hitchcocks moved to Hollywood. The Hitchcocks lived in a spacious flat on Wilshire Boulevard, and slowly acclimatised themselves to the Los Angeles area. He and his wife Alma kept a low profile, and were not interested in attending parties or being celebrities. Hitchcock discovered his taste for fine food in West Hollywood, but still carried on his way of life from England. He was impressed with Hollywood's filmmaking culture, expansive budgets and efficiency, compared to the limits that he had often faced in Britain. In June that year, Life magazine called him the "greatest master of melodrama in screen history". Although Hitchcock and Selznick respected each other, their working arrangements were sometimes difficult. Selznick suffered from constant financial problems, and Hitchcock was often unhappy about Selznick's creative control and interference over his films. Selznick was also displeased with Hitchcock's method of shooting just what was in the script, and nothing more, which meant that the film could not be cut and remade differently at a later time. Selznick complained about Hitchcock's "goddamn jigsaw cutting", and their personalities were mismatched: Hitchcock was reserved whereas Selznick was flamboyant. Eventually, Selznick generously lent Hitchcock to the larger film studios. Selznick made only a few films each year, as did fellow independent producer Samuel Goldwyn, so he did not always have projects for Hitchcock to direct. Goldwyn had also negotiated with Hitchcock on a possible contract, only to be outbid by Selznick.
In a later interview, Hitchcock said: "[Selznick] was the Big Producer. ... Producer was king. The most flattering thing Mr. Selznick ever said about me—and it shows you the amount of control—he said I was the 'only director' he'd 'trust with a film'." Hitchcock approached American cinema cautiously; his first American film, Rebecca (1940), was set in England, and the "Americanness" of the characters was incidental: the film was set in a Hollywood version of England's Cornwall and based on a novel by the English novelist Daphne du Maurier. Selznick insisted on a faithful adaptation of the book, and disagreed with Hitchcock over the use of humour. The film, starring Laurence Olivier and Joan Fontaine, concerns an unnamed naïve young woman who marries a widowed aristocrat. She lives in his large English country house, and struggles with the lingering reputation of his elegant and worldly first wife Rebecca, who died under mysterious circumstances. The film won Best Picture at the 13th Academy Awards; the statuette was given to producer Selznick. Hitchcock received his first nomination for Best Director, the first of five such nominations. Hitchcock's second American film was the thriller Foreign Correspondent (1940), set in Europe, based on Vincent Sheean's book Personal History (1935) and produced by Walter Wanger. It was nominated for Best Picture that year. Hitchcock felt uneasy living and working in Hollywood while Britain was at war; his concern resulted in a film that overtly supported the British war effort. Filmed in 1939, it was inspired by the rapidly changing events in Europe, as covered by an American newspaper reporter played by Joel McCrea. By mixing footage of European scenes with scenes filmed on a Hollywood backlot, the film avoided direct references to Nazism, Nazi Germany, and Germans, to comply with the Motion Picture Production Code at the time.
Early war years In September 1940 the Hitchcocks bought the Cornwall Ranch near Scotts Valley, California, in the Santa Cruz Mountains. Their primary residence was an English-style home in Bel Air, purchased in 1942. Hitchcock's films were diverse during this period, ranging from the romantic comedy Mr. & Mrs. Smith (1941) to the bleak film noir Shadow of a Doubt (1943). Suspicion (1941) marked Hitchcock's first film as both producer and director. It is set in England; Hitchcock used the north coast of Santa Cruz for the English coastline sequence. The film is the first of four in which Cary Grant was cast by Hitchcock, and it is one of the rare occasions that Grant plays a sinister character. Grant plays Johnnie Aysgarth, an English conman whose actions raise suspicion and anxiety in his shy young English wife, Lina McLaidlaw (Joan Fontaine). In one scene, Hitchcock placed a light inside a glass of milk, perhaps poisoned, that Grant is bringing to his wife; the light ensures that the audience's attention is on the glass. In the source novel, Before the Fact by Francis Iles, Grant's character is a killer, but the studio felt that Grant's image would be tarnished by such a role. Hitchcock therefore settled for an ambiguous finale, although he would have preferred to end with the wife's murder. Fontaine won Best Actress for her performance. Saboteur (1942) is the first of two films that Hitchcock made for Universal Studios during the decade. Universal required Hitchcock to use its contract player Robert Cummings and Priscilla Lane, a freelancer who signed a one-picture deal with the studio, both known for their work in comedies and light dramas. The story depicts a confrontation between a suspected saboteur (Cummings) and a real saboteur (Norman Lloyd) atop the Statue of Liberty. Hitchcock took a three-day tour of New York City to scout filming locations for Saboteur. He also directed Have You Heard?
(1942), a photographic dramatisation for Life magazine of the dangers of rumours during wartime. In 1943, he wrote a mystery story for Look magazine, "The Murder of Monty Woolley", a sequence of captioned photographs inviting the reader to find clues to the murderer's identity; Hitchcock cast the performers as themselves, such as Woolley, Doris Merrick, and make-up man Guy Pearce. Back in England, Hitchcock's mother Emma was severely ill; she died on 26 September 1942 at age 79. Hitchcock never spoke publicly about his mother, but his assistant said that he admired her. Four months later, on 4 January 1943, his brother William died of an overdose at age 52. Hitchcock was not very close to William, but his death made Hitchcock conscious of his own eating and drinking habits. He was overweight and suffering from back aches. His New Year's resolution in 1943 was to take his diet seriously with the help of a physician. Shadow of a Doubt, which Hitchcock had fond memories of making, was released in January that year. In the film, Charlotte "Charlie" Newton (Teresa Wright) suspects her beloved uncle Charlie Oakley (Joseph Cotten) of being a serial killer. Hitchcock filmed extensively on location, this time in the Northern California city of Santa Rosa. At 20th Century Fox, Hitchcock approached John Steinbeck with an idea for a film recording the experiences of the survivors of a German U-boat attack. Steinbeck began work on the script for what would become Lifeboat (1944). However, Steinbeck was unhappy with the film and asked that his name be removed from the credits, to no avail. The idea was rewritten as a short story by Harry Sylvester and published in Collier's in 1943. The action sequences were shot in a small boat in the studio water tank.
The locale posed problems for Hitchcock's traditional cameo appearance; it was solved by having Hitchcock's image appear in a newspaper that William Bendix is reading in the boat, showing the director in a before-and-after advertisement for "Reduco-Obesity Slayer". Hitchcock's typical dinner before his weight loss had been a roast chicken, boiled ham, potatoes, bread, vegetables, relishes, salad, dessert, a bottle of wine and some brandy. To lose weight, his diet consisted of black coffee for breakfast and lunch, and steak and salad for dinner, but it was hard to maintain; Donald Spoto wrote that his weight fluctuated considerably over the next 40 years. At the end of 1943, despite the weight loss, the Occidental Insurance Company of Los Angeles refused his application for life insurance. Wartime non-fiction films Hitchcock returned to the UK for an extended visit in late 1943 and early 1944. While there he made two short propaganda films, Bon Voyage (1944) and Aventure Malgache (1944), for the Ministry of Information. In June and July 1945, Hitchcock served as "treatment advisor" on a Holocaust documentary that used Allied Forces footage of the liberation of Nazi concentration camps. The film was assembled in London and produced by Sidney Bernstein of the Ministry of Information, who brought Hitchcock (a friend of his) on board. It was originally intended to be broadcast to the Germans, but the British government deemed it too traumatic to be shown to a shocked post-war population. Instead, it was transferred in 1952 from the British War Office film vaults to London's Imperial War Museum and remained unreleased until 1985, when an edited version was broadcast as an episode of PBS Frontline, under the title the Imperial War Museum had given it: Memory of the Camps. The full-length version of the film, German Concentration Camps Factual Survey, was restored in 2014 by scholars at the Imperial War Museum.
Post-war Hollywood years: 1945–1953 Later Selznick films Hitchcock worked for David Selznick again when he directed Spellbound (1945), which explores psychoanalysis and features a dream sequence designed by Salvador Dalí. The dream sequence as it appears in the film is ten minutes shorter than was originally envisioned; Selznick edited it to make it "play" more effectively. Gregory Peck plays amnesiac Dr. Anthony Edwardes under the treatment of analyst Dr. Peterson (Ingrid Bergman), who falls in love with him while trying to unlock his repressed past. Two point-of-view shots were achieved by building a large wooden hand (which would appear to belong to the character whose point of view the camera took) and out-sized props for it to hold: a bucket-sized glass of milk and a large wooden gun. For added novelty and impact, the climactic gunshot was hand-coloured red on some copies of the black-and-white film. The original musical score by Miklós Rózsa makes use of the theremin, and some of it was later adapted by the composer into his Piano Concerto Op. 31 (1967) for piano and orchestra. The spy film Notorious followed next in 1946. Hitchcock told François Truffaut that Selznick sold him, Ingrid Bergman, Cary Grant, and Ben Hecht's screenplay, to RKO Radio Pictures as a "package" for $500,000 because of cost overruns on Selznick's Duel in the Sun (1946). Notorious stars Bergman and Grant, both Hitchcock collaborators, and features a plot about Nazis, uranium and South America. His prescient use of uranium as a plot device led to him being briefly placed under surveillance by the Federal Bureau of Investigation. According to Patrick McGilligan, in or around March 1945, Hitchcock and Hecht consulted Robert Millikan of the California Institute of Technology about the development of a uranium bomb.
Selznick complained that the notion was "science fiction", only to be confronted by the news of the detonation of two atomic bombs on Hiroshima and Nagasaki in Japan in August 1945. Transatlantic Pictures Hitchcock formed an independent production company, Transatlantic Pictures, with his friend Sidney Bernstein. He made two films with Transatlantic, one of which was his first colour film. With Rope (1948), Hitchcock experimented with marshalling suspense in a confined environment, as he had done earlier with Lifeboat. The film appears as a very limited number of continuous shots, but it was actually shot in 10 takes ranging from four to ten minutes each; a 10-minute length of film was the most that a camera's film magazine could hold at the time. Some transitions between reels were hidden by having a dark object fill the entire screen for a moment; Hitchcock used those points to hide the cut, and began the next take with the camera in the same place. The film features James Stewart in the leading role, and was the first of four films that Stewart made with Hitchcock. It was inspired by the Leopold and Loeb case of the 1920s. Critical response at the time was mixed. Under Capricorn (1949), set in 19th-century Australia, also uses the short-lived technique of long takes, but to a more limited extent. He again used Technicolor in this production, then returned to black-and-white for several years. Transatlantic Pictures became inactive after these two films. Hitchcock filmed Stage Fright (1950) at Elstree Studios in England, where he had worked during his British International Pictures contract many years before. He paired one of Warner Bros.' most popular stars, Jane Wyman, with the expatriate German actress Marlene Dietrich and used several prominent British actors, including Michael Wilding, Richard Todd and Alastair Sim.
This was Hitchcock's first proper production for Warner Bros., which had distributed Rope and Under Capricorn, because Transatlantic Pictures was experiencing financial difficulties. His thriller Strangers on a Train (1951) was based on the novel of the same name by Patricia Highsmith. Hitchcock combined many elements from his preceding films. He approached Dashiell Hammett to write the dialogue, but Raymond Chandler took over, then left over disagreements with the director. In the film, two men casually meet, one of whom speculates on a foolproof method to murder; he suggests that two people, each wishing to do away with someone, should each perform the other's murder. Farley Granger played the innocent victim of the scheme, while Robert Walker, previously known for "boy-next-door" roles, played the villain. I Confess (1953) was set in Quebec with Montgomery Clift as a Catholic priest. Peak years: 1954–1964 Dial M for Murder and Rear Window I Confess was followed by three colour films starring Grace Kelly: Dial M for Murder (1954), Rear Window (1954), and To Catch a Thief (1955). In Dial M for Murder, Ray Milland plays the villain who tries to murder his unfaithful wife (Kelly) for her money. When she kills the hired assassin in self-defence, Milland manipulates the evidence to make it look like murder. Her lover, Mark Halliday (Robert Cummings), and Police Inspector Hubbard (John Williams) save her from execution. Hitchcock experimented with 3D cinematography for Dial M for Murder. Hitchcock moved to Paramount Pictures and filmed Rear Window (1954), starring James Stewart and Grace Kelly, as well as Thelma Ritter and Raymond Burr. Stewart's character is a photographer called Jeff (based on Robert Capa) who must temporarily use a wheelchair. Out of boredom, he begins observing his neighbours across the courtyard, then becomes convinced that one of them (Raymond Burr) has murdered his wife.
Jeff eventually manages to convince his policeman buddy (Wendell Corey) and his girlfriend (Kelly) that a murder has taken place. As with Lifeboat and Rope, the principal characters are depicted in confined or cramped quarters, in this case Stewart's studio apartment. Hitchcock uses close-ups of Stewart's face to show his character's reactions, "from the comic voyeurism directed at his neighbours to his helpless terror watching Kelly and Burr in the villain's apartment". Alfred Hitchcock Presents From 1955 to 1965, Hitchcock was the host of the television series Alfred Hitchcock Presents. With his droll delivery, gallows humour and iconic image, the series made Hitchcock a celebrity. The title sequence of the show pictured a minimalist caricature of his profile (he drew it himself; it is composed of only nine strokes), which his real silhouette then filled. The series theme tune was Funeral March of a Marionette by the French composer Charles Gounod (1818–1893). His introductions always included some sort of wry humour, such as the description of a recent multi-person execution hampered by having only one electric chair, while two chairs are shown with a sign reading "Two chairs—no waiting!" He directed 18 episodes of the series, which aired from 1955 to 1965. It became The Alfred Hitchcock Hour in 1962, and NBC broadcast the final episode on 10 May 1965. In the 1980s, a new version of Alfred Hitchcock Presents was produced for television, making use of Hitchcock's original introductions in a colourised form. Hitchcock's success in television spawned a set of short-story collections in his name; these included Alfred Hitchcock's Anthology, Stories They Wouldn't Let Me Do on TV, and Tales My Mother Never Told Me. In 1956, HSD Publications also licensed the director's name to create Alfred Hitchcock's Mystery Magazine, a monthly digest specialising in crime and detective fiction.
Hitchcock's television series were very profitable, and foreign-language versions of his books were bringing in revenues of up to $100,000 a year. From To Catch a Thief to Vertigo In 1955, Hitchcock became a United States citizen. In the same year, his third Grace Kelly film, To Catch a Thief, was released; it is set in the French Riviera, and stars Kelly and Cary Grant. Grant plays retired thief John Robie, who becomes the prime suspect for a spate of robberies in the Riviera. A thrill-seeking American heiress played by Kelly surmises his true identity and tries to seduce him. "Despite the obvious age disparity between Grant and Kelly and a lightweight plot, the witty script (loaded with double entendres) and the good-natured acting proved a commercial success." It was Hitchcock's last film with Kelly; she married Prince Rainier of Monaco in 1956, and ended her film career afterward. Hitchcock then remade his own 1934 film The Man Who Knew Too Much in 1956. This time, the film starred James Stewart and Doris Day, who sang the theme song "Que Sera, Sera", which won the Academy Award for Best Original Song and became a big hit. They play a couple whose son is kidnapped to prevent them from interfering with an assassination. As in the 1934 film, the climax takes place at the Royal Albert Hall. The Wrong Man (1956), Hitchcock's final film for Warner Bros., is a low-key black-and-white production based on a real-life case of mistaken identity reported in Life magazine in 1953. This was the only Hitchcock film to star Henry Fonda, playing a Stork Club musician mistaken for a liquor store thief, who is arrested and tried for robbery while his wife (Vera Miles) emotionally collapses under the strain. Hitchcock told Truffaut that his lifelong fear of the police attracted him to the subject and was embedded in many scenes.
While directing episodes for Alfred Hitchcock Presents during the summer of 1957, Hitchcock was admitted to hospital with a hernia and gallstones, and had his gallbladder removed. Following successful surgery, he immediately returned to work to prepare for his next project. Vertigo (1958) again starred James Stewart, with Kim Novak and Barbara Bel Geddes. He had wanted Vera Miles to play the lead, but she was pregnant. He told Oriana Fallaci: "I was offering her a big part, the chance to become a beautiful sophisticated blonde, a real actress. We'd have spent a heap of dollars on it, and she has the bad taste to get pregnant. I hate pregnant women, because then they have children." In Vertigo, Stewart plays Scottie, a former police investigator suffering from acrophobia, who becomes obsessed with a woman he has been hired to shadow (Novak). Scottie's obsession leads to tragedy, and this time Hitchcock did not opt for a happy ending. Some critics, including Donald Spoto and Roger Ebert, agree that Vertigo is the director's most personal and revealing film, dealing with the Pygmalion-like obsessions of a man who moulds a woman into the person he desires. Vertigo explores, more frankly and at greater length than any other work in his filmography, his interest in the relation between sex and death. Vertigo contains a camera technique developed by Irmin Roberts, commonly referred to as a dolly zoom, which has been copied by many filmmakers. The film premiered at the San Sebastián International Film Festival, where Hitchcock won the Silver Seashell prize. Vertigo is now considered a classic, but it attracted mixed reviews and poor box-office receipts at the time; the critic from Variety magazine opined that the film was "too slow and too long". Bosley Crowther of the New York Times thought it was "devilishly far-fetched", but praised the cast performances and Hitchcock's direction. The picture was also the last collaboration between Stewart and Hitchcock.
In the 2002 Sight & Sound polls, it ranked just behind Citizen Kane (1941); ten years later, in the same magazine, critics chose it as the best film ever made. North by Northwest and Psycho After Vertigo, the rest of 1958 was a difficult year for Hitchcock. During pre-production of North by Northwest (1959), which was a "slow" and "agonising" process, his wife Alma was diagnosed with cancer. While she was in hospital, Hitchcock kept himself occupied with his television work and would visit her every day. Alma underwent surgery and made a full recovery, but the illness caused Hitchcock to imagine, for the first time, life without her. Hitchcock followed up with three more successful films, which are also recognised as among his best: North by Northwest, Psycho (1960) and The Birds (1963). In North by Northwest, Cary Grant portrays Roger Thornhill, a Madison Avenue advertising executive who is mistaken for a government secret agent. He is pursued across the United States by enemy agents, including Eve Kendall (Eva Marie Saint). At first, Thornhill believes Kendall is helping him, but then realises that she is an enemy agent; he later learns that she is working undercover for the CIA. During its opening two-week run at Radio City Music Hall, the film grossed $404,056, setting a non-holiday gross record for that theatre. Time magazine called the film "smoothly troweled and thoroughly entertaining". Psycho (1960) is arguably Hitchcock's best-known film. Based on Robert Bloch's 1959 novel Psycho, which was inspired by the case of Ed Gein, the film was produced on a tight budget of $800,000 and shot in black-and-white on a spare set using crew members from Alfred Hitchcock Presents. The unprecedented violence of the shower scene, the early death of the heroine, and the innocent lives extinguished by a disturbed murderer became the hallmarks of a new horror-film genre.
The film proved popular with audiences, with lines stretching outside theatres as viewers waited for the next showing. It broke box-office records in the United Kingdom, France, South America, the United States and Canada, and was a moderate success in Australia for a brief period. Psycho was the most profitable film of Hitchcock's career, and he personally earned in excess of $15 million. He subsequently swapped his rights to Psycho and his TV anthology for 150,000 shares of MCA, making him the third-largest shareholder and his own boss at Universal, in theory at least, although that did not stop studio interference. Following the first film, Psycho became an American horror franchise: Psycho II, Psycho III, Bates Motel, Psycho IV: The Beginning, and a colour 1998 remake of the original. Truffaut interview On 13 August 1962, Hitchcock's 63rd birthday, the French director François Truffaut began a 50-hour interview of Hitchcock, filmed over eight days at Universal Studios, during which Hitchcock agreed to answer 500 questions. It took four years to transcribe the tapes and organise the images; it was published as a book in 1967, which Truffaut nicknamed the "Hitchbook". The audio tapes were used as the basis of a documentary in 2015. Truffaut sought the interview because it was clear to him that Hitchcock was not simply the mass-market entertainer the American media made him out to be. It was obvious from his films, Truffaut wrote, that Hitchcock had "given more thought to the potential of his art than any of his colleagues". He compared the interview to "Oedipus' consultation of the oracle". The Birds The film scholar Peter William Evans wrote that The Birds (1963) and Marnie (1964) are regarded as "undisputed masterpieces". Hitchcock had intended to film Marnie first, and in March 1962 it was announced that Grace Kelly, Princess Grace of Monaco since 1956, would come out of retirement to star in it.
When Kelly asked Hitchcock to postpone Marnie until 1963 or 1964, he recruited Evan Hunter, author of The Blackboard Jungle (1954), to develop a screenplay based on a Daphne du Maurier short story, "The Birds" (1952), which Hitchcock had republished in his My Favorites in Suspense (1959). He hired Tippi Hedren to play the lead role. It was her first film role; she had been a model in New York when Hitchcock saw her, in October 1961, in an NBC television advert for Sego, a diet drink: "I signed her because she is a classic beauty. Movies don't have them any more. Grace Kelly was the last." He insisted, without explanation, that her first name be written in single quotation marks: 'Tippi'. In The Birds, Melanie Daniels (Hedren), a young socialite, meets lawyer Mitch Brenner (Rod Taylor) in a bird shop; Jessica Tandy plays his possessive mother. Melanie visits him in Bodega Bay (where The Birds was filmed) carrying a pair of lovebirds as a gift. Suddenly waves of birds start gathering, watching, and attacking. The question "What do the birds want?" is left unanswered. Hitchcock made the film with equipment from the Revue Studio, which made Alfred Hitchcock Presents. He said it was his most technically challenging film, using a combination of trained and mechanical birds against a backdrop of wild ones. Every shot was sketched in advance. An HBO/BBC television film, The Girl (2012), depicted Hedren's experiences on set; she said that Hitchcock became obsessed with her and sexually harassed her. He reportedly isolated her from the rest of the crew, had her followed, whispered obscenities to her, had her handwriting analysed, and had a ramp built from his private office directly into her trailer. Diane Baker, her co-star in Marnie, said: "[N]othing could have been more horrible for me than to arrive on that movie set and to see her being treated the way she was."
While filming the attack scene in the attic—which took a week to film—she was placed in a caged room while two men wearing elbow-length protective gloves threw live birds at her. Toward the end of the week, to stop the birds' flying away from her too soon, one leg of each bird was attached by nylon thread to elastic bands sewn inside her clothes. She broke down after a bird cut her lower eyelid, and filming was halted on doctor's orders. Marnie In June 1962, Grace Kelly announced that she had decided against appearing in Marnie (1964). Hedren had signed an exclusive seven-year, $500-a-week contract with Hitchcock in October 1961, and he decided to cast her in the lead role opposite Sean Connery. In 2016, describing Hedren's performance as "one of the greatest in the history of cinema", Richard Brody called the film a "story of sexual violence" inflicted on the character played by Hedren: "The film is, to put it simply, sick, and it's so because Hitchcock was sick. He suffered all his life from furious sexual desire, suffered from the lack of its gratification, suffered from the inability to transform fantasy into reality, and then went ahead and did so virtually, by way of his art." A 1964 New York Times film review called it Hitchcock's "most disappointing film in years", citing Hedren's and Connery's lack of experience, an amateurish script and "glaringly fake cardboard backdrops". In the film, Marnie Edgar (Hedren) steals $10,000 from her employer and goes on the run. She applies for a job at Mark Rutland's (Connery) company in Philadelphia and steals from there too. Earlier she is shown having a panic attack during a thunderstorm and fearing the colour red. Mark tracks her down and blackmails her into marrying him. She explains that she does not want to be touched, but during the "honeymoon", Mark rapes her. 
Marnie and Mark discover that Marnie's mother had been a prostitute when Marnie was a child, and that, while the mother was fighting with a client during a thunderstorm—the mother believed the client had tried to molest Marnie—Marnie had killed the client to save her mother. Cured of her fears when she remembers what happened, she decides to stay with Mark. Hitchcock told cinematographer Robert Burks that the camera had to be placed as close as possible to Hedren when he filmed her face. Evan Hunter, the screenwriter of The Birds who was writing Marnie too, explained to Hitchcock that, if Mark loved Marnie, he would comfort her, not rape her. Hitchcock reportedly replied: "Evan, when he sticks it in her, I want that camera right on her face!" When Hunter submitted two versions of the script, one without the rape scene, Hitchcock replaced him with Jay Presson Allen. Later years: 1966–1980 Final films Failing health reduced Hitchcock's output during the last two decades of his life. Biographer Stephen Rebello claimed Universal imposed two films on him, Torn Curtain (1966) and Topaz (1969), the latter of which is based on a Leon Uris novel, partly set in Cuba. Both were spy thrillers with Cold War-related themes. Torn Curtain, with Paul Newman and Julie Andrews, precipitated the bitter end of the 12-year collaboration between Hitchcock and composer Bernard Herrmann. Hitchcock was unhappy with Herrmann's score and replaced him with John Addison, Jay Livingston and Ray Evans. Upon release, Torn Curtain was a box office disappointment, and Topaz was disliked by critics and the studio. Hitchcock returned to Britain to make his penultimate film, Frenzy (1972), based on the novel Goodbye Piccadilly, Farewell Leicester Square (1966). After two espionage films, the plot marked a return to the murder-thriller genre. 
Richard Blaney (Jon Finch), a volatile barman with a history of explosive anger, becomes the prime suspect in the investigation into the "Necktie Murders", which are actually committed by his friend Bob Rusk (Barry Foster). This time, Hitchcock makes the victim and villain kindred spirits, rather than opposites as in Strangers on a Train. In Frenzy, Hitchcock allowed nudity for the first time. Two scenes show naked women, one of whom is being raped and strangled; Donald Spoto called the latter "one of the most repellent examples of a detailed murder in the history of film". Both actors, Barbara Leigh-Hunt and Anna Massey, refused to do the scenes, so models were used instead. Biographers have noted that Hitchcock had always pushed the limits of film censorship, often managing to fool Joseph Breen, the head of the Motion Picture Production Code. Hitchcock would add subtle hints of improprieties forbidden by censorship until the mid-1960s. Yet Patrick McGilligan wrote that Breen and others often realised that Hitchcock was inserting such material and were actually amused, as well as alarmed, by Hitchcock's "inescapable inferences". Family Plot (1976) was Hitchcock's last film. It relates the escapades of "Madam" Blanche Tyler, a fraudulent spiritualist played by Barbara Harris, and her taxi-driver lover (Bruce Dern), who make a living from her phony powers. While Family Plot was based on the Victor Canning novel The Rainbird Pattern (1972), the novel's tone is more sinister. Screenwriter Ernest Lehman originally wrote the film with a dark tone under the working title Deception, but Hitchcock pushed him towards a lighter, more comical tone; the script was retitled Deceit and then, finally, Family Plot. Knighthood and death Toward the end of his life, Hitchcock was working on the script for a spy thriller, The Short Night, collaborating with James Costigan, Ernest Lehman and David Freeman. Despite preliminary work, it was never filmed. 
Hitchcock's health was declining and he was worried about his wife, who had suffered a stroke. The screenplay was eventually published in Freeman's book The Last Days of Alfred Hitchcock (1999). Having refused a CBE in 1962, Hitchcock was appointed a Knight Commander of the Most Excellent Order of the British Empire (KBE) in the 1980 New Year Honours. He was too ill to travel to London—he had a pacemaker and was being given cortisone injections for his arthritis—so on 3 January 1980 the British consul general presented him with the papers at Universal Studios. Asked by a reporter after the ceremony why it had taken the Queen so long, Hitchcock quipped, "I suppose it was a matter of carelessness." Cary Grant, Janet Leigh, and others attended a luncheon afterwards. His last public appearance was on 16 March 1980, when he introduced the next year's winner of the American Film Institute award. He died of kidney failure the following month, on 29 April, in his Bel Air home. Donald Spoto, one of Hitchcock's biographers, wrote that Hitchcock had declined to see a priest, but according to Jesuit priest Mark Henninger, he and another priest, Tom Sullivan, celebrated Mass at the filmmaker's home, and Sullivan heard his confession. Hitchcock was survived by his wife and daughter. His funeral was held at Good Shepherd Catholic Church in Beverly Hills on 30 April, after which his body was cremated. His remains were scattered over the Pacific Ocean on 10 May 1980. Filmmaking Style and themes Hitchcock's film production career evolved from small-scale silent films to financially significant sound films. Hitchcock remarked that he was influenced by early filmmakers George Méliès, D.W. Griffith and Alice Guy-Blaché. His silent films between 1925 and 1929 were in the crime and suspense genres, but also included melodramas and comedies. 
Visual storytelling was central to Hitchcock's craft during the silent era, and even after the arrival of sound he continued to rely on visuals; he referred to this emphasis on visual storytelling as "pure cinema". In Britain, he honed his craft so that by the time he moved to Hollywood he had perfected his style and camera techniques. Hitchcock later said that his British work was the "sensation of cinema", whereas the American phase was when his "ideas were fertilised". Scholar Robin Wood writes that the director's first two films, The Pleasure Garden and The Mountain Eagle, were influenced by German Expressionism. Afterward, he discovered Soviet cinema and Sergei Eisenstein's and Vsevolod Pudovkin's theories of montage. The Lodger (1926) was inspired by both German and Soviet aesthetics, styles that shaped the rest of his career. Although Hitchcock's work in the 1920s found some success, several British reviewers criticised his films for being unoriginal and conceited. Raymond Durgnat opined that Hitchcock's films were carefully and intelligently constructed, but thought they could be shallow and rarely presented a "coherent worldview". Earning the title "Master of Suspense", the director experimented with ways to generate tension in his work. He said, "My suspense work comes out of creating nightmares for the audience. And I play with an audience. I make them gasp and surprise them and shock them. When you have a nightmare, it's awfully vivid if you're dreaming that you're being led to the electric chair. Then you're as happy as can be when you wake up because you're relieved." During filming of North by Northwest, Hitchcock explained his reasons for recreating the set of Mount Rushmore: "The audience responds in proportion to how realistic you make it. 
One of the dramatic reasons for this type of photography is to get it looking so natural that the audience gets involved and believes, for the time being, what's going on up there on the screen." Hitchcock's films, from the silent to the sound era, contained a number of recurring themes for which he became famous. His films explored the audience as voyeur, notably in Rear Window, Marnie and Psycho. He understood that human beings enjoy voyeuristic activities and made the audience participate in them through the characters' actions. Of his fifty-three films, eleven revolved around stories of mistaken identity, where an innocent protagonist is accused of a crime and is pursued by police. In most cases, it is an ordinary, everyday person who finds themselves in a dangerous situation. Hitchcock told Truffaut: "That's because the theme of the innocent man being accused, I feel, provides the audience with a greater sense of danger. It's easier for them to identify with him than with a guilty man on the run." One of his constant themes was the struggle of a personality torn between "order and chaos", expressed through the notion of the "double": a comparison or contrast between two characters or objects, with the double representing a dark or evil side. According to Robin Wood, Hitchcock had mixed feelings towards homosexuality despite working with gay actors in his career. Donald Spoto suggests that Hitchcock's sexually repressive childhood may have contributed to his exploration of deviancy. During the 1950s, the Motion Picture Production Code prohibited direct references to homosexuality, but the director was known for his subtle references and for pushing the boundaries of the censors. Moreover, Shadow of a Doubt has a double incest theme through the storyline, expressed implicitly through images. 
Author Jane Sloan argues that Hitchcock was drawn to both conventional and unconventional sexual expression in his work, and the theme of marriage was usually presented in a "bleak and skeptical" manner. It was not until after his mother's death in 1942 that Hitchcock portrayed motherly figures as "notorious monster-mothers". The espionage backdrop and murders committed by characters with psychopathic tendencies were common themes too. Hitchcock's villains and murderers were usually charming and friendly, forcing viewers to identify with them. The director's strict childhood and Jesuit education may have led to his distrust of authoritarian figures such as policemen and politicians, a theme he explored. He also used the "MacGuffin": an object, person or event that keeps the plot moving along even though it is non-essential to the story. Some examples include the microfilm in North by Northwest and the stolen $40,000 in Psycho. Hitchcock appears briefly in most of his own films. For example, he is seen struggling to get a double bass onto a train (Strangers on a Train), walking dogs out of a pet shop (The Birds), fixing a neighbour's clock (Rear Window), as a shadow (Family Plot), sitting at a table in a photograph (Dial M for Murder), and riding a bus (North by Northwest, To Catch a Thief). Representation of women Hitchcock's portrayal of women has been the subject of much scholarly debate. Bidisha wrote in The Guardian in 2010: "There's the vamp, the tramp, the snitch, the witch, the slink, the double-crosser and, best of all, the demon mommy. Don't worry, they all get punished in the end." In a widely cited essay in 1975, Laura Mulvey introduced the idea of the male gaze; the view of the spectator in Hitchcock's films, she argued, is that of the heterosexual male protagonist. "The female characters in his films reflected the same qualities over and over again", Roger Ebert wrote in 1996: "They were blonde. 
They were icy and remote. They were imprisoned in costumes that subtly combined fashion with fetishism. They mesmerised the men, who often had physical or psychological handicaps. Sooner or later, every Hitchcock woman was humiliated." The victims in The Lodger are all blondes. In The 39 Steps, Madeleine Carroll is put in handcuffs. Ingrid Bergman, whom Hitchcock directed three times (Spellbound, Notorious, and Under Capricorn), is dark blonde. In Rear Window, Lisa (Grace Kelly) risks her life by breaking into Lars Thorwald's apartment. In To Catch a Thief, Francie (also Kelly) offers to help a man she believes is a burglar. In Vertigo and North by Northwest respectively, Kim Novak and Eva Marie Saint play the blonde heroines. In Psycho, Janet Leigh's character steals $40,000 and is murdered by Norman Bates, a reclusive psychopath. Tippi Hedren, a blonde, appears to be the focus of the attacks in The Birds. In Marnie, the title character, again played by Hedren, is a thief. In Topaz, the French actresses Dany Robin as Stafford's wife and Claude Jade as Stafford's daughter are blonde heroines, while the mistress was played by the brunette Karin Dor. Hitchcock's last blonde heroine was Barbara Harris as a phony psychic turned amateur sleuth in Family Plot (1976), his final film. In the same film, the diamond smuggler played by Karen Black wears a long blonde wig in several scenes. His films often feature characters struggling in their relationships with their mothers, such as Norman Bates in Psycho. In North by Northwest, Roger Thornhill (Cary Grant) is an innocent man ridiculed by his mother for insisting that shadowy, murderous men are after him. In The Birds, the Rod Taylor character, an innocent man, finds his world under attack by vicious birds, and struggles to free himself from a clinging mother (Jessica Tandy). The killer in Frenzy has a loathing of women but idolises his mother. 
The villain Bruno in Strangers on a Train hates his father, but has an incredibly close relationship with his mother (played by Marion Lorne). Sebastian (Claude Rains) in Notorious has a clearly conflicting relationship with his mother, who is (rightly) suspicious of his new bride, Alicia Huberman (Ingrid Bergman). Relationship with actors Hitchcock became known for having remarked that "actors should be treated like cattle". During the filming of Mr. & Mrs. Smith (1941), Carole Lombard brought three cows onto the set wearing the name tags of Lombard, Robert Montgomery, and Gene Raymond, the stars of the film, to surprise him. In an episode of The Dick Cavett Show, originally broadcast on 8 June 1972, Dick Cavett stated as fact that Hitchcock had once called actors cattle. Hitchcock responded by saying that, at one time, he had been accused of calling actors cattle. "I said that I would never say such an unfeeling, rude thing about actors at all. What I probably said was that all actors should be treated like cattle ... in a nice way of course." He then described Carole Lombard's joke, with a smile. Hitchcock believed that actors should concentrate on their performances and leave work on script and character to the directors and screenwriters. He told Bryan Forbes in 1967: "I remember discussing with a method actor how he was taught and so forth. He said, 'We're taught using improvisation. We are given an idea and then we are turned loose to develop in any way we want to.' I said, 'That's not acting. That's writing.' " Recalling their experiences on Lifeboat for Charlotte Chandler, author of It's Only a Movie: Alfred Hitchcock, A Personal Biography, Walter Slezak said that Hitchcock "knew more about how to help an actor than any director I ever worked with", and Hume Cronyn dismissed the idea that Hitchcock was not concerned with his actors as "utterly fallacious", describing at length the process of rehearsing and filming Lifeboat. 
Critics observed that, despite his reputation as a man who disliked actors, actors who worked with him often gave brilliant performances. He used the same actors in many of his films; Cary Grant and James Stewart both worked with Hitchcock four times, and Ingrid Bergman and Grace Kelly three. James Mason said that Hitchcock regarded actors as "animated props". For Hitchcock, the actors were part of the film's setting. He told François Truffaut: "The chief requisite for an actor is the ability to do nothing well, which is by no means as easy as it sounds. He should be willing to be used and wholly integrated into the picture by the director and the camera. He must allow the camera to determine the proper emphasis and the most effective dramatic highlights." Writing, storyboards and production Hitchcock planned his scripts in detail with his writers. In Writing with Hitchcock (2001), Steven DeRosa noted that Hitchcock supervised them through every draft, asking that they tell the story visually. He described this approach to Roger Ebert in 1969. Hitchcock's films were extensively storyboarded to the finest detail. He was reported to have never even bothered looking through the viewfinder, since he did not need to, although in publicity photos he was shown doing so. He also used this as an excuse to never have to change his films from his initial vision. If a studio asked him to change a film, he would claim that it was already shot in a single way, and that there were no alternative takes to consider. This view of Hitchcock as a director who relied more on pre-production than on the actual production itself has been challenged by Bill Krohn, the American correspondent of French film magazine Cahiers du cinéma, in his book Hitchcock at Work. 
After investigating script revisions, notes to other production personnel written by or to Hitchcock, and other production material, Krohn observed that Hitchcock's work often deviated from how the screenplay was written or how the film was originally envisioned. He noted that the myth of storyboards in relation to Hitchcock, often regurgitated by generations of commentators on his films, was to a great degree perpetuated by Hitchcock himself or the publicity arm of the studios. For example, the celebrated crop-spraying sequence of North by Northwest was not storyboarded at all. After the scene was filmed, the publicity department asked Hitchcock to make storyboards to promote the film, and Hitchcock in turn hired an artist to match the scenes in detail. Even when storyboards were made, scenes that were shot differed from them significantly. Krohn's analysis of the production of Hitchcock classics like Notorious reveals that Hitchcock was flexible enough to change a film's conception during its production. Another example Krohn notes is the American remake of The Man Who Knew Too Much, whose shooting schedule commenced without a finished script and moreover went over schedule, something that, as Krohn notes, was not an uncommon occurrence on many of Hitchcock's films, including Strangers on a Train and Topaz. While Hitchcock did a great deal of preparation for all his films, he was fully cognisant that the actual film-making process often deviated from the best-laid plans; his films were not free from the normal hassles and common routines of other film productions, and he adapted flexibly to the changes and needs of production. Krohn's work also sheds light on Hitchcock's practice of generally shooting in chronological order, which he notes sent many films over budget and over schedule and, more importantly, differed from the standard operating procedure of Hollywood in the studio system era. 
Equally important is Hitchcock's tendency to shoot alternative takes of scenes. This differed from coverage in that the films were not necessarily shot from varying angles so as to give the editor options to shape the film how they chose (often under the producer's aegis). Rather, they represented Hitchcock's tendency to give himself options in the editing room, where he would provide advice to his editors after viewing a rough cut of the work. According to Krohn, this and a great deal of other information revealed through his research of Hitchcock's personal papers, script revisions and the like refute the notion of Hitchcock as a director who was always in control of his films and whose vision of his films did not change during production, which Krohn notes has remained the central long-standing myth about Alfred Hitchcock. His fastidiousness and attention to detail also found their way into the posters for his films. Hitchcock preferred to work with the best talent of his day—film poster designers such as Bill Gold and Saul Bass—who would produce posters that accurately represented his films. Legacy Awards and honours Hitchcock was inducted into the Hollywood Walk of Fame on 8 February 1960 with two stars: one for television and a second for his motion pictures. In 1978, John Russell Taylor described him as "the most universally recognizable person in the world" and "a straightforward middle-class Englishman who just happened to be an artistic genius". In 2002, MovieMaker named him the most influential director of all time, and a 2007 The Daily Telegraph critics' poll ranked him Britain's greatest director. David Gritten, the newspaper's film critic, wrote: "Unquestionably the greatest filmmaker to emerge from these islands, Hitchcock did more than any director to shape modern cinema, which would be utterly different without him. 
His flair was for narrative, cruelly withholding crucial information (from his characters and from us) and engaging the emotions of the audience like no one else." In 1992, the Sight & Sound Critics' Poll ranked Hitchcock at No. 4 in its list of "Top 10 Directors" of all time. In 2002, Hitchcock was ranked 2nd in the critics' top ten poll and 5th in the directors' top ten poll in the list of The Greatest Directors of All Time compiled by Sight & Sound magazine. Hitchcock was voted the "Greatest Director of the 20th Century" in a poll conducted by the Japanese film magazine Kinema Junpo. In 1996, Entertainment Weekly ranked Hitchcock at No. 1 in its "50 Greatest Directors" list. Hitchcock was ranked at No. 2 on Empire magazine's "Top 40 Greatest Directors of All-Time" list in 2005. In 2007, Total Film magazine ranked Hitchcock at No. 1 on its "100 Greatest Film Directors Ever" list. He won two Golden Globes, eight Laurel Awards, and five lifetime achievement awards, including the first BAFTA Academy Fellowship Award and, in 1979, an AFI Life Achievement Award. He was nominated five times for an Academy Award for Best Director. Rebecca, nominated for 11 Oscars, won the Academy Award for Best Picture of 1940; another Hitchcock film, Foreign Correspondent, was also nominated that year. By 2021, nine of his films had been selected for preservation by the US National Film Registry: Rebecca (1940; inducted 2018), Shadow of a Doubt (1943; inducted 1991), Notorious (1946; inducted 2006), Strangers on a Train (1951; inducted 2021), Rear Window (1954; inducted 1997), Vertigo (1958; inducted 1989), North by Northwest (1959; inducted 1995), Psycho (1960; inducted 1992), and The Birds (1963; inducted 2016). In 2012, Hitchcock was selected by artist Sir Peter Blake, author of the Beatles' Sgt. 
Pepper's Lonely Hearts Club Band album cover, to appear in a new version of the cover, along with other British cultural figures, and he was featured that year in a BBC Radio 4 series, The New Elizabethans, as someone "whose actions during the reign of Elizabeth II have had a significant impact on lives in these islands and given the age its character". In June 2013 nine restored versions of Hitchcock's early silent films, including The Pleasure Garden (1925), were shown at the Brooklyn Academy of Music's Harvey Theatre; known as "The Hitchcock 9", the travelling tribute was organised by the British Film Institute. Archives The Alfred Hitchcock Collection is housed at the Academy Film Archive in Hollywood, California. It includes home movies, 16mm film shot on the set of Blackmail (1929) and Frenzy (1972), and the earliest known colour footage of Hitchcock. The Academy Film Archive has preserved many of his home movies. The Alfred Hitchcock Papers are housed at the Academy's Margaret Herrick Library. The David O. Selznick and the Ernest Lehman collections housed at the Harry Ransom Humanities Research Center in Austin, Texas, contain material related to Hitchcock's work on the production of The Paradine Case, Rebecca, Spellbound, North by Northwest and Family Plot.
Hitchcock portrayals
Anthony Hopkins in Hitchcock (2012)
Toby Jones in The Girl (2012)
Roger Ashton-Griffiths in Grace of Monaco (2014)
See also
Alfred Hitchcock's unrealized projects
List of Alfred Hitchcock cameo appearances
List of film director and actor collaborations
Further reading
Articles
Hitchcock's Style at the BFI's Screenonline
Books
Deflem, Mathieu. 2016. "Alfred Hitchcock: Visions of Guilt and Innocence." pp. 203–227 in Framing Law and Crime: An Interdisciplinary Anthology, edited by Caroline Joan S. Picart, Michael Hviid Jacobsen, and Cecil Greek. 
Latham, MD; Madison, NJ: Rowman & Littlefield; Fairleigh Dickinson University Press.
Žižek, Slavoj, et al. Everything You Always Wanted to Know About Lacan But Were Afraid to Ask Hitchcock. London and New York: Verso, 2nd edition, 2010.
809
https://en.wikipedia.org/wiki/Anaconda
Anaconda
Anacondas or water boas are a group of large snakes of the genus Eunectes. They are found in tropical South America. Four species are currently recognized. Description Although the name applies to a group of snakes, it is often used to refer only to one species, in particular, the common or green anaconda (Eunectes murinus), which is the largest snake in the world by weight, and the second longest. Etymology The South American names anacauchoa and anacaona were suggested in an account by Peter Martyr d'Anghiera, but the idea of a South American origin was questioned by Henry Walter Bates who, in his travels in South America, failed to find any similar name in use. The word anaconda is derived from the name of a snake from Ceylon (Sri Lanka) that John Ray described in Latin in his Synopsis Methodica Animalium (1693) as serpens indicus bubalinus anacandaia zeylonibus, ides bubalorum aliorumque jumentorum membra conterens. Ray used a catalogue of snakes from the Leyden museum supplied by Dr. Tancred Robinson, but the description of its habit was based on Andreas Cleyer who in 1684 described a gigantic snake that crushed large animals by coiling around their bodies and crushing their bones. Henry Yule in his Hobson-Jobson notes that the word became more popular due to a piece of fiction published in 1768 in the Scots Magazine by a certain R. Edwin. Edwin described a 'tiger' being crushed to death by an anaconda, when there actually never were any tigers in Sri Lanka. Yule and Frank Wall noted that the snake was in fact a python and suggested a Tamil origin anai-kondra meaning elephant killer. A Sinhalese origin was also suggested by Donald Ferguson who pointed out that the word Henakandaya (hena lightning/large and kanda stem/trunk) was used in Sri Lanka for the small whip snake (Ahaetulla pulverulenta) and somehow got misapplied to the python before myths were created. The name commonly used for the anaconda in Brazil is sucuri, sucuriju or sucuriuba. 
Species and other uses of the term "anaconda" The term "anaconda" has been used to refer to:
Any member of the genus Eunectes, a group of large, aquatic snakes found in South America:
Eunectes murinus, the green anaconda – the largest species, found east of the Andes in Colombia, Venezuela, the Guianas, Ecuador, Peru, Bolivia, Brazil and Trinidad and Tobago
Eunectes notaeus, the yellow anaconda – a small species, found in eastern Bolivia, southern Brazil, Paraguay, and northeastern Argentina
Eunectes deschauenseei, the darkly-spotted anaconda – a rare species, found in northeastern Brazil and coastal French Guiana
Eunectes beniensis, the Bolivian anaconda – the most recently defined species, found in the Departments of Beni and Pando in Bolivia
The term was previously applied imprecisely, indicating any large snake that constricts its prey, though this usage is now archaic. "Anaconda" is also used as a metaphor for an action aimed at constricting and suffocating an opponent – for example, the Anaconda Plan proposed at the beginning of the American Civil War, in which the Union Army was to effectively "suffocate" the Confederacy. Another example is the anaconda choke in the martial art Brazilian jiu-jitsu, performed by wrapping one's arms under the opponent's neck and through the armpit and grasping the biceps of the opposing arm; an opponent caught in this hold will lose consciousness unless they tap out. See also South American jaguar, a competitor or predator 
824
https://en.wikipedia.org/wiki/Altaic%20languages
Altaic languages
Altaic (; also called Transeurasian) is a sprachbund (i.e. a linguistic area) and proposed language family that would include the Turkic, Mongolic and Tungusic language families and possibly also the Japonic and Koreanic languages. Speakers of these languages are currently scattered over most of Asia north of 35 °N and in some eastern parts of Europe, extending in longitude from Turkey to Japan. The group is named after the Altai mountain range in the center of Asia. The hypothetical language family has long been rejected by most comparative linguists, although it continues to be supported by a small but stable scholarly minority. The Altaic family was first proposed in the 18th century. It was widely accepted until the 1960s and is still listed in many encyclopedias and handbooks. Since the 1950s, many comparative linguists have rejected the proposal, after supposed cognates were found not to be valid, hypothesized sound shifts were not found, and Turkic and Mongolic languages were found to be converging rather than diverging over the centuries. Opponents of the theory proposed that the similarities are due to mutual linguistic influences between the groups concerned. Modern supporters of Altaic acknowledge that many shared features are the result of contact and convergence and thus cannot be taken as evidence for a genetic relationship, but they nevertheless argue that a core of existing correspondences goes back to a common ancestor. The original hypothesis unified only the Turkic, Mongolian, and Tungusic groups. Later proposals to include the Korean and Japanese languages into a "Macro-Altaic" family have always been controversial. The original proposal was sometimes called "Micro-Altaic" by retronymy. Most proponents of Altaic continue to support the inclusion of Korean. A common ancestral Proto-Altaic language for the "Macro" family has been tentatively reconstructed by Sergei Starostin and others. 
Some proposals also included Ainuic, but this is not widely accepted even among Altaicists themselves. Micro-Altaic includes about 66 living languages, to which Macro-Altaic would add Korean, Jeju, Japanese, and the Ryukyuan languages, for a total of about 74 (depending on what is considered a language and what is considered a dialect). These numbers do not include earlier states of languages, such as Middle Mongol, Old Korean, or Old Japanese.
Earliest attestations of the languages
The earliest known texts in a Turkic language are the Orkhon inscriptions, 720–735 AD. They were deciphered in 1893 by the Danish linguist Vilhelm Thomsen in a scholarly race with his rival, the German–Russian linguist Wilhelm Radloff. However, Radloff was the first to publish the inscriptions. The first Tungusic language to be attested is Jurchen, the language of the ancestors of the Manchus. A writing system for it was devised in 1119 AD, and an inscription using this system is known from 1185 (see List of Jurchen inscriptions). The earliest Mongolic language of which we have written evidence is known as Middle Mongol. It is first attested by an inscription dated to 1224 or 1225 AD, the Stele of Yisüngge, and by the Secret History of the Mongols, written in 1228 (see Mongolic languages). The earliest Para-Mongolic text is the Memorial for Yelü Yanning, written in the Khitan large script and dated to 986 AD. However, the Inscription of Hüis Tolgoi, discovered in 1975 and analysed as being in an early form of Mongolic, has been dated to 604–620 AD. The Bugut inscription dates back to 584 AD. Japanese is first attested in the form of names contained in a few short inscriptions in Classical Chinese from the 5th century AD, such as those found on the Inariyama Sword. The first substantial text in Japanese, however, is the Kojiki, which dates from 712 AD. It is followed by the Nihon shoki, completed in 720, and then by the Man'yōshū, which dates from c.
771–785, but includes material that is from about 400 years earlier. The most important text for the study of early Korean is the Hyangga, a collection of 25 poems, of which some go back to the Three Kingdoms period (57 BC–668 AD), but which are preserved in an orthography that only goes back to the 9th century AD. Korean is copiously attested from the mid-15th century on in the phonetically precise Hangul system of writing.
History of the Altaic family concept
Origins
The earliest known reference to a unified language group of Turkic, Mongolic and Tungusic languages is from the 1692 work of Nicolaes Witsen, which may be based on Abu al-Ghazi Bahadur's 1661 work Genealogy of the Turks. A proposed grouping of the Turkic, Mongolic, and Tungusic languages was published in 1730 by Philip Johan von Strahlenberg, a Swedish officer who traveled in the eastern Russian Empire while a prisoner of war after the Great Northern War. However, he may not have intended to imply a closer relationship among those languages.
Uralo-Altaic hypothesis
In 1844, the Finnish philologist Matthias Castrén proposed a broader grouping that later came to be called the Ural–Altaic family, which included Turkic, Mongolian, and Manchu-Tungus (= Tungusic) as an "Altaic" branch, and also the Finno-Ugric and Samoyedic languages as the "Uralic" branch (though Castrén himself used the terms "Tataric" and "Chudic"). The name "Altaic" referred to the Altai Mountains in East-Central Asia, which are approximately the center of the geographic range of the three main families. The name "Uralic" referred to the Ural Mountains. While the Ural–Altaic family hypothesis can still be found in some encyclopedias, atlases, and similar general references, it has been heavily criticized since the 1960s. Even linguists who accept the basic Altaic family, like Sergei Starostin, completely discard the inclusion of the "Uralic" branch.
Korean and Japanese languages
In 1857, the Austrian scholar Anton Boller suggested adding Japanese to the Ural–Altaic family. In the 1920s, G.J. Ramstedt and E.D. Polivanov advocated the inclusion of Korean. Decades later, in his 1952 book, Ramstedt rejected the Ural–Altaic hypothesis but again included Korean in Altaic, an inclusion followed by most leading Altaicists (supporters of the theory) to date. His book contained the first comprehensive attempt to identify regular correspondences among the sound systems within the Altaic language families. In 1960, Nicholas Poppe published what was in effect a heavily revised version of Ramstedt's volume on phonology, which has since set the standard in Altaic studies. Poppe considered the issue of the relationship of Korean to Turkic-Mongolic-Tungusic not settled. In his view, there were three possibilities: (1) Korean did not belong with the other three genealogically, but had been influenced by an Altaic substratum; (2) Korean was related to the other three at the same level they were related to each other; (3) Korean had split off from the other three before they underwent a series of characteristic changes. Roy Andrew Miller's 1971 book Japanese and the Other Altaic Languages convinced most Altaicists that Japanese also belonged to Altaic. Since then, "Macro-Altaic" has generally been assumed to include Turkic, Mongolic, Tungusic, Korean, and Japanese. In 1990, Unger advocated a family consisting of the Tungusic, Korean, and Japonic languages, but not Turkic or Mongolic. However, many linguists dispute the alleged affinities of Korean and Japanese to the other three groups. Some authors instead tried to connect Japanese to the Austronesian languages. In 2017, Martine Robbeets proposed that Japanese (and possibly Korean) originated as a hybrid language. She proposed that the ancestral home of the Turkic, Mongolic, and Tungusic languages was somewhere in northwestern Manchuria.
A group of those proto-Altaic ("Transeurasian") speakers would have migrated south into the modern Liaoning province, where they would have been mostly assimilated by an agricultural community with an Austronesian-like language. The fusion of the two languages would have resulted in proto-Japanese and proto-Korean. In a typological study that does not directly evaluate the validity of the Altaic hypothesis, Yurayong and Szeto (2020) discuss, for Koreanic and Japonic, the stages of convergence to the Altaic typological model and the subsequent divergence from that model, which resulted in the present typological similarity between Koreanic and Japonic. They state that both are "still so different from the Core Altaic languages that we can even speak of an independent Japanese-Korean type of grammar. Given also that there is neither a strong proof of common Proto-Altaic lexical items nor solid regular sound correspondences but, rather, only lexical and structural borrowings between languages of the Altaic typology, our results indirectly speak in favour of a “Paleo-Asiatic” origin of the Japonic and Koreanic languages."
The Ainu language
In 1962, John C. Street proposed an alternative classification, with Turkic-Mongolic-Tungusic in one grouping and Korean-Japanese-Ainu in another, joined in what he designated as the "North Asiatic" family. The inclusion of Ainu was adopted also by James Patrie in 1982. The Turkic-Mongolic-Tungusic and Korean-Japanese-Ainu groupings were also posited in 2000–2002 by Joseph Greenberg. However, he treated them as independent members of a larger family, which he termed Eurasiatic. The inclusion of Ainu is not widely accepted by Altaicists. In fact, no convincing genealogical relationship between Ainu and any other language family has been demonstrated, and it is generally regarded as a language isolate.
Early criticism and rejection
Starting in the late 1950s, some linguists became increasingly critical of even the minimal Altaic family hypothesis, disputing the alleged evidence of a genetic connection between the Turkic, Mongolic and Tungusic languages. Among the earlier critics were Gerard Clauson (1956), Gerhard Doerfer (1963), and Alexander Shcherbak. They claimed that the words and features shared by Turkic, Mongolic, and Tungusic languages were for the most part borrowings and that the rest could be attributed to chance resemblances. In 1988, Doerfer again rejected all the genetic claims over these major groups.
Modern controversy
A major continuing supporter of the Altaic hypothesis has been Sergei Starostin, who published a comparative lexical analysis of the Altaic languages in 1991. He concluded that the analysis supported the Altaic grouping, although it was "older than most other language families in Eurasia, such as Indo-European or Finno-Ugric, and this is the reason why the modern Altaic languages preserve few common elements". In 1991 and again in 1996, Roy Miller defended the Altaic hypothesis and claimed that the criticisms of Clauson and Doerfer apply exclusively to the lexical correspondences, whereas the most pressing evidence for the theory is the similarities in verbal morphology. In 2003, Claus Schönig published a critical overview of the history of the Altaic hypothesis up to that time, siding with the earlier criticisms of Clauson, Doerfer, and Shcherbak. In 2003, Starostin, Anna Dybo and Oleg Mudrak published the Etymological Dictionary of the Altaic Languages, which expanded the 1991 lexical lists and added other phonological and grammatical arguments. Starostin's book was criticized by Stefan Georg in 2004 and 2005, and by Alexander Vovin in 2005. Other defenses of the theory, in response to the criticisms of Georg and Vovin, were published by Starostin in 2005, Blažek in 2006, Robbeets in 2007, and Dybo and G.
Starostin in 2008. In 2010, Lars Johanson echoed Miller's 1996 rebuttal to the critics and called for a muting of the polemic.
List of supporters and critics of the Altaic hypothesis
The list below comprises linguists who have worked specifically on the Altaic problem since the publication of the first volume of Ramstedt's Einführung in 1952. The dates given are those of works concerning Altaic. For supporters of the theory, the version of Altaic they favor is given at the end of the entry, if other than the prevailing one of Turkic–Mongolic–Tungusic–Korean–Japanese.
Major supporters
Pentti Aalto (1955). Turkic–Mongolic–Tungusic–Korean.
Anna V. Dybo (S. Starostin et al. 2003, A. Dybo and G. Starostin 2008).
Frederik Kortlandt (2010).
Karl H. Menges (1975). Common ancestor of Korean, Japanese and traditional Altaic dated back to the 7th or 8th millennium BC (1975: 125).
Roy Andrew Miller (1971, 1980, 1986, 1996). Supported the inclusion of Korean and Japanese.
Oleg A. Mudrak (S. Starostin et al. 2003).
Nicholas Poppe (1965). Turkic–Mongolic–Tungusic and perhaps Korean.
Alexis Manaster Ramer.
Martine Robbeets (2004, 2005, 2007, 2008, 2015) (in the form of "Transeurasian").
G. J. Ramstedt (1952–1957). Turkic–Mongolic–Tungusic–Korean.
George Starostin (A. Dybo and G. Starostin 2008).
Sergei Starostin (1991, S. Starostin et al. 2003).
John C. Street (1962). Turkic–Mongolic–Tungusic and Korean–Japanese–Ainu, grouped as "North Asiatic".
Talat Tekin (1994). Turkic–Mongolic–Tungusic–Korean.
Major critics
Gerard Clauson (1956, 1959, 1962).
Gerhard Doerfer (1963, 1966, 1967, 1968, 1972, 1973, 1974, 1975, 1981, 1985, 1988, 1993).
Susumu Ōno (1970, 2000).
Juha Janhunen (1992, 1995) (tentative support of Mongolic–Tungusic).
Claus Schönig (2003).
Stefan Georg (2004, 2005).
Alexander Vovin (2005, 2010, 2017). Formerly an advocate of Altaic (1994, 1995, 1997, 1999, 2000, 2001), now a critic.
Alexander Shcherbak.
Alexander B. M. Stiven (2008, 2010).
Advocates of alternative hypotheses
James Patrie (1982) and Joseph Greenberg (2000–2002). Turkic–Mongolic–Tungusic and Korean–Japanese–Ainu, grouped in a common taxon (cf. John C. Street 1962), called Eurasiatic by Greenberg.
J. Marshall Unger (1990). Tungusic–Korean–Japanese ("Macro-Tungusic"), with Turkic and Mongolic as separate language families.
Lars Johanson (2010). Agnostic, proponent of a "Transeurasian" verbal morphology not necessarily genealogically linked.
Languages
Tungusic languages
With fewer speakers than the Mongolic or Turkic languages, the Tungusic languages are distributed across most of eastern Siberia (including Sakhalin Island) and northern Manchuria, extending into some parts of Xinjiang and Mongolia. Some Tungusic languages are extinct or endangered as a consequence of language shift to Chinese and Russian. In China, where the Tungusic population is over 10 million, just 46,000 still retain knowledge of their ethnic languages. Scholars have yet to reach agreement on how to classify the Tungusic languages, but two subfamilies have been proposed: South Tungusic (or Manchu) and North Tungusic (Tungus). Jurchen (now extinct; Da Jin 大金), Manchu (critically endangered; Da Qing 大清), Sibe (Xibo 锡伯) and other minor languages make up the Manchu group. The Northern Tungusic languages can be subdivided further into the Siberian Tungusic languages (Evenki, Lamut, Solon and Negidal) and the Lower Amur Tungusic languages (Nanai, Ulcha and Orok, among others). Significant disagreements remain, not only about the linguistic sub-classification but also about the Chinese names of some ethnic groups, such as the use of Hezhe (赫哲) for the Nanai people.
Mongolic languages
Mongolic languages are spoken in three geographic areas: Russia (especially Siberia), China and Mongolia. Although Russia and China host significant Mongol populations, many of the Mongol people in these countries do not speak their own ethnic language.
They are usually sub-classified into two groups: the Western languages (Oirat, Kalmyk and related dialects) and the Eastern languages. The latter group can be further subdivided as follows:
Southern Mongol: Ordos, Chakhar and Khorchin
Central Mongol: Khalkha, Darkhat
Northern Mongol: Buriat and its dialects, Khamnigan
There are also additional archaic and obscure languages within these groups: Moghol (Afghanistan), Dagur (Manchuria) and the languages associated with Gansu and Qinghai. Linguistically, two branches emerge: Common Mongolic and Khitan/Serbi (sometimes called "para-Mongolic"). Of the latter, only Dagur survives into the present day.
Arguments
For the Altaic grouping
Phonological and grammatical features
The original arguments for grouping the "micro-Altaic" languages within a Uralo-Altaic family were based on such shared features as vowel harmony and agglutination. According to Roy Miller, the most pressing evidence for the theory is the similarities in verbal morphology. The Etymological Dictionary by Starostin and others (2003) proposes a set of sound change laws that would explain the evolution from Proto-Altaic to the descendant languages. For example, although most of today's Altaic languages have vowel harmony, Proto-Altaic as reconstructed by them lacked it; instead, various vowel assimilations between the first and second syllables of words occurred in Turkic, Mongolic, Tungusic, Korean, and Japonic. They also included a number of grammatical correspondences between the languages.
Shared lexicon
Starostin claimed in 1991 that the members of the proposed Altaic group shared about 15–20% of apparent cognates within a 110-word Swadesh–Yakhontov list; in particular, Turkic–Mongolic 20%, Turkic–Tungusic 18%, Turkic–Korean 17%, Mongolic–Tungusic 22%, Mongolic–Korean 16%, and Tungusic–Korean 21%. The 2003 Etymological Dictionary includes a list of 2,800 proposed cognate sets, as well as a few important changes to the reconstruction of Proto-Altaic.
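Starostin's pairwise figures can be given a quick consistency check. The short Python sketch below (all variable names are illustrative, not from the source) stores the quoted percentages, verifies that every unordered pair of the four branches he compared is covered, and confirms that their mean falls inside the 15–20% range he claimed:

```python
from itertools import combinations

# Pairwise shared-cognate percentages on a 110-word Swadesh-Yakhontov list,
# as quoted from Starostin (1991) above. Keys are unordered family pairs.
cognate_pct = {
    frozenset({"Turkic", "Mongolic"}): 20,
    frozenset({"Turkic", "Tungusic"}): 18,
    frozenset({"Turkic", "Korean"}): 17,
    frozenset({"Mongolic", "Tungusic"}): 22,
    frozenset({"Mongolic", "Korean"}): 16,
    frozenset({"Tungusic", "Korean"}): 21,
}

families = {"Turkic", "Mongolic", "Tungusic", "Korean"}

# Every unordered pair of the four families should be present exactly once.
assert set(cognate_pct) == {frozenset(p) for p in combinations(families, 2)}

# The mean of the six pairs falls inside the claimed 15-20% range.
mean_pct = sum(cognate_pct.values()) / len(cognate_pct)
print(f"mean shared cognates: {mean_pct:.1f}%")
```

The mean works out to 19.0%, consistent with the "about 15–20%" figure in the text; note that such raw percentages say nothing by themselves about whether the matches are cognates or loans, which is precisely the point of contention in the sections that follow.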
The authors tried hard to distinguish loans between Turkic and Mongolic and between Mongolic and Tungusic from cognates, and suggest words that occur in Turkic and Tungusic but not in Mongolic. All other combinations between the five branches also occur in the book. It lists 144 items of shared basic vocabulary, including words for such items as 'eye', 'ear', 'neck', 'bone', 'blood', 'water', 'stone', 'sun', and 'two'. Robbeets and Bouckaert (2018) use Bayesian phylolinguistic methods to argue for the coherence of the "narrow" Altaic languages (Turkic, Mongolic, and Tungusic) together with Japonic and Koreanic, which they refer to as the Transeurasian languages; their results include a phylogenetic tree of these families. Martine Robbeets (2020) argues that early Transeurasian speakers were originally agriculturalists in northeastern China, only becoming pastoralists later on. Some lexical reconstructions of agricultural terms by Robbeets (2020) are listed below.
Abbreviations:
PTEA = Proto-Transeurasian
PA = Proto-Altaic
PTk = Proto-Turkic
PMo = Proto-Mongolic
PTg = Proto-Tungusic
PJK = Proto-Japano-Koreanic
PK = Proto-Koreanic
PJ = Proto-Japonic
Additional family-level reconstructions of agricultural vocabulary from Robbeets et al. (2020):
Proto-Turkic *ek- ‘to sprinkle with the hand; sow’ > *ek-e.g. ‘plow’
Proto-Turkic *tarï- ‘to cultivate (the ground)’ > *tarï-g ‘what is cultivated; crops, main crop, cultivated land’
Proto-Turkic *ko- ‘to put’ > *koːn- ‘to settle down (of animals), to take up residence (of people), to be planted (of plants)’ > *konak ‘foxtail millet (Setaria italica)’
Proto-Turkic *tög- ‘to hit, beat; to pound, crush (food in a mortar); to husk, thresh (cereals)’ > *tögi ‘husked millet; husked rice’
Proto-Turkic *ügür ‘(broomcorn) millet’
Proto-Turkic *arpa ‘barley (Hordeum vulgare)’ < ? Proto-Iranian *arbusā ‘barley’
Proto-Mongolic *amun ‘cereals; broomcorn millet (Panicum miliaceum)’ (Nugteren 2011: 268)
Proto-Mongolic *konag ‘foxtail millet’ < PTk *konak ‘foxtail millet (Setaria italica)’
Proto-Mongolic *budaga ‘cooked cereals; porridge; meal’
Proto-Mongolic *tari- ‘to sow, plant’ (Nugteren 2011: 512–13)
Proto-Macro-Mongolic *püre ‘seed; descendants’
Proto-Tungusic *pisi-ke ‘broomcorn millet (Panicum miliaceum)’
Proto-Tungusic *jiya- ‘foxtail millet (Setaria italica)’
Proto-Tungusic *murgi ‘barley (Hordeum vulgare)’
Proto-Tungusic *üse- ~ *üsi- ‘to plant’, üse ~ üsi ‘seed, seedling’, üsi-n ‘field for cultivation’
Proto-Tungusic *tari- ‘to sow, to plant’
Proto-Koreanic *pisi ‘seed’, *pihi ‘barnyard millet’ < Proto-Transeurasian (PTEA) *pisi-i (sow-NMLZ) ‘seed’ ~ *pisi-ke (sow-RES.NMLZ) ‘what is sown, major crop’
Proto-Koreanic *patʌ-k ‘dry field’ < Proto-Japano-Koreanic (PJK) *pata ‘dry field’ < PTEA *pata ‘field for cultivation’
Proto-Koreanic *mutʌ-k ‘dry land’ < PJK *muta ‘land’ < PTEA *mudu ‘uncultivated land’
Proto-Koreanic *mat-ʌk ‘garden plot’ < PJK *mat ‘plot of land for cultivation’
Proto-Koreanic *non ‘rice paddy field’ < PJK *non ‘field’
Proto-Koreanic *pap ‘any boiled preparation of cereal; boiled rice’
Proto-Koreanic *pʌsal ‘hulled (of any grain); hulled corn of grain; hulled rice’ < Proto-Japonic *wasa-ra ‘early ripening (of any grain)’
Proto-Koreanic *ipi > *pi > *pye ‘(unhusked) rice’ < Proto-Japonic *ip-i (eat-NMLZ) ‘cooked millet, steamed rice’
Proto-Japonic *nuka ‘rice bran’ < PJ *nuka- (remove.NMLZ)
Proto-Japonic *məmi ‘hulled rice’ < PJ *məm-i (move.back.and.forth.with.force-NMLZ)
Proto-Japonic *ipi ‘cooked millet, steamed rice’ < *ip-i (eat-NMLZ) < PK *me(k)i ‘rice offered to a higher rank’ < *mek-i (eat-NMLZ) ‘what you eat, food’ < Proto-Austronesian *ka-en (eat-OBJ.NMLZ)
Proto-Japonic *wasa- ~ *wəsə- ‘to be early ripening (of crops); an early ripening variety (of any crop); early-ripening rice plant’
Proto-Japonic *usu ‘(rice and grain) mortar’ < Para-Austronesian *lusuŋ ‘(rice) mortar’; cf. Proto-Austronesian *lusuŋ ‘(rice) mortar’
Proto-Japonic *kəmai ‘dehusked rice’ < Para-Austronesian *hemay < Proto-Macro-Austronesian *Semay ‘cooked rice’; cf. Proto-Austronesian *Semay ‘cooked rice’
Against the grouping
Weakness of lexical and typological data
According to G. Clauson (1956), G. Doerfer (1963), and A. Shcherbak (1963), many of the typological features of the supposed Altaic languages, particularly agglutinative, strongly suffixing morphology and subject–object–verb (SOV) word order, often occur together in languages, and so are weak evidence of a genetic relationship. Those critics also argued that the words and features shared by Turkic, Mongolic, and Tungusic languages were for the most part borrowings and that the rest could be attributed to chance resemblances. They noted that there was little vocabulary shared by Turkic and Tungusic languages, though more was shared with Mongolic languages. They reasoned that, if all three families had a common ancestor, we should expect losses to happen at random, and not only at the geographical margins of the family, and that the observed pattern is consistent with borrowing. According to C. Schönig (2003), after accounting for areal effects, the shared lexicon that could have a common genetic origin was reduced to a small number of monosyllabic lexical roots, including the personal pronouns and a few other deictic and auxiliary items, whose sharing could be explained in other ways; this is not the kind of sharing expected in cases of genetic relationship.
The Sprachbund hypothesis
Instead of a common genetic origin, Clauson, Doerfer, and Shcherbak proposed (in 1956–1966) that Turkic, Mongolic, and Tungusic languages form a Sprachbund: a set of languages with similarities due to convergence through intensive borrowing and long contact, rather than common origin.
Asya Pereltsvaig further observed in 2011 that, in general, genetically related languages and families tend to diverge over time: the earlier forms are more similar than the modern forms. However, she claims that an analysis of the earliest written records of Mongolic and Turkic languages shows the opposite, suggesting that they do not share a common traceable ancestor, but rather have become more similar through language contact and areal effects.
Hypothesis about the original homeland
The prehistory of the peoples speaking the "Altaic" languages is largely unknown. Whereas for certain other language families, such as the speakers of Indo-European, Uralic, and Austronesian, it is possible to frame substantial hypotheses, in the case of the proposed Altaic family much remains to be done. Some scholars have hypothesised a possible Uralic and Altaic homeland in the Central Asian steppes. According to Juha Janhunen, the ancestral languages of Turkic, Mongolic, Tungusic, Korean, and Japanese were spoken in a relatively small area comprising present-day North Korea, southern Manchuria, and southeastern Mongolia. However, Janhunen is sceptical about an affiliation of Japanese to Altaic, while András Róna-Tas remarked that a relationship between Altaic and Japanese, if it ever existed, must be more remote than the relationship between any two of the Indo-European languages. Ramsey stated that "the genetic relationship between Korean and Japanese, if it in fact exists, is probably more complex and distant than we can imagine on the basis of our present state of knowledge". Supporters of the Altaic hypothesis formerly set the date of the Proto-Altaic language at around 4000 BC, but today at around 5000 BC or 6000 BC. This would make Altaic a language family older than Indo-European (around 3000 to 4000 BC according to mainstream hypotheses) but considerably younger than Afroasiatic (c. 10,000 BC, or 11,000 to 16,000 BC according to different sources).
See also
Classification of the Japonic languages
Nostratic languages
Pan-Turanism
Turco-Mongol
Uralo-Siberian languages
Xiongnu
Comparison of Japanese and Korean
References
Citations
Sources
Aalto, Pentti. 1955. "On the Altaic initial *p-." Central Asiatic Journal 1, 9–16.
Anonymous. 2008. [title missing]. Bulletin of the Society for the Study of the Indigenous Languages of the Americas, 31 March 2008, 264.
Anthony, David W. 2007. The Horse, the Wheel, and Language. Princeton: Princeton University Press.
Boller, Anton. 1857. Nachweis, daß das Japanische zum ural-altaischen Stamme gehört. Wien.
Clauson, Gerard. 1959. "The case for the Altaic theory examined." Akten des vierundzwanzigsten internationalen Orientalisten-Kongresses, edited by H. Franke. Wiesbaden: Deutsche Morgenländische Gesellschaft, in Komission bei Franz Steiner Verlag.
Clauson, Gerard. 1968. "A lexicostatistical appraisal of the Altaic theory." Central Asiatic Journal 13: 1–23.
Doerfer, Gerhard. 1973. "Lautgesetze und Zufall: Betrachtungen zum Omnicomparativismus." Innsbrucker Beiträge zur Sprachwissenschaft 10.
Doerfer, Gerhard. 1974. "Ist das Japanische mit den altaischen Sprachen verwandt?" Zeitschrift der Deutschen Morgenländischen Gesellschaft 114.1.
Doerfer, Gerhard. 1985. Mongolica-Tungusica. Wiesbaden: Otto Harrassowitz.
Georg, Stefan. 1999/2000. "Haupt und Glieder der altaischen Hypothese: die Körperteilbezeichnungen im Türkischen, Mongolischen und Tungusischen" ('Head and members of the Altaic hypothesis: the body-part designations in Turkic, Mongolic, and Tungusic'). Ural-altaische Jahrbücher, neue Folge B 16, 143–182.
Lee, Ki-Moon and S. Robert Ramsey. 2011. A History of the Korean Language. Cambridge: Cambridge University Press.
Menges, Karl H. 1975. Altajische Studien II. Japanisch und Altajisch. Wiesbaden: Franz Steiner Verlag.
Miller, Roy Andrew. 1980. Origins of the Japanese Language: Lectures in Japan during the Academic Year 1977–1978. Seattle: University of Washington Press.
Ramstedt, G.J. 1952. Einführung in die altaische Sprachwissenschaft I. Lautlehre ('Introduction to Altaic Linguistics, Volume 1: Phonology'), edited and published by Pentti Aalto. Helsinki: Suomalais-Ugrilainen Seura.
Ramstedt, G.J. 1957. Einführung in die altaische Sprachwissenschaft II. Formenlehre ('Introduction to Altaic Linguistics, Volume 2: Morphology'), edited and published by Pentti Aalto. Helsinki: Suomalais-Ugrilainen Seura.
Ramstedt, G.J. 1966. Einführung in die altaische Sprachwissenschaft III. Register ('Introduction to Altaic Linguistics, Volume 3: Index'), edited and published by Pentti Aalto. Helsinki: Suomalais-Ugrilainen Seura.
Robbeets, Martine. 2004. "Swadesh 100 on Japanese, Korean and Altaic." Tokyo University Linguistic Papers, TULIP 23, 99–118.
Robbeets, Martine. 2005. Is Japanese Related to Korean, Tungusic, Mongolic and Turkic? Wiesbaden: Otto Harrassowitz.
Strahlenberg, P.J.T. von. 1730. Das nord- und ostliche Theil von Europa und Asia.... Stockholm. (Reprint: 1975. Studia Uralo-Altaica. Szeged and Amsterdam.)
Strahlenberg, P.J.T. von. 1738. Russia, Siberia and Great Tartary, an Historico-geographical Description of the North and Eastern Parts of Europe and Asia.... (Reprint: 1970. New York: Arno Press.) English translation of the previous.
Tekin, Talat. 1994. "Altaic languages." In The Encyclopedia of Language and Linguistics, Vol. 1, edited by R.E. Asher. Oxford and New York: Pergamon Press.
Vovin, Alexander. 1993. "About the phonetic value of the Middle Korean grapheme ᅀ." Bulletin of the School of Oriental and African Studies 56(2), 247–259.
Vovin, Alexander. 1994. "Genetic affiliation of Japanese and methodology of linguistic comparison." Journal de la Société finno-ougrienne 85, 241–256.
Vovin, Alexander. 2001. "Japanese, Korean, and Tungusic: evidence for genetic relationship from verbal morphology." Altaic Affinities (Proceedings of the 40th Meeting of PIAC, Provo, Utah, 1997), edited by David B. Honey and David C. Wright, 83–202. Indiana University, Research Institute for Inner Asian Studies.
Vovin, Alexander. 2010. Koreo-Japonica: A Re-Evaluation of a Common Genetic Origin. University of Hawaii Press.
Whitney Coolidge, Jennifer. 2005. Southern Turkmenistan in the Neolithic: A Petrographic Case Study. Oxbow Books.
Further reading
Greenberg, Joseph H. 1997. "Does Altaic exist?" In Irén Hegedus, Peter A. Michalove, and Alexis Manaster Ramer (editors), Indo-European, Nostratic and Beyond: A Festschrift for Vitaly V. Shevoroshkin, Washington, DC: Institute for the Study of Man, 88–93. (Reprinted in Joseph H. Greenberg, Genetic Linguistics, Oxford: Oxford University Press, 2005, 325–330.)
Hahn, Reinhard F. 1994. LINGUIST List 5.908, 18 August 1994.
Janhunen, Juha. 1995. "Prolegomena to a Comparative Analysis of Mongolic and Tungusic." Proceedings of the 38th Permanent International Altaistic Conference (PIAC), 209–218. Wiesbaden: Harrassowitz.
Johanson, Lars. 1999. "Cognates and copies in Altaic verb derivation." Language and Literature – Japanese and the Other Altaic Languages: Studies in Honour of Roy Andrew Miller on His 75th Birthday, edited by Karl H. Menges and Nelly Naumann, 1–13. Wiesbaden: Otto Harrassowitz.
Johanson, Lars. 1999. "Attractiveness and relatedness: Notes on Turkic language contacts." Proceedings of the Twenty-fifth Annual Meeting of the Berkeley Linguistics Society: Special Session on Caucasian, Dravidian, and Turkic Linguistics, edited by Jeff Good and Alan C.L. Yu, 87–94. Berkeley: Berkeley Linguistics Society.
Johanson, Lars. 2002. Structural Factors in Turkic Language Contacts, translated by Vanessa Karam. Richmond, Surrey: Curzon Press.
Kortlandt, Frederik. 1993. "The origin of the Japanese and Korean accent systems." Acta Linguistica Hafniensia 26, 57–65.
Robbeets, Martine. 2004. "Belief or argument? The classification of the Japanese language." Eurasia Newsletter 8. Graduate School of Letters, Kyoto University.
Ruhlen, Merritt. 1987. A Guide to the World's Languages. Stanford University Press.
Sinor, Denis. 1990. Essays in Comparative Altaic Linguistics. Bloomington: Indiana University, Research Institute for Inner Asian Studies.
Vovin, Alexander. 2009. "Japanese, Korean, and other 'non-Altaic' languages." Central Asiatic Journal 53(1): 105–147.
External links
Altaic at the Linguist List MultiTree Project (not functional as of 2014): genealogical trees attributed to Ramstedt 1957, Miller 1971, and Poppe 1982
Swadesh vocabulary lists for Altaic languages (from Wiktionary's Swadesh-list appendix)
Monumenta altaica: Altaic linguistics website, maintained by Ilya Gruntov
Altaic Etymological Dictionary, database version by Sergei A. Starostin, Anna V. Dybo, and Oleg A. Mudrak (does not include introductory chapters)
LINGUIST List 5.911: defense of Altaic by Alexis Manaster Ramer (1994)
LINGUIST List 5.926: 1. Remarks by Alexander Vovin. 2. Clarification by J. Marshall Unger. (1994)
825
https://en.wikipedia.org/wiki/Austrian%20German
Austrian German
Austrian German, Austrian Standard German (ASG), Standard Austrian German, or Austrian High German, is the variety of Standard German written and spoken in Austria. It has the highest sociolinguistic prestige locally, as it is the variety used in the media and in other formal situations. In less formal situations, Austrians tend to use forms closer to or identical with the Bavarian and Alemannic dialects, traditionally spoken – but rarely written – in Austria.
History
German in Austria (Austrian German) has its beginnings in the mid-18th century, when Empress Maria Theresa and her son Joseph II introduced compulsory schooling (in 1774) and several reforms of administration in their multilingual Habsburg empire. At the time, the written standard was Oberdeutsche Schreibsprache (Upper German written language), which was highly influenced by the Bavarian and Alemannic dialects of Austria. Another option was to create a new standard based on the Southern German dialects, as proposed by the linguist Johann Siegmund Popowitsch. Instead, they decided for pragmatic reasons to adopt the already standardized chancellery language of Saxony (Sächsische Kanzleisprache or Meißner Kanzleideutsch), which was based on the administrative language of the non-Austrian area of Meißen and Dresden. Austrian High German (Hochdeutsch in Österreich, not to be confused with the Bavarian Austrian German dialects) has the same geographic origin as Swiss High German (Schweizer Hochdeutsch, not to be confused with the Alemannic Swiss German dialects). The process of introducing the new written standard was led by Joseph von Sonnenfels. Since 1951, the standardized form of Austrian German for official texts and schools has been defined by the Austrian Dictionary (Österreichisches Wörterbuch), published under the authority of the Austrian Federal Ministry of Education, Arts and Culture.
General situation of German
As German is a pluricentric language, Austrian German is one among several varieties of German.
Much like the relationship between British English and American English, the German varieties differ in minor respects (e.g., spelling, word usage and grammar) but are recognizably equivalent and largely mutually intelligible. Standard Austrian German in Austria The official Austrian dictionary, das Österreichische Wörterbuch, prescribes grammatical and spelling rules defining the official language. Austrian delegates participated in the international working group that drafted the German spelling reform of 1996—several conferences leading up to the reform were hosted in Vienna at the invitation of the Austrian federal government—and adopted it as a signatory, along with Germany, Switzerland, and Liechtenstein, of an international memorandum of understanding signed in Vienna in 1996. The eszett or "sharp s" (ß) is used in Austria, as in Germany (but unlike in Switzerland). Because of the German language's pluricentric nature, German dialects in Austria should not be confused with the variety of Standard Austrian German spoken by most Austrians, which is distinct from that of Germany or Switzerland. Distinctions in vocabulary persist, for example in culinary terms, where communication with Germans is frequently difficult, and in administrative and legal language, owing to Austria's exclusion from the development of a German nation-state in the late 19th century and its manifold particular traditions. A comprehensive collection of Austrian-German legal, administrative and economic terms is offered in Markhardt, Heidemarie: Wörterbuch der österreichischen Rechts-, Wirtschafts- und Verwaltungsterminologie (Peter Lang, 2006). Former spoken standard Until 1918, the spoken standard in Austria was a sociolect spoken by the imperial Habsburg family and the nobility of Austria-Hungary. The dialect was similar to Viennese German and other eastern dialects of German spoken in Austria, but was slightly nasalized. 
Special written forms For many years, Austria had a special form of the language for official government documents, known as the "Austrian chancellery language". It is a very traditional form of the language, probably derived from medieval deeds and documents, and has a very complex structure and vocabulary generally reserved for such documents. For most speakers (even native speakers), this form of the language is generally difficult to understand, as it contains many highly specialised terms for diplomatic, internal, official, and military matters. There are no regional variations, because this special written form has mainly been used by a government that has now for centuries been based in Vienna. The chancellery language is now used less and less, thanks to various administrative reforms that reduced the number of traditional civil servants. As a result, Standard Austrian German is replacing it in government and administrative texts. European Union When Austria became a member of the European Union, 23 food-related terms were listed in its accession agreement as having the same legal status as the equivalent terms used in Germany, for example, the words for "potato", "tomato", and "Brussels sprouts". (Examples in "Vocabulary") Austrian German is the only variety of a pluricentric language recognized under international law or EU primary law. Grammar Verbs In Austria, as in the German-speaking parts of Switzerland and in southern Germany, verbs that express a state, as well as verbs of movement, tend to use sein ("to be") as the auxiliary verb in the perfect. Verbs which fall into this category include sitzen (to sit), liegen (to lie) and, in parts of Carinthia, schlafen (to sleep). Therefore, the perfect of these verbs would be ich bin gesessen, ich bin gelegen and ich bin geschlafen respectively. In Germany, the words stehen (to stand) and gestehen (to confess) are identical in the present perfect: habe gestanden. 
The Austrian variant avoids this potential ambiguity (bin gestanden from stehen, "to stand"; and habe gestanden from gestehen, "to confess", e.g. "der Verbrecher ist vor dem Richter gestanden und hat gestanden"). In addition, the preterite (simple past) is very rarely used in Austria, especially in the spoken language, with the exception of some modal verbs (e.g. ich sollte, ich wollte). Vocabulary There are many official terms that differ in Austrian German from their usage in most parts of Germany. Words used in Austria are Jänner (January) rather than Januar, Feber (seldom, February) along with Februar, heuer (this year) along with dieses Jahr, Stiege (stairs) along with Treppen, Rauchfang (chimney) instead of Schornstein, many administrative, legal and political terms, and many food terms. There are, however, some false friends between the two regional varieties: Kasten (wardrobe) along with or instead of Schrank (and, similarly, Eiskasten along with Kühlschrank, fridge), as opposed to Kiste (box) instead of Kasten. Kiste in Germany means both "box" and "chest". Sessel (chair) instead of Stuhl. Sessel means "armchair" in Germany and Stuhl means "stool (faeces)" in both varieties. Dialects Classification Most dialects spoken in Austria belong to the Austro-Bavarian group, which also comprises dialects from Bavaria: Central Austro-Bavarian (along the main rivers Isar and Danube, spoken in the northern parts of the State of Salzburg, Upper Austria, Lower Austria, and northern Burgenland), which includes Viennese German, and Southern Austro-Bavarian (in Tyrol, South Tyrol, Carinthia, Styria, and the southern parts of Salzburg and Burgenland). Vorarlbergerisch, spoken in Vorarlberg, is a High Alemannic dialect. Regional accents In addition to the standard variety, in everyday life most Austrians speak one of a number of Upper German dialects. 
While strong forms of the various dialects are not fully mutually intelligible to northern Germans, communication is much easier in Bavaria, especially in rural areas, where the Bavarian dialect still predominates as the mother tongue. The Central Austro-Bavarian dialects are more intelligible to speakers of Standard German than the Southern Austro-Bavarian dialects of Tyrol. Viennese, the Austro-Bavarian dialect of Vienna, is seen by many in Germany as quintessentially Austrian. The people of Graz, the capital of Styria, speak yet another dialect, which is not very Styrian and is more easily understood by people from other parts of Austria than other Styrian dialects, such as those of western Styria. Simple words in the various dialects are very similar, but pronunciation is distinct for each, and, after listening to a few spoken words, it may be possible for an Austrian to recognise which dialect is being spoken. The dialects of the deeper valleys of the Tyrol, however, are often unintelligible even to other Tyroleans. Speakers from the different states of Austria can easily be distinguished from each other by their particular accents (probably more so than Bavarians), those of Carinthia, Styria, Vienna, Upper Austria, and the Tyrol being very characteristic. Speakers from those regions, even those speaking Standard German, can usually be easily identified by their accent, even by an untrained listener. Several of the dialects have been influenced by contact with non-Germanic linguistic groups, such as the dialect of Carinthia, where, in the past, many speakers were bilingual in Slovene (and, in the southeastern portions of the state, many still are today), and the dialect of Vienna, which has been influenced by immigration during the Austro-Hungarian period, particularly from what is today Czechia. 
The German dialects of South Tyrol have been influenced by the local Romance languages, as is particularly noticeable in the many loanwords from Italian and Ladin. The geographic borderlines between the different accents (isoglosses) coincide strongly with the borders of the states, and also with the border with Bavaria, Bavarians having a markedly different rhythm of speech in spite of the linguistic similarities.

Further reading
Die deutsche Sprache in Deutschland, Österreich und der Schweiz: Das Problem der nationalen Varietäten. de Gruyter, Berlin/New York 1995.
Ammon, Ulrich / Hans Bickel, Jakob Ebner u. a.: Variantenwörterbuch des Deutschen. Die Standardsprache in Österreich, der Schweiz und Deutschland sowie in Liechtenstein, Luxemburg, Ostbelgien und Südtirol. Berlin/New York 2004.
Dollinger, Stefan: Österreichisches Deutsch oder Deutsch in Österreich? Identitäten im 21. Jahrhundert. New Academic Press, 2021. ISBN 978-3-99036-023-1.
Grzega, Joachim: „Deutschländisch und Österreichisches Deutsch: Mehr Unterschiede als nur in Wortschatz und Aussprache.“ In: Joachim Grzega: Sprachwissenschaft ohne Fachchinesisch. Shaker, Aachen 2001, S. 7–26.
Grzega, Joachim: "On the Description of National Varieties: Examples from (German and Austrian) German and (English and American) English". In: Linguistik Online 7 (2000).
Grzega, Joachim: "Nonchalance als Merkmal des Österreichischen Deutsch". In: Muttersprache 113 (2003): 242–254.
Muhr, Rudolf / Schrodt, Richard: Österreichisches Deutsch und andere nationale Varietäten plurizentrischer Sprachen in Europa. Wien, 1997.
Muhr, Rudolf / Schrodt, Richard / Wiesinger, Peter (eds.): Österreichisches Deutsch: Linguistische, sozialpsychologische und sprachpolitische Aspekte einer nationalen Variante des Deutschen. Wien, 1995. 
Pohl, Heinz Dieter: „Österreichische Identität und österreichisches Deutsch“. In: Kärntner Jahrbuch für Politik 1999.
Wiesinger, Peter: Die deutsche Sprache in Österreich. Eine Einführung. In: Wiesinger (Hg.): Das österreichische Deutsch. Schriften zur deutschen Sprache. Band 12. Wien/Köln/Graz: Böhlau, 1988.

External links
Austrian German – German Dictionary
Das Österreichische Volkswörterbuch
Axiom of choice
In mathematics, the axiom of choice, or AC, is an axiom of set theory equivalent to the statement that a Cartesian product of a collection of non-empty sets is non-empty. Informally put, the axiom of choice says that given any collection of bins, each containing at least one object, it is possible to make a selection of exactly one object from each bin, even if the collection is infinite. Formally, it states that for every indexed family $(S_i)_{i \in I}$ of nonempty sets, there exists an indexed family $(x_i)_{i \in I}$ of elements such that $x_i \in S_i$ for every $i \in I$. The axiom of choice was formulated in 1904 by Ernst Zermelo in order to formalize his proof of the well-ordering theorem. In many cases, such a selection can be made without invoking the axiom of choice; this is in particular the case if the number of sets is finite, or if a selection rule is available – some distinguishing property that happens to hold for exactly one element in each set. An illustrative example is sets picked from the natural numbers. From such sets, one may always select the smallest number, e.g. given the sets {{4, 5, 6}, {10, 12}, {1, 400, 617, 8000}}, the set containing each smallest element is {4, 10, 1}. In this case, "select the smallest number" is a choice function. Even if infinitely many sets were collected from the natural numbers, it will always be possible to choose the smallest element from each set to produce a set. That is, the choice function provides the set of chosen elements. However, no choice function is known for the collection of all non-empty subsets of the real numbers (if there are non-constructible reals). In that case, the axiom of choice must be invoked. Bertrand Russell offered an analogy: for any (even infinite) collection of pairs of shoes, one can pick out the left shoe from each pair to obtain an appropriate selection; this makes it possible to directly define a choice function. 
For an infinite collection of pairs of socks (assumed to have no distinguishing features), there is no obvious way to make a function that selects one sock from each pair, without invoking the axiom of choice. Although originally controversial, the axiom of choice is now used without reservation by most mathematicians, and it is included in the standard form of axiomatic set theory, Zermelo–Fraenkel set theory with the axiom of choice (ZFC). One motivation for this use is that a number of generally accepted mathematical results, such as Tychonoff's theorem, require the axiom of choice for their proofs. Contemporary set theorists also study axioms that are not compatible with the axiom of choice, such as the axiom of determinacy. The axiom of choice is avoided in some varieties of constructive mathematics, although there are varieties of constructive mathematics in which the axiom of choice is embraced. Statement A choice function (also called selector or selection) is a function f, defined on a collection X of nonempty sets, such that for every set A in X, f(A) is an element of A. With this concept, the axiom can be stated: For any set X of nonempty sets, there exists a choice function f that is defined on X and maps each set of X to an element of that set. Formally, this may be expressed as follows: $\forall X \left[ \varnothing \notin X \implies \exists f\colon X \to \bigcup X \;\; \forall A \in X \, (f(A) \in A) \right]$. Thus, the negation of the axiom of choice states that there exists a collection of nonempty sets that has no choice function ($\lnot\text{AC}$, so $\exists X \left[ \varnothing \notin X \,\land\, \forall f\colon X \to \bigcup X \;\; \exists A \in X \, (f(A) \notin A) \right]$, where $\lnot$ is negation.) Each choice function on a collection X of nonempty sets is an element of the Cartesian product of the sets in X. This is not the most general situation of a Cartesian product of a family of sets, where a given set can occur more than once as a factor; however, one can focus on elements of such a product that select the same element every time a given set appears as factor, and such elements correspond to an element of the Cartesian product of all distinct sets in the family. The axiom of choice asserts the existence of such elements; it is therefore equivalent to: Given any family of nonempty sets, their Cartesian product is a nonempty set. 
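For finitely many sets, the correspondence between choice functions and elements of the Cartesian product can be checked directly. The following sketch (illustrative only; the axiom is needed precisely for infinite families with no definable selection rule, which no finite computation can capture) enumerates the product of three small sets and confirms that each tuple selects one element from each factor:

```python
# Each element of a Cartesian product of finitely many non-empty sets
# is exactly a choice function: its i-th coordinate is the element
# chosen from the i-th set.
from itertools import product

sets = [{1, 2}, {3}, {4, 5}]
tuples = list(product(*sets))

# Every tuple selects one element from each factor...
assert all(x in s for t in tuples for x, s in zip(t, sets))

# ...and the product is non-empty because every factor is non-empty.
print(len(tuples))  # 2 * 1 * 2 = 4 choice functions
```

Here the product is non-empty simply because each factor is; the axiom of choice asserts the same conclusion for arbitrary, possibly infinite, families.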
Nomenclature ZF, AC, and ZFC In this article and other discussions of the Axiom of Choice the following abbreviations are common: AC – the Axiom of Choice. ZF – Zermelo–Fraenkel set theory omitting the Axiom of Choice. ZFC – Zermelo–Fraenkel set theory, extended to include the Axiom of Choice. Variants There are many other equivalent statements of the axiom of choice. These are equivalent in the sense that, in the presence of other basic axioms of set theory, they imply the axiom of choice and are implied by it. One variation avoids the use of choice functions by, in effect, replacing each choice function with its range. Given any set X of pairwise disjoint non-empty sets, there exists at least one set C that contains exactly one element in common with each of the sets in X. This guarantees for any partition of a set X the existence of a subset C of X containing exactly one element from each part of the partition. Another equivalent axiom only considers collections X that are essentially powersets of other sets: For any set A, the power set of A (with the empty set removed) has a choice function. Authors who use this formulation often speak of the choice function on A, but this is a slightly different notion of choice function. Its domain is the power set of A (with the empty set removed), and so makes sense for any set A, whereas with the definition used elsewhere in this article, the domain of a choice function on a collection of sets is that collection, and so only makes sense for sets of sets. With this alternate notion of choice function, the axiom of choice can be compactly stated as Every set has a choice function. which is equivalent to For any set A there is a function f such that for any non-empty subset B of A, f(B) lies in B. The negation of the axiom can thus be expressed as: There is a set A such that for all functions f (on the set of non-empty subsets of A), there is a B such that f(B) does not lie in B. 
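The "every set has a choice function" formulation above can be verified by brute force when the set is finite. In this sketch, the set A and the "least element" rule are illustrative choices; the point is only that f is defined on every non-empty subset of A and satisfies f(B) ∈ B:

```python
# Build a choice function f on the power set of a finite set A
# (with the empty set removed): f(B) must lie in B for every
# non-empty subset B of A.
from itertools import combinations

A = {2, 3, 5, 7}
nonempty_subsets = [frozenset(c)
                    for r in range(1, len(A) + 1)
                    for c in combinations(A, r)]

f = {B: min(B) for B in nonempty_subsets}  # "least element" rule

assert len(nonempty_subsets) == 2 ** len(A) - 1  # 15 subsets
assert all(f[B] in B for B in nonempty_subsets)
```

For infinite A with no such rule available (as with arbitrary subsets of the reals), the existence of f is exactly what the axiom asserts.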
Restriction to finite sets The statement of the axiom of choice does not specify whether the collection of nonempty sets is finite or infinite, and thus implies that every finite collection of nonempty sets has a choice function. However, that particular case is a theorem of the Zermelo–Fraenkel set theory without the axiom of choice (ZF); it is easily proved by mathematical induction. In the even simpler case of a collection of one set, a choice function just corresponds to an element, so this instance of the axiom of choice says that every nonempty set has an element; this holds trivially. The axiom of choice can be seen as asserting the generalization of this property, already evident for finite collections, to arbitrary collections. Usage Until the late 19th century, the axiom of choice was often used implicitly, although it had not yet been formally stated. For example, after having established that the set X contains only non-empty sets, a mathematician might have said "let F(s) be one of the members of s for all s in X" to define a function F. In general, it is impossible to prove that F exists without the axiom of choice, but this seems to have gone unnoticed until Zermelo. Not every situation requires the axiom of choice. For finite sets X, the axiom of choice follows from the other axioms of set theory. In that case, it is equivalent to saying that if we have several (a finite number of) boxes, each containing at least one item, then we can choose exactly one item from each box. Clearly, we can do this: We start at the first box, choose an item; go to the second box, choose an item; and so on. The number of boxes is finite, so eventually, our choice procedure comes to an end. The result is an explicit choice function: a function that takes the first box to the first element we chose, the second box to the second element we chose, and so on. 
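The box-by-box procedure just described can be written out directly; it terminates precisely because there are finitely many boxes. A minimal sketch (the box contents are illustrative):

```python
# Walk through finitely many boxes, taking one item from each.
# The loop ends because the number of boxes is finite, yielding an
# explicit choice function (here, a list indexed by box position).

def choose_from_boxes(boxes):
    chosen = []
    for box in boxes:                   # visit each box in turn
        chosen.append(next(iter(box)))  # take some item from this box
    return chosen

boxes = [{"apple"}, {"pear", "plum"}, {"fig", "date", "quince"}]
picked = choose_from_boxes(boxes)

assert len(picked) == len(boxes)
assert all(item in box for item, box in zip(picked, boxes))
```

Applied to an infinite sequence of boxes, this loop would never finish, which is the informal content of the discussion that follows.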
(A formal proof for all finite sets would use the principle of mathematical induction to prove "for every natural number k, every family of k nonempty sets has a choice function.") This method cannot, however, be used to show that every countable family of nonempty sets has a choice function, as is asserted by the axiom of countable choice. If the method is applied to an infinite sequence $(X_i : i \in \omega)$ of nonempty sets, a function is obtained at each finite stage, but there is no stage at which a choice function for the entire family is constructed, and no "limiting" choice function can be constructed, in general, in ZF without the axiom of choice. Examples The nature of the individual nonempty sets in the collection may make it possible to avoid the axiom of choice even for certain infinite collections. For example, suppose that each member of the collection X is a nonempty subset of the natural numbers. Every such subset has a smallest element, so to specify our choice function we can simply say that it maps each set to the least element of that set. This gives us a definite choice of an element from each set, and makes it unnecessary to apply the axiom of choice. The difficulty appears when there is no natural choice of elements from each set. If we cannot make explicit choices, how do we know that our set exists? For example, suppose that X is the set of all non-empty subsets of the real numbers. First we might try to proceed as if X were finite. If we try to choose an element from each set, then, because X is infinite, our choice procedure will never come to an end, and consequently, we shall never be able to produce a choice function for all of X. Next we might try specifying the least element from each set. But some subsets of the real numbers do not have least elements. For example, the open interval (0,1) does not have a least element: if x is in (0,1), then so is x/2, and x/2 is always strictly smaller than x. So this attempt also fails. 
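For sets of natural numbers, the least-element rule is an explicit, uniform choice function. A sketch reproducing the family used earlier in the article (the helper name is illustrative):

```python
# "Map each set to its least element": a definable choice function for
# any family of non-empty sets of natural numbers. No axiom of choice
# is needed, because every such set has a unique least element.

def least_element_choice(family):
    return {frozenset(s): min(s) for s in family}

family = [{4, 5, 6}, {10, 12}, {1, 400, 617, 8000}]
chosen = least_element_choice(family)

assert set(chosen.values()) == {4, 10, 1}  # the set {4, 10, 1}
```

The rule works because the natural numbers are well-ordered; for arbitrary sets of reals, as the text explains, no analogous rule is available without further assumptions.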
Additionally, consider for instance the unit circle S, and the action on S by a group G consisting of all rational rotations. Namely, these are rotations by angles which are rational multiples of π. Here G is countable while S is uncountable. Hence S breaks up into uncountably many orbits under G. Using the axiom of choice, we could pick a single point from each orbit, obtaining an uncountable subset X of S with the property that all of its translates by G are disjoint from X. The set of those translates partitions the circle into a countable collection of disjoint sets, which are all pairwise congruent. Since X is not measurable for any rotation-invariant countably additive finite measure on S, finding an algorithm to select a point in each orbit requires the axiom of choice. See non-measurable set for more details. The reason that we are able to choose least elements from subsets of the natural numbers is the fact that the natural numbers are well-ordered: every nonempty subset of the natural numbers has a unique least element under the natural ordering. One might say, "Even though the usual ordering of the real numbers does not work, it may be possible to find a different ordering of the real numbers which is a well-ordering. Then our choice function can choose the least element of every set under our unusual ordering." The problem then becomes that of constructing a well-ordering, which turns out to require the axiom of choice for its existence; every set can be well-ordered if and only if the axiom of choice holds. Criticism and acceptance A proof requiring the axiom of choice may establish the existence of an object without explicitly defining the object in the language of set theory. For example, while the axiom of choice implies that there is a well-ordering of the real numbers, there are models of set theory with the axiom of choice in which no well-ordering of the reals is definable. 
Similarly, although a subset of the real numbers that is not Lebesgue measurable can be proved to exist using the axiom of choice, it is consistent that no such set is definable. The axiom of choice proves the existence of these intangibles (objects that are proved to exist, but which cannot be explicitly constructed), which may conflict with some philosophical principles. Because there is no canonical well-ordering of all sets, a construction that relies on a well-ordering may not produce a canonical result, even if a canonical result is desired (as is often the case in category theory). This has been used as an argument against the use of the axiom of choice. Another argument against the axiom of choice is that it implies the existence of objects that may seem counterintuitive. One example is the Banach–Tarski paradox which says that it is possible to decompose the 3-dimensional solid unit ball into finitely many pieces and, using only rotations and translations, reassemble the pieces into two solid balls each with the same volume as the original. The pieces in this decomposition, constructed using the axiom of choice, are non-measurable sets. Despite these seemingly paradoxical facts, most mathematicians accept the axiom of choice as a valid principle for proving new results in mathematics. The debate is interesting enough, however, that it is considered of note when a theorem in ZFC (ZF plus AC) is logically equivalent (with just the ZF axioms) to the axiom of choice, and mathematicians look for results that require the axiom of choice to be false, though this type of deduction is less common than the type which requires the axiom of choice to be true. It is possible to prove many theorems using neither the axiom of choice nor its negation; such statements will be true in any model of ZF, regardless of the truth or falsity of the axiom of choice in that particular model. 
The restriction to ZF renders any claim that relies on either the axiom of choice or its negation unprovable. For example, the Banach–Tarski paradox is neither provable nor disprovable from ZF alone: it is impossible to construct the required decomposition of the unit ball in ZF, but also impossible to prove there is no such decomposition. Similarly, all the statements listed below which require choice or some weaker version thereof for their proof are unprovable in ZF, but since each is provable in ZF plus the axiom of choice, there are models of ZF in which each statement is true. Statements such as the Banach–Tarski paradox can be rephrased as conditional statements, for example, "If AC holds, then the decomposition in the Banach–Tarski paradox exists." Such conditional statements are provable in ZF when the original statements are provable from ZF and the axiom of choice. In constructive mathematics As discussed above, in ZFC, the axiom of choice is able to provide "nonconstructive proofs" in which the existence of an object is proved although no explicit example is constructed. ZFC, however, is still formalized in classical logic. The axiom of choice has also been thoroughly studied in the context of constructive mathematics, where non-classical logic is employed. The status of the axiom of choice varies between different varieties of constructive mathematics. In Martin-Löf type theory and higher-order Heyting arithmetic, the appropriate statement of the axiom of choice is (depending on approach) included as an axiom or provable as a theorem. Errett Bishop argued that the axiom of choice was constructively acceptable, saying In constructive set theory, however, Diaconescu's theorem shows that the axiom of choice implies the law of excluded middle (unlike in Martin-Löf type theory, where it does not). Thus the axiom of choice is not generally available in constructive set theory. 
A cause for this difference is that the axiom of choice in type theory does not have the extensionality properties that the axiom of choice in constructive set theory does. Some results in constructive set theory use the axiom of countable choice or the axiom of dependent choice, which do not imply the law of the excluded middle in constructive set theory. Although the axiom of countable choice in particular is commonly used in constructive mathematics, its use has also been questioned. Independence In 1938, Kurt Gödel showed that the negation of the axiom of choice is not a theorem of ZF by constructing an inner model (the constructible universe) which satisfies ZFC and thus showing that ZFC is consistent if ZF itself is consistent. In 1963, Paul Cohen employed the technique of forcing, developed for this purpose, to show that, assuming ZF is consistent, the axiom of choice itself is not a theorem of ZF. He did this by constructing a much more complex model which satisfies ZF¬C (ZF with the negation of AC added as axiom) and thus showing that ZF¬C is consistent. Together these results establish that the axiom of choice is logically independent of ZF. The assumption that ZF is consistent is harmless because adding another axiom to an already inconsistent system cannot make the situation worse. Because of independence, the decision whether to use the axiom of choice (or its negation) in a proof cannot be made by appeal to other axioms of set theory. The decision must be made on other grounds. One argument given in favor of using the axiom of choice is that it is convenient to use it because it allows one to prove some simplifying propositions that otherwise could not be proved. Many theorems which are provable using choice are of an elegant general character: every ideal in a ring is contained in a maximal ideal, every vector space has a basis, and every product of compact spaces is compact. 
Without the axiom of choice, these theorems may not hold for mathematical objects of large cardinality. The proof of the independence result also shows that a wide class of mathematical statements, including all statements that can be phrased in the language of Peano arithmetic, are provable in ZF if and only if they are provable in ZFC. Statements in this class include the statement that P = NP, the Riemann hypothesis, and many other unsolved mathematical problems. When one attempts to solve problems in this class, it makes no difference whether ZF or ZFC is employed if the only question is the existence of a proof. It is possible, however, that there is a shorter proof of a theorem from ZFC than from ZF. The axiom of choice is not the only significant statement which is independent of ZF. For example, the generalized continuum hypothesis (GCH) is not only independent of ZF, but also independent of ZFC. However, ZF plus GCH implies AC, making GCH a strictly stronger claim than AC, even though they are both independent of ZF. Stronger axioms The axiom of constructibility and the generalized continuum hypothesis each imply the axiom of choice and so are strictly stronger than it. In class theories such as Von Neumann–Bernays–Gödel set theory and Morse–Kelley set theory, there is an axiom called the axiom of global choice that is stronger than the axiom of choice for sets because it also applies to proper classes. The axiom of global choice follows from the axiom of limitation of size. Tarski's axiom, which is used in Tarski–Grothendieck set theory and states (in the vernacular) that every set belongs to Grothendieck universe, is stronger than the axiom of choice. Equivalents There are important statements that, assuming the axioms of ZF but neither AC nor ¬AC, are equivalent to the axiom of choice. The most important among them are Zorn's lemma and the well-ordering theorem. 
In fact, Zermelo initially introduced the axiom of choice in order to formalize his proof of the well-ordering theorem. Set theory Well-ordering theorem: Every set can be well-ordered. Consequently, every cardinal has an initial ordinal. Tarski's theorem about choice: For every infinite set A, there is a bijective map between the sets A and A×A. Trichotomy: If two sets are given, then either they have the same cardinality, or one has a smaller cardinality than the other. Given two non-empty sets, one has a surjection to the other. The Cartesian product of any family of nonempty sets is nonempty. König's theorem: Colloquially, the sum of a sequence of cardinals is strictly less than the product of a sequence of larger cardinals. (The reason for the term "colloquially" is that the sum or product of a "sequence" of cardinals cannot be defined without some aspect of the axiom of choice.) Every surjective function has a right inverse. Order theory Zorn's lemma: Every non-empty partially ordered set in which every chain (i.e., totally ordered subset) has an upper bound contains at least one maximal element. Hausdorff maximal principle: In any partially ordered set, every totally ordered subset is contained in a maximal totally ordered subset. The restricted principle "Every partially ordered set has a maximal totally ordered subset" is also equivalent to AC over ZF. Tukey's lemma: Every non-empty collection of finite character has a maximal element with respect to inclusion. Antichain principle: Every partially ordered set has a maximal antichain. Abstract algebra Every vector space has a basis. Krull's theorem: Every unital ring other than the trivial ring contains a maximal ideal. For every non-empty set S there is a binary operation defined on S that gives it a group structure. (A cancellative binary operation is enough, see group structure and the axiom of choice.) Every free abelian group is projective. Baer's criterion: Every divisible abelian group is injective. 
Every set is a projective object in the category Set of sets. Functional analysis The closed unit ball of the dual of a normed vector space over the reals has an extreme point. Point-set topology Tychonoff's theorem: Every product of compact topological spaces is compact. In the product topology, the closure of a product of subsets is equal to the product of the closures. Mathematical logic If S is a set of sentences of first-order logic and B is a consistent subset of S, then B is included in a set that is maximal among consistent subsets of S. The special case where S is the set of all first-order sentences in a given signature is weaker, equivalent to the Boolean prime ideal theorem; see the section "Weaker forms" below. Graph theory Every connected graph has a spanning tree. Category theory There are several results in category theory which invoke the axiom of choice for their proof. These results might be weaker than, equivalent to, or stronger than the axiom of choice, depending on the strength of the technical foundations. For example, if one defines categories in terms of sets, that is, as sets of objects and morphisms (usually called a small category), or even locally small categories, whose hom-objects are sets, then there is no category of all sets, and so it is difficult for a category-theoretic formulation to apply to all sets. On the other hand, other foundational descriptions of category theory are considerably stronger, and an identical category-theoretic statement of choice may be stronger than the standard formulation, à la class theory, mentioned above. Examples of category-theoretic statements which require choice include: Every small category has a skeleton. If two small categories are weakly equivalent, then they are equivalent. Every continuous functor on a small-complete category which satisfies the appropriate solution set condition has a left-adjoint (the Freyd adjoint functor theorem). 
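Among the equivalents listed above, "every surjective function has a right inverse" is easy to see in the finite case, where no choice axiom is needed. A sketch with an illustrative surjection:

```python
# For a surjection f, build g with f(g(y)) = y by recording one
# preimage per value. For infinite domains with no selection rule,
# picking these preimages is exactly where the axiom of choice enters.

def right_inverse(f, domain):
    g = {}
    for x in domain:
        g.setdefault(f(x), x)  # keep the first preimage encountered
    return g

f = lambda n: n % 3            # surjection from range(9) onto {0, 1, 2}
g = right_inverse(f, range(9))

assert all(f(g[y]) == y for y in {0, 1, 2})
```

The loop privileges one preimage per value by scanning the finite domain in order; the axiom of choice asserts that such a selection exists even when no scanning order or distinguishing rule is available.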
Weaker forms There are several weaker statements that are not equivalent to the axiom of choice, but are closely related. One example is the axiom of dependent choice (DC). A still weaker example is the axiom of countable choice (ACω or CC), which states that a choice function exists for any countable set of nonempty sets. These axioms are sufficient for many proofs in elementary mathematical analysis, and are consistent with some principles, such as the Lebesgue measurability of all sets of reals, that are disprovable from the full axiom of choice. Other choice axioms weaker than the axiom of choice include the Boolean prime ideal theorem and the axiom of uniformization. The former is equivalent in ZF to Tarski's 1930 ultrafilter lemma: every filter is a subset of some ultrafilter. Results requiring AC (or weaker forms) but weaker than it One of the most interesting aspects of the axiom of choice is the large number of places in mathematics that it shows up. Here are some statements that require the axiom of choice in the sense that they are not provable from ZF but are provable from ZFC (ZF plus AC). Equivalently, these statements are true in all models of ZFC but false in some models of ZF. Set theory The ultrafilter lemma (with ZF) can be used to prove the axiom of choice for finite sets: given a nonempty index set I and a collection (X_i)_{i∈I} of non-empty finite sets, their product is not empty. Any union of countably many countable sets is itself countable (because it is necessary to choose a particular enumeration for each of the countably many sets). If the set A is infinite, then there exists an injection from the natural numbers N to A (see Dedekind infinite). Eight definitions of a finite set are equivalent. Every infinite game G_S, in which the payoff set S is a Borel subset of Baire space, is determined. Measure theory The Vitali theorem on the existence of non-measurable sets, which states that there is a subset of the real numbers that is not Lebesgue measurable. The Hausdorff paradox. The Banach–Tarski paradox. 
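To make concrete where choice enters the Vitali theorem just mentioned, the classical construction runs as follows (this is the standard textbook argument, sketched here for illustration):

```latex
% Partition [0,1] by the equivalence relation
x \sim y \;\iff\; x - y \in \mathbb{Q},
% and use the axiom of choice to pick one representative per class,
% forming the Vitali set V \subseteq [0,1].
% Its rational translates V + q, for q \in \mathbb{Q} \cap [-1,1],
% are pairwise disjoint and satisfy
[0,1] \;\subseteq\; \bigcup_{q \in \mathbb{Q} \cap [-1,1]} (V + q) \;\subseteq\; [-1,2].
% Translation invariance gives \lambda(V+q) = \lambda(V), so countable
% additivity would force 1 \le \sum_{q} \lambda(V) \le 3, which is
% impossible whether \lambda(V) = 0 or \lambda(V) > 0; hence V is
% not Lebesgue measurable.
```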
Algebra Every field has an algebraic closure. Every field extension has a transcendence basis. Stone's representation theorem for Boolean algebras needs the Boolean prime ideal theorem. The Nielsen–Schreier theorem, that every subgroup of a free group is free. The additive groups of R and C are isomorphic. Functional analysis The Hahn–Banach theorem in functional analysis, allowing the extension of linear functionals. The theorem that every Hilbert space has an orthonormal basis. The Banach–Alaoglu theorem about compactness of sets of functionals. The Baire category theorem about complete metric spaces, and its consequences, such as the open mapping theorem and the closed graph theorem. On every infinite-dimensional topological vector space there is a discontinuous linear map. General topology A uniform space is compact if and only if it is complete and totally bounded. Every Tychonoff space has a Stone–Čech compactification. Mathematical logic Gödel's completeness theorem for first-order logic: every consistent set of first-order sentences has a completion. That is, every consistent set of first-order sentences can be extended to a maximal consistent set. The compactness theorem: If Σ is a set of first-order (or alternatively, zero-order) sentences such that every finite subset of Σ has a model, then Σ has a model. Possibly equivalent implications of AC There are several historically important set-theoretic statements implied by AC whose equivalence to AC is open. The partition principle, which was formulated before AC itself, was cited by Zermelo as a justification for believing AC. In 1906, Russell declared the partition principle (PP) to be equivalent to AC, but whether the partition principle implies AC is still the oldest open problem in set theory, and the equivalences of the other statements are similarly hard old open problems. In every known model of ZF where choice fails, these statements fail too, but it is unknown if they can hold without choice. 
Set theory Partition principle: if there is a surjection from A to B, there is an injection from B to A. Equivalently, every partition P of a set S is less than or equal to S in size. Converse Schröder–Bernstein theorem: if two sets have surjections to each other, they are equinumerous. Weak partition principle: A partition of a set S cannot be strictly larger than S. If WPP holds, this already implies the existence of a non-measurable set. Each of the previous three statements is implied by the preceding one, but it is unknown if any of these implications can be reversed. There is no infinite decreasing sequence of cardinals. The equivalence was conjectured by Schoenflies in 1905. Abstract algebra Hahn embedding theorem: Every ordered abelian group G order-embeds as a subgroup of the additive group ℝ^Ω endowed with a lexicographical order, where Ω is the set of Archimedean equivalence classes of G. This equivalence was conjectured by Hahn in 1907. Stronger forms of the negation of AC If we abbreviate by BP the claim that every set of real numbers has the property of Baire, then BP is stronger than ¬AC, since ¬AC asserts only the nonexistence of a choice function for some family of nonempty sets, perhaps just a single one. Strengthened negations may be compatible with weakened forms of AC. For example, ZF + DC + BP is consistent, if ZF is. It is also consistent with ZF + DC that every set of reals is Lebesgue measurable; however, this consistency result, due to Robert M. Solovay, cannot be proved in ZFC itself, but requires a mild large cardinal assumption (the existence of an inaccessible cardinal). The much stronger axiom of determinacy, or AD, implies that every set of reals is Lebesgue measurable, has the property of Baire, and has the perfect set property (all three of these results are refuted by AC itself). ZF + DC + AD is consistent provided that a sufficiently strong large cardinal axiom is consistent (the existence of infinitely many Woodin cardinals). 
Quine's system of axiomatic set theory, "New Foundations" (NF), takes its name from the title ("New Foundations for Mathematical Logic") of the 1937 article which introduced it. In the NF axiomatic system, the axiom of choice can be disproved. Statements consistent with the negation of AC There are models of Zermelo–Fraenkel set theory in which the axiom of choice is false. We shall abbreviate "Zermelo–Fraenkel set theory plus the negation of the axiom of choice" by ZF¬C. For certain models of ZF¬C, it is possible to prove the negation of some standard facts. Any model of ZF¬C is also a model of ZF, so for each of the following statements, there exists a model of ZF in which that statement is true. In some model, there is a set that can be partitioned into strictly more equivalence classes than the original set has elements, and a function whose domain is strictly smaller than its range. In fact, this is the case in all known models. There is a function f from the real numbers to the real numbers such that f is not continuous at a, but f is sequentially continuous at a, i.e., for any sequence (x_n) converging to a, lim_{n→∞} f(x_n) = f(a). In some model, there is an infinite set of real numbers without a countably infinite subset. In some model, the real numbers are a countable union of countable sets. This does not imply that the real numbers are countable: as pointed out above, to show that a countable union of countable sets is itself countable requires the axiom of countable choice. In some model, there is a field with no algebraic closure. In all models of ZF¬C there is a vector space with no basis. In some model, there is a vector space with two bases of different cardinalities. In some model there is a free complete Boolean algebra on countably many generators. In some model there is a set that cannot be linearly ordered. There exists a model of ZF¬C in which every set in R^n is measurable. 
Thus it is possible to exclude counterintuitive results like the Banach–Tarski paradox which are provable in ZFC. Furthermore, this is possible whilst assuming the Axiom of dependent choice, which is weaker than AC but sufficient to develop most of real analysis. Additionally, in all models of ZF¬C, the generalized continuum hypothesis does not hold. By imposing definability conditions on sets (in the sense of descriptive set theory) one can often prove restricted versions of the axiom of choice from axioms incompatible with general choice. This appears, for example, in the Moschovakis coding lemma. Axiom of choice in type theory In type theory, a different kind of statement is known as the axiom of choice. This form begins with two types, σ and τ, and a relation R between objects of type σ and objects of type τ. The axiom of choice states that if for each x of type σ there exists a y of type τ such that R(x,y), then there is a function f from objects of type σ to objects of type τ such that R(x,f(x)) holds for all x of type σ: (∀x:σ)(∃y:τ) R(x,y) → (∃f:σ→τ)(∀x:σ) R(x,f(x)). Unlike in set theory, the axiom of choice in type theory is typically stated as an axiom scheme, in which R varies over all formulas or over all formulas of a particular logical form. Quotes This is a joke: although the three are all mathematically equivalent, many mathematicians find the axiom of choice to be intuitive, the well-ordering principle to be counterintuitive, and Zorn's lemma to be too complex for any intuition. The observation here is that one can define a function to select from an infinite number of pairs of shoes by stating, for example, to choose a left shoe. Without the axiom of choice, one cannot assert that such a function exists for pairs of socks, because left and right socks are (presumably) indistinguishable. Polish-American mathematician Jan Mycielski relates this anecdote in a 2006 article in the Notices of the AMS. 
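The type-theoretic form of the axiom described above can be illustrated in the Lean theorem prover (a sketch in Lean 4 syntax; the theorem name is chosen here for illustration, and in Lean this statement is derivable from the primitive choice principle rather than assumed as a scheme):

```lean
-- Type-theoretic axiom of choice: from a pointwise existence statement
-- ∀ x, ∃ y, R x y, produce a choice function f with R x (f x) for every x.
theorem typeTheoreticChoice {σ τ : Type} (R : σ → τ → Prop)
    (h : ∀ x : σ, ∃ y : τ, R x y) :
    ∃ f : σ → τ, ∀ x : σ, R x (f x) :=
  -- Classical.choose extracts a witness from each existence proof;
  -- Classical.choose_spec certifies that the witness satisfies R.
  ⟨fun x => Classical.choose (h x), fun x => Classical.choose_spec (h x)⟩
```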
This quote comes from the famous April Fools' Day article in the computer recreations column of the Scientific American, April 1989. Notes References Per Martin-Löf, "100 years of Zermelo's axiom of choice: What was the problem with it?", in Logicism, Intuitionism, and Formalism: What Has Become of Them?, Sten Lindström, Erik Palmgren, Krister Segerberg, and Viggo Stoltenberg-Hansen, editors (2008). , available as a Dover Publications reprint, 2013, . Herman Rubin, Jean E. Rubin: Equivalents of the axiom of choice. North Holland, 1963. Reissued by Elsevier, April 1970. . Herman Rubin, Jean E. Rubin: Equivalents of the Axiom of Choice II. North Holland/Elsevier, July 1985, . George Tourlakis, Lectures in Logic and Set Theory. Vol. II: Set Theory, Cambridge University Press, 2003. Ernst Zermelo, "Untersuchungen über die Grundlagen der Mengenlehre I," Mathematische Annalen 65: (1908) pp. 261–81. PDF download via digizeitschriften.de Translated in: Jean van Heijenoort, 2002. From Frege to Gödel: A Source Book in Mathematical Logic, 1879–1931. New edition. Harvard University Press. 1904. "Proof that every set can be well-ordered," 139-41. 1908. "Investigations in the foundations of set theory I," 199–215. External links Axiom of Choice entry in the Springer Encyclopedia of Mathematics. Axiom of Choice and Its Equivalents entry at ProvenMath. Includes formal statement of the Axiom of Choice, Hausdorff's Maximal Principle, Zorn's Lemma and formal proofs of their equivalence down to the finest detail. Consequences of the Axiom of Choice, based on the book by Paul Howard and Jean Rubin. .
841
https://en.wikipedia.org/wiki/Attila
Attila
Attila, frequently called Attila the Hun, was the ruler of the Huns from 434 until his death in March 453. He was also the leader of a tribal empire consisting of Huns, Ostrogoths, Alans and Bulgars, among others, in Central and Eastern Europe. He is considered one of the most powerful rulers in world history. During his reign, he was one of the most feared enemies of the Western and Eastern Roman Empires. He crossed the Danube twice and plundered the Balkans, but was unable to take Constantinople. His unsuccessful campaign in Persia was followed in 441 by an invasion of the Eastern Roman (Byzantine) Empire, the success of which emboldened Attila to invade the West. He also attempted to conquer Roman Gaul (modern France), crossing the Rhine in 451 and marching as far as Aurelianum (Orléans) before being stopped in the Battle of the Catalaunian Plains. He subsequently invaded Italy, devastating the northern provinces, but was unable to take Rome. He planned further campaigns against the Romans, but died in 453. After Attila's death, his close adviser, Ardaric of the Gepids, led a Germanic revolt against Hunnic rule, after which the Hunnic Empire quickly collapsed. Attila would live on as a character in Germanic heroic legend. Appearance and character There is no surviving first-hand account of Attila's appearance, but there is a possible second-hand source provided by Jordanes, who cites a description given by Priscus. Some scholars have suggested that this description is typically East Asian, because it combines features that fit the physical type of people from East Asia, and Attila's ancestors may have come from there. Other historians believed that the same features were also evident in some Scythian people. 
Etymology Many scholars have argued that the name Attila derives from East Germanic origin; Attila is formed from the Gothic or Gepidic noun atta, "father", by means of the diminutive suffix -ila, meaning "little father", compare Wulfila from wulfs "wolf" and -ila, i.e. "little wolf". The Gothic etymology was first proposed by Jacob and Wilhelm Grimm in the early 19th century. Maenchen-Helfen notes that this derivation of the name "offers neither phonetic nor semantic difficulties", and Gerhard Doerfer notes that the name is simply correct Gothic. Alexander Savelyev and Choongwon Jeong (2020) similarly state that Attila's name "must have been Gothic in origin." The name has sometimes been interpreted as a Germanization of a name of Hunnic origin. Other scholars have argued for a Turkic origin of the name. Omeljan Pritsak considered Ἀττίλα (Attíla) a composite title-name which derived from Turkic *es (great, old), and *til (sea, ocean), and the suffix /a/. The stressed back syllabic til assimilated the front member es, so it became *as. It is a nominative, in form of attíl- (< *etsíl < *es tíl) with the meaning "the oceanic, universal ruler". J. J. Mikkola connected it with Turkic āt (name, fame). As another Turkic possibility, H. Althof (1902) considered it was related to Turkish atli (horseman, cavalier), or Turkish at (horse) and dil (tongue). Maenchen-Helfen argues that Pritsak's derivation is "ingenious but for many reasons unacceptable", while dismissing Mikkola's as "too farfetched to be taken seriously". M. Snædal similarly notes that none of these proposals has achieved wide acceptance. Criticizing the proposals of finding Turkic or other etymologies for Attila, Doerfer notes that King George VI of the United Kingdom had a name of Greek origin, and Süleyman the Magnificent had a name of Arabic origin, yet that does not make them Greeks or Arabs: it is therefore plausible that Attila would have a name not of Hunnic origin. 
Historian Hyun Jin Kim, however, has argued that the Turkic etymology is "more probable". M. Snædal, in a paper that rejects the Germanic derivation but notes the problems with the existing proposed Turkic etymologies, argues that Attila's name could have originated from Turkic-Mongolian at, adyy/agta (gelding, warhorse) and Turkish atli (horseman, cavalier), meaning "possessor of geldings, provider of warhorses". Historiography and source The historiography of Attila is faced with a major challenge, in that the only complete sources are written in Greek and Latin by the enemies of the Huns. Attila's contemporaries left many testimonials of his life, but only fragments of these remain. Priscus was a Byzantine diplomat and historian who wrote in Greek, and he was both a witness to and an actor in the story of Attila, as a member of the embassy of Theodosius II at the Hunnic court in 449. He was obviously biased by his political position, but his writing is a major source for information on the life of Attila, and he is the only person known to have recorded a physical description of him. He wrote a history of the late Roman Empire in eight books covering the period from 430 to 476. Only fragments of Priscus' work remain. It was cited extensively by 6th-century historians Procopius and Jordanes, especially in Jordanes' The Origin and Deeds of the Goths, which contains numerous references to Priscus's history, and it is also an important source of information about the Hunnic empire and its neighbors. He describes the legacy of Attila and the Hunnic people for a century after Attila's death. Marcellinus Comes, a chancellor of Justinian during the same era, also describes the relations between the Huns and the Eastern Roman Empire. Numerous ecclesiastical writings contain useful but scattered information, sometimes difficult to authenticate or distorted by years of hand-copying between the 6th and 17th centuries. 
The Hungarian writers of the 12th century wished to portray the Huns in a positive light as their glorious ancestors, and so repressed certain historical elements and added their own legends. The literature and knowledge of the Huns themselves was transmitted orally, by means of epics and chanted poems that were handed down from generation to generation. Indirectly, fragments of this oral history have reached us via the literature of the Scandinavians and Germans, neighbors of the Huns who wrote between the 9th and 13th centuries. Attila is a major character in many Medieval epics, such as the Nibelungenlied, as well as various Eddas and sagas. Archaeological investigation has uncovered some details about the lifestyle, art, and warfare of the Huns. There are a few traces of battles and sieges, but the tomb of Attila and the location of his capital have not yet been found. Early life and background The Huns were a group of Eurasian nomads, appearing from east of the Volga, who migrated further into Western Europe c. 370 and built up an enormous empire there. Their main military techniques were mounted archery and javelin throwing. They were in the process of developing settlements before their arrival in Western Europe, yet the Huns were a society of pastoral warriors whose primary form of nourishment was meat and milk, products of their herds. The origin and language of the Huns has been the subject of debate for centuries. According to some theories, their leaders at least may have spoken a Turkic language, perhaps closest to the modern Chuvash language. One scholar suggests a relationship to Yeniseian. According to the Encyclopedia of European Peoples, "the Huns, especially those who migrated to the west, may have been a combination of central Asian Turkic, Mongolic, and Ugric stocks". Attila's father Mundzuk was the brother of kings Octar and Ruga, who reigned jointly over the Hunnic empire in the early fifth century. 
This form of diarchy was recurrent with the Huns, but historians are unsure whether it was institutionalized, merely customary, or an occasional occurrence. His family was from a noble lineage, but it is uncertain whether they constituted a royal dynasty. Attila's birthdate is debated; journalist Éric Deschodt and writer Herman Schreiber have proposed a date of 395. However, historian Iaroslav Lebedynsky and archaeologist Katalin Escher prefer an estimate between the 390s and the first decade of the fifth century. Several historians have proposed 406 as the date. Attila grew up in a rapidly changing world. His people were nomads who had only recently arrived in Europe. They crossed the Volga river during the 370s and annexed the territory of the Alans, then attacked the Gothic kingdom between the Carpathian mountains and the Danube. They were a very mobile people, whose mounted archers had acquired a reputation for invincibility, and the Germanic tribes seemed unable to withstand them. Vast populations fleeing the Huns moved from Germania into the Roman Empire in the west and south, and along the banks of the Rhine and Danube. In 376, the Goths crossed the Danube, initially submitting to the Romans but soon rebelling against Emperor Valens, whom they killed in the Battle of Adrianople in 378. Large numbers of Vandals, Alans, Suebi, and Burgundians crossed the Rhine and invaded Roman Gaul on December 31, 406 to escape the Huns. The Roman Empire had been split in half since 395 and was ruled by two distinct governments, one based in Ravenna in the West, and the other in Constantinople in the East. The Roman Emperors, both East and West, were generally from the Theodosian family in Attila's lifetime (despite several power struggles). The Huns dominated a vast territory with nebulous borders determined by the will of a constellation of ethnically varied peoples. 
Some were assimilated to Hunnic nationality, whereas many retained their own identities and rulers but acknowledged the suzerainty of the king of the Huns. The Huns were also the indirect source of many of the Romans' problems, driving various Germanic tribes into Roman territory, yet relations between the two empires were cordial: the Romans used the Huns as mercenaries against the Germans and even in their civil wars. Thus, the usurper Joannes was able to recruit thousands of Huns for his army against Valentinian III in 424. It was Aëtius, later Patrician of the West, who managed this operation. They exchanged ambassadors and hostages, the alliance lasting from 401 to 450 and permitting the Romans numerous military victories. The Huns considered the Romans to be paying them tribute, whereas the Romans preferred to view this as payment for services rendered. The Huns had become a great power by the time that Attila came of age during the reign of his uncle Ruga, to the point that Nestorius, the Patriarch of Constantinople, deplored the situation with these words: "They have become both masters and slaves of the Romans". Campaigns against the Eastern Roman Empire The death of Rugila (also known as Rua or Ruga) in 434 left the sons of his brother Mundzuk, Attila and Bleda, in control of the united Hun tribes. At the time of the two brothers' accession, the Hun tribes were bargaining with Eastern Roman Emperor Theodosius II's envoys for the return of several renegades who had taken refuge within the Eastern Roman Empire, possibly Hunnic nobles who disagreed with the brothers' assumption of leadership. The following year, Attila and Bleda met with the imperial legation at Margus (Požarevac), all seated on horseback in the Hunnic manner, and negotiated an advantageous treaty. The Romans agreed to return the fugitives, to double their previous tribute of 350 Roman pounds (c. 
115 kg) of gold, to open their markets to Hunnish traders, and to pay a ransom of eight solidi for each Roman taken prisoner by the Huns. The Huns, satisfied with the treaty, decamped from the Roman Empire and returned to their home in the Great Hungarian Plain, perhaps to consolidate and strengthen their empire. Theodosius used this opportunity to strengthen the walls of Constantinople, building the city's first sea wall, and to build up his border defenses along the Danube. The Huns remained out of Roman sight for the next few years while they invaded the Sassanid Empire. They were defeated in Armenia by the Sassanids, abandoned their invasion, and turned their attentions back to Europe. In 440, they reappeared in force on the borders of the Roman Empire, attacking the merchants at the market on the north bank of the Danube that had been established by the treaty of 435. Crossing the Danube, they laid waste to the cities of Illyricum and forts on the river, including (according to Priscus) Viminacium, a city of Moesia. Their advance began at Margus, where they demanded that the Romans turn over a bishop who had retained property that Attila regarded as his. While the Romans discussed the bishop's fate, he slipped away secretly to the Huns and betrayed the city to them. While the Huns attacked city-states along the Danube, the Vandals (led by Geiseric) captured the Western Roman province of Africa and its capital of Carthage. Africa was the richest province of the Western Empire and a main source of food for Rome. The Sassanid Shah Yazdegerd II invaded Armenia in 441. The Romans stripped the Balkan area of forces, sending them to Sicily in order to mount an expedition against the Vandals in Africa. This left Attila and Bleda a clear path through Illyricum into the Balkans, which they invaded in 441. The Hunnish army sacked Margus and Viminacium, and then took Singidunum (Belgrade) and Sirmium. 
During 442, Theodosius recalled his troops from Sicily and ordered a large issue of new coins to finance operations against the Huns. He believed that he could defeat the Huns and refused the Hunnish kings' demands. Attila responded with a campaign in 443. For the first time (as far as the Romans knew) his forces were equipped with battering rams and rolling siege towers, with which they successfully assaulted the military centers of Ratiara and Naissus (Niš) and massacred the inhabitants. Priscus said "When we arrived at Naissus we found the city deserted, as though it had been sacked; only a few sick persons lay in the churches. We halted at a short distance from the river, in an open space, for all the ground adjacent to the bank was full of the bones of men slain in war." Advancing along the Nišava River, the Huns next took Serdica (Sofia), Philippopolis (Plovdiv), and Arcadiopolis (Lüleburgaz). They encountered and destroyed a Roman army outside Constantinople but were stopped by the double walls of the Eastern capital. They defeated a second army near Callipolis (Gelibolu). Theodosius, unable to make effective armed resistance, admitted defeat, sending the Magister militum per Orientem Anatolius to negotiate peace terms. The terms were harsher than the previous treaty: the Emperor agreed to hand over 6,000 Roman pounds (c. 2000 kg) of gold as punishment for having disobeyed the terms of the treaty during the invasion; the yearly tribute was tripled, rising to 2,100 Roman pounds (c. 700 kg) in gold; and the ransom for each Roman prisoner rose to 12 solidi. Their demands were met for a time, and the Hun kings withdrew into the interior of their empire. Bleda died following the Huns' withdrawal from Byzantium (probably around 445). Attila then took the throne for himself, becoming the sole ruler of the Huns. Solitary kingship In 447, Attila again rode south into the Eastern Roman Empire through Moesia. 
The Roman army, under Gothic magister militum Arnegisclus, met him in the Battle of the Utus and was defeated, though not without inflicting heavy losses. The Huns were left unopposed and rampaged through the Balkans as far as Thermopylae. Constantinople itself was saved by the Isaurian troops of magister militum per Orientem Zeno and protected by the intervention of prefect Constantinus, who organized the reconstruction of the walls that had been previously damaged by earthquakes and, in some places, the construction of a new line of fortification in front of the old. Callinicus, in his Life of Saint Hypatius, wrote: In the west In 450, Attila proclaimed his intent to attack the Visigoth kingdom of Toulouse by making an alliance with Emperor Valentinian III. He had previously been on good terms with the Western Roman Empire and its influential general Flavius Aëtius. Aëtius had spent a brief exile among the Huns in 433, and the troops that Attila provided against the Goths and Bagaudae had helped earn him the largely honorary title of magister militum in the west. The gifts and diplomatic efforts of Geiseric, who opposed and feared the Visigoths, may also have influenced Attila's plans. However, Valentinian's sister was Honoria, who had sent the Hunnish king a plea for help, and her engagement ring, in order to escape her forced betrothal to a Roman senator in the spring of 450. Honoria may not have intended a proposal of marriage, but Attila chose to interpret her message as such. He accepted, asking for half of the western Empire as dowry. When Valentinian discovered the plan, only the influence of his mother Galla Placidia convinced him to exile Honoria, rather than killing her. He also wrote to Attila, strenuously denying the legitimacy of the supposed marriage proposal. Attila sent an emissary to Ravenna to proclaim that Honoria was innocent, that the proposal had been legitimate, and that he would come to claim what was rightfully his. 
Attila interfered in a succession struggle after the death of a Frankish ruler. Attila supported the elder son, while Aëtius supported the younger. (The location and identity of these kings is not known and subject to conjecture.) Attila gathered his vassals—Gepids, Ostrogoths, Rugians, Scirians, Heruls, Thuringians, Alans, Burgundians, among others—and began his march west. In 451, he arrived in Belgica with an army exaggerated by Jordanes to half a million strong. On April 7, he captured Metz. Other cities attacked can be determined by the hagiographic vitae written to commemorate their bishops: Nicasius was slaughtered before the altar of his church in Rheims; Servatus is alleged to have saved Tongeren with his prayers, as Saint Genevieve is said to have saved Paris. Lupus, bishop of Troyes, is also credited with saving his city by meeting Attila in person. Aëtius moved to oppose Attila, gathering troops from among the Franks, the Burgundians, and the Celts. A mission by Avitus and Attila's continued westward advance convinced the Visigoth king Theodoric I (Theodorid) to ally with the Romans. The combined armies reached Orléans ahead of Attila, thus checking and turning back the Hunnish advance. Aëtius gave chase and caught the Huns at a place usually assumed to be near Catalaunum (modern Châlons-en-Champagne). Attila decided to fight the Romans on plains where he could use his cavalry. The two armies clashed in the Battle of the Catalaunian Plains, the outcome of which is commonly considered to be a strategic victory for the Visigothic-Roman alliance. Theodoric was killed in the fighting, and Aëtius failed to press his advantage, according to Edward Gibbon and Edward Creasy, because he feared the consequences of an overwhelming Visigothic triumph as much as he did a defeat. From Aëtius' point of view, the best outcome was what occurred: Theodoric died, Attila was in retreat and disarray, and the Romans had the benefit of appearing victorious. 
Invasion of Italy and death Attila returned in 452 to renew his marriage claim with Honoria, invading and ravaging Italy along the way. Communities became established in what would later become Venice as a result of these attacks when the residents fled to small islands in the Venetian Lagoon. His army sacked numerous cities and razed Aquileia so completely that it was afterwards hard to recognize its original site. Aëtius lacked the strength to offer battle, but managed to harass and slow Attila's advance with only a shadow force. Attila finally halted at the River Po. By this point, disease and starvation may have taken hold in Attila's camp, thus hindering his war efforts and potentially contributing to the cessation of invasion. Emperor Valentinian III sent three envoys, the high civilian officers Gennadius Avienus and Trigetius, as well as the Bishop of Rome Leo I, who met Attila at Mincio in the vicinity of Mantua and obtained from him the promise that he would withdraw from Italy and negotiate peace with the Emperor. Prosper of Aquitaine gives a short description of the historic meeting, but gives all the credit to Leo for the successful negotiation. Priscus reports that superstitious fear of the fate of Alaric gave him pause—as Alaric died shortly after sacking Rome in 410. Italy had suffered from a terrible famine in 451 and her crops were faring little better in 452. Attila's devastating invasion of the plains of northern Italy this year did not improve the harvest. To advance on Rome would have required supplies which were not available in Italy, and taking the city would not have improved Attila's supply situation. Therefore, it was more profitable for Attila to conclude peace and retreat to his homeland. 
Furthermore, an East Roman force had crossed the Danube under the command of another officer also named Aetius—who had participated in the Council of Chalcedon the previous year—and proceeded to defeat the Huns who had been left behind by Attila to safeguard their home territories. Attila thus faced heavy human and natural pressures to retire "from Italy without ever setting foot south of the Po". As Hydatius writes in his Chronica Minora: Death In the Eastern Roman Empire, Emperor Marcian succeeded Theodosius II and stopped paying tribute to the Huns. Attila withdrew from Italy to his palace across the Danube, while making plans to strike at Constantinople once more to reclaim tribute. However, he died in the early months of 453. The conventional account from Priscus says that Attila was at a feast celebrating his latest marriage, this time to the beautiful young Ildico (the name suggests Gothic or Ostrogothic origins). In the midst of the revels, however, he suffered severe bleeding and died. He may have had a nosebleed and choked to death in a stupor, or he may have succumbed to internal bleeding, possibly due to ruptured esophageal varices. Esophageal varices are dilated veins that form in the lower part of the esophagus, often caused by years of excessive alcohol consumption; they are fragile and can easily rupture, leading to death by hemorrhage. Another account of his death was first recorded 80 years after the events by the Roman chronicler Marcellinus Comes. It reports that "Attila, King of the Huns and ravager of the provinces of Europe, was pierced by the hand and blade of his wife". One modern analyst suggests that he was assassinated, but most scholars reject these accounts as no more than hearsay, preferring instead the account given by Attila's contemporary Priscus, recounted in the 6th century by Jordanes, who wrote of Attila's sons Ellac, Dengizich and Ernak that "in their rash eagerness to rule they all alike destroyed his empire". 
They "were clamoring that the nations should be divided among them equally and that warlike kings with their peoples should be apportioned to them by lot like a family estate". Against this treatment as "slaves of the basest condition", a Germanic alliance led by the Gepid ruler Ardaric (who was noted for great loyalty to Attila) revolted and fought the Huns in Pannonia at the Battle of Nedao in 454 AD. Attila's eldest son Ellac was killed in that battle. Attila's sons, "regarding the Goths as deserters from their rule, came against them as though they were seeking fugitive slaves", and attacked the Ostrogothic co-ruler Valamir (who had fought alongside Ardaric and Attila at the Catalaunian Plains), but were repelled, and some group of Huns moved to Scythia (probably those of Ernak). Ernak's brother Dengizich attempted a renewed invasion across the Danube in 468 AD, but was defeated at the Battle of Bassianae by the Ostrogoths. Dengizich was killed by the Roman-Gothic general Anagast the following year, after which the Hunnic dominion ended. Attila's many children and relatives are known by name, and some even by deeds, but valid genealogical sources soon all but dried up, and there seems to be no verifiable way to trace Attila's descendants. This has not stopped many genealogists from attempting to reconstruct a valid line of descent for various medieval rulers. One of the most credible claims has been that of the Nominalia of the Bulgarian khans for the mythological Avitohol and Irnik from the Dulo clan of the Bulgars. Later folklore and iconography Jordanes embellished the report of Priscus, reporting that Attila had possessed the "Holy War Sword of the Scythians", which was given to him by Mars and made him a "prince of the entire world". By the end of the 12th century the royal court of Hungary proclaimed their descent from Attila. 
Lampert of Hersfeld's contemporary chronicles report that shortly before the year 1071, the Sword of Attila had been presented to Otto of Nordheim by the exiled queen of Hungary, Anastasia of Kiev. This sword, a cavalry sabre now in the Kunsthistorisches Museum in Vienna, appears to be the work of Hungarian goldsmiths of the ninth or tenth century. An anonymous chronicler of the medieval period represented the meeting of Pope Leo and Attila as attended also by Saint Peter and Saint Paul, "a miraculous tale calculated to meet the taste of the time". This apotheosis was later portrayed artistically by the Renaissance artist Raphael and the sculptor Algardi, whom the eighteenth-century historian Edward Gibbon praised for establishing "one of the noblest legends of ecclesiastical tradition". According to a version of this narrative related in the Chronicon Pictum, a mediaeval Hungarian chronicle, the Pope promised Attila that if he left Rome in peace, one of his successors would receive a holy crown (which has been understood as referring to the Holy Crown of Hungary). Some histories and chronicles describe him as a great and noble king, and he plays major roles in three Norse sagas: Atlakviða, Volsunga saga, and Atlamál. The Polish Chronicle represents Attila's name as Aquila. Frutolf of Michelsberg and Otto of Freising dismissed as "vulgar fables" some songs that made Theoderic the Great, Attila and Ermanaric contemporaries, when any reader of Jordanes knew that this was not the case. This refers to the so-called historical poems about Dietrich von Bern (Theoderic), in which Etzel (Attila) gives Dietrich refuge in exile from his wicked uncle Ermenrich (Ermanaric). Etzel is most prominent in the poems Dietrichs Flucht and the Rabenschlacht. Etzel also appears as Kriemhild's second noble husband in the Nibelungenlied, in which Kriemhild causes the destruction of both the Hunnish kingdom and that of her Burgundian relatives. 
In 1812, Ludwig van Beethoven conceived the idea of writing an opera about Attila and approached August von Kotzebue to write the libretto. It was, however, never written. In 1846, Giuseppe Verdi wrote the opera Attila, loosely based on episodes from Attila's invasion of Italy. In World War I, Allied propaganda referred to Germans as the "Huns", based on a 1900 speech by Emperor Wilhelm II praising Attila the Hun's military prowess, according to Jawaharlal Nehru's Glimpses of World History. Der Spiegel commented on 6 November 1948 that the Sword of Attila was hanging menacingly over Austria. The American writer Cecelia Holland wrote The Death of Attila (1973), a historical novel in which Attila appears as a powerful background figure whose life and death deeply affect the protagonists, a young Hunnic warrior and a Germanic one. The name has many variants in several languages: Atli and Atle in Old Norse; Etzel in Middle High German (Nibelungenlied); Ætla in Old English; Attila, Atilla, and Etele in Hungarian (Attila is the most popular); Attila, Atilla, Atilay, or Atila in Turkish; Adil and Edil in Kazakh; and Adil ("same/similar") or Edil ("to use") in Mongolian. In modern Hungary and in Turkey, "Attila" and its Turkish variation "Atilla" are commonly used as a male first name. In Hungary, several public places are named after Attila; for instance, in Budapest there are 10 Attila Streets, one of which is an important street behind the Buda Castle. When the Turkish Armed Forces invaded Cyprus in 1974, the operations were named after Attila ("The Attila Plan"). The 1954 Universal International film Sign of the Pagan starred Jack Palance as Attila. 
Depictions of Attila See also Alaric I Arminius Bato (Daesitiate chieftain) Boiorix Brennus (4th century BC) Gaiseric Ermanaric Hannibal Mithridates VI of Pontus Onegesius Odoacer Radagaisus Spartacus Theodoric the Great Totila Notes Sources External links Works about Attila at Project Gutenberg 5th-century Hunnic rulers 5th-century monarchs in Europe 406 births 453 deaths Deaths from choking
842
https://en.wikipedia.org/wiki/Aegean%20Sea
Aegean Sea
The Aegean Sea is an elongated embayment of the Mediterranean Sea between Europe and Asia. It is located between the Balkans and Anatolia, and covers an area of some 215,000 square kilometres. In the north, the Aegean is connected to the Marmara Sea and the Black Sea by the straits of the Dardanelles and the Bosphorus. The Aegean Islands are located within the sea and some bound it on its southern periphery, including Crete and Rhodes. The sea reaches a maximum depth of 3,544 metres, to the east of Crete. The Thracian Sea and the Myrtoan Sea are subdivisions of the Aegean Sea. The Aegean Islands can be divided into several island groups, including the Dodecanese, the Cyclades, the Sporades, the Saronic Islands and the North Aegean Islands, as well as Crete and its surrounding islands. The Dodecanese, located to the southeast, includes the islands of Rhodes, Kos, and Patmos; the islands of Delos and Naxos are within the Cyclades to the south of the sea. Lesbos is part of the North Aegean Islands. Euboea, the second-largest island in Greece, is located in the Aegean, despite being administered as part of Central Greece. Nine of the twelve administrative regions of Greece border the sea, along with the Turkish provinces of Edirne, Çanakkale, Balıkesir, İzmir, Aydın and Muğla to the east of the sea. Turkish islands in the sea include Imbros, Tenedos, Cunda Island, and the Foça Islands. The Aegean Sea has been historically important, especially in regard to the civilization of Ancient Greece, whose people inhabited the area around the coast of the Aegean and the Aegean islands. The Aegean islands facilitated contact between the people of the area and between Europe and Asia. Along with the Greeks, Thracians lived along the northern coast. The Romans conquered the area under the Roman Empire, and later the Byzantine Empire held it against advances by the First Bulgarian Empire. 
The Fourth Crusade weakened Byzantine control of the area, and it was eventually conquered by the Ottoman Empire, with the exception of Crete, which was a Venetian colony until 1669. The Greek War of Independence allowed a Greek state on the coast of the Aegean from 1829 onwards. The Ottoman Empire held a presence over the sea for over 500 years, until it was replaced by modern Turkey. The rocks making up the floor of the Aegean are mainly limestone, though often greatly altered by the volcanic activity that has convulsed the region in relatively recent geologic times. Of particular interest are the richly coloured sediments in the region of the islands of Santorini and Milos, in the south Aegean. Notable cities on the Aegean coastline include Athens, Thessaloniki, Volos, Kavala and Heraklion in Greece, and İzmir and Bodrum in Turkey. The groundwater around the Aegean has a high salinity content, which might suggest that the soil of this volcanic region would be infertile; in fact, the groundwater is in equilibrium with the soil's content and structure, allowing fertile crops to be grown on land that would otherwise seem infertile. A number of issues concerning sovereignty within the Aegean Sea are disputed between Greece and Turkey. The Aegean dispute has had a large effect on Greek-Turkish relations since the 1970s. Issues include the delimitation of territorial waters, national airspace, exclusive economic zones and flight information regions. Name and etymology Late Latin authors referred the name Aegaeus to Aegeus, who was said to have jumped into that sea to drown himself (rather than throwing himself from the Athenian acropolis, as told by some Greek authors). He was the father of Theseus, the mythical king and founder-hero of Athens. Aegeus had told Theseus to put up white sails when returning if he was successful in killing the Minotaur. When Theseus returned, he forgot these instructions, and Aegeus, thinking his son to have died, drowned himself in the sea. 
The sea was known in Latin as Aegaeum mare under the control of the Roman Empire. The Venetians, who ruled many Greek islands in the High and Late Middle Ages, popularized the name Archipelago (Greek: αρχιπέλαγος, meaning "main sea" or "chief sea"), a name that held on in many European countries until the early modern period. In the South Slavic languages, the Aegean is called the White Sea. The Turkish name for the sea is Ege Denizi, derived from the Greek name. Geography The Aegean Sea is an elongated embayment of the Mediterranean Sea, covering about 215,000 square kilometres in area and measuring about 700 km longitudinally and 400 km latitudinally. The sea's maximum depth is 3,544 metres, located at a point east of Crete. The Aegean Islands are found within its waters, with the following islands delimiting the sea on the south, generally from west to east: Kythera, Antikythera, Crete, Kasos, Karpathos and Rhodes. The Anatolian peninsula marks the eastern boundary of the sea, while the Greek mainland marks the west. Several seas are contained within the Aegean Sea; the Thracian Sea is a section of the Aegean located to the north, the Icarian Sea to the east, the Myrtoan Sea to the west, while the Sea of Crete is the southern section. The Greek regions that border the sea, in alphabetical order, are Attica, Central Greece, Central Macedonia, Crete, Eastern Macedonia and Thrace, North Aegean, Peloponnese, South Aegean, and Thessaly. The historical region of Macedonia also borders the sea, to the north. The Aegean Islands, which almost all belong to Greece, can be divided into seven groups: Northeastern Aegean Islands, which lie in the Thracian Sea East Aegean Islands (Euboea) Northern Sporades Cyclades Saronic Islands (or Argo-Saronic Islands) Dodecanese (or Southern Sporades) Crete Many of the Aegean islands or island chains are geographically extensions of the mountains on the mainland. 
One chain extends across the sea to Chios, another extends across Euboea to Samos, and a third extends across the Peloponnese and Crete to Rhodes, dividing the Aegean from the Mediterranean. The bays and gulfs of the Aegean, beginning in the south and moving clockwise, include, on Crete, the Mirabello, Almyros, Souda and Chania bays or gulfs; on the mainland, the Myrtoan Sea to the west with the Argolic Gulf; the Saronic Gulf northwestward; the Petalies Gulf, which connects with the South Euboic Sea; the Pagasetic Gulf, which connects with the North Euboic Sea; the Thermaic Gulf northwestward; the Chalkidiki Peninsula, including the Cassandra and the Singitic Gulfs; and, northward, the Strymonian Gulf and the Gulf of Kavala. The rest are in Turkey: the Saros Gulf, Edremit Gulf, Dikili Gulf, Gulf of Çandarlı, Gulf of İzmir, Gulf of Kuşadası, Gulf of Gökova and Güllük Gulf. The Aegean Sea is connected to the Sea of Marmara by the Dardanelles, also known from Classical Antiquity as the Hellespont. The Dardanelles are located to the northeast of the sea. It ultimately connects with the Black Sea through the Bosphorus strait, upon which lies the city of Istanbul. The Dardanelles and the Bosphorus are known as the Turkish Straits. 
Extent According to the International Hydrographic Organization, the limits of the Aegean Sea are as follows: On the south: A line running from Cape Aspro (28°16′E) in Asia Minor, to Cum Burnù (Capo della Sabbia) the Northeast extreme of the Island of Rhodes, through the island to Cape Prasonisi, the Southwest point thereof, on to Vrontos Point (35°33′N) in Skarpanto [Karpathos], through this island to Castello Point, the South extreme thereof, across to Cape Plaka (East extremity of Crete), through Crete to Agria Grabusa, the Northwest extreme thereof, thence to Cape Apolitares in Antikithera Island, through the island to Psira Rock (off the Northwest point) and across to Cape Trakhili in Kithera Island, through Kithera to the Northwest point (Cape Karavugia) and thence to Cape Santa Maria () in the Morea. In the Dardanelles: A line joining Kum Kale (26°11′E) and Cape Helles. Hydrography Aegean surface water circulates in a counterclockwise gyre, with hypersaline Mediterranean water moving northward along the west coast of Turkey, before being displaced by less dense Black Sea outflow. The dense Mediterranean water sinks below the Black Sea inflow, then flows through the Dardanelles Strait and into the Sea of Marmara. The Black Sea outflow moves westward along the northern Aegean Sea, then flows southwards along the east coast of Greece. The physical oceanography of the Aegean Sea is controlled mainly by the regional climate, the fresh water discharge from major rivers draining southeastern Europe, and the seasonal variations in the Black Sea surface water outflow through the Dardanelles Strait. Analysis of the Aegean during 1991 and 1992 revealed three distinct water masses: Aegean Sea Surface Water – a surface veneer, with summer temperatures of 21–26 °C and winter temperatures that are cooler in the north than in the south. 
Aegean Sea Intermediate Water – Aegean Sea Intermediate Water extends downward from 40 to 50 m, with temperatures ranging from 11 to 18 °C. Aegean Sea Bottom Water – occurring at depths below 500–1000 m with a very uniform temperature (13–14 °C) and salinity (3.91–3.92%). Climate The climate of the Aegean Sea largely reflects the climate of Greece and Western Turkey, which is to say, predominantly Mediterranean. According to the Köppen climate classification, most of the Aegean is classified as hot-summer Mediterranean (Csa), with hotter and drier summers along with milder and wetter winters. However, high temperatures during summers are generally not quite as high as those in arid or semiarid climates due to the presence of a large body of water. This is most predominant on the west and east coasts of the Aegean, and within the Aegean islands. In the north of the Aegean Sea, the climate is instead classified as cold semi-arid (BSk), which features cooler summers than hot-summer Mediterranean climates. The Etesian winds are a dominant weather influence in the Aegean Basin. The table below lists climate conditions of some major Aegean cities: Population Numerous Greek and Turkish settlements are located along their mainland coasts, as well as in towns on the Aegean islands. The largest cities are Athens and Thessaloniki in Greece and İzmir in Turkey. The most populated of the Aegean islands is Crete, followed by Euboea and Rhodes. Biogeography and ecology Protected Areas Greece has established several marine protected areas along its coasts. According to the Network of Managers of Marine Protected Areas in the Mediterranean (MedPAN), four Greek MPAs are participating in the network. These include Alonnisos Marine Park, while the Missolonghi–Aitoliko Lagoons and the island of Zakynthos are not on the Aegean. History Ancient history The current coastline dates back to about 4000 BC. 
Before that time, at the peak of the last ice age (about 18,000 years ago), sea levels everywhere were 130 metres lower, and there were large well-watered coastal plains instead of much of the northern Aegean. When they were first occupied, the present-day islands, including Milos with its important obsidian production, were probably still connected to the mainland. The present coastal arrangement appeared around 9,000 years ago, with post-ice age sea levels continuing to rise for another 3,000 years after that. The subsequent Bronze Age civilizations of Greece and the Aegean Sea have given rise to the general term Aegean civilization. In ancient times, the sea was the birthplace of two ancient civilizations – the Minoans of Crete and the Mycenaeans of the Peloponnese. The Minoan civilization was a Bronze Age civilization on the island of Crete and other Aegean islands, flourishing from around 3000 to 1450 BC before a period of decline, finally ending at around 1100 BC. It represented the first advanced civilization in Europe, leaving behind massive building complexes, tools, stunning artwork, writing systems, and a vast network of trade. The Minoan period saw extensive trade between Crete, the Aegean, and Mediterranean settlements, particularly with the Near East. The most notable Minoan palace is that of Knossos, followed by that of Phaistos. The Mycenaean Greeks arose on the mainland, becoming the first advanced civilization in mainland Greece, which lasted from approximately 1600 to 1100 BC. It is believed that the site of Mycenae, which sits close to the Aegean coast, was the center of Mycenaean civilization. The Mycenaeans introduced several innovations in the fields of engineering, architecture and military infrastructure, while trade over vast areas of the Mediterranean, including the Aegean, was essential for the Mycenaean economy. 
Their syllabic script, Linear B, offers the first written records of the Greek language, and their religion already included several deities that can also be found in the Olympic pantheon. Mycenaean Greece was dominated by a warrior elite society and consisted of a network of palace-centered states that developed rigid hierarchical, political, social and economic systems. At the head of this society was the king, known as the wanax. The civilization of the Mycenaean Greeks perished with the collapse of Bronze Age culture in the eastern Mediterranean, to be followed by the so-called Greek Dark Ages. It is undetermined what caused the collapse of the Mycenaeans. During the Greek Dark Ages, writing in the Linear B script ceased, vital trade links were lost, and towns and villages were abandoned. Ancient Greece The Archaic period followed the Greek Dark Ages in the 8th century BC. Greece became divided into small self-governing communities, and adopted the Phoenician alphabet, modifying it to create the Greek alphabet. By the 6th century BC several cities had emerged as dominant in Greek affairs: Athens, Sparta, Corinth, and Thebes, of which Athens, Sparta, and Corinth were closest to the Aegean Sea. Each of them had brought the surrounding rural areas and smaller towns under their control, and Athens and Corinth had become major maritime and mercantile powers as well. In the 8th and 7th centuries BC many Greeks emigrated to form colonies in Magna Graecia (Southern Italy and Sicily), Asia Minor and further afield. The Aegean Sea was the setting for one of the most pivotal naval engagements in history, when on September 20, 480 BC the Athenian fleet gained a decisive victory over the Persian fleet of Xerxes I of Persia at the Battle of Salamis, ending any further attempt at western expansion by the Achaemenid Empire. The Aegean Sea would later come to be under the control, albeit briefly, of the Kingdom of Macedonia. 
Philip II and his son Alexander the Great led a series of conquests that led not only to the unification of the Greek mainland and the control of the Aegean Sea under his rule, but also to the destruction of the Achaemenid Empire. After Alexander the Great's death, his empire was divided among his generals. Cassander became king of the Hellenistic kingdom of Macedon, which held territory along the western coast of the Aegean, roughly corresponding to modern-day Greece. The Kingdom of Lysimachus had control over the sea's eastern coast. Greece had entered the Hellenistic period. Roman rule The Macedonian Wars were a series of conflicts fought by the Roman Republic and its Greek allies in the eastern Mediterranean against several different major Greek kingdoms. They resulted in Roman control or influence over the eastern Mediterranean basin, including the Aegean, in addition to their hegemony in the western Mediterranean after the Punic Wars. During Roman rule, the land around the Aegean Sea fell under the provinces of Achaea, Macedonia, Thracia, Asia and Creta et Cyrenaica (the island of Crete). Medieval period The Fall of the Western Roman Empire allowed its successor state, the Byzantine Empire, to continue Roman control over the Aegean Sea. However, their territory would later be threatened by the early Muslim conquests initiated by Muhammad in the 7th century. Although the Rashidun Caliphate did not manage to obtain land along the coast of the Aegean Sea, its conquest of the eastern Anatolian peninsula as well as Egypt, the Levant, and North Africa left the Byzantine Empire weakened. The Umayyad Caliphate expanded the territorial gains of the Rashidun Caliphate, conquering much of North Africa, and threatened the Byzantine Empire's control of Western Anatolia, where it meets the Aegean Sea. During the 820s, Crete was conquered by a group of Andalusian exiles led by Abu Hafs Umar al-Iqritishi, and it became an independent Islamic state. 
The Byzantine Empire launched a campaign that took most of the island back in 842 and 843 under Theoktistos, but the reconquest was not completed and was soon reversed. Later attempts by the Byzantine Empire to recover the island were without success. For the approximately 135 years of its existence, the Emirate of Crete was one of the major foes of Byzantium. Crete commanded the sea lanes of the Eastern Mediterranean and functioned as a forward base and haven for Muslim corsair fleets that ravaged the Byzantine-controlled shores of the Aegean Sea. Crete returned to Byzantine rule under Nikephoros Phokas, who launched a huge campaign against the Emirate of Crete in 960 to 961. Meanwhile, the Bulgarian Empire threatened Byzantine control of Northern Greece and the Aegean coast to the south. Under Presian I and his successor Boris I, the Bulgarian Empire managed to obtain a small portion of the northern Aegean coast. Simeon I of Bulgaria led Bulgaria to its greatest territorial expansion, and managed to conquer much of the northern and western coasts of the Aegean. The Byzantines later regained control. The Second Bulgarian Empire achieved similar success, again along the northern and western coasts, under Ivan Asen II of Bulgaria. The Seljuq Turks, under the Seljuk Empire, invaded the Byzantine Empire in 1068 and annexed almost all the territories of Anatolia, including the east coast of the Aegean Sea, during the reign of Alp Arslan, the second Sultan of the Seljuk Empire. After the death of his successor, Malik Shah I, the empire was divided, and Malik Shah was succeeded in Anatolia by Kilij Arslan I, who founded the Sultanate of Rum. The Byzantines yet again recaptured the eastern coast of the Aegean. 
After Constantinople was occupied by Western European and Venetian forces during the Fourth Crusade, the area around the Aegean Sea was fragmented into multiple entities, including the Latin Empire, the Kingdom of Thessalonica, the Empire of Nicaea, the Principality of Achaea, and the Duchy of Athens. The Venetians created the maritime state of the Duchy of the Archipelago, which included all the Cyclades except Mykonos and Tinos. The Empire of Nicaea, a Byzantine rump state, managed to effect the recapture of Constantinople from the Latins in 1261 and defeat Epirus. Byzantine successes were not to last; the Ottomans would conquer the area around the Aegean coast, but before their expansion the Byzantine Empire had already been weakened by internal conflict. By the late 14th century the Byzantine Empire had lost all control of the coast of the Aegean Sea and could exercise power only around its capital, Constantinople. The Ottoman Empire then gained control of all the Aegean coast with the exception of Crete, which was a Venetian colony until 1669. Modern period The Greek War of Independence allowed a Greek state on the coast of the Aegean from 1829 onward. The Ottoman Empire held a presence over the sea for over 500 years until its dissolution following World War I, when it was replaced by modern Turkey. During the war, Greece gained control over the area around the northern coast of the Aegean. By the 1930s, Greece and Turkey had largely assumed their present-day borders. In the Italo-Turkish War of 1912, Italy captured the Dodecanese islands and occupied them thereafter, reneging on the 1919 Venizelos–Tittoni agreement to cede them to Greece. The Greco-Italian War took place from October 1940 to April 1941 as part of the Balkans Campaign of World War II. 
The Italian war aim was to establish a Greek puppet state, which would permit the Italian annexation of the Sporades and the Cyclades islands in the Aegean Sea, to be administered as a part of the Italian Aegean Islands. The German invasion resulted in the Axis occupation of Greece. The German troops evacuated Athens on 12 October 1944, and by the end of the month, they had withdrawn from mainland Greece. Greece was then liberated by Allied troops. Economy and politics Many of the islands in the Aegean have safe harbours and bays. In ancient times, navigation through the sea was easier than travelling across the rough terrain of the Greek mainland, and to some extent, the coastal areas of Anatolia. Many of the islands are volcanic, and marble and iron are mined on other islands. The larger islands have some fertile valleys and plains. The Achaemenid dynasty of Persia built one of the greatest highways of the ancient world, the Royal Road, which ran some 2,400 km between the heart of the Persian Empire and the Aegean Sea. A part of the road passed through southwestern Armenia, which gave the region an excellent opportunity to participate in international trade. Of the main islands in the Aegean Sea, two belong to Turkey – Bozcaada (Tenedos) and Gökçeada (Imbros); the rest belong to Greece. Between the two countries, there are political disputes over several aspects of political control over the Aegean space, including the size of territorial waters, air control and the delimitation of economic rights to the continental shelf. These issues are known as the Aegean dispute. Transport Multiple ports are located along the Greek and Turkish coasts of the Aegean Sea. The port of Piraeus in Athens is the chief port in Greece, the largest passenger port in Europe and the third largest in the world, servicing about 20 million passengers annually. 
With a throughput of 1.4 million TEUs, Piraeus is placed among the top ten ports in container traffic in Europe and is the top container port in the Eastern Mediterranean. Piraeus is also the commercial hub of Greek shipping. Every two years, Piraeus acts as the focus for a major shipping convention, known as Posidonia, which attracts maritime industry professionals from all over the world. Piraeus is currently Greece's third-busiest port in terms of tons of goods transported, behind Aghioi Theodoroi and Thessaloniki. The central port serves ferry routes to almost every island in the eastern portion of Greece, the island of Crete, the Cyclades, the Dodecanese, and much of the northern and the eastern Aegean Sea, while the western part of the port is used for cargo services. As of 2007, the Port of Thessaloniki was the second-largest container port in Greece after the port of Piraeus, making it one of the busiest ports in Greece. In 2007, the Port of Thessaloniki handled 14,373,245 tonnes of cargo and 222,824 TEUs. Paloukia, on the island of Salamis, is a major passenger port. Fishing Fish are Greece's second-largest agricultural export, and Greece has Europe's largest fishing fleet. Fish captured include sardines, mackerel, grouper, grey mullets, sea bass, and seabream. There is a considerable difference in fish catches between the pelagic and demersal zones; with respect to pelagic fisheries, the catches from the northern, central and southern Aegean area groupings are dominated, respectively, by anchovy, horse mackerels, and bogue (Boops boops). For demersal fisheries, the catches from the northern and southern Aegean area groupings are dominated by grey mullets and pickerel (Spicara smaris) respectively. The industry has been impacted by the Great Recession. Overfishing and habitat destruction are also concerns, threatening grouper and seabream populations and resulting in perhaps a 50% decline in fish catches. 
To address these concerns, Greek fishermen have been offered compensation by the government. Although some species are defined as protected or threatened under EU legislation, several such species, including the molluscs Pinna nobilis, Charonia tritonis and Lithophaga lithophaga, can be bought illegally in restaurants and fish markets around Greece. Tourism The Aegean islands within the Aegean Sea are significant tourist destinations. Tourism to the Aegean islands contributes a significant portion of tourism in Greece, especially since the second half of the 20th century. A total of five UNESCO World Heritage Sites are located in the Aegean Islands; these include the Monastery of Saint John the Theologian and the Cave of the Apocalypse on Patmos, the Pythagoreion and Heraion of Samos on Samos, the Nea Moni of Chios, the island of Delos, and the Medieval City of Rhodes. Greece is one of the most visited countries in Europe and the world, with over 33 million visitors in 2018, and the tourism industry accounts for around a quarter of Greece's gross domestic product. The islands of Santorini, Crete, Lesbos, Delos, and Mykonos are common tourist destinations. An estimated 2 million tourists visit Santorini annually. However, concerns relating to overtourism have arisen in recent years, such as issues of inadequate infrastructure and overcrowding. Alongside Greece, Turkey has also been successful in developing resort areas and attracting large numbers of tourists, contributing to tourism in Turkey. The phrase "Blue Cruise" refers to recreational voyages along the Turkish Riviera, including across the Aegean. The ancient city of Troy, a World Heritage Site, is on the Turkish coast of the Aegean. Greece and Turkey both take part in the Blue Flag beach certification programme of the Foundation for Environmental Education. The certification is awarded for beaches and marinas meeting strict quality standards, including environmental protection, water quality, safety and services criteria. 
As of 2015, the Blue Flag has been awarded to 395 beaches and 9 marinas in Greece. On the southern Turkish Aegean coast, Muğla Province leads with 102 Blue Flag beaches, followed by İzmir and Aydın, which have 49 and 30 awarded beaches respectively. See also Exclusive economic zone of Greece Geography of Turkey List of Greek place names References External links Seas of Greece Seas of Turkey Marginal seas of the Mediterranean European seas Seas of Asia Landforms of Çanakkale Province Landforms of Muğla Province Landforms of İzmir Province Landforms of Balıkesir Province Landforms of Edirne Province Landforms of Aydın Province
843
https://en.wikipedia.org/wiki/A%20Clockwork%20Orange%20%28novel%29
A Clockwork Orange (novel)
A Clockwork Orange is a dystopian satirical black comedy novel by English writer Anthony Burgess, published in 1962. It is set in a near-future society that has a youth subculture of extreme violence. The teenage protagonist, Alex, narrates his violent exploits and his experiences with state authorities intent on reforming him. The book is partially written in a Russian-influenced argot called "Nadsat", which takes its name from the Russian suffix that is equivalent to '-teen' in English. According to Burgess, it was a jeu d'esprit written in just three weeks. In 2005, A Clockwork Orange was included on Time magazine's list of the 100 best English-language novels written since 1923, and it was named by Modern Library and its readers as one of the 100 best English-language novels of the 20th century. The original manuscript of the book has been kept at McMaster University's William Ready Division of Archives and Research Collections in Hamilton, Ontario, Canada since the institution purchased the documents in 1971. It is considered one of the most influential dystopian books. Plot summary Part 1: Alex's world Alex is a 15-year-old gang leader living in a near-future dystopian city. His friends ("droogs" in the novel's Anglo-Russian slang, "Nadsat") and fellow gang members are Dim, a slow-witted bruiser, who is the gang's muscle; Georgie, an ambitious second-in-command; and Pete, who mostly plays along as the droogs indulge their taste for "ultra-violence" (random, violent mayhem). Characterised as a sociopath and hardened juvenile delinquent, Alex is also intelligent, quick-witted, and enjoys classical music; he is particularly fond of Beethoven, whom he calls "Lovely Ludwig Van". The story begins with the droogs sitting in their favourite hangout, the Korova Milk Bar, and drinking "milk-plus" – a beverage consisting of milk laced with the customer's drug of choice – to prepare for a night of ultra-violence. 
They assault a scholar walking home from the public library; rob a store, leaving the owner and his wife bloodied and unconscious; beat up a beggar; then scuffle with a rival gang. Joyriding through the countryside in a stolen car, they break into an isolated cottage and terrorise the young couple living there, beating the husband and gang-raping his wife. In a metafictional touch, the husband is a writer working on a manuscript called "A Clockwork Orange", and Alex contemptuously reads out a paragraph that states the novel's main theme before shredding the manuscript. Back at the Korova, Alex strikes Dim for his crude response to a woman's singing of an operatic passage, and strains within the gang become apparent. At home in his parents' flat, Alex plays classical music at top volume, which he describes as giving him orgasmic bliss before falling asleep. Alex feigns illness to his parents to stay out of school the next day. Following an unexpected visit from P.R. Deltoid, his "post-corrective adviser", Alex visits a record store, where he meets two pre-teen girls. He invites them back to the flat, where he drugs and rapes them. That night after a nap, Alex finds his droogs in a mutinous mood, waiting downstairs in the torn-up and graffitied lobby. Georgie challenges Alex for leadership of the gang, demanding that they focus on higher-value targets in their robberies. Alex quells the rebellion by slashing Dim's hand and fighting with Georgie, then pacifies the gang by agreeing to Georgie's plan to rob the home of a wealthy elderly woman. Alex breaks in and knocks the woman unconscious; but, when he hears sirens and opens the door to flee, Dim strikes him in payback for the earlier fight. The gang abandons Alex on the front step to be arrested by the police; while in custody, he learns that the woman has died from her injuries. Part 2: The Ludovico Technique Alex is convicted of murder and sentenced to 14 years in prison. 
His parents visit one day to inform him that Georgie has been killed in a botched robbery. Two years into his term, he has obtained a job in one of the prison chapels, playing music on the stereo to accompany the Sunday Christian services. The chaplain mistakes Alex's Bible studies for stirrings of faith; in reality, Alex is only reading Scripture for the violent or sexual passages. After his cellmates blame him for beating a troublesome fellow prisoner to death, he is chosen to undergo an experimental behaviour modification treatment called the Ludovico Technique in exchange for having the remainder of his sentence commuted. The technique is a form of aversion therapy in which Alex is injected with nausea-inducing drugs while watching graphically violent films, eventually conditioning him to become severely ill at the mere thought of violence. As an unintended consequence, the soundtrack to one of the films, Beethoven's Ninth Symphony, renders Alex unable to enjoy his beloved classical music as before. The effectiveness of the technique is demonstrated to a group of VIPs, who watch as Alex collapses before a bully and abases himself before a scantily clad young woman. Although the prison chaplain accuses the state of stripping Alex of free will, the government officials on the scene are pleased with the results, and Alex is released from prison. Part 3: After prison Alex returns to his parents' flat, only to find that they are letting his room to a lodger. Now homeless, he wanders the streets and enters a public library, hoping to learn of a painless method for committing suicide. The old scholar whom Alex had assaulted in Part 1 finds him and beats him, with the help of several friends. Two policemen come to Alex's rescue, but they turn out to be Dim and Billyboy, a former rival gang leader. They take Alex outside town, brutalise him, and abandon him there. 
Alex collapses at the door of an isolated cottage, realising too late that it is the one he and his droogs invaded in Part 1. The writer, F. Alexander, still lives there, but his wife has since died of what he believes to be injuries she sustained in the rape. He does not recognise Alex but gives him shelter and questions him about the conditioning he has undergone. Alexander and his colleagues, all highly critical of the government, plan to use Alex as a symbol of state brutality and thus prevent the incumbent government from being re-elected. Alex inadvertently reveals that he was the ringleader of the home invasion; he is removed from the cottage and locked in an upper-storey bedroom as a relentless barrage of classical music plays over speakers. He attempts suicide by leaping from the window. Alex wakes up in a hospital, where he is courted by government officials anxious to counter the bad publicity created by his suicide attempt. He is informed that Alexander has been "put away" for Alex's protection and his own. Alex is offered a well-paying job if he agrees to side with the government once he is discharged. A round of tests reveals that his old violent impulses have returned, indicating that the hospital doctors have undone the effects of his conditioning. As photographers snap pictures, Alex daydreams of orgiastic violence and reflects, "I was cured all right." In the final chapter, Alex, now 18 years old and working for the nation's musical recording archives, finds himself halfheartedly preparing for yet another night of crime with a new gang (Len, Rick and Bully). After a chance encounter with Pete, who has reformed and married, Alex finds himself taking less and less pleasure in acts of senseless violence. 
He begins contemplating giving up crime himself to become a productive member of society and start a family of his own, while reflecting on the notion that his own children could possibly end up being just as destructive as he has been, if not more so. Omission of the final chapter The book has three parts, each with seven chapters. Burgess has stated that the total of 21 chapters was an intentional nod to the age of 21 being recognised as a milestone in human maturation. The 21st chapter was omitted from the editions published in the United States prior to 1986. In the introduction to the updated American text (these newer editions include the missing 21st chapter), Burgess explains that when he first brought the book to an American publisher, he was told that U.S. audiences would never go for the final chapter, in which Alex sees the error of his ways, decides he has simply grown bored with violence and resolves to turn his life around. At the American publisher's insistence, Burgess allowed their editors to cut the redeeming final chapter from the U.S. version, so that the tale would end on a darker note, with Alex becoming his old, ultraviolent self again – an ending which the publisher insisted would be "more realistic" and appealing to a US audience. The film adaptation, directed by Stanley Kubrick, is based on the American edition of the book (which Burgess considered to be "badly flawed"). Kubrick called Chapter 21 "an extra chapter" and claimed that he had not read the original version until he had virtually finished the screenplay and that he had never given serious consideration to using it. In Kubrick's opinion – as in the opinion of other readers, including the original American editor – the final chapter was unconvincing and inconsistent with the book. Characters Alex: The novel's protagonist and leader among his droogs. He often refers to himself as "Your Humble Narrator". 
Having coaxed two ten-year-old girls into his bedroom, Alex refers to himself as "Alexander the Large" while raping them; this was later the basis for Alex's claimed surname DeLarge in the 1971 film. George, Georgie or Georgie Boy: Effectively Alex's greedy second-in-command. Georgie attempts to undermine Alex's status as leader and take over the gang himself. He is later killed during a botched robbery while Alex is in prison. Pete: The only one who does not take particular sides when the droogs fight among themselves. He later meets and marries a girl named Georgina, renouncing his violent ways and even losing his former (Nadsat) speech patterns. A chance encounter with Pete in the final chapter influences Alex to realise that he has grown bored with violence and recognise that human energy is better expended on creation than destruction. Dim: An idiotic and thoroughly gormless member of the gang, persistently condescended to by Alex, but respected to some extent by his droogs for his formidable fighting abilities, his weapon of choice being a length of bike chain. He later becomes a police officer, exacting his revenge on Alex for the abuse he once suffered under his command. P. R. Deltoid: A criminal rehabilitation social worker assigned the task of keeping Alex on the straight and narrow. He seemingly has no clue about dealing with young people, and is devoid of empathy or understanding for his troublesome charge. Indeed, when Alex is arrested for murdering an old woman and then ferociously beaten by several police officers, Deltoid simply spits on him. Prison Chaplain: The character who first questions whether it is moral to turn a violent person into a behavioural automaton who can make no choice in such matters. He is the only character who is truly concerned about Alex's welfare, though Alex does not take him seriously. Alex nicknames him "prison charlie" or "chaplin", a pun on Charlie Chaplin. Billyboy: A rival of Alex's. 
Early in the story, Alex and his droogs battle Billyboy and his droogs, which ends abruptly when the police arrive. Later, after Alex is released from prison, Billyboy (along with Dim, who like Billyboy has become a police officer) rescues Alex from a mob, then subsequently beats him in a location out of town. Prison Governor: The man who decides to let Alex "choose" to be the first reformed by the Ludovico technique. The Minister of the Interior: The high-ranking government official who decides that the Ludovico Technique will be used to cut recidivism; Alex refers to him as "the Inferior". Dr Branom: A scientist, co-developer of the Ludovico technique. He appears friendly and almost paternal towards Alex at first, before forcing him into the theatre and what Alex calls the "chair of torture". Dr Brodsky: Branom's colleague and co-developer of the Ludovico technique. He seems much more passive than Branom and says considerably less. F. Alexander: An author who was in the process of typing his magnum opus A Clockwork Orange when Alex and his droogs broke into his house, beat him, tore up his work and then brutally gang-raped his wife, which caused her subsequent death. He is left deeply scarred by these events and when he encounters Alex two years later, he uses him as a guinea pig in a sadistic experiment intended to prove the Ludovico technique unsound. The government imprisons him afterwards. He is given the name Frank Alexander in the film. Cat Woman: A woman, never directly named, who foils Alex's gang's scheme to enter her home, and threatens to shoot Alex and set her cats on him if he does not leave. After Alex breaks into her house, she fights with him, ordering her cats to join the melee, but reprimands Alex for fighting them off. She sustains a fatal blow to the head during the scuffle. She is given the name Miss Weathers in the film. Analysis Background A Clockwork Orange was written in Hove, then a senescent seaside town. 
Burgess had arrived back in Britain after his stint abroad to see that much had changed. A youth culture had developed, based around coffee bars, pop music and teenage gangs. England was gripped by fears over juvenile delinquency. Burgess stated that the novel's inspiration was his first wife Lynne's beating by a gang of drunk American servicemen stationed in England during World War II. She subsequently miscarried. In its investigation of free will, the book's target is ostensibly the concept of behaviourism, pioneered by such figures as B. F. Skinner. Burgess later stated that he wrote the book in three weeks. Title Burgess has offered several clarifications about the meaning and origin of its title: He had overheard the phrase "as queer as a clockwork orange" in a London pub in 1945 and assumed it was a Cockney expression. In Clockwork Marmalade, an essay published in the Listener in 1972, he said that he had heard the phrase several times since that occasion. He also explained the title in response to a question from William Everson on the television programme Camera Three in 1972, "Well, the title has a very different meaning but only to a particular generation of London Cockneys. It's a phrase which I heard many years ago and so fell in love with, I wanted to use it, the title of the book. But the phrase itself I did not make up. The phrase "as queer as a clockwork orange" is good old East London slang and it didn't seem to me necessary to explain it. Now, obviously, I have to give it an extra meaning. I've implied an extra dimension. I've implied the junction of the organic, the lively, the sweet – in other words, life, the orange – and the mechanical, the cold, the disciplined. I've brought them together in this kind of oxymoron, this sour-sweet word." Nonetheless, no other record of the expression being used before 1962 has ever appeared. 
Kingsley Amis notes in his Memoirs (1991) that no trace of it appears in Eric Partridge's Dictionary of Historical Slang. The saying "as queer as ..." followed by an improbable object: "... a clockwork orange", or "... a four-speed walking stick" or "... a left-handed corkscrew" etc. predates Burgess's novel. An early example, "as queer as Dick's hatband", appeared in 1796, and was alluded to in 1757. His second explanation was that it was a pun on the Malay word orang, meaning "man". The novella contains no other Malay words or links. In a prefatory note to A Clockwork Orange: A Play with Music, he wrote that the title was a metaphor for "an organic entity, full of juice and sweetness and agreeable odour, being turned into a mechanism". In his essay Clockwork Oranges, Burgess asserts that "this title would be appropriate for a story about the application of Pavlovian or mechanical laws to an organism which, like a fruit, was capable of colour and sweetness". While addressing the reader in a letter before some editions of the book, the author says that when a man ceases to have free will, he is no longer a man: "just a clockwork orange", a shiny, appealing object, but "just a toy to be wound-up by either God or the Devil, or (what is increasingly replacing both) the State". The title alludes to the protagonist's negative emotional responses to feelings of evil which prevent the exercise of his free will subsequent to the administration of the Ludovico Technique. To induce this conditioning, Alex is forced to watch scenes of violence on a screen that are systematically paired with negative physical stimulation. The negative physical stimulation takes the form of nausea and "feelings of terror", which are caused by an emetic medicine administered just before the presentation of the films. Use of slang The book, narrated by Alex, contains many words in a slang argot which Burgess invented for the book, called Nadsat. 
It is a mix of modified Slavic words, rhyming slang and Russian-derived words (like baboochka). For instance, these terms have the following meanings in Nadsat: droog (друг) = friend; moloko (молоко) = milk; gulliver (голова) = head; malchick (мальчик) or malchickiwick = boy; soomka (сумка) = sack or bag; Bog = God; horrorshow (хорошо) = good; prestoopnick (преступник) = criminal; rooker (рука) = hand; cal (кал) = crap; veck (человек) = man or guy; litso (лицо) = face; malenky (маленький) = little; and so on. Some words Burgess invented himself or just adapted from pre-existing languages. Compare Polari. One of Alex's doctors explains the language to a colleague as "odd bits of old rhyming slang; a bit of gypsy talk, too. But most of the roots are Slav propaganda. Subliminal penetration." Some words are not derived from anything, but merely easy to guess, e.g. "in-out, in-out" or "the old in-out" means sexual intercourse. Cutter, however, means "money", because "cutter" rhymes with "bread-and-butter"; this is rhyming slang, which is intended to be impenetrable to outsiders (especially eavesdropping policemen). Additionally, slang like appypolly loggy ("apology") seems to derive from schoolboy slang, reflecting Alex's age of 15. In the first edition of the book, no key was provided, and the reader was left to interpret the meaning from the context. In his appendix to the restored edition, Burgess explained that the slang would keep the book from seeming dated, and served to muffle "the raw response of pornography" from the acts of violence. The term "ultraviolence", referring to excessive or unjustified violence, was coined by Burgess in the book, which includes the phrase "do the ultra-violent". The term's association with aesthetic violence has led to its use in the media. Banning and censorship history in the US In 1976, A Clockwork Orange was removed from an Aurora, Colorado high school because of "objectionable language". 
A year later in 1977 it was removed from high school classrooms in Westport, Massachusetts over similar concerns with "objectionable" language. In 1982, it was removed from two Anniston, Alabama libraries, later to be reinstated on a restricted basis. Also, in 1973 a bookseller was arrested for selling the novel. The charges were later dropped. However, each of these instances came after the release of Stanley Kubrick's popular 1971 film adaptation of A Clockwork Orange, itself the subject of much controversy. Reception Initial response The Sunday Telegraph review was positive, and described the book as "entertaining ... even profound". Kingsley Amis in The Observer acclaimed the novel as "cheerful horror", writing "Mr Burgess has written a fine farrago of outrageousness, one which incidentally suggests a view of juvenile violence I can’t remember having met before". Malcolm Bradbury wrote "All of Mr Burgess’s powers as a comic writer, which are considerable, have gone into the rich language of his inverted Utopia. If you can stomach the horrors, you’ll enjoy the manner". Roald Dahl called it "a terrifying and marvellous book". Many reviewers praised the inventiveness of the language, but expressed unease at the violent subject matter. The Spectator praised Burgess's "extraordinary technical feat" but was uncomfortable with "a certain arbitrariness about the plot which is slightly irritating". New Statesman acclaimed Burgess for addressing "acutely and savagely the tendencies of our time" but called the book "a great strain to read". The Sunday Times review was negative, and described the book as "a very ordinary, brutal and psychologically shallow story". The Times also reviewed the book negatively, describing it as "a somewhat clumsy experiment with science fiction [with] clumsy cliches about juvenile delinquency". The violence was criticised as "unconvincing in detail". Writer's appraisal Burgess dismissed A Clockwork Orange as "too didactic to be artistic". 
He claimed that the violent content of the novel "nauseated" him. In 1985, Burgess published Flame into Being: The Life and Work of D. H. Lawrence and while discussing Lady Chatterley's Lover in his biography, Burgess compared that novel's notoriety with A Clockwork Orange: "We all suffer from the popular desire to make the known notorious. The book I am best known for, or only known for, is a novel I am prepared to repudiate: written a quarter of a century ago, a jeu d'esprit knocked off for money in three weeks, it became known as the raw material for a film which seemed to glorify sex and violence. The film made it easy for readers of the book to misunderstand what it was about, and the misunderstanding will pursue me until I die. I should not have written the book because of this danger of misinterpretation, and the same may be said of Lawrence and Lady Chatterley's Lover." Awards, nominations and rankings 1983 – Prometheus Award (Preliminary Nominee) 1999 – Prometheus Award (Nomination) 2002 – Prometheus Award (Nomination) 2003 – Prometheus Award (Nomination) 2006 – Prometheus Award (Nomination) 2008 – Prometheus Award (Hall of Fame Award) A Clockwork Orange was chosen by Time magazine as one of the 100 best English-language books from 1923 to 2005. Adaptations A 1965 film by Andy Warhol entitled Vinyl was an adaptation of Burgess's novel. The best known adaptation of the novella to other forms is the 1971 film A Clockwork Orange by Stanley Kubrick, featuring Malcolm McDowell as Alex. In 1987, Burgess published a stage play titled A Clockwork Orange: A Play with Music. The play includes songs, written by Burgess, which are inspired by Beethoven and Nadsat slang. A manga anthology by Osamu Tezuka entitled Tokeijikake no Ringo (Clockwork Apple) was released in 1983. 
In 1988, a German adaptation of A Clockwork Orange at the intimate theatre of Bad Godesberg featured a musical score by the German punk rock band Die Toten Hosen which, combined with orchestral clips of Beethoven's Ninth Symphony and "other dirty melodies" (so stated by the subtitle), was released on the album Ein kleines bisschen Horrorschau. The track Hier kommt Alex became one of the band's signature songs. In February 1990, another musical version was produced at the Barbican Theatre in London by the Royal Shakespeare Company. Titled A Clockwork Orange: 2004, it received mostly negative reviews, with John Peter of The Sunday Times of London calling it "only an intellectual Rocky Horror Show", and John Gross of The Sunday Telegraph calling it "a clockwork lemon". Even Burgess himself, who wrote the script based on his novel, was disappointed. According to The Evening Standard, he called the score, written by Bono and The Edge of the rock group U2, "neo-wallpaper". Burgess had originally worked alongside the director of the production, Ron Daniels, and envisioned a musical score that was entirely classical. Unhappy with the decision to abandon that score, he heavily criticised the band's experimental mix of hip hop, liturgical and gothic music. Lise Hand of The Irish Independent reported The Edge as saying that Burgess's original conception was "a score written by a novelist rather than a songwriter". Calling it "meaningless glitz", Jane Edwardes of 20/20 magazine said that watching this production was "like being invited to an expensive French Restaurant – and being served with a Big Mac." In 1994, Chicago's Steppenwolf Theater put on a production of A Clockwork Orange directed by Terry Kinney. The American premiere of novelist Anthony Burgess's own adaptation of his A Clockwork Orange starred K. Todd Freeman as Alex. In 2001, UNI Theatre (Mississauga, Ontario) presented the Canadian premiere of the play under the direction of Terry Costa. 
In 2002, Godlight Theatre Company presented the New York premiere adaptation of A Clockwork Orange at Manhattan Theatre Source. The production went on to play at the SoHo Playhouse (2002), Ensemble Studio Theatre (2004), 59E59 Theaters (2005) and the Edinburgh Festival Fringe (2005). At Edinburgh, the production received rave reviews from the press and played to sold-out audiences. The production was directed by Godlight's artistic director, Joe Tantalo. In 2003, Los Angeles director Brad Mays and the ARK Theatre Company staged a multi-media adaptation of A Clockwork Orange, which was named "Pick of the Week" by the LA Weekly and nominated for three of the 2004 LA Weekly Theater Awards: Direction, Revival Production (of a 20th-century work), and Leading Female Performance. Vanessa Claire Smith won Best Actress for her gender-bending portrayal of Alex, the music-loving teenage sociopath. This production utilised three separate video streams outputted to seven onstage video monitors – six 19-inch and one 40-inch. In order to preserve the first-person narrative of the book, a pre-recorded video stream of Alex, "your humble narrator", was projected onto the 40-inch monitor, thereby freeing the onstage character during passages which would have been awkward or impossible to sustain while breaking the fourth wall. An adaptation of the work, based on the original novel, the film and Burgess's own stage version, was performed by the SiLo Theatre in Auckland, New Zealand in early 2007. In 2021, the International Anthony Burgess Foundation premiered a webpage cataloguing various productions of A Clockwork Orange from around the world. Release details 1962, UK, William Heinemann (ISBN ?), December 1962, Hardcover 1962, US, W. W. Norton & Co Ltd (ISBN ?), 1962, Hardcover 1963, US, W. W. 
Norton & Co Ltd (), 1963, Paperback 1965, US, Ballantine Books (), 1965, Paperback 1969, US, Ballantine Books (ISBN ?), 1969, Paperback 1971, US, Ballantine Books (), 1971, Paperback, Movie released 1972, UK, Lorrimer, (), 11 September 1972, Hardcover 1972, UK, Penguin Books Ltd (), 25 January 1973, Paperback 1973, US, Caedmon Records, 1973, Vinyl LP (First 4 chapters read by Anthony Burgess) 1977, US, Ballantine Books (), 12 September 1977, Paperback 1979, US, Ballantine Books (), April 1979, Paperback 1983, US, Ballantine Books (), 12 July 1983, Unbound 1986, US, W. W. Norton & Company (), November 1986, Paperback (Adds final chapter not previously available in U.S. versions) 1987, UK, W. W. Norton & Co Ltd (), July 1987, Hardcover 1988, US, Ballantine Books (), March 1988, Paperback 1995, UK, W. W. Norton & Co Ltd (), June 1995, Paperback 1996, UK, Penguin Books Ltd (), 25 April 1996, Paperback 1996, UK, HarperAudio (), September 1996, Audio Cassette 1997, UK, Heyne Verlag (), 31 January 1997, Paperback 1998, UK, Penguin Books Ltd (), 3 September 1998, Paperback 1999, UK, Rebound by Sagebrush (), October 1999, Library Binding 2000, UK, Penguin Books Ltd (), 24 February 2000, Paperback 2000, UK, Penguin Books Ltd (), 2 March 2000, Paperback 2000, UK, Turtleback Books (), November 2000, Hardback 2001, UK, Penguin Books Ltd (), 27 September 2001, Paperback 2002, UK, Thorndike Press (), October 2002, Hardback 2005, UK, Buccaneer Books (), 29 January 2005, Library Binding 2010, Greece, Anubis Publications (), 2010, Paperback (Adds final chapter not previously available in Greek versions) 2012, US, W. W. Norton & Company () 22 October 2012, Hardback (50th Anniversary Edition), revised text version. Andrew Biswell, PhD, director of the International Burgess Foundation, has taken a close look at the three varying published editions alongside the original typescript to recreate the novel as Anthony Burgess envisioned it. 
See also Classical conditioning List of cultural references to A Clockwork Orange List of stories set in a future now past Project MKUltra Violence in art References Further reading A Clockwork Orange: A Play With Music. Century Hutchinson Ltd. (1987). An extract is quoted on several web sites: Anthony Burgess from A Clockwork Orange: A Play With Music (Century Hutchinson Ltd, 1987), , A Clockwork Orange - From A Clockwork Orange: A Play With Music Burgess, Anthony (1978). "Clockwork Oranges". In 1985. London: Hutchinson. (extracts quoted here) External links A Clockwork Orange at SparkNotes A Clockwork Orange at Literapedia A Clockwork Orange (1962) | Last chapter | Anthony Burgess (1917–1993) Comparisons with the Kubrick film adaptation Dalrymple, Theodore. "A Prophetic and Violent Masterpiece", City Journal Giola, Ted. "A Clockwork Orange by Anthony Burgess" at Conceptual Fiction Priestley, Brenton. "Of Clockwork Apples and Oranges: Burgess and Kubrick (2002)" Novel 1962 British novels 1962 science fiction novels Fiction about mind control Books written in fictional dialects British novellas British novels adapted into films British novels adapted into plays British philosophical novels British science fiction novels Censored books Dystopian novels Fiction with unreliable narrators Novels about music Novels by Anthony Burgess Obscenity controversies in literature Novels about rape Heinemann (publisher) books English-language novels Novels set in London Metafictional novels Novels about sociopathy Science fiction novels adapted into films Crime novels
844
https://en.wikipedia.org/wiki/Amsterdam
Amsterdam
Amsterdam is the capital and most populous city of the Netherlands, with a population of 872,680 within the city proper, 1,558,755 in the urban area and 2,480,394 in the metropolitan area. Located in the Dutch province of North Holland, Amsterdam is colloquially referred to as the "Venice of the North", due to the large number of canals which form a UNESCO World Heritage Site. Amsterdam was founded at the river Amstel, which was dammed to control flooding; the city's name derives from the Amstel dam. Originating as a small fishing village in the late 12th century, Amsterdam became one of the most important ports in the world during the Dutch Golden Age of the 17th century, and became the leading centre for finance and trade. In the 19th and 20th centuries, the city expanded and many new neighborhoods and suburbs were planned and built. The 17th-century canals of Amsterdam and the 19–20th century Defence Line of Amsterdam are on the UNESCO World Heritage List. Sloten, annexed in 1921 by the municipality of Amsterdam, is the oldest part of the city, dating to the 9th century. Amsterdam's main attractions include its historic canals, the Rijksmuseum, the Van Gogh Museum, the Stedelijk Museum, Hermitage Amsterdam, the Concertgebouw, the Anne Frank House, the Scheepvaartmuseum, the Amsterdam Museum, the Heineken Experience, the Royal Palace of Amsterdam, Natura Artis Magistra, Hortus Botanicus Amsterdam, NEMO, the red-light district and many cannabis coffee shops. It drew more than 5 million international visitors in 2014. The city is also well known for its nightlife and festival activity, with several of its nightclubs (Melkweg, Paradiso) among the world's most famous. The city is primarily known for its artistic heritage, elaborate canal system and narrow houses with gabled façades, well-preserved legacies of the city's 17th-century Golden Age. These characteristics are arguably responsible for attracting millions of visitors to Amsterdam annually. 
Cycling is key to the city's character, and there are numerous bike paths and lanes spread throughout the entire city. The Amsterdam Stock Exchange is considered the oldest "modern" securities market stock exchange in the world. As the commercial capital of the Netherlands and one of the top financial centres in Europe, Amsterdam is considered an alpha world city by the Globalization and World Cities (GaWC) study group. The city is also the cultural capital of the Netherlands. Many large Dutch institutions have their headquarters in the city, including the Philips conglomerate, AkzoNobel, Booking.com, TomTom, and ING. Moreover, many of the world's largest companies are based in Amsterdam or have established their European headquarters in the city, among them leading technology companies Uber, Netflix and Tesla. In 2012, Amsterdam was ranked the second-best city to live in by the Economist Intelligence Unit (EIU) and 12th globally on quality of living for environment and infrastructure by Mercer. The city was ranked 4th globally as a top tech hub in the Savills Tech Cities 2019 report (2nd in Europe), and 3rd in innovation by the Australian innovation agency 2thinknow in their Innovation Cities Index 2009. The Port of Amsterdam is the fifth largest in Europe. Schiphol, Amsterdam's main airport and the hub of KLM, is the Netherlands' busiest airport, the third busiest in Europe and the 11th busiest in the world. The Dutch capital is considered one of the most multicultural cities in the world, with at least 177 nationalities represented. Notable residents of Amsterdam throughout history include the painters Rembrandt and Van Gogh, the diarist Anne Frank, and the philosopher Baruch Spinoza. History Prehistory Because of its geographical location in what used to be wet peatland, Amsterdam was founded later than other urban centers in the Low Countries. 
However, local farmers settled in and around the area of what later became Amsterdam as early as three millennia ago. They lived along the prehistoric IJ river and upstream of its tributary, the Amstel. The prehistoric IJ was a shallow and quiet stream in peatland behind beach ridges. This secluded area could grow into an important local settlement center, especially in the late Bronze Age, the Iron Age and the Roman Age. Neolithic and Roman artefacts have also been found downstream of this area, in the prehistoric Amstel bedding under Amsterdam's Damrak and Rokin, such as shards of Bell Beaker culture pottery (2200–2000 BC) and a granite grinding stone (2750–2700 BC). The location of these artefacts around the river banks of the Amstel probably points to the presence of a modest semi-permanent or seasonal settlement of the previously mentioned local farmers. A permanent settlement would not have been possible, since the river mouth and the banks of the Amstel were too wet for permanent habitation at this period in time. Etymology and founding The origin of Amsterdam is linked to the development of the peatland called Amestelle, meaning 'watery area', from Aa(m) 'river' + stelle 'site at a shoreline', 'river bank'. In this area, land reclamation started as early as the late 10th century. Amestelle was located along a side arm of the IJ. This side arm took its name from the eponymous land: Amstel. Amestelle was inhabited by farmers, who lived more inland and more upstream, where the land was not as wet as at the banks of the downstream river mouth. These farmers started the reclamation around upstream Ouderkerk aan de Amstel, and later at the other side of the river at Amstelveen. The Van Amstel family, known in documents by this name since 1019, held the stewardship in this northwestern nook of the ecclesiastical district of the bishop of Utrecht. The family later also served under the count of Holland. 
A major turning point in the development of the Amstel river mouth was the All Saints' Flood of 1170. In an extremely short period of time, the shallow river IJ turned into a wide estuary, which from then on offered the Amstel an open connection to the Zuiderzee, IJssel and waterways further afield. This made the water flow of the Amstel more active, so excess water could be drained better. With drier banks, the downstream Amstel mouth became attractive for permanent habitation. Moreover, the river had grown from an insignificant peat stream into a junction of international waterways. A settlement was built here immediately after the landscape change of 1170, and from the start it focused on traffic, production and trade, not on farming, in contrast to how communities had lived further upstream for the previous 200 years and northward for thousands of years. The construction of a dam at the mouth of the Amstel, eponymously named Dam, is historically estimated to have occurred between 1264 and 1275. The settlement first appeared in a document concerning a road toll granted by Count Floris V of Holland to the residents apud Amestelledamme, 'at the dam in the Amstel' or 'at the dam of Amstelland'. This allowed the inhabitants of the village to travel freely through the County of Holland, paying no tolls at bridges, locks and dams. By 1327, the name had developed into Aemsterdam. Middle Ages Amsterdam was granted city rights in either 1300 or 1306. From the 14th century on, Amsterdam flourished, largely from trade with the Hanseatic League. In 1345, an alleged Eucharistic miracle in Kalverstraat made the city an important place of pilgrimage until the adoption of the Protestant faith. The Miracle devotion then went underground but was kept alive. In the 19th century, especially after the jubilee of 1845, the devotion was revitalised and became an important national point of reference for Dutch Catholics. 
The Stille Omgang, a silent walk or procession in civil attire, has been the expression of this pilgrimage within the Protestant Netherlands since the late 19th century. In the heyday of the Silent Walk, up to 90,000 pilgrims came to Amsterdam. In the 21st century, this number has fallen to about 5,000. Conflict with Spain In the 16th century, the Dutch rebelled against Philip II of Spain and his successors. The main reasons for the uprising were the imposition of new taxes, such as the tenth penny, and the religious persecution of Protestants by the newly introduced Inquisition. The revolt escalated into the Eighty Years' War, which ultimately led to Dutch independence. Strongly pushed by Dutch Revolt leader William the Silent, the Dutch Republic became known for its relative religious tolerance. Jews from the Iberian Peninsula, Huguenots from France, prosperous merchants and printers from Flanders, and economic and religious refugees from the Spanish-controlled parts of the Low Countries found safety in Amsterdam. The influx of Flemish printers and the city's intellectual tolerance made Amsterdam a centre for the European free press. Centre of the Dutch Golden Age The 17th century is considered Amsterdam's Golden Age, during which it became the wealthiest city in the western world. Ships sailed from Amsterdam to the Baltic Sea, North America, and Africa, as well as present-day Indonesia, India, Sri Lanka, and Brazil, forming the basis of a worldwide trading network. Amsterdam's merchants had the largest share in both the Dutch East India Company and the Dutch West India Company. These companies acquired overseas possessions that later became Dutch colonies. Amsterdam was Europe's most important point for the shipment of goods and the leading financial centre of the western world. In 1602, the Amsterdam office of the Dutch East India Company became the world's first stock exchange by trading in its own shares. 
The Bank of Amsterdam started operations in 1609, acting as a full-service bank for Dutch merchant bankers and as a reserve bank. Decline and modernisation Amsterdam's prosperity declined during the 18th and early 19th centuries. The wars of the Dutch Republic with England and France took their toll on Amsterdam. During the Napoleonic Wars, Amsterdam's significance reached its lowest point, with Holland being absorbed into the French Empire. However, the establishment of the United Kingdom of the Netherlands in 1815 marked a turning point. The end of the 19th century is sometimes called Amsterdam's second Golden Age. New museums, a railway station, and the Concertgebouw were built; at the same time, the Industrial Revolution reached the city. The Amsterdam–Rhine Canal was dug to give Amsterdam a direct connection to the Rhine, and the North Sea Canal was dug to give the port a shorter connection to the North Sea. Both projects dramatically improved commerce with the rest of Europe and the world. In 1906, Joseph Conrad gave a brief description of Amsterdam as seen from the seaside in The Mirror of the Sea. 20th century–present Shortly before the First World War, the city started to expand again, and new suburbs were built. Even though the Netherlands remained neutral in this war, Amsterdam suffered a food shortage, and heating fuel became scarce. The shortages sparked riots, known as the Aardappeloproer (Potato rebellion), in which several people were killed. People looted stores and warehouses to get supplies, mainly food. On 1 January 1921, after a flood in 1916, the depleted municipalities of Durgerdam, Holysloot, Zunderdorp and Schellingwoude, all lying north of Amsterdam, were, at their own request, annexed to the city. Between the wars, the city continued to expand, most notably to the west of the Jordaan district in the Frederik Hendrikbuurt and surrounding neighbourhoods. 
Nazi Germany invaded the Netherlands on 10 May 1940 and took control of the country. Some Amsterdam citizens sheltered Jews, thereby exposing themselves and their families to a high risk of being imprisoned or sent to concentration camps. More than 100,000 Dutch Jews were deported to Nazi concentration camps, of whom some 60,000 lived in Amsterdam. In response to the raids, the Dutch Communist Party organized the February strike, attended by 300,000 people. Perhaps the most famous deportee was the young Jewish girl Anne Frank, who died in the Bergen-Belsen concentration camp. At the end of the Second World War, communication with the rest of the country broke down, and food and fuel became scarce. Many citizens traveled to the countryside to forage. Dogs, cats, raw sugar beets, and tulip bulbs, cooked to a pulp, were consumed to stay alive. Many trees in Amsterdam were cut down for fuel, and wood was taken from the houses, apartments and other buildings of deported Jews. Many new suburbs, such as Osdorp, Slotervaart, Slotermeer and Geuzenveld, were built in the years after the Second World War. These suburbs contained many public parks and wide-open spaces, and the new buildings provided improved housing conditions with larger and brighter rooms, gardens, and balconies. Because of the war and other events of the 20th century, almost the entire city centre had fallen into disrepair. As society was changing, politicians and other influential figures made plans to redesign large parts of it. There was an increasing demand for office buildings, and also for new roads, as the automobile became available to most people. A metro started operating in 1977 between the new suburb of Bijlmermeer in the city's Zuidoost (southeast) exclave and the centre of Amsterdam. Further plans were to build a new highway above the metro to connect Amsterdam Centraal and the city centre with other parts of the city. 
The required large-scale demolitions began in Amsterdam's former Jewish neighborhood. Smaller streets, such as the Jodenbreestraat and Weesperstraat, were widened, and almost all houses and buildings were demolished. At the peak of the demolition, the Nieuwmarktrellen (Nieuwmarkt Riots) broke out; the rioters expressed their fury about the demolition caused by the restructuring of the city. As a result, the demolition was stopped and the highway into the city's centre was never fully built; only the metro was completed, and only a few streets ended up widened. The new city hall was built on the almost completely demolished Waterlooplein. Meanwhile, large private organizations, such as Stadsherstel Amsterdam, were founded to restore the entire city centre. Although the success of this struggle is visible today, efforts for further restoration are still ongoing. The entire city centre has regained its former splendour and, as a whole, is now a protected area. Many of its buildings have become monuments, and in July 2010 the Grachtengordel (the three concentric canals: Herengracht, Keizersgracht, and Prinsengracht) was added to the UNESCO World Heritage List. In the 21st century, the Amsterdam city centre has attracted large numbers of tourists: between 2012 and 2015, the annual number of visitors rose from 10 to 17 million. Real estate prices have surged, and local shops are making way for tourist-oriented ones, making the centre unaffordable for the city's inhabitants. These developments have evoked comparisons with Venice, a city thought to be overwhelmed by the tourist influx. Construction of a new metro line connecting the part of the city north of the IJ to its southern part started in 2003. The project was controversial because by 2008 its cost had exceeded its budget by a factor of three, because of fears of damage to buildings in the centre, and because construction had to be halted and restarted multiple times. The new metro line was completed in 2018. 
Since 2014, renewed focus has been given to urban regeneration and renewal, especially in areas directly bordering the city centre, such as Frederik Hendrikbuurt. This urban renewal and expansion of the traditional centre of the city, with the construction of the new eastern IJburg neighbourhood on artificial islands, is part of the Structural Vision Amsterdam 2040 initiative. Geography Amsterdam is located in the Western Netherlands, in the province of North Holland, the capital of which is not Amsterdam but Haarlem. The river Amstel ends in the city centre and connects to a large number of canals that eventually terminate in the IJ. Amsterdam lies slightly below sea level. The surrounding land is flat, as it is formed of large polders. A man-made forest, the Amsterdamse Bos, lies in the southwest. Amsterdam is connected to the North Sea through the North Sea Canal. Amsterdam is intensely urbanised, as is the Amsterdam metropolitan area surrounding the city. The city proper has 4,457 inhabitants per km2 and 2,275 houses per km2. Parks and nature reserves make up 12% of Amsterdam's land area. Water Amsterdam has an extensive network of canals, most of which are navigable by boat. The city's three main canals are the Prinsengracht, Herengracht and Keizersgracht. In the Middle Ages, Amsterdam was surrounded by a moat, called the Singel, which now forms the innermost ring in the city and gives the city centre a horseshoe shape. The city is also served by a seaport. It has been compared with Venice, due to its division into about 90 islands, which are linked by more than 1,200 bridges. Climate Amsterdam has an oceanic climate (Köppen Cfb), strongly influenced by its proximity to the North Sea to the west, with prevailing westerly winds. Amsterdam, as well as most of the North Holland province, lies in USDA Hardiness zone 8b. Frosts mainly occur during spells of easterly or northeasterly winds from the inner European continent. 
Even then, because Amsterdam is surrounded on three sides by large bodies of water and has a significant heat-island effect, nights rarely become very cold, while it can easily be colder in Hilversum, to the southeast. Summers are moderately warm, with a number of hot and humid days every month; very hot days are measured on average on only 2.5 days per year, placing Amsterdam in AHS Heat Zone 2. Days with measurable precipitation are common, on average 133 days per year, and a large part of this precipitation falls as light rain or brief showers. Cloudy and damp days are common during the cooler months of October through March. Demographics Historical population In 1300, Amsterdam's population was around 1,000 people. While many towns in Holland experienced population decline during the 15th and 16th centuries, Amsterdam's population grew, mainly due to the rise of the profitable Baltic maritime trade after the Burgundian victory in the Dutch–Hanseatic War. Still, the population of Amsterdam was only modest compared to the towns and cities of Flanders and Brabant, which comprised the most urbanised area of the Low Countries. This changed when, during the Dutch Revolt, many people from the Southern Netherlands fled to the North, especially after Antwerp fell to Spanish forces in 1585. Jewish people from Spain, Portugal and Eastern Europe similarly settled in Amsterdam, as did Germans and Scandinavians. Amsterdam's population more than doubled between 1585 and 1610. By 1600, its population was around 50,000. During the 1660s, Amsterdam's population reached 200,000. The city's growth then levelled off, and the population stabilised around 240,000 for most of the 18th century. In 1750, Amsterdam was the fourth-largest city in Western Europe, behind London (676,000), Paris (560,000) and Naples (324,000). 
This was all the more remarkable as Amsterdam was neither the capital city nor the seat of government of the Dutch Republic, which itself was a much smaller state than England, France or the Ottoman Empire. In contrast to those other metropolises, Amsterdam was also surrounded by large towns such as Leiden (about 67,000), Rotterdam (45,000), Haarlem (38,000) and Utrecht (30,000). The city's population declined in the early 19th century, dipping under 200,000 in 1820. By the second half of the 19th century, industrialisation spurred renewed growth. Amsterdam's population hit an all-time high of 872,000 in 1959, before declining in the following decades due to government-sponsored suburbanisation to so-called groeikernen (growth centres) such as Purmerend and Almere. Between 1970 and 1980, Amsterdam experienced its sharpest population decline, peaking at a net loss of 25,000 people in 1973. By 1985 the city had only 675,570 residents. This was soon followed by reurbanisation and gentrification, leading to renewed population growth in the 2010s. In that decade, much of Amsterdam's population growth was due to immigration to the city. Even so, Amsterdam's population fell short of the expected 873,000 in 2019. Immigration In the 16th and 17th centuries, non-Dutch immigrants to Amsterdam were mostly Huguenots, Flemings, Sephardi Jews and Westphalians. Huguenots came after the Edict of Fontainebleau in 1685, while the Flemish Protestants came during the Eighty Years' War. The Westphalians came to Amsterdam mostly for economic reasons; their influx continued through the 18th and 19th centuries. Before the Second World War, 10% of the city's population was Jewish. Just twenty percent of them survived the Shoah. The first mass immigration in the 20th century was by people from Indonesia, who came to Amsterdam after the independence of the Dutch East Indies in the 1940s and 1950s. In the 1960s, guest workers from Turkey, Morocco, Italy, and Spain immigrated to Amsterdam. 
After the independence of Suriname in 1975, a large wave of Surinamese settled in Amsterdam, mostly in the Bijlmer area. Other immigrants, including refugees, asylum seekers and illegal immigrants, came from Europe, America, Asia and Africa. In the 1970s and 1980s, many 'old' Amsterdammers moved to 'new' cities like Almere and Purmerend, prompted by the Dutch government's third spatial planning bill. This bill promoted suburbanisation and arranged for new developments in so-called groeikernen, literally 'cores of growth'. Young professionals and artists moved into the De Pijp and Jordaan neighbourhoods abandoned by these Amsterdammers. The non-Western immigrants settled mostly in the social housing projects in Amsterdam-West and the Bijlmer. Today, people of non-Western origin make up approximately one-fifth of the population of Amsterdam, and more than 30% of the city's children. Ethnic Dutch (as defined by the Dutch census) now make up a minority of the total population, although by far the largest one. Only one in three inhabitants under 15 is an autochthon, a person with two parents of Dutch origin. Segregation along ethnic lines is clearly visible: people of non-Western origin, considered a separate group by Statistics Netherlands, are concentrated in specific neighbourhoods, especially in Nieuw-West, Zeeburg, Bijlmer and certain areas of Amsterdam-Noord. In 2000, Christians formed the largest religious group in the city (28% of the population), followed by Islam (8%), most of whose followers were Sunni; in 2015 the corresponding figures were 28% and 7.1%. Religion In 1578, the largely Catholic city of Amsterdam joined the revolt against Spanish rule, late in comparison to other major northern Dutch cities. Catholic priests were driven out of the city. 
Following the Dutch takeover, all churches were converted to Protestant worship. Calvinism was declared the main religion; although Catholicism was not forbidden and priests were allowed to serve, the Catholic hierarchy was prohibited. This led to the establishment of schuilkerken, covert religious buildings hidden in pre-existing structures. Catholics, some Jews and dissenting Protestants worshiped in such buildings. A large influx of foreigners of many religions came to 17th-century Amsterdam, in particular Sephardic Jews from Spain and Portugal, Huguenots from France, Lutherans and Mennonites, as well as Protestants from across the Netherlands. This led to the establishment of many non-Dutch-speaking churches. In 1603, Jews received permission to practice their religion in the city, and in 1639 the first synagogue was consecrated. The Jews came to call the town 'Jerusalem of the West'. As they became established in the city, other Christian denominations used converted Catholic chapels to conduct their own services. The oldest English-language church congregation in the world outside the United Kingdom is found at the Begijnhof. Regular services there are still offered in English under the auspices of the Church of Scotland. Being Calvinists, the Huguenots soon integrated into the Dutch Reformed Church, though often retaining their own congregations. Some, commonly referred to by the moniker 'Walloon', are recognizable today as they offer occasional services in French. In the second half of the 17th century, Amsterdam experienced an influx of Ashkenazim, Jews from Central and Eastern Europe who often fled the pogroms in those areas. The first Ashkenazis who arrived in Amsterdam were refugees from the Khmelnytsky Uprising in Ukraine and the Thirty Years' War, which devastated much of Central Europe. They not only founded their own synagogues, but also had a strong influence on the 'Amsterdam dialect', adding a large Yiddish vocabulary to the local speech. 
Despite the absence of an official Jewish ghetto, most Jews preferred to live in the eastern part of the city, which used to be the center of medieval Amsterdam. The main street of this Jewish neighbourhood was Jodenbreestraat, and the neighbourhood comprised the Waterlooplein and the Nieuwmarkt. Buildings in this neighbourhood fell into disrepair after the Second World War, and a large section of the neighbourhood was demolished during the construction of the metro system. This led to riots, and as a result the original plans for large-scale reconstruction were abandoned by the government. The neighbourhood was rebuilt with smaller-scale residential buildings on the basis of its original layout. Catholic churches in Amsterdam have been constructed since the restoration of the episcopal hierarchy in 1853. One of the principal architects behind the city's Catholic churches, Cuypers, was also responsible for Amsterdam Centraal station and the Rijksmuseum. In 1924, the Catholic Church hosted the International Eucharistic Congress in Amsterdam; numerous Catholic prelates visited the city, where festivities were held in churches and stadiums. Catholic processions on the public streets, however, were still forbidden under law at the time. Only in the 20th century was Amsterdam's relationship with Catholicism normalised, but despite the city's far larger population, the episcopal see was placed in the provincial town of Haarlem. Historically, Amsterdam has been predominantly Christian: in 1900, Christians formed the largest religious group in the city (70% of the population), with the Dutch Reformed Church accounting for 45% and the Catholic Church for 25% of the city's population. In recent times, religious demographics in Amsterdam have been changed by immigration from former colonies: Hinduism has been introduced by the Hindu diaspora from Suriname, and several distinct branches of Islam have been brought from various parts of the world. 
Islam is now the largest non-Christian religion in Amsterdam. The large community of Ghanaian immigrants has established African churches, often in parking garages in the Bijlmer area. Diversity and immigration Amsterdam experienced an influx of religions and cultures after the Second World War. With 180 different nationalities, Amsterdam is home to one of the widest varieties of nationalities of any city in the world. The proportion of the population of immigrant origin in the city proper is about 50%, and 88% of the population are Dutch citizens. Amsterdam has been one of the municipalities in the Netherlands to provide immigrants with extensive and free Dutch-language courses, which have benefited many of them. Cityscape and architecture Amsterdam fans out south from the Amsterdam Centraal station and Damrak, the main street off the station. The oldest area of the town is known as De Wallen (English: "The Quays"). It lies to the east of Damrak and contains the city's famous red-light district. To the south of De Wallen is the old Jewish quarter of Waterlooplein. The medieval and colonial-age canals of Amsterdam, known as grachten, embrace the heart of the city, where homes have interesting gables. Beyond the Grachtengordel are the former working-class areas of Jordaan and de Pijp. The Museumplein with the city's major museums, the Vondelpark, a 19th-century park named after the Dutch writer Joost van den Vondel, and the Plantage neighbourhood, with the zoo, are also located outside the Grachtengordel. Several parts of the city and the surrounding urban area are polders. This can be recognised by the suffix -meer, meaning lake, as in Aalsmeer, Bijlmermeer, Haarlemmermeer and Watergraafsmeer. Canals The Amsterdam canal system is the result of conscious city planning. 
In the early 17th century, when immigration was at a peak, a comprehensive plan was developed that was based on four concentric half-circles of canals with their ends emerging at the IJ bay. Known as the Grachtengordel, three of the canals were mostly for residential development: the Herengracht (where "Heren" refers to Heren Regeerders van de stad Amsterdam, ruling lords of Amsterdam, whilst gracht means canal, so that the name can be roughly translated as "Canal of the Lords"), Keizersgracht (Emperor's Canal) and Prinsengracht (Prince's Canal). The fourth and outermost canal is the Singelgracht, which is often not mentioned on maps because it is a collective name for all canals in the outer ring. The Singelgracht should not be confused with the oldest and innermost canal, the Singel. The canals served for defense, water management and transport. The defenses took the form of a moat and earthen dikes, with gates at transit points, but otherwise no masonry superstructures. The original plans have been lost, so historians, such as Ed Taverne, need to speculate on the original intentions: it is thought that the considerations of the layout were purely practical and defensive rather than ornamental. Construction started in 1613 and proceeded from west to east, across the breadth of the layout, like a gigantic windshield wiper as the historian Geert Mak calls it – and not from the centre outwards, as a popular myth has it. The canal construction in the southern sector was completed by 1656. Subsequently, the construction of residential buildings proceeded slowly. The eastern part of the concentric canal plan, covering the area between the Amstel river and the IJ bay, has never been implemented. In the following centuries, the land was used for parks, senior citizens' homes, theatres, other public facilities, and waterways without much planning. 
Over the years, several canals have been filled in, becoming streets or squares, such as the Nieuwezijds Voorburgwal and the Spui. Expansion After the development of Amsterdam's canals in the 17th century, the city did not grow beyond its borders for two centuries. During the 19th century, Samuel Sarphati devised a plan based on the grandeur of Paris and London at that time. The plan envisaged the construction of new houses, public buildings and streets just outside the Grachtengordel. The main aim of the plan, however, was to improve public health. Although the plan did not expand the city, it did produce some of the largest public buildings of the time, like the Paleis voor Volksvlijt. Following Sarphati, the civil engineers Jacobus van Niftrik and Jan Kalff designed an entire ring of 19th-century neighbourhoods surrounding the city's centre, with the city preserving the ownership of all land outside the 17th-century limit, thus firmly controlling development. Most of these neighbourhoods became home to the working class. In response to overcrowding, two plans were designed at the beginning of the 20th century which were very different from anything Amsterdam had ever seen before: Plan Zuid, designed by the architect Berlage, and Plan West. These plans involved the development of new neighbourhoods consisting of housing blocks for all social classes. After the Second World War, large new neighbourhoods were built in the western, southeastern, and northern parts of the city. These new neighbourhoods were built to relieve the city's shortage of living space and give people affordable houses with modern conveniences. The neighbourhoods consisted mainly of large housing blocks located among green spaces, connected to wide roads, making the neighbourhoods easily accessible by motor car. The western suburbs built in that period are collectively called the Westelijke Tuinsteden. The area to the southeast of the city built during the same period is known as the Bijlmer. 
Architecture Amsterdam has a rich architectural history. The oldest building in Amsterdam is the Oude Kerk (English: Old Church), at the heart of the Wallen, consecrated in 1306. The oldest wooden building is Het Houten Huys at the Begijnhof. It was constructed around 1425 and is one of only two existing wooden buildings. It is also one of the few examples of Gothic architecture in Amsterdam. The oldest stone building in the Netherlands, The Moriaan, stands in 's-Hertogenbosch. In the 16th century, wooden buildings were razed and replaced with brick ones. During this period, many buildings were constructed in the architectural style of the Renaissance. Buildings of this period are easily recognised by their stepped gable façades, typical of the Dutch Renaissance style. Amsterdam quickly developed its own Renaissance architecture. These buildings were built according to the principles of the architect Hendrick de Keyser. One of the most striking buildings designed by Hendrick de Keyser is the Westerkerk. In the 17th century, baroque architecture became very popular, as it was elsewhere in Europe. This roughly coincided with Amsterdam's Golden Age. The leading architects of this style in Amsterdam were Jacob van Campen, Philips Vingboons and Daniel Stalpaert. Vingboons designed splendid merchants' houses throughout the city. A famous baroque building in Amsterdam is the Royal Palace on Dam Square. Throughout the 18th century, Amsterdam was heavily influenced by French culture. This is reflected in the architecture of that period. Around 1815, architects broke with the baroque style and started building in various neo-styles. Most Gothic-style buildings date from that era and are therefore said to be built in a neo-Gothic style. At the end of the 19th century, the Jugendstil or Art Nouveau style became popular and many new buildings were constructed in this architectural style. 
Since Amsterdam expanded rapidly during this period, new buildings adjacent to the city centre were also built in this style. The houses in the vicinity of the Museum Square in Amsterdam Oud-Zuid are an example of Jugendstil. The last style that was popular in Amsterdam before the modern era was Art Deco. Amsterdam had its own version of the style, called the Amsterdamse School. Whole districts were built in this style, such as the Rivierenbuurt. A notable feature of the façades of buildings designed in the Amsterdamse School is that they are highly decorated and ornate, with oddly shaped windows and doors. The old city centre is the focal point of all the architectural styles before the end of the 19th century. Jugendstil and Art Deco are mostly found outside the city's centre, in the neighbourhoods built in the early 20th century, although there are also some striking examples of these styles in the city centre. Most historic buildings in the city centre and nearby are houses, such as the famous merchants' houses lining the canals. Parks and recreational areas Amsterdam has many parks, open spaces, and squares throughout the city. The Vondelpark, the largest park in the city, is located in the Oud-Zuid neighbourhood and is named after the 17th-century Amsterdam author Joost van den Vondel. The park has around 10 million visitors yearly. In the park are an open-air theatre, a playground and several hospitality (horeca) facilities. In the Zuid borough is the Beatrixpark, named after Queen Beatrix. Between Amsterdam and Amstelveen is the Amsterdamse Bos ("Amsterdam Forest"), the largest recreational area in Amsterdam. Almost 4.5 million people visit the park annually; it covers 1,000 hectares and is approximately three times the size of Central Park. The Amstelpark in the Zuid borough houses the Rieker windmill, which dates to 1636. 
Other parks include the Sarphatipark in the De Pijp neighbourhood, the Oosterpark in the Oost borough and the Westerpark in the Westerpark neighbourhood. The city has three beaches: Nemo Beach, Citybeach "Het stenen hoofd" (Silodam) and Blijburg, all located in the Centrum borough. The city has many open squares (plein in Dutch). Dam Square, the site of the original dam from which the city takes its name, is the main city square and holds the Royal Palace and the National Monument. Museumplein hosts various museums, including the Rijksmuseum, Van Gogh Museum, and Stedelijk Museum. Other squares include Rembrandtplein, Muntplein, Nieuwmarkt, Leidseplein, Spui and Waterlooplein. Also near Amsterdam is the Nekkeveld estate conservation project. Economy Amsterdam is the financial and business capital of the Netherlands. According to the 2007 European Cities Monitor (ECM), an annual location survey of Europe's leading companies carried out by global real estate consultant Cushman & Wakefield, Amsterdam is one of the top European cities in which to locate an international business, ranking fifth in the survey, behind only London, Paris, Frankfurt and Barcelona. A substantial number of large corporations and banks have their headquarters in the Amsterdam area, including AkzoNobel, Heineken International, ING Group, ABN AMRO, TomTom, Delta Lloyd Group, Booking.com and Philips. Although many small offices remain along the historic canals, centrally based companies have increasingly relocated outside Amsterdam's city centre. Consequently, the Zuidas (English: South Axis) has become the new financial and legal hub of Amsterdam, home to the country's five largest law firms, several subsidiaries of large consulting firms such as Boston Consulting Group and Accenture, and the World Trade Centre (Amsterdam). 
In addition to the Zuidas, there are three smaller financial districts in Amsterdam. The first is around Amsterdam Sloterdijk railway station, where the offices of several newspapers, such as De Telegraaf, can be found, as well as those of Deloitte, the Gemeentelijk Vervoerbedrijf (municipal public transport company) and the Dutch tax offices (Belastingdienst). The second is around the Johan Cruyff Arena in Amsterdam Zuidoost, with the headquarters of ING Group. The third is around the Amstel railway station in the Amsterdam-Oost district, to the east of the historical city; Amsterdam's tallest building, the Rembrandt Tower, is located here, as are the headquarters of Philips, the Dutch multinational conglomerate. Amsterdam has been a leader in reducing the use of raw materials and has created a plan to become a circular city by 2050. The adjoining municipality of Amstelveen is the location of KPMG International's global headquarters. Other non-Dutch companies have chosen to settle in communities surrounding Amsterdam since they allow freehold property ownership, whereas Amsterdam retains ground rent. The Amsterdam Stock Exchange (AEX), now part of Euronext, is the world's oldest stock exchange and, due to Brexit, has overtaken the LSE as the largest bourse in Europe. It is near Dam Square in the city centre. Port of Amsterdam The Port of Amsterdam is the fourth-largest port in Europe, the 38th-largest port in the world and the second-largest port in the Netherlands by metric tons of cargo. In 2014, the Port of Amsterdam had a throughput of 97.4 million tons, mostly bulk cargo. Amsterdam has the biggest cruise port in the Netherlands, with more than 150 cruise ships every year. In 2019, the new lock in IJmuiden opened; since then, the port has been able to grow to 125 million tonnes in capacity. 
Tourism Amsterdam is one of the most popular tourist destinations in Europe, receiving more than 5.34 million international visitors annually, excluding the 16 million day-trippers who visit the city every year. The number of visitors has been growing steadily over the past decade, which can be attributed to an increasing number of European visitors. Two-thirds of the hotels are located in the city's centre. Hotels with 4 or 5 stars contribute 42% of the total beds available and 41% of the overnight stays in Amsterdam. The room occupation rate was 85% in 2017, up from 78% in 2006. The majority of tourists (74%) originate from Europe. The largest group of non-European visitors comes from the United States, accounting for 14% of the total. Certain years have a theme in Amsterdam to attract extra tourists. For example, the year 2006 was designated "Rembrandt 400", to celebrate the 400th birthday of Rembrandt van Rijn. Some hotels offer special arrangements or activities during these years. The average number of guests per year staying at the four campsites around the city ranges from 12,000 to 65,000. De Wallen (red-light district) De Wallen, also known as Walletjes or Rosse Buurt, is a designated area for legalised prostitution and is Amsterdam's largest and best-known red-light district. The neighbourhood has become a famous attraction for tourists. It consists of a network of canals, streets, and alleys containing several hundred small, one-room apartments rented by sex workers who offer their services from behind a window or glass door, typically illuminated with red lights. In recent years, the city government has been closing and repurposing the famous red-light district windows in an effort to clean up the area and reduce the amount of party and sex tourism. Retail Shops in Amsterdam range from large high-end department stores, such as De Bijenkorf, founded in 1870, to small speciality shops. Amsterdam's high-end shops are found in the streets P.C. 
Hooftstraat and Cornelis Schuytstraat, which are located in the vicinity of the Vondelpark. One of Amsterdam's busiest high streets is the narrow, medieval Kalverstraat in the heart of the city. Other shopping areas include the Negen Straatjes and the Haarlemmerdijk and Haarlemmerstraat. The Negen Straatjes are nine narrow streets within the Grachtengordel, the concentric canal system of Amsterdam. The Negen Straatjes differ from other shopping districts in their large diversity of privately owned shops. The Haarlemmerstraat and Haarlemmerdijk were voted best shopping street in the Netherlands in 2011. Like the Negen Straatjes, these streets feature many privately owned shops; however, whereas the Negen Straatjes are dominated by fashion stores, the Haarlemmerstraat and Haarlemmerdijk offer a wider variety, with specialities including candy and other food-related stores, lingerie, sneakers, wedding clothing, interior shops, books, Italian delis, racing and mountain bikes, and skatewear. The city also features a large number of open-air markets, such as the Albert Cuyp Market, Westerstraatmarkt, Ten Katemarkt, and Dappermarkt. Some of these markets are held daily, like the Albert Cuypmarkt and the Dappermarkt; others, like the Westerstraatmarkt, are held every week. Fashion Several fashion brands and designers are based in Amsterdam. Fashion designers include Iris van Herpen, Mart Visser, Viktor & Rolf, Marlies Dekkers and Frans Molenaar. Fashion models like Yfke Sturm, Doutzen Kroes and Kim Noorda started their careers in Amsterdam. Amsterdam has its garment centre in the World Fashion Center. Fashion photographers Inez van Lamsweerde and Vinoodh Matadin were born in Amsterdam. Culture During the later part of the 16th century, Amsterdam's Rederijkerskamer (Chamber of rhetoric) organised contests between different Chambers in the reading of poetry and drama. 
In 1637, the Schouwburg, the first theatre in Amsterdam, was built, opening on 3 January 1638. The first ballet performances in the Netherlands were given in the Schouwburg in 1642, with the Ballet of the Five Senses. In the 18th century, French theatre became popular. While Amsterdam was under the influence of German music in the 19th century, there were few national opera productions; the Hollandse Opera of Amsterdam was built in 1888 for the specific purpose of promoting Dutch opera. In the 19th century, popular culture was centred on the Nes area in Amsterdam (mainly vaudeville and music-hall). An improved metronome was invented in 1812 by Dietrich Nikolaus Winkel. The Rijksmuseum (1885) and Stedelijk Museum (1895) were built and opened. In 1888, the Concertgebouworkest orchestra was established. With the 20th century came cinema, radio and television. Though most studios are located in Hilversum and Aalsmeer, Amsterdam's influence on programming is very strong. Many people who work in the television industry live in Amsterdam, and the headquarters of the Dutch SBS Broadcasting Group is located in the city. Museums The most important museums of Amsterdam are located on the Museumplein (Museum Square), at the southwestern side of the Rijksmuseum. It was created in the last quarter of the 19th century on the grounds of the former World's Fair. The northeastern part of the square is bordered by the large Rijksmuseum. In front of the Rijksmuseum on the square itself is a long, rectangular pond, which is transformed into an ice rink in winter. The northwestern part of the square is bordered by the Van Gogh Museum, the House of Bols Cocktail & Genever Experience and Coster Diamonds. The southwestern border of the Museum Square is the Van Baerlestraat, a major thoroughfare in this part of Amsterdam. The Concertgebouw is located across this street from the square. To the southeast of the square are several large houses, one of which contains the American consulate. 
A parking garage can be found underneath the square, as well as a supermarket. The Museumplein is covered almost entirely with a lawn, except for the northeastern part of the square which is covered with gravel. The current appearance of the square was realised in 1999, when the square was remodelled. The square itself is the most prominent site in Amsterdam for festivals and outdoor concerts, especially in the summer. Plans were made in 2008 to remodel the square again because many inhabitants of Amsterdam are not happy with its current appearance. The Rijksmuseum possesses the largest and most important collection of classical Dutch art. It opened in 1885. Its collection consists of nearly one million objects. The artist most associated with Amsterdam is Rembrandt, whose work, and the work of his pupils, is displayed in the Rijksmuseum. Rembrandt's masterpiece The Night Watch is one of the top pieces of art of the museum. It also houses paintings from artists like Bartholomeus van der Helst, Johannes Vermeer, Frans Hals, Ferdinand Bol, Albert Cuyp, Jacob van Ruisdael and Paulus Potter. Aside from paintings, the collection consists of a large variety of decorative art. This ranges from Delftware to giant doll-houses from the 17th century. The architect of the gothic revival building was P.J.H. Cuypers. The museum underwent a 10-year, 375 million euro renovation starting in 2003. The full collection was reopened to the public on 13 April 2013 and the Rijksmuseum has remained the most visited museum in Amsterdam with 2.2 million visitors in 2016 and 2.16 million in 2017. Van Gogh lived in Amsterdam for a short while and there is a museum dedicated to his work. The museum is housed in one of the few modern buildings in this area of Amsterdam. The building was designed by Gerrit Rietveld. This building is where the permanent collection is displayed. A new building was added to the museum in 1999. 
This building, known as the performance wing, was designed by Japanese architect Kisho Kurokawa. Its purpose is to house the museum's temporary exhibitions. Some of Van Gogh's most famous paintings, like The Potato Eaters and Sunflowers, are in the collection. The Van Gogh Museum is the second most visited museum in Amsterdam, not far behind the Rijksmuseum, with approximately 2.1 million visits in 2016. Next to the Van Gogh Museum stands the Stedelijk Museum, Amsterdam's most important museum of modern art. The museum is as old as the square it borders and was opened in 1895. The permanent collection consists of works of art from artists like Piet Mondrian, Karel Appel, and Kazimir Malevich. After renovations lasting several years, the museum reopened in September 2012 with a new composite extension that has been called 'The Bathtub' due to its resemblance to one. Amsterdam contains many other museums throughout the city. They range from small museums, such as the Verzetsmuseum (Resistance Museum), the Anne Frank House, and the Rembrandt House Museum, to the very large, like the Tropenmuseum (Museum of the Tropics), the Amsterdam Museum (formerly known as the Amsterdam Historical Museum), the Hermitage Amsterdam (a dependency of the Hermitage Museum in Saint Petersburg) and the Joods Historisch Museum (Jewish Historical Museum). The modern-styled Nemo is dedicated to child-friendly science exhibitions. Music Amsterdam's musical culture includes a large collection of songs that treat the city nostalgically and lovingly. The 1949 song "Aan de Amsterdamse grachten" ("On the canals of Amsterdam") was performed and recorded by many artists, including John Kraaijkamp Sr.; the best-known version is probably that by Wim Sonneveld (1962). 
In the 1950s, Johnny Jordaan rose to fame with "Geef mij maar Amsterdam" ("I prefer Amsterdam"), which praises the city above all others (explicitly Paris); Jordaan sang especially about his own neighbourhood, the Jordaan ("Bij ons in de Jordaan"). Colleagues and contemporaries of Johnny include Tante Leen and Manke Nelis. Another notable Amsterdam song is "Amsterdam" by Jacques Brel (1964). In a 2011 poll by the Amsterdam newspaper Het Parool, Trio Bier's "Oude Wolf" was voted "Amsterdams lijflied". Notable Amsterdam bands from the modern era include the Osdorp Posse and The Ex. AFAS Live (formerly known as the Heineken Music Hall) is a concert hall located near the Johan Cruyff Arena (known as the Amsterdam Arena until 2018). Its main purpose is to serve as a venue for pop concerts for big audiences, and many famous international artists have performed there. Two other notable venues, Paradiso and the Melkweg, are located near the Leidseplein. Both focus on broad programming, ranging from indie rock to hip hop, R&B, and other popular genres. Other, more subcultural music venues are OCCII, OT301, De Nieuwe Anita, Winston Kingdom, and Zaal 100. Jazz has a strong following in Amsterdam, with the Bimhuis being the premier venue. In 2012, the Ziggo Dome, a state-of-the-art indoor music arena, was opened, also near the Amsterdam Arena. AFAS Live also hosts many electronic dance music festivals, alongside many other venues. Armin van Buuren and Tiësto, two of the world's leading trance DJs, hail from the Netherlands and frequently perform in Amsterdam. Each year in October, the city hosts the Amsterdam Dance Event (ADE), one of the leading electronic music conferences and one of the biggest club festivals for electronic music in the world, attracting over 350,000 visitors each year. Another popular dance festival is 5daysoff, which takes place in the venues Paradiso and Melkweg. 
In the summertime, there are several big outdoor dance parties in or near Amsterdam, such as Awakenings, Dance Valley, Mystery Land, Loveland, A Day at the Park, Welcome to the Future, and Valtifest. Amsterdam has a world-class symphony orchestra, the Royal Concertgebouw Orchestra. Its home is the Concertgebouw, across the Van Baerlestraat from the Museum Square, considered by critics to be a concert hall with some of the best acoustics in the world. The building contains three halls: the Grote Zaal, the Kleine Zaal, and the Spiegelzaal. Some nine hundred concerts and other events per year take place in the Concertgebouw, for a public of over 700,000, making it one of the most-visited concert halls in the world. The opera house of Amsterdam is located adjacent to the city hall; the two buildings combined are therefore often called the Stopera (a word originally coined by protesters against its very construction: Stop the Opera[-house]). This huge modern complex, opened in 1986, lies in the former Jewish neighbourhood at Waterlooplein, next to the river Amstel. The Stopera is the home base of the Dutch National Opera, the Dutch National Ballet and the Holland Symfonia. The Muziekgebouw aan 't IJ is a concert hall located on the IJ near the central station; its programme consists mostly of modern classical music. Adjacent to it is the Bimhuis, a concert hall for improvised and jazz music. Performing arts Amsterdam has three main theatre buildings. The Stadsschouwburg at the Leidseplein is the home base of Toneelgroep Amsterdam. The current building dates from 1894. Most plays are performed in the Grote Zaal (Great Hall). The normal programme of events encompasses all sorts of theatrical forms. The Stadsschouwburg is currently being renovated and expanded; the third theatre space, to be operated jointly with the next-door Melkweg, will open in late 2009 or early 2010. 
The Dutch National Opera and Ballet (formerly known as Het Muziektheater), dating from 1986, is the principal opera house and home to the Dutch National Opera and the Dutch National Ballet. The Royal Theatre Carré was built as a permanent circus theatre in 1887 and is currently mainly used for musicals, cabaret performances, and pop concerts. The recently reopened DeLaMar Theater houses more commercial plays and musicals. A new theatre also joined the Amsterdam scene in 2014: Theater Amsterdam, located in the western part of Amsterdam, on the Danzigerkade. It is housed in a modern building with a panoramic view over the harbour, and is the first-ever purpose-built venue to showcase a single play, ANNE, based on the life of Anne Frank. On the east side of town, there is a small theatre in a converted bathhouse, the Badhuistheater, which often has English-language programming. The Netherlands has a tradition of cabaret or kleinkunst, which combines music, storytelling, commentary, theatre and comedy. Cabaret dates back to the 1930s, and artists like Wim Kan, Wim Sonneveld and Toon Hermans were pioneers of this form of art in the Netherlands. Amsterdam is home to the Kleinkunstacademie (English: Cabaret Academy) and the Nederlied Kleinkunstkoor (English: Cabaret Choir). Contemporary popular artists are Youp van 't Hek, Freek de Jonge, Herman Finkers, Hans Teeuwen, Theo Maassen, Herman van Veen, Najib Amhali, Raoul Heertje, Jörgen Raymann, Brigitte Kaandorp and Comedytrain. The English-language comedy scene was established with the founding of Boom Chicago in 1993, which has its own theatre at the Leidseplein. Nightlife Amsterdam is famous for its vibrant and diverse nightlife. Amsterdam has many cafés (bars), ranging from large and modern to small and cosy. The typical bruine kroeg (brown café) breathes a more old-fashioned atmosphere, with dimmed lights, candles, and a somewhat older clientele. 
These brown cafés mostly offer a wide range of local and international artisanal beers. Most cafés have terraces in summertime. A common sight on the Leidseplein during summer is a square full of terraces packed with people drinking beer or wine. Many restaurants can be found in Amsterdam as well. Since Amsterdam is a multicultural city, many different ethnic restaurants can be found, ranging from rather luxurious and expensive to ordinary and affordable. Amsterdam also possesses many discothèques. The two main nightlife areas for tourists are the Leidseplein and the Rembrandtplein. The Paradiso, Melkweg and Sugar Factory are cultural centres which turn into discothèques on some nights. Examples of discothèques near the Rembrandtplein are the Escape, Air, John Doe and Club Abe. Also noteworthy are Panama, Hotel Arena (East), TrouwAmsterdam and Studio 80. In recent years, '24-hour' clubs have opened their doors, most notably Radion, De School, Shelter and Marktkantine. The Bimhuis, located near the Central Station, is considered one of the best jazz clubs in the world thanks to its rich programming, which hosts the best in the field. The Reguliersdwarsstraat is the main street for the LGBT community and nightlife. Festivals In 2008, there were 140 festivals and events in Amsterdam. Famous festivals and events in Amsterdam include: Koningsdag (King's Day, named Koninginnedag, or Queen's Day, until the crowning of King Willem-Alexander in 2013); the Holland Festival for the performing arts; the yearly Prinsengrachtconcert (a classical concert on the Prinsengracht) in August; the Stille Omgang (a silent Roman Catholic evening procession held every March); Amsterdam Gay Pride; the Cannabis Cup; and the Uitmarkt. On Koningsdag, held each year on 27 April, hundreds of thousands of people travel to Amsterdam to celebrate with the city's residents. 
The entire city becomes overcrowded with people buying products at the free market or visiting one of the many music concerts. The yearly Holland Festival attracts international artists and visitors from all over Europe. Amsterdam Gay Pride is a yearly local LGBT parade of boats in Amsterdam's canals, held on the first Saturday in August. The annual Uitmarkt is a three-day cultural event at the start of the cultural season in late August. It offers previews of many different artists, such as musicians and poets, who perform on stages across the city. Sports Amsterdam is home to the Eredivisie football club AFC Ajax, whose stadium, the Johan Cruyff Arena, is located in the south-east of the city, next to the Amsterdam Bijlmer ArenA railway station. Before moving to their current location in 1996, Ajax played their regular matches in the now-demolished De Meer Stadion in the eastern part of the city or in the Olympic Stadium. In 1928, Amsterdam hosted the Summer Olympics. The Olympic Stadium built for the occasion has been completely restored and is now used for cultural and sporting events, such as the Amsterdam Marathon. In 1920, Amsterdam assisted in hosting some of the sailing events for the Summer Olympics held in neighbouring Antwerp, Belgium, by hosting events at the Buiten IJ. The city holds the Dam to Dam Run, a race from Amsterdam to Zaandam, as well as the Amsterdam Marathon. The ice hockey team Amstel Tijgers play in the Jaap Eden ice rink and compete in the Dutch ice hockey premier league. Speed skating championships have been held on the 400-metre lane of this ice rink. Amsterdam has two American football franchises: the Amsterdam Crusaders and the Amsterdam Panthers. The Amsterdam Pirates baseball team competes in the Dutch Major League. There are three field hockey teams: Amsterdam, Pinoké and Hurley, who play their matches around the Wagener Stadium in the nearby city of Amstelveen. 
The basketball team MyGuide Amsterdam competes in the Dutch premier division and plays its games in the Sporthallen Zuid. Amsterdam has one rugby club, which also hosts sports training classes such as the RTC (Rugby Talenten Centrum, or Rugby Talent Centre) and is home to the National Rugby stadium. Since 1999, the city of Amsterdam has honoured its best sportsmen and women at the Amsterdam Sports Awards. Boxer Raymond Joval and field hockey midfielder Carole Thate were the first to receive the awards, in 1999. Amsterdam hosted the World Gymnaestrada in 1991 and will do so again in 2023. Politics The city of Amsterdam is a municipality under the Dutch Municipalities Act. It is governed by a directly elected municipal council, a municipal executive board and a mayor. Since 1981, the municipality of Amsterdam has gradually been divided into semi-autonomous boroughs, called stadsdelen or 'districts'. Over time, a total of 15 boroughs were created. In May 2010, under a major reform, the number of Amsterdam boroughs was reduced to eight: Amsterdam-Centrum, covering the city centre including the canal belt; Amsterdam-Noord, consisting of the neighbourhoods north of the IJ lake; Amsterdam-Oost in the east; Amsterdam-Zuid in the south; Amsterdam-West in the west; Amsterdam Nieuw-West in the far west; Amsterdam Zuidoost in the southeast; and Westpoort, covering the Port of Amsterdam area. City government As with all Dutch municipalities, Amsterdam is governed by a directly elected municipal council, a municipal executive board and a government-appointed mayor (burgemeester). The mayor is a member of the municipal executive board, but also has individual responsibilities in maintaining public order. 
On 27 June 2018, Femke Halsema (a former member of the House of Representatives for GroenLinks from 1998 to 2011) was appointed the first female Mayor of Amsterdam by the King's Commissioner of North Holland, after being nominated by the Amsterdam municipal council; she began serving her six-year term on 12 July 2018. She replaced Eberhard van der Laan (Labour Party), who was Mayor of Amsterdam from 2010 until his death in October 2017. After the 2014 municipal council elections, a governing majority of D66, VVD and SP was formed, the first coalition without the Labour Party since the Second World War. Next to the Mayor, the municipal executive board consists of eight wethouders ('alderpersons') appointed by the municipal council: four D66 alderpersons, two VVD alderpersons and two SP alderpersons. On 18 September 2017, Eberhard van der Laan announced in an open letter to Amsterdam citizens that Kajsa Ollongren would take up his office as acting Mayor of Amsterdam with immediate effect due to his ill health. Ollongren was succeeded as acting Mayor by Eric van der Burg on 26 October 2017 and by Jozias van Aartsen on 4 December 2017. Unlike most other Dutch municipalities, Amsterdam is subdivided into eight boroughs, called stadsdelen or 'districts', a system that was implemented gradually in the 1980s to improve local governance. The boroughs are responsible for many activities that had previously been run by the central city. By 2010, the number of Amsterdam boroughs had reached fifteen. Fourteen of those had their own district council (deelraad), elected by popular vote; the fifteenth, Westpoort, covers the harbour of Amsterdam and had very few residents, and was therefore governed by the central municipal council. Under the borough system, municipal decisions are made at borough level, except for those affairs pertaining to the whole city, such as major infrastructure projects, which are the jurisdiction of the central municipal authorities. 
In 2010, the borough system was restructured, with many smaller boroughs merging into larger ones. In 2014, under a reform of the Dutch Municipalities Act, the Amsterdam boroughs lost much of their autonomous status, as their district councils were abolished. The municipal council of Amsterdam voted to maintain the borough system by replacing the district councils with smaller, but still directly elected, district committees (bestuurscommissies). Under a municipal ordinance, the new district committees were granted responsibilities through delegation of regulatory and executive powers by the central municipal council. Metropolitan area "Amsterdam" is usually understood to refer to the municipality of Amsterdam. Colloquially, some areas within the municipality, such as the town of Durgerdam, may not be considered part of Amsterdam. Statistics Netherlands uses three other definitions of Amsterdam: the metropolitan agglomeration Amsterdam (Grootstedelijke Agglomeratie Amsterdam, not to be confused with Grootstedelijk Gebied Amsterdam, a synonym of Groot Amsterdam), Greater Amsterdam (Groot Amsterdam, a COROP region) and the urban region Amsterdam (Stadsgewest Amsterdam). The Amsterdam Department for Research and Statistics uses a fourth conurbation, namely the Stadsregio Amsterdam ('City Region of Amsterdam'). The city region is similar to Greater Amsterdam but additionally includes the municipalities of Zaanstad and Wormerland; it excludes Graft-De Rijp. The smallest of these areas is the municipality of Amsterdam, with a population of 802,938 in 2013. The conurbation had a population of 1,096,042 in 2013; besides the municipality of Amsterdam itself, it includes only the municipalities of Zaanstad, Wormerland, Oostzaan, Diemen and Amstelveen. Greater Amsterdam includes 15 municipalities and had a population of 1,293,208 in 2013. 
Though much larger in area, the population of this area is only slightly larger, because the definition excludes the relatively populous municipality of Zaanstad. The largest area by population, the Amsterdam Metropolitan Area (Dutch: Metropoolregio Amsterdam), has a population of 2.33 million. It includes, for instance, Zaanstad, Wormerland, Muiden, Abcoude, Haarlem, Almere and Lelystad, but excludes Graft-De Rijp. Amsterdam is part of the conglomerate metropolitan area Randstad, with a total population of 6,659,300 inhabitants. Of these various metropolitan area configurations, only the Stadsregio Amsterdam (City Region of Amsterdam) has a formal governmental status. Its responsibilities include regional spatial planning and the metropolitan public transport concessions. National capital Under the Dutch Constitution, Amsterdam is the capital of the Netherlands. Since the 1983 constitutional revision, the constitution has mentioned "Amsterdam" and "capital" in chapter 2, article 32: the king's confirmation by oath and his coronation take place in "the capital Amsterdam" ("de hoofdstad Amsterdam"). Previous versions of the constitution only mentioned "the city of Amsterdam" ("de stad Amsterdam"). For a royal investiture, therefore, the States General of the Netherlands (the Dutch Parliament) meets for a ceremonial joint session in Amsterdam. The ceremony traditionally takes place at the Nieuwe Kerk on Dam Square, immediately after the former monarch has signed the act of abdication at the nearby Royal Palace of Amsterdam. Normally, however, the Parliament sits in The Hague, the city which has historically been the seat of the Dutch government, the Dutch monarchy, and the Dutch supreme court. Foreign embassies are also located in The Hague. Symbols The coat of arms of Amsterdam is composed of several historical elements. First and centre are three St Andrew's crosses, aligned in a vertical band on the city's shield (although Amsterdam's patron saint was Saint Nicholas). 
These St Andrew's crosses can also be found on the city shields of neighbours Amstelveen and Ouder-Amstel. This part of the coat of arms is the basis of the flag of Amsterdam, flown by the city government and also used as the civil ensign for ships registered in Amsterdam. Second is the Imperial Crown of Austria. In 1489, out of gratitude for services and loans, Maximilian I awarded Amsterdam the right to adorn its coat of arms with the king's crown. Then, in 1508, this was replaced with Maximilian's imperial crown when he was crowned Holy Roman Emperor. In the early years of the 17th century, Maximilian's crown in Amsterdam's coat of arms was again replaced, this time with the crown of Emperor Rudolph II, a crown that became the Imperial Crown of Austria. The lions date from the late 16th century, when city and province became part of the Republic of the Seven United Netherlands. Last came the city's official motto: Heldhaftig, Vastberaden, Barmhartig ("Heroic, Determined, Merciful"), bestowed on the city in 1947 by Queen Wilhelmina in recognition of the city's bravery during the Second World War. Transport Metro, tram and bus Currently, there are sixteen tram routes and five metro routes. All are operated by municipal public transport operator Gemeentelijk Vervoerbedrijf (GVB), which also runs the city bus network. Four fare-free GVB ferries carry pedestrians and cyclists across the IJ lake to the borough of Amsterdam-Noord, and two fare-charging ferries run east and west along the harbour. There are also privately operated water taxis, a water bus, a boat sharing operation, electric rental boats and canal cruises that transport people along Amsterdam's waterways. Regional buses, and some suburban buses, are operated by Connexxion and EBS. International coach services are provided by Eurolines from Amsterdam Amstel railway station, IDBUS from Amsterdam Sloterdijk railway station, and Megabus from the Zuiderzeeweg in the east of the city.
To facilitate easier transport to the centre of Amsterdam, the city has various P+R locations where people can park their car at an affordable price and transfer to one of the numerous public transport lines. Car Amsterdam was intended in 1932 to be the hub, a kind of Kilometre Zero, of the highway system of the Netherlands, with freeways numbered one to eight planned to originate from the city. The outbreak of the Second World War and shifting priorities led to the current situation, where only roads A1, A2, and A4 originate from Amsterdam according to the original plan. The A3 to Rotterdam was cancelled in 1970 to conserve the Groene Hart. Road A8, leading north to Zaandam, and the A10 Ringroad were opened between 1968 and 1974. Besides the A1, A2, A4 and A8, several freeways, such as the A7 and A6, carry traffic mainly bound for Amsterdam. The A10 ringroad surrounding the city connects Amsterdam with the Dutch national network of freeways. Interchanges on the A10 allow cars to enter the city by transferring to one of the 18 city roads, numbered S101 through to S118. These city roads are regional roads without grade separation, and sometimes without a central reservation. Most are accessible by cyclists. The S100 Centrumring is a smaller ringroad circumnavigating the city's centre. In the city centre, driving a car is discouraged. Parking fees are expensive, and many streets are closed to cars or are one-way. The local government sponsors carsharing and carpooling initiatives such as Autodelen and Meerijden.nu. The local government has also started removing parking spaces in the city, with the goal of removing 10,000 spaces (roughly 1,500 per year) by 2025. National rail Amsterdam is served by ten stations of the Nederlandse Spoorwegen (Dutch Railways). Five are intercity stops: Sloterdijk, Zuid, Amstel, Bijlmer ArenA and Amsterdam Centraal. The stations for local services are: Lelylaan, RAI, Holendrecht, Muiderpoort and Science Park.
Amsterdam Centraal is also an international railway station. From the station there are regular services to destinations such as Austria, Belarus, Belgium, Czechia, Denmark, France, Germany, Hungary, Poland, Russia, Switzerland and the United Kingdom. Among these trains are international trains of the Nederlandse Spoorwegen (Amsterdam–Berlin), the Eurostar (Amsterdam–Brussels–London), Thalys (Amsterdam–Brussels–Paris/Lille), and Intercity-Express (Amsterdam–Cologne–Frankfurt). Airport Amsterdam Airport Schiphol is less than 20 minutes by train from Amsterdam Centraal station and is served by domestic and international intercity trains, such as Thalys, Eurostar and Intercity Brussel. Schiphol is the largest airport in the Netherlands, the third-largest in Europe, and the 14th-largest in the world in terms of passengers. It handles over 68 million passengers per year and is the home base of four airlines: KLM, Transavia, Martinair and Arkefly. Schiphol was the fifth-busiest airport in the world measured by international passenger numbers. The airport lies four metres below sea level. Although Schiphol is internationally known as Amsterdam Schiphol Airport, it actually lies in the neighbouring municipality of Haarlemmermeer, southwest of the city. Cycling Amsterdam is one of the most bicycle-friendly large cities in the world and is a centre of bicycle culture, with good facilities for cyclists such as bike paths, bike racks, and several guarded bike storage garages (fietsenstalling). According to the most recent figures published by the Central Bureau of Statistics (CBS), in 2015 the 442,693 households (850,000 residents) in Amsterdam together owned 847,000 bicycles – 1.91 bicycles per household. Previously, wildly different figures were arrived at using a wisdom-of-the-crowd approach. Theft is widespread; in 2011, about 83,000 bicycles were stolen in Amsterdam.
Bicycles are used by all socio-economic groups because of their convenience, Amsterdam's small size, the network of bike paths, the flat terrain, and the inconvenience of driving an automobile. Education Amsterdam has two universities: the University of Amsterdam (Universiteit van Amsterdam, UvA) and the Vrije Universiteit Amsterdam (VU). Other institutions for higher education include an art school, the Gerrit Rietveld Academie; a university of applied sciences, the Hogeschool van Amsterdam; and the Amsterdamse Hogeschool voor de Kunsten. Amsterdam's International Institute of Social History is one of the world's largest documentary and research institutions concerning social history, and especially the history of the labour movement. Amsterdam's Hortus Botanicus, founded in the early 17th century, is one of the oldest botanical gardens in the world, with many old and rare specimens, among them the coffee plant that served as the parent for the entire coffee culture in Central and South America. There are over 200 primary schools in Amsterdam. Some of these primary schools base their teaching on particular pedagogic theories, such as the various Montessori schools. The biggest Montessori high school in Amsterdam is the Montessori Lyceum Amsterdam. Many schools, however, are based on religion. This used to be primarily Roman Catholicism and various Protestant denominations, but with the influx of Muslim immigrants, there has been a rise in the number of Islamic schools. Jewish schools can be found in the southern suburbs of Amsterdam. Amsterdam is noted for having five independent grammar schools (Dutch: gymnasia) – the Vossius Gymnasium, Barlaeus Gymnasium, St. Ignatius Gymnasium, Het 4e Gymnasium and the Cygnus Gymnasium – where a classical curriculum including Latin and classical Greek is taught.
Though long believed by many to be an anachronistic and elitist concept that would soon die out, the gymnasia have recently experienced a revival, leading to the formation of a fourth and a fifth grammar school, in which the three aforementioned schools participate. Most secondary schools in Amsterdam offer a variety of different levels of education in the same school. The city also has various colleges, ranging from art and design to politics and economics, most of which are also open to students from other countries. Schools for foreign nationals in Amsterdam include the Amsterdam International Community School, British School of Amsterdam, Albert Einstein International School Amsterdam, Lycée Vincent van Gogh La Haye-Amsterdam primary campus (French school), International School of Amsterdam, and the Japanese School of Amsterdam. Notable people Media Amsterdam is a prominent centre for national and international media. Some locally based newspapers include Het Parool, a national daily paper; De Telegraaf, the largest Dutch daily newspaper; the daily newspapers Trouw, de Volkskrant and NRC Handelsblad; De Groene Amsterdammer, a weekly newspaper; and the free newspapers Metro and The Holland Times (printed in English). Amsterdam is home to the second-largest Dutch commercial TV group, SBS Broadcasting Group, consisting of the TV stations SBS 6, Net 5 and Veronica. However, Amsterdam is not considered 'the media city of the Netherlands'. The town of Hilversum, south-east of Amsterdam, holds this unofficial title. Hilversum is the principal centre for radio and television broadcasting in the Netherlands. Radio Netherlands, heard worldwide via shortwave radio since the 1920s, is also based there.
Hilversum is home to an extensive complex of audio and television studios belonging to the national broadcast production company NOS, as well as to the studios and offices of all the Dutch public broadcasting organisations and many commercial TV production companies. In 2012, the music video for Far East Movement's 'Live My Life' was filmed in various parts of Amsterdam. Several films have also been shot in Amsterdam, such as the James Bond film Diamonds Are Forever, Ocean's Twelve, Girl with a Pearl Earring and The Hitman's Bodyguard. Amsterdam is also featured in John Green's book The Fault in Our Stars, which has been made into a film that partly takes place in Amsterdam. Housing From the late 1960s onwards, many buildings in Amsterdam have been squatted, both for housing and for use as social centres. A number of these squats have been legalised and become well known, such as OCCII, OT301, Paradiso and Vrankrijk. Sister cities Manchester, United Kingdom, 2007 Zapopan, Mexico, 2011 See also Notes and references Citations Literature Charles Caspers & Peter Jan Margry (2017), Het Mirakel van Amsterdam. Biografie van een betwiste devotie (Amsterdam, Prometheus). Further reading External links Amsterdam.nl – Official government site I amsterdam – Portal for international visitors Tourist information about Amsterdam – Website of the Netherlands
846
https://en.wikipedia.org/wiki/Museum%20of%20Work
Museum of Work
The Museum of Work (Arbetets museum) is a museum located in Norrköping, Sweden. The museum is located in the Strykjärn (Clothes iron), a former weaving mill in the old industrial area on the Motala ström river in the city centre of Norrköping. The former textile factory Holmens Bruk operated in the building from 1917 to 1962. The museum documents work and everyday life by collecting personal stories about people's professional lives from both the past and the present. The museum's archive contains material from memory collections and documentation projects. Since 2009, the museum has also housed the EWK – Center for Political Illustration Art, which is based on the work of the satirist Ewert Karlsson (1918–2004). For decades he was frequently published in the Swedish tabloid Aftonbladet. Overview The museum is a national central museum with the task of preserving and telling the story of work and everyday life. It has, among other things, exhibitions on working conditions and the history of industrial society. The museum is also known for highlighting gender perspectives in its exhibitions. The museum documents work and everyday life by collecting personal stories, including accounts of people's professional lives from both the past and the present. The museum's archive holds a rich body of material from memory collections and documentation projects – over 2,600 interviews, stories and photo documentations have been collected since the museum opened. The museum also supports the country's approximately 1,500 working-life museums, which are old workplaces preserved to convey their history. Exhibitions The Museum of Work shows exhibitions running over several years, but also shorter exhibitions, including several photo exhibitions on themes that can be linked to work and everyday life. The history of Alva The history of Alva Karlsson is the only permanent exhibition in the museum.
The exhibition connects to the museum's building and its history as part of the textile industry in Norrköping. Alva worked as a roller in the building between 1927 and 1962. Industriland One of the museum's long-term exhibitions was Industriland – when Sweden became modern, shown from 2007 to 2013. It consisted of a continuous timeline of various objects that were in some way significant for working life and everyday life during the period 1930–1980. The exhibition also included presentations of the working-life museums in Sweden and a number of rooms with themes such as leisure, the world, living and consumption. Framtidsland (Future Country) In 2014, the museum inaugurated an exhibition that takes over where Industriland ends: Framtidsland. It is an exhibition that investigates what a sustainable society might be, and it will remain part of the museum's exhibitions until 2019. The exhibition consists of material based on conversations between young people and researchers around Sweden. The exhibition addresses themes such as work, the environment and everyday life. A touring version of the exhibition has been shown in Falun, Kristianstad and Örebro. EWK – The Center for Political Illustration Art Since 2009, the museum has also housed the EWK – Center for Political Illustration Art. The museum preserves, develops and conveys the production of the political illustrator Ewert Karlsson. It also holds themed exhibitions with national and international political illustrators, with the aim of highlighting and strengthening the art of political illustration. See also List of museums in Sweden Culture of Sweden References External links Arbetets museum – Official site
848
https://en.wikipedia.org/wiki/Audi
Audi
Audi AG (commonly referred to as Audi) is a German automotive manufacturer of luxury vehicles headquartered in Ingolstadt, Bavaria, Germany. As a subsidiary of its parent company, the Volkswagen Group, Audi produces vehicles in nine production facilities worldwide. The origins of the company are complex, going back to the early 20th century and the initial enterprises (Horch and the Audiwerke) founded by engineer August Horch, and two other manufacturers (DKW and Wanderer), leading to the foundation of Auto Union in 1932. The modern Audi era began in the 1960s, when Auto Union was acquired by Volkswagen from Daimler-Benz. After relaunching the Audi brand with the 1965 introduction of the Audi F103 series, Volkswagen merged Auto Union with NSU Motorenwerke in 1969, thus creating the present-day form of the company. The company name is based on the Latin translation of the surname of the founder, August Horch. Horch, meaning "listen" in German, becomes audi in Latin. The four rings of the Audi logo each represent one of four car companies that banded together to create Audi's predecessor company, Auto Union. Audi's slogan is Vorsprung durch Technik, meaning "Being Ahead through Technology". Audi, along with fellow German marques BMW and Mercedes-Benz, is among the best-selling luxury automobile brands in the world. History Birth of the company and its name Automobile company Wanderer was originally established in 1885, later becoming a branch of Audi AG. Another company, NSU, which also later merged into Audi, was founded during this time, and later supplied the chassis for Gottlieb Daimler's four-wheeler. On 14 November 1899, August Horch (1868–1951) established the company A. Horch & Cie. in the Ehrenfeld district of Cologne. In 1902, he moved with his company to Reichenbach im Vogtland. On 10 May 1904, he founded the August Horch & Cie. Motorwagenwerke AG, a joint-stock company in Zwickau (State of Saxony).
After troubles with the Horch chief financial officer, August Horch left Motorwagenwerke and, on 16 July 1909, founded his second company in Zwickau, the August Horch Automobilwerke GmbH. His former partners sued him for trademark infringement. The German Reichsgericht (Supreme Court) in Leipzig eventually determined that the Horch brand belonged to his former company. Since August Horch was prohibited from using "Horch" as a trade name in his new car business, he called a meeting with close business friends Paul and Franz Fikentscher from Zwickau. At the apartment of Franz Fikentscher, they discussed how to come up with a new name for the company. During this meeting, Franz's son was quietly studying Latin in a corner of the room. Several times he looked like he was on the verge of saying something but would just swallow his words and continue working, until he finally blurted out, "Father – audiatur et altera pars... wouldn't it be a good idea to call it audi instead of horch?" "Horch!" in German means "Hark!" or "hear"; "Audi" is the singular imperative form of the Latin "audire" – "to listen". The idea was enthusiastically accepted by everyone attending the meeting. On 25 April 1910 the Audi Automobilwerke GmbH Zwickau (from 1915 on, Audiwerke AG Zwickau) was entered in the company register at the Zwickau registration court. The first Audi automobile, the Audi Type A 10/ Sport-Phaeton, was produced in the same year, followed by the successor Type B 10/28PS. Audi started with a 2,612 cc inline-four engine model, the Type A, followed by a 3,564 cc model, as well as 4,680 cc and 5,720 cc models. These cars were successful even in sporting events. The first six-cylinder model, the Type M (4,655 cc), appeared in 1924. August Horch left the Audiwerke in 1920 for a high position at the ministry of transport, but he was still involved with Audi as a member of the board of trustees.
In September 1921, Audi became the first German car manufacturer to present a production car, the Audi Type K, with left-hand drive. Left-hand drive spread and established dominance during the 1920s because it provided a better view of oncoming traffic, making overtaking safer when driving on the right. The merger of the four companies under the logo of four rings In August 1928, Jørgen Rasmussen, the owner of Dampf-Kraft-Wagen (DKW), acquired the majority of shares in Audiwerke AG. In the same year, Rasmussen bought the remains of the U.S. automobile manufacturer Rickenbacker, including the manufacturing equipment for 8-cylinder engines. These engines were used in Audi Zwickau and Audi Dresden models that were launched in 1929. At the same time, 6-cylinder and 4-cylinder (the "four" with a Peugeot engine) models were manufactured. Audi cars of that era were luxurious cars equipped with special bodywork. In 1932, Audi merged with Horch, DKW, and Wanderer, to form Auto Union AG, Chemnitz. It was during this period that the company offered the Audi Front that became the first European car to combine a six-cylinder engine with front-wheel drive. It used a power train shared with the Wanderer, but turned 180 degrees, so that the drive shaft faced the front. Before World War II, Auto Union used the four interlinked rings that make up the Audi badge today, representing these four brands. However, this badge was used only on Auto Union racing cars in that period while the member companies used their own names and emblems. The technological development became more and more concentrated and some Audi models were propelled by Horch- or Wanderer-built engines. Reflecting the economic pressures of the time, Auto Union concentrated increasingly on smaller cars through the 1930s, so that by 1938 the company's DKW brand accounted for 17.9% of the German car market, while Audi held only 0.1%.
After the final few Audis were delivered in 1939, the "Audi" name disappeared completely from the new car market for more than two decades. Post-World War II Like most German manufacturing, at the onset of World War II the Auto Union plants were retooled for military production, and were a target for allied bombing during the war, which left them damaged. Overrun by the Soviet Army in 1945, on the orders of the Soviet Union military administration the factories were dismantled as part of war reparations. Following this, the company's entire assets were expropriated without compensation. On 17 August 1948, Auto Union AG of Chemnitz was deleted from the commercial register. These actions had the effect of liquidating Germany's Auto Union AG. The remains of the Audi plant of Zwickau became the VEB (for "People Owned Enterprise") or AWZ (in English: Automobile Works Zwickau). With no prospect of continuing production in Soviet-controlled East Germany, Auto Union executives began the process of relocating what was left of the company to West Germany. A site was chosen in Ingolstadt, Bavaria, to start a spare parts operation in late 1945, which would eventually serve as the headquarters of the reformed Auto Union in 1949. The former Audi factory in Zwickau restarted assembly of the pre-war models in 1949. These DKW models were renamed to IFA F8 and IFA F9 and were similar to the West German versions. West and East German models were equipped with the traditional and renowned DKW two-stroke engines. The Zwickau plant manufactured the infamous Trabant until 1991, when it came under Volkswagen control, effectively bringing it under the same umbrella as Audi for the first time since 1945. New Auto Union unit A new West German headquartered Auto Union was launched in Ingolstadt with loans from the Bavarian state government and Marshall Plan aid. The reformed company was launched 3 September 1949 and continued DKW's tradition of producing front-wheel drive vehicles with two-stroke engines.
This included production of a small but sturdy 125 cc motorcycle and a DKW delivery van, the DKW F89 L, at Ingolstadt. The Ingolstadt site was large, consisting of an extensive complex of formerly military buildings which was suitable for administration as well as vehicle warehousing and distribution, but at this stage there was at Ingolstadt no dedicated plant suitable for mass production of automobiles: for manufacturing the company's first post-war mass-market passenger car, plant capacity in Düsseldorf was rented from Rheinmetall-Borsig. It was only ten years later, after the company had attracted an investor, that funds became available for construction of a major car plant at the Ingolstadt head office site. In 1958, in response to pressure from Friedrich Flick, then the company's largest single shareholder, Daimler-Benz took an 87% holding in the Auto Union company, and this was increased to a 100% holding in 1959. However, small two-stroke cars were not the focus of Daimler-Benz's interests, and while the early 1960s saw major investment in new Mercedes models and in a state-of-the-art factory for Auto Union, the company's aging model range at this time did not benefit from the economic boom of the early 1960s to the same extent as competitor manufacturers such as Volkswagen and Opel. The decision to dispose of the Auto Union business was based on its lack of profitability. Ironically, by the time they sold the business, it also included a large new factory and a near production-ready modern four-stroke engine, which would enable the Auto Union business, under a new owner, to embark on a period of profitable growth, now producing not Auto Unions or DKWs, but using the "Audi" name, resurrected in 1965 after a 25-year gap.
In 1964, Volkswagen acquired a 50% holding in the business, which included the new factory in Ingolstadt, the DKW and Audi brands along with the rights to the new engine design which had been funded by Daimler-Benz, who in return retained the dormant Horch trademark and the Düsseldorf factory which became a Mercedes-Benz van assembly plant. Eighteen months later, Volkswagen bought complete control of Ingolstadt, and by 1966 were using the spare capacity of the Ingolstadt plant to assemble an additional 60,000 Volkswagen Beetles per year. Two-stroke engines became less popular during the 1960s as customers were more attracted to the smoother four-stroke engines. In September 1965, the DKW F102 was fitted with a four-stroke engine and a facelift for the car's front and rear. Volkswagen dumped the DKW brand because of its associations with two-stroke technology, and having classified the model internally as the F103, sold it simply as the "Audi". Later developments of the model were named after their horsepower ratings and sold as the Audi 60, 75, 80, and Super 90, selling until 1972. Initially, Volkswagen was hostile to the idea of Auto Union as a standalone entity producing its own models having acquired the company merely to boost its own production capacity through the Ingolstadt assembly plant – to the point where Volkswagen executives ordered that the Auto Union name and flags bearing the four rings were removed from the factory buildings. Then VW chief Heinz Nordhoff explicitly forbade Auto Union from any further product development. Fearing that Volkswagen had no long-term ambition for the Audi brand, Auto Union engineers under the leadership of Ludwig Kraus developed the first Audi 100 in secret, without Nordhoff's knowledge. When presented with a finished prototype, Nordhoff was so impressed he authorised the car for production, which when launched in 1968, went on to be a huge success. 
With this, the resurrection of the Audi brand was now complete, this being followed by the first generation Audi 80 in 1972, which would in turn provide a template for VW's new front-wheel-drive water-cooled range which debuted from the mid-1970s onward. In 1969, Auto Union merged with NSU, based in Neckarsulm, near Stuttgart. In the 1950s, NSU had been the world's largest manufacturer of motorcycles, but had moved on to produce small cars like the NSU Prinz, the TT and TTS versions of which are still popular as vintage race cars. NSU then focused on new rotary engines based on the ideas of Felix Wankel. In 1967, the new NSU Ro 80 was a car well ahead of its time in technical details such as aerodynamics, light weight, and safety. However, teething problems with the rotary engines put an end to the independence of NSU. The Neckarsulm plant is now used to produce the larger Audi models A6 and A8. The Neckarsulm factory is also home of the "quattro GmbH" (from November 2016 "Audi Sport GmbH"), a subsidiary responsible for development and production of Audi high-performance models: the R8 and the RS model range. Modern era The new merged company was incorporated on 1 January 1969 and was known as Audi NSU Auto Union AG, with its headquarters at NSU's Neckarsulm plant, and saw the emergence of Audi as a separate brand for the first time since the pre-war era. Volkswagen introduced the Audi brand to the United States for the 1970 model year. That same year, the mid-sized car that NSU had been working on, the K70, originally intended to slot between the rear-engined Prinz models and the futuristic NSU Ro 80, was instead launched as a Volkswagen. After the launch of the Audi 100 of 1968, the Audi 80/Fox (which formed the basis for the 1973 Volkswagen Passat) followed in 1972 and the Audi 50 (later rebadged as the Volkswagen Polo) in 1974. 
The Audi 50 was a seminal design because it was the first incarnation of the Golf/Polo concept, one that led to a hugely successful world car. Ultimately, the Audi 80 and 100 (progenitors of the A4 and A6, respectively) became the company's biggest sellers, whilst little investment was made in the fading NSU range; the Prinz models were dropped in 1973 whilst the fatally flawed NSU Ro80 went out of production in 1977, spelling the effective end of the NSU brand. Production of the Audi 100 had been steadily moved from Ingolstadt to Neckarsulm as the 1970s had progressed, and by the appearance of the second generation C2 version in 1976, all production was now at the former NSU plant. Neckarsulm from that point onward would produce Audi's higher-end models. The Audi image at this time was a conservative one, and so, a proposal from chassis engineer Jörg Bensinger was accepted to develop the four-wheel drive technology in Volkswagen's Iltis military vehicle for an Audi performance car and rally racing car. The performance car, introduced in 1980, was named the "Audi Quattro", a turbocharged coupé which was also the first German large-scale production vehicle to feature permanent all-wheel drive through a centre differential. Commonly referred to as the "Ur-Quattro" (the "Ur-" prefix is a German augmentative used, in this case, to mean "original" and is also applied to the first generation of Audi's S4 and S6 Sport Saloons, as in "UrS4" and "UrS6"), few of these vehicles were produced (all hand-built by a single team), but the model was a great success in rallying. Prominent wins proved the viability of all-wheel-drive racecars, and the Audi name became associated with advances in automotive technology. In 1985, with the Auto Union and NSU brands effectively dead, the company's official name was now shortened to simply Audi AG. 
At the same time the company's headquarters moved back to Ingolstadt and two new wholly owned subsidiaries, Auto Union GmbH and NSU GmbH, were formed to own and manage the historical trademarks and intellectual property of the original constituent companies (the exception being Horch, which had been retained by Daimler-Benz after the VW takeover), and to operate Audi's heritage operations. In 1986, as the Passat-based Audi 80 was beginning to develop a kind of "grandfather's car" image, the type 89 was introduced. This completely new development sold extremely well. However, its modern and dynamic exterior belied the low performance of its base engine, and its base package was quite spartan (even the passenger-side mirror was an option). In 1987, Audi put forward a new and very elegant Audi 90, which had a much superior set of standard features. In the early 1990s, sales began to slump for the Audi 80 series, and some basic construction problems started to surface. In the early part of the 21st century, Audi set forth on a German racetrack to claim and maintain several world records, such as top speed endurance. This effort was in line with the company's heritage from the 1930s racing era Silver Arrows. Through the early 1990s, Audi began to shift its target market upscale to compete against German automakers Mercedes-Benz and BMW. This began with the release of the Audi V8 in 1990. It was essentially a new engine fitted to the Audi 100/200, but with noticeable bodywork differences. Most obvious was the new grille that was now incorporated in the bonnet. By 1991, Audi had the four-cylinder Audi 80, the 5-cylinder Audi 90 and Audi 100, the turbocharged Audi 200 and the Audi V8. There was also a coupé version of the 80/90 with both four- and five-cylinder engines. Although the five-cylinder engine was a successful and robust powerplant, it was still a little too different for the target market.
With the introduction of an all-new Audi 100 in 1992, Audi introduced a 2.8L V6 engine. This engine was also fitted to a face-lifted Audi 80 (all 80 and 90 models were now badged 80 except for the USA), giving this model a choice of four-, five-, and six-cylinder engines, in saloon, coupé and convertible body styles. The five-cylinder was soon dropped as a major engine choice; however, a turbocharged version remained. The engine, initially fitted to the 200 quattro 20V of 1991, was a derivative of the engine fitted to the Sport Quattro. It was fitted to the Audi Coupé, named the S2, and also to the Audi 100 body, and named the S4. These two models were the beginning of the mass-produced S series of performance cars. Audi 5000 unintended acceleration allegations Sales in the United States fell after a series of recalls from 1982 to 1987 of Audi 5000 models associated with reported incidents of sudden unintended acceleration linked to six deaths and 700 accidents. At the time, NHTSA was investigating 50 car models from 20 manufacturers for sudden surges of power. A 60 Minutes report aired 23 November 1986, featuring interviews with six people who had sued Audi after reporting unintended acceleration, showing an Audi 5000 ostensibly suffering a problem when the brake pedal was pushed. Subsequent investigation revealed that 60 Minutes had engineered the failure – fitting a canister of compressed air on the passenger-side floor, linked via a hose to a hole drilled into the transmission. Audi contended, prior to findings by outside investigators, that the problems were caused by driver error, specifically pedal misapplication. Subsequently, the National Highway Traffic Safety Administration (NHTSA) concluded that the majority of unintended acceleration cases, including all the ones that prompted the 60 Minutes report, were caused by driver error such as confusion of pedals. 
CBS did not acknowledge the test results of involved government agencies, but did acknowledge the similar results of another study.

In a review study published in 2012, NHTSA summarized its past findings about the Audi unintended acceleration problems: "Once an unintended acceleration had begun, in the Audi 5000, due to a failure in the idle-stabilizer system (producing an initial acceleration of 0.3g), pedal misapplication resulting from panic, confusion, or unfamiliarity with the Audi 5000 contributed to the severity of the incident." This summary is consistent with the conclusions of NHTSA's most technical analysis at the time: "Audi idle-stabilization systems were prone to defects which resulted in excessive idle speeds and brief unanticipated accelerations of up to 0.3g [which is similar in magnitude to an emergency stop in a subway car]. These accelerations could not be the sole cause of [(long-duration) sudden acceleration incidents (SAI)], but might have triggered some SAIs by startling the driver." The defective idle-stabilization system performed a type of electronic throttle control. Significantly, multiple "intermittent malfunctions of the electronic control unit were observed and recorded ... and [were also observed and] reported by Transport Canada."

With a series of recall campaigns, Audi made several modifications; the first adjusted the distance between the brake and accelerator pedal on automatic-transmission models. Later repairs, of 250,000 cars dating back to 1978, added a device requiring the driver to press the brake pedal before shifting out of park. A legacy of the Audi 5000 and other reported cases of sudden unintended acceleration are intricate gear stick patterns and brake interlock mechanisms to prevent inadvertent shifting into forward or reverse. It is unclear how the defects in the idle-stabilization system were addressed.

Audi's U.S.
sales, which had reached 74,061 in 1985, dropped to 12,283 in 1991 and remained level for three years, with resale values falling dramatically. Audi subsequently offered increased warranty protection and renamed the affected models, with the 5000 becoming the 100 and 200 in 1989, and reached the same sales levels again only by model year 2000. A 2010 BusinessWeek article, outlining possible parallels between Audi's experience and the 2009–2010 Toyota vehicle recalls, noted that a class-action lawsuit filed in 1987 by about 7,500 Audi 5000-model owners remained unsettled and was still contested in Chicago's Cook County after appeals at the Illinois state and U.S. federal levels.

Model introductions

In the mid-to-late 1990s, Audi introduced new technologies including the use of aluminium construction. Produced from 1999 to 2005, the Audi A2 was a futuristic supermini, born from the Al2 concept, with many features that helped regain consumer confidence, like the aluminium space frame, which was a first in production car design. In the A2, Audi further expanded its TDI technology through the use of frugal three-cylinder engines. The A2 was extremely aerodynamic and was shaped extensively in the wind tunnel. The Audi A2 was criticised for its high price and was never really a sales success, but it positioned Audi as a cutting-edge manufacturer. The model, a Mercedes-Benz A-Class competitor, sold relatively well in Europe. However, the A2 was discontinued in 2005 and Audi decided not to develop an immediate replacement.

The next major model change came in 1995 when the Audi A4 replaced the Audi 80. The new nomenclature scheme was applied to the Audi 100 to become the Audi A6 (with a minor facelift). This also meant the S4 became the S6 and a new S4 was introduced in the A4 body. The S2 was discontinued. The Audi Cabriolet continued on (based on the Audi 80 platform) until 1999, gaining engine upgrades along the way.
A new A3 hatchback model (sharing the Volkswagen Golf Mk4's platform) was introduced to the range in 1996, and the radical Audi TT coupé and roadster debuted in 1998 based on the same underpinnings. The engines available throughout the range were now a 1.4 L, 1.6 L and 1.8 L four-cylinder, 1.8 L four-cylinder turbo, 2.6 L and 2.8 L V6, 2.2 L turbocharged five-cylinder and the 4.2 L V8 engine. The V6s were replaced by new 2.4 L and 2.8 L 30V V6s in 1998, with marked improvement in power, torque and smoothness. Further engines were added along the way, including a 3.7 L V8 and 6.0 L W12 engine for the A8.

Audi AG today

Audi's sales grew strongly in the 2000s, with deliveries to customers increasing from 653,000 in 2000 to 1,003,000 in 2008. The largest sales increases came from Eastern Europe (+19.3%), Africa (+17.2%) and the Middle East (+58.5%). China in particular has become a key market, representing 108,000 out of 705,000 cars delivered in the first three quarters of 2009. One factor for its popularity in China is that Audis have become the car of choice for purchase by the Chinese government for officials, and purchases by the government are responsible for 20% of its sales in China.

As of late 2009, Audi's operating profit of €1.17 billion ($1.85 billion) made it the biggest contributor to parent Volkswagen Group's nine-month operating profit of €1.5 billion, while other marques in the Group, such as Bentley and SEAT, had suffered considerable losses. May 2011 saw record sales for Audi of America with the new Audi A7 and Audi A3 TDI Clean Diesel. In May 2012, Audi reported a 10% increase in its sales, from 408 units to 480 in the last year alone.

Audi manufactures vehicles in seven plants around the world, some of which are shared with other VW Group marques, although many sub-assemblies such as engines and transmissions are manufactured within other Volkswagen Group plants.
Audi's two principal assembly plants are:

- Ingolstadt, opened by Auto Union in 1964 (A3, A4, A5, Q5)
- Neckarsulm, acquired from NSU in 1969 (A4, A6, A7, A8, R8, and all RS variants)

Outside of Germany, Audi produces vehicles at:

- Aurangabad, India, since 2006
- Bratislava, Slovakia, shared with Volkswagen, SEAT, Škoda and Porsche (Q7 and Q8)
- Brussels, Belgium, acquired from Volkswagen in 2007 (e-tron)
- Changchun, China, since 1995
- Győr, Hungary (TT and some A3 variants)
- Jakarta, Indonesia, since 2011
- Martorell, Spain, shared with SEAT and Volkswagen (A1)
- San José Chiapa, Mexico (2nd gen Q5)

In September 2012, Audi announced the construction of its first North American manufacturing plant in Puebla, Mexico. This plant became operative in 2016 and produces the second generation Q5.

From 2002 up to 2003, Audi headed the Audi Brand Group, a subdivision of the Volkswagen Group's Automotive Division consisting of Audi, Lamborghini and SEAT, which was focused on sporty values, with the marques' product vehicles and performance being under the higher responsibility of the Audi brand.

In January 2014, Audi, along with the Wireless Power Consortium, operated a booth which demonstrated a phone compartment using the Qi open interface standard at the Consumer Electronics Show (CES). In May that year, most of the Audi dealers in the UK falsely claimed that the Audi A7, A8, and R8 were Euro NCAP safety tested, all achieving five out of five stars. In fact, none had been tested.

In 2015, Audi admitted that at least 2.1 million Audi cars had been involved in the Volkswagen emissions testing scandal, in which software installed in the cars manipulated emissions data to fool regulators and allow the cars to pollute at higher than government-mandated levels. The A1, A3, A4, A5, A6, TT, Q3 and Q5 models were implicated in the scandal. Audi promised to quickly find a technical solution and upgrade the cars so they could comply with emissions regulations.
Ulrich Hackenberg, the head of research and development at Audi, was suspended in relation to the scandal. Despite widespread media coverage of the scandal through the month of September, Audi reported that U.S. sales for the month had increased by 16.2%. Audi's parent company Volkswagen announced on 18 June 2018 that Audi chief executive Rupert Stadler had been arrested.

In November 2015, the U.S. Environmental Protection Agency implicated the 3-liter diesel engine versions of the 2016 Audi A6 Quattro, A7 Quattro, A8, A8L and the Q5 as further models that had emissions-regulation defeat-device software installed. These models emitted nitrogen oxide at up to nine times the legal limit when the car detected that it was not hooked up to emissions testing equipment.

In November 2016, Audi expressed an intention to establish an assembly factory in Pakistan, with the company's local partner acquiring land for a plant in Korangi Creek Industrial Park in Karachi. Approval of the plan would lead to an investment of $30 million in the new plant.

Audi planned to cut 9,500 jobs in Germany from 2020 to 2025 to fund electric vehicles and digital working.

In February 2020, Volkswagen AG announced that it planned to take over all Audi shares it did not own (totalling 0.36%) via a squeeze-out according to German stock corporation law, thus making Audi a fully owned subsidiary of the Volkswagen Group. This change took effect on 16 November 2020, when Audi became a wholly owned subsidiary of the Volkswagen Group.

In January 2021, Audi announced that it was planning to sell 1 million vehicles in China in 2023, compared to 726,000 vehicles in 2020.

Technology

Audi AI

Audi AI is a driver-assist feature offered by Audi. The company's stated intent is to offer fully autonomous driving at a future time, acknowledging that legal, regulatory and technical hurdles must be overcome to achieve this goal.
On 4 June 2017, Audi stated that its new A8 would be fully self-driving for speeds up to 60 km/h using its Audi AI. Unlike other cars, the driver will not have to do safety checks such as touching the steering wheel every 15 seconds to use this feature. The Audi A8 will therefore be the first production car to reach level 3 autonomous driving, meaning that the driver can safely turn their attention away from driving tasks, e.g. the driver can text or watch a movie. Audi will also be the first manufacturer to use a 3D lidar system in addition to cameras and ultrasonic sensors for its AI.

Bodyshells

Audi produces 100% galvanised cars to prevent corrosion, and was the first mass-market manufacturer to do so, following introduction of the process by Porsche, c. 1975. Along with other precautionary measures, the full-body zinc coating has proved to be very effective in preventing rust. The body's resulting durability even surpassed Audi's own expectations, causing the manufacturer to extend its original 10-year warranty against corrosion perforation to the current 12 years (except for aluminium bodies, which do not rust).

Space frame

Audi introduced a new series of vehicles in the mid-1990s and continues to pursue new technology and high performance. Audi brought forward an all-aluminium car, and in 1994 the Audi A8 was launched, which introduced aluminium space frame technology (called Audi Space Frame or ASF) which saves weight and improves torsion rigidity compared to a conventional steel frame. Prior to that effort, Audi used examples of the Type 44 chassis fabricated out of aluminium as test-beds for the technique. The disadvantage of the aluminium frame is that it is very expensive to repair and requires a specialized aluminium bodyshop. The weight reduction is somewhat offset by the quattro four-wheel drive system which is standard in most markets.
Nonetheless, the A8 is usually the lightest all-wheel drive car in the full-size luxury segment, also having best-in-class fuel economy. The Audi A2, Audi TT and Audi R8 also use Audi Space Frame designs.

Drivetrains

Layout

For most of its lineup (excluding the A3, A1, and TT models), Audi has not adopted the transverse engine layout typically found in economy cars (such as Peugeot and Citroën), since that would limit the type and power of engines that can be installed. To be able to mount powerful engines (such as a V8 engine in the Audi S4 and Audi RS4, as well as the W12 engine in the Audi A8L W12), Audi has usually engineered its more expensive cars with a longitudinally front-mounted engine, in an "overhung" position, over the front wheels in front of the axle line; this layout dates back to the DKW and Auto Union saloons from the 1950s. But while this allows for the easy adoption of all-wheel drive, it goes against the ideal 50:50 weight distribution.

In all its post-Volkswagen-era models, Audi has firmly refused to adopt the traditional rear-wheel drive layout favored by its two archrivals Mercedes-Benz and BMW, favoring either front-wheel drive or all-wheel drive. The majority of Audi's lineup in the United States features all-wheel drive standard on most of its expensive vehicles (only the entry-level trims of the A4 and A6 are available with front-wheel drive), in contrast to Mercedes-Benz and BMW, whose lineups treat all-wheel drive as an option. BMW did not offer all-wheel drive on its V8-powered cars (as opposed to crossover SUVs) until the 2010 BMW 7 Series and 2011 BMW 5 Series, while the Audi A8 has had all-wheel drive available or standard since the 1990s. Regarding high-performance variants, Audi S and RS models have always had all-wheel drive, unlike their direct rivals from BMW M and Mercedes-AMG, whose cars are rear-wheel drive only (although their performance crossover SUVs are all-wheel drive).
Audi has recently applied the quattro badge to models such as the A3 and TT which do not use the Torsen-based system of prior years with a mechanical center differential, but instead the Haldex Traction electro-mechanical clutch AWD system.

Engines

Prior to the introduction of the Audi 80 and Audi 50 in 1972 and 1974, respectively, Audi had led the development of the EA111 and EA827 inline-four engine families. These new power units underpinned the water-cooled revival of parent company Volkswagen (in the Polo, Golf, Passat and Scirocco), whilst the many derivatives and descendants of these two basic engine designs have appeared in every generation of VW Group vehicles right up to the present day.

In the 1980s, Audi, along with Volvo, was the champion of the inline five-cylinder 2.1/2.2 L engine as a longer-lasting alternative to more traditional six-cylinder engines. This engine was used not only in production cars but also in their race cars. The 2.1 L inline five-cylinder engine was used as a base for the rally cars in the 1980s, providing substantially more power after modification. Before 1990, there were engines produced with a displacement between 2.0 L and 2.3 L. This range of engine capacity allowed for both fuel economy and power.

For the ultra-luxury version of its Audi A8 fullsize luxury flagship sedan, the Audi A8L W12, Audi uses the Volkswagen Group W12 engine instead of the conventional V12 engine favored by rivals Mercedes-Benz and BMW. The W12 engine configuration (also known as a "WR12") is created by forming two imaginary narrow-angle 15° VR6 engines at an angle of 72°, and the narrow angle of each set of cylinders allows just two overhead camshafts to drive each pair of banks, so just four are needed in total.
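The camshaft saving described above follows from simple counting. The sketch below is illustrative arithmetic only: the four-rows-into-two-VR6-banks arrangement and the four-camshaft total come from the text, while the even-firing-interval formula (720° of crank rotation divided by the cylinder count) is standard four-stroke arithmetic, not an Audi specification:

```python
# Counting exercise for the W12 ("WR12") layout described above.
# Layout facts are from the text; the firing-interval formula is the
# generic four-stroke relation 720 degrees / number of cylinders.

CYLINDERS = 12
ROWS = 4                      # a W12 is four rows of three cylinders each
VR_BANKS = ROWS // 2          # rows pair into two narrow-angle (15°) VR6 banks

cams_w12 = VR_BANKS * 2       # one DOHC pair per VR6 bank -> 4 camshafts
cams_without_vr = ROWS * 2    # hypothetical: 8 if each row needed its own pair

firing_interval_deg = 720 // CYLINDERS  # crank degrees between firings

print(cams_w12, cams_without_vr, firing_interval_deg)  # prints "4 8 60"
```

Halving the camshaft count from a naive eight to four is one ingredient of the compact packaging the surrounding text describes.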
The advantage of the W12 engine is its compact packaging, allowing Audi to build a 12-cylinder sedan with all-wheel drive, whereas a conventional V12 engine could have only a rear-wheel drive configuration as it would have no space in the engine bay for a differential and other components required to power the front wheels. In fact, the 6.0 L W12 in the Audi A8L W12 is smaller in overall dimensions than the 4.2 L V8 that powers the Audi A8 4.2 variants. The 2011 Audi A8 debuted a revised 6.3-litre version of the W12 (WR12) engine.

Fuel Stratified Injection

New models of the A3, A4, A6 and A8 have been introduced, with the ageing 1.8-litre engine now having been replaced by new Fuel Stratified Injection (FSI) engines. Nearly every petroleum-burning model in the range now incorporates this fuel-saving technology.

Direct-Shift Gearbox

In 2003, Volkswagen introduced the Direct-Shift Gearbox (DSG), a type of dual-clutch transmission. It is a type of automatic transmission, drivable like a conventional torque-converter automatic. Based on the gearbox found in the Group B S1, the system includes dual electro-hydraulically controlled clutches instead of a torque converter. This is implemented in some VW Golfs, Audi A3, Audi A4 and TT models, where DSG is called S tronic.

LED daytime running lights

Beginning in 2005, Audi has implemented white LED technology as daytime running lights (DRL) in its products. The distinctive shape of the DRLs has become a trademark of sorts. LEDs were first introduced on the Audi A8 W12, the world's first production car to have LED DRLs, and have since spread throughout the entire model range. The LEDs are present on some Audi billboards. Since 2010, Audi has also offered the LED technology in low- and high-beam headlights.

Multi Media Interface

Starting with the 2003 Audi A8, Audi has used a centralised control interface for its on-board infotainment systems, called Multi Media Interface (MMI).
It is essentially a rotating control knob and 'segment' buttons, designed to control all in-car entertainment devices (radio, CD changer, iPod, TV tuner), satellite navigation, heating and ventilation, and other car controls with a screen. The availability of MMI has gradually filtered down the Audi lineup, and following its introduction on the third generation A3 in 2011, MMI is now available across the entire range. It has been generally well received, as it requires less menu-surfing with its segment buttons around a central knob, along with 'main function' direct access buttons, with shortcuts to the radio or phone functions. The colour screen is mounted on the upright dashboard, and on the A4 (new), A5, A6, A8, and Q7, the controls are mounted horizontally.

Synthetic fuels

Audi has assisted with technology to produce synthetic diesel from water and carbon dioxide. Audi calls the synthetic diesel E-diesel. It is also working on synthetic gasoline (which it calls E-gasoline).

Logistics

Audi uses scanning gloves for parts registration during assembly, and automatic robots to transfer cars from factory to rail cars.

Models

Current model range

The following tables list Audi production vehicles that are sold as of 2018:

S and RS models

Electric vehicles

Audi is planning an alliance with the Japanese electronics giant Sanyo to develop a pilot hybrid electric project for the Volkswagen Group. The alliance could result in Sanyo batteries and other electronic components being used in future models of the Volkswagen Group. Concept electric vehicles unveiled to date include the Audi A1 Sportback Concept, Audi A4 TDI Concept E, and the fully electric Audi e-tron Concept Supercar.

Self-driving cars

In December 2018, Audi announced plans to invest 14 billion euros ($15.9 billion) in e-mobility and self-driving cars.

Production figures

Data from 1998 to 2010. Figures for different body types/versions of models have been merged to create overall figures for each model.
Motorsport

Audi has competed in various forms of motorsports. Audi's tradition in motorsport began with its former company Auto Union in the 1930s. In the 1990s, Audi found success in the Touring and Super Touring categories of motor racing after success in circuit racing in North America.

Rallying

In 1980, Audi released the Quattro, a four-wheel drive (4WD) turbocharged car that went on to win rallies and races worldwide. It is considered one of the most significant rally cars of all time, because it was one of the first to take advantage of the then-recently changed rules which allowed the use of four-wheel drive in competition racing. Many critics doubted the viability of four-wheel drive racers, thinking them to be too heavy and complex, yet the Quattro was to become a successful car. It led its first rally before going off the road; however, the rally world had been served notice that 4WD was the future. The Quattro went on to achieve much success in the World Rally Championship. It won the 1983 (Hannu Mikkola) and the 1984 (Stig Blomqvist) drivers' titles, and brought Audi the manufacturers' title in 1982 and 1984.

In 1984, Audi launched the short-wheelbase Sport Quattro, which dominated rally races in Monte Carlo and Sweden, with Audi taking all podium places, but succumbed to problems further into WRC contention. In 1985, after another season mired in mediocre finishes, Walter Röhrl finished the season in his Sport Quattro S1 and helped place Audi second in the manufacturers' points. Audi also received rally honours in the Hong Kong to Beijing rally in that same year. Michèle Mouton, the only female driver to win a round of the World Rally Championship and a driver for Audi, took the Sport Quattro S1, now simply called the "S1", and raced in the Pikes Peak International Hill Climb.
The hill climb pits driver and car against the mountain in a race to the summit of Pikes Peak in Colorado, and in 1985, Michèle Mouton set a new record of 11:25.39, becoming the first woman to set a Pikes Peak record. In 1986, Audi formally left international rally racing following an accident in Portugal involving driver Joaquim Santos in his Ford RS200. Santos swerved to avoid hitting spectators in the road, and left the track into the crowd of spectators on the side, killing three and injuring 30. Bobby Unser used an Audi in that same year to claim a new record for the Pikes Peak Hill Climb at 11:09.22. In 1987, Walter Röhrl claimed the title for Audi, setting a new Pikes Peak International Hill Climb record of 10:47.85 in his Audi S1, a car that had been retired from the WRC two years earlier. The Audi S1 employed Audi's time-tested inline five-cylinder turbocharged engine. The engine was mated to a six-speed gearbox and ran on Audi's famous four-wheel drive system. All of Audi's top drivers drove this car: Hannu Mikkola, Stig Blomqvist, Walter Röhrl and Michèle Mouton. This Audi S1 started the range of Audi 'S' cars, which now represents an increased level of sports-performance equipment within the mainstream Audi model range.

In the United States

As Audi moved away from rallying and into circuit racing, they chose to move first into America with the Trans-Am in 1988. In 1989, Audi moved to International Motor Sports Association (IMSA) GTO with the Audi 90; however, as they avoided the two major endurance events (Daytona and Sebring), they lost out on the title despite winning on a regular basis.
Touring cars

In 1990, having completed their objective to market cars in North America, Audi returned to Europe, turning first to the Deutsche Tourenwagen Meisterschaft (DTM) series with the Audi V8, and then in 1993, being unwilling to build cars for the new formula, they turned their attention to the fast-growing Super Touring series, a set of national championships. Audi first entered the French Supertourisme and Italian Superturismo. In the following year, Audi switched to the German Super Tourenwagen Cup (known as STW), and then to the British Touring Car Championship (BTCC) the year after that.

The Fédération Internationale de l'Automobile (FIA), having difficulty regulating the quattro four-wheel drive system and the impact it had on the competitors, eventually banned all four-wheel drive cars from competing in the series in 1998, but by then Audi had switched all their works efforts to sports car racing.

By 2000, Audi would still compete in the US with their RS4 for the SCCA Speed World GT Challenge, through dealer/team Champion Racing, competing against Corvettes, Vipers, and smaller BMWs (it is one of the few series to permit 4WD cars). In 2003, Champion Racing entered an RS6. Once again, the quattro four-wheel drive was superior, and Champion Audi won the championship. They returned in 2004 to defend their title, but a newcomer, Cadillac, with the new Omega-chassis CTS-V, gave them a run for their money. After four victories in a row, the Audis were handicapped with several changes that deeply affected the car's performance: added ballast weight, Champion Audi deciding to go with different tyres, and reduced turbocharger boost pressure.
In 2004, after years of competing with the TT-R in the revitalised DTM series, with privateer team Abt Racing/Christian Abt taking the 2002 title with Laurent Aïello, Audi returned as a full factory effort to touring car racing by entering two factory-supported Joest Racing A4 DTM cars.

24 Hours of Le Mans

Audi began racing prototype sportscars in 1999, debuting at the 24 Hours of Le Mans. Two car concepts were developed and raced in their first season: the Audi R8R (open-cockpit 'roadster' prototype) and the Audi R8C (closed-cockpit 'coupé' GT-prototype). The R8R scored a creditable podium on its racing debut at Le Mans and was the concept which Audi continued to develop into the 2000 season due to favourable rules for open-cockpit prototypes. However, most of the competitors (such as BMW, Toyota, Mercedes and Nissan) withdrew at the end of 1999.

The factory-supported Joest Racing team won at Le Mans three times in a row with the Audi R8 (2000–2002), as well as winning every race in the American Le Mans Series in its first year. Audi also sold the car to customer teams such as Champion Racing.

In 2003, two Bentley Speed 8s, with engines designed by Audi and driven by Joest drivers loaned to the fellow Volkswagen Group company, competed in the GTP class and finished the race in the top two positions, while the Champion Racing R8 finished third overall, and first in the LMP900 class. Audi returned to the winner's podium at the 2004 race, with the top three finishers all driving R8s: Audi Sport Japan Team Goh finished first, Audi Sport UK Veloqx second, and Champion Racing third.

At the 2005 24 Hours of Le Mans, Champion Racing entered two R8s, along with an R8 from the Audi PlayStation Team Oreca. The R8s (which were built to old LMP900 regulations) received a narrower air-inlet restrictor, reducing power, and additional weight compared to the newer LMP1 chassis. On average, the R8s were about 2–3 seconds off the pace compared to the Pescarolo–Judd.
But with a team of excellent, experienced drivers, both Champion R8s were able to take first and third, while the Oreca team took fourth. The Champion team was also the first American team to win Le Mans since the Gulf Ford GTs in 1967. This also ended the long era of the R8; however, its replacement for 2006, called the Audi R10 TDI, was unveiled on 13 December 2005.

The R10 TDI employed many new and innovative features, the most notable being the twin-turbocharged direct-injection diesel engine. It was first raced in the 2006 12 Hours of Sebring as a race-test in preparation for the 2006 24 Hours of Le Mans, which it later went on to win. Audi thus took the first win for a diesel sports car at the 12 Hours of Sebring (the car was developed with a diesel engine due to ACO regulations that favour diesel engines). As well as winning the 24 Hours of Le Mans in 2006, the R10 TDI beat the Peugeot 908 HDi FAP in the two following years (however, Peugeot won the 24h in 2009), and in 2010 Audi took a podium clean-sweep (all four 908 entries retired) with the R15 TDI Plus while breaking the distance record set by the Porsche 917K of Martini Racing.

Audi's sports car racing success would continue with the Audi R18's victory at the 2011 24 Hours of Le Mans. Audi Sport Team Joest's Benoît Tréluyer earned Audi their first pole position in five years, while the team's sister car locked out the front row. Early accidents eliminated two of Audi's three entries, but the sole remaining Audi R18 TDI of Tréluyer, Marcel Fässler, and André Lotterer held off the trio of Peugeot 908s to claim victory by a margin of 13.8 seconds.

Results

American Le Mans Series

Audi entered a factory racing team run by Joest Racing into the American Le Mans Series under the Audi Sport North America name in 2000. This was a successful operation, with the team winning on its debut in the series at the 2000 12 Hours of Sebring.
Factory-backed Audi R8s were the dominant car in ALMS, taking 25 victories between 2000 and the end of the 2002 season. In 2003, Audi sold customer cars to Champion Racing as well as continuing to race the factory Audi Sport North America team. Champion Racing won many races as a private team running Audi R8s and eventually replaced Team Joest as Audi Sport North America between 2006 and 2008. Since 2009 Audi has not taken part in full American Le Mans Series championships, but has competed in the series-opening races at Sebring, using the 12-hour race as a test for Le Mans, and also as part of the 2012 FIA World Endurance Championship season calendar.

Results

European Le Mans Series

Audi participated in the 2003 1000km of Le Mans, which was a one-off sports car race in preparation for the 2004 European Le Mans Series. The factory team Audi Sport UK won races and the championship in the 2004 season, but Audi was unable to match the sweeping success of Audi Sport North America in the American Le Mans Series, partly due to the arrival of a factory competitor in LMP1, Peugeot. The French manufacturer's 908 HDi FAP became the car to beat in the series from 2008 onwards, with 20 LMP wins. However, Audi were able to secure the championship in 2008 even though Peugeot scored more race victories in the season.

Results

World Endurance Championship

2012

In 2012, the FIA sanctioned a World Endurance Championship which would be organised by the ACO as a continuation of the ILMC. Audi won the first WEC race at Sebring and followed this up with a further three successive wins, including the 2012 24 Hours of Le Mans. Audi scored a fifth victory in the 2012 WEC in Bahrain and were able to win the inaugural WEC Manufacturers' Championship.

2013

As defending champions, Audi once again entered the Audi R18 e-tron quattro chassis into the 2013 WEC and the team won the first five consecutive races, including the 2013 24 Hours of Le Mans.
The victory at Round 5, Circuit of the Americas, was of particular significance as it marked the 100th win for Audi in Le Mans prototypes. Audi secured their second consecutive WEC Manufacturers' Championship at Round 6 after taking second place and half points in the red-flagged Fuji race.

2014

For the 2014 season, Audi entered a redesigned and upgraded R18 e-tron quattro which featured a 2 MJ energy recovery system. As defending champions, Audi would once again face a challenge in LMP1 from Toyota, and additionally from Porsche, who returned to endurance racing after a 16-year absence. The season-opening 6 Hours of Silverstone was a disaster for Audi, who saw both cars retire from the race, marking the first time that an Audi car had failed to score a podium in a World Endurance Championship race.

Results

Formula E

Audi provides factory support to Abt Sportsline in the FIA Formula E Championship. The team competed under the title of Audi Sport Abt Formula E Team in the inaugural 2014–15 Formula E season. On 13 February 2014 the team announced its driver line-up as Daniel Abt and World Endurance Championship driver Lucas di Grassi.

Formula One

Audi has been linked to Formula One in recent years but long resisted, holding that the category was not relevant to road cars. The adoption of hybrid power unit technology into the sport has swayed the company's view, with research into a programme encouraged by former Ferrari team principal Stefano Domenicali.

Marketing

Branding

The Audi emblem is four overlapping rings that represent the four marques of Auto Union. The Audi emblem symbolises the amalgamation of Audi with DKW, Horch and Wanderer: the first ring from the left represents Audi, the second represents DKW, the third Horch, and the fourth and last ring Wanderer.
The design is popularly believed to have been the idea of Klaus von Oertzen, the director of sales at Wanderer: when Berlin was chosen as the host city for the 1936 Summer Olympics, a form echoing the Olympic logo symbolized the newly established Auto Union's desire to succeed. Somewhat ironically, the International Olympic Committee later sued Audi in the International Trademark Court in 1995, where it lost. The original "Audi" script, with the distinctive slanted tails on the "A" and "d", was created for the historic Audi company in 1920 by the famous graphic designer Lucian Bernhard, and was resurrected when Volkswagen revived the brand in 1965. Following the demise of NSU in 1977, less prominence was given to the four rings in preference to the "Audi" script encased within a black (later red) ellipse, which was commonly displayed next to the Volkswagen roundel when the two brands shared a dealer network under the V.A.G banner. The ellipse (known as the Audi Oval) was phased out after 1994, when Audi formed its own independent dealer network, and prominence was given back to the four rings; at the same time Audi Sans (a derivative of Univers) was adopted as the font for all marketing materials and corporate communications, and was also used in the vehicles themselves. As part of Audi's centennial celebration in 2009, the company updated the logo, changing the font to left-aligned Audi Type and altering the shading of the overlapping rings. The revised logo was designed by Rayan Abdullah. Audi developed a Corporate Sound concept, with the Audi Sound Studio designed for producing the Corporate Sound. The Corporate Sound project began with the sound agencies Klangerfinder GmbH & Co KG and s12 GmbH. Audio samples were created in Klangerfinder's sound studio in Stuttgart, becoming part of the Audi Sound Studio collection. Other Audi Sound Studio components include The Brand Music Pool and The Brand Voice. 
Audi also developed a Sound Branding Toolkit including particular instruments, sound themes, rhythms and car sounds, all intended to reflect the Audi sound character. Audi started using a beating-heart sound trademark in 1996. An updated heartbeat sound logo, developed by the agencies Klangerfinder GmbH & Co KG of Stuttgart and s12 GmbH of Munich, was first used in 2010 in an Audi A8 commercial with the slogan The Art of Progress. Slogans Audi's corporate tagline is Vorsprung durch Technik, meaning "Progress through Technology". The German-language tagline is used in many European countries, including the United Kingdom (but not in Italy, where a different tagline is used), and in other markets, such as Latin America, Oceania, Africa and parts of Asia including Japan. Originally, the American tagline was Innovation through technology, but in Canada Vorsprung durch Technik was used. Since 2007, Audi has used the slogan Truth in Engineering in the U.S. However, since the Audi emissions testing scandal came to light in September 2015, this slogan has been lambasted for being discordant with reality. In fact, just hours after disgraced Volkswagen CEO Martin Winterkorn admitted to cheating on emissions data, an advertisement during the 2015 Primetime Emmy Awards promoted Audi's latest advances in low-emissions technology with Kermit the Frog stating, "It's not that easy being green." Vorsprung durch Technik was first used in English-language advertising after Sir John Hegarty of the Bartle Bogle Hegarty advertising agency visited the Audi factory in 1982. In the original British television commercials, the phrase was voiced by Geoffrey Palmer. After its repeated use in advertising campaigns, the phrase found its way into popular culture, including the British comedy Only Fools and Horses, the U2 song "Zooropa" and the Blur song "Parklife". 
Similar-sounding phrases have also been used, including as the punchline for a joke in the movie Lock, Stock and Two Smoking Barrels and in the British TV series Peep Show. Typography Audi Sans (based on Univers Extended) was originally created in 1997 by Ole Schäfer for MetaDesign. MetaDesign was later commissioned for a new corporate typeface called Audi Type, designed by Paul van der Laan and Pieter van Rosmalen of Bold Monday. The font began to appear in Audi's 2009 products and marketing materials. Sponsorships Audi partners with many kinds of sports. In football, long partnerships exist between Audi and domestic clubs including Bayern Munich, Hamburger SV, 1. FC Nürnberg, Hertha BSC, and Borussia Mönchengladbach, and international clubs including Chelsea, Real Madrid, FC Barcelona, A.C. Milan, AFC Ajax and Persepolis. Audi also sponsors winter sports: the Audi FIS Alpine Ski World Cup is named after the company. Additionally, Audi supports the German Ski Association (DSV) as well as the alpine skiing national teams of Switzerland, Sweden, Finland, France, Liechtenstein, Italy, Austria and the U.S. For almost two decades, Audi has fostered golf, for example with the Audi quattro Cup and the HypoVereinsbank Ladies German Open presented by Audi. In sailing, Audi is engaged in the MedCup regatta, supports the team Luna Rossa during the Louis Vuitton Pacific Series, and is the primary sponsor of the Melges 20 sailboat. Further, Audi sponsors the regional teams ERC Ingolstadt (ice hockey) and FC Ingolstadt 04 (football). In 2009, the year of Audi's 100th anniversary, the company organized the Audi Cup for the first time. Audi also sponsors the New York Yankees. In October 2010 the company agreed a three-year sponsorship deal with Everton. Audi also sponsors the England polo team and holds the Audi Polo Awards. 
Marvel Cinematic Universe Since the start of the Marvel Cinematic Universe, Audi has signed a deal to sponsor, promote and provide vehicles for several films. So far these have been Iron Man, Iron Man 2, Iron Man 3, Avengers: Age of Ultron, Captain America: Civil War, Spider-Man: Homecoming, Avengers: Endgame and Spider-Man: Far From Home. The R8 supercar became the personal vehicle of Tony Stark (played by Robert Downey Jr.) in six of these films. The e-tron vehicles were promoted in Endgame and Far From Home. Several commercials were co-produced by Marvel and Audi to promote several new concepts and some of the latest vehicles such as the A8, SQ7 and the e-tron fleet. Multitronic campaign In 2001, Audi promoted the new multitronic continuously variable transmission with television commercials throughout Europe, featuring an impersonator of musician and actor Elvis Presley. A prototypical dashboard figure, later named "Wackel-Elvis" ("Wobble Elvis" or "Wobbly Elvis"), appeared in the commercials to demonstrate the smooth ride in an Audi equipped with the multitronic transmission. The dashboard figure was originally intended for use in the commercials only, but after they aired, demand for the figure grew among fans, and it was mass-produced in China and marketed by Audi in their factory outlet store. Audi TDI As part of Audi's attempt to promote its diesel technology in 2009, the company began the Audi Mileage Marathon. The driving tour featured a fleet of 23 Audi TDI vehicles from 4 models (Audi Q7 3.0 TDI, Audi Q5 3.0 TDI, Audi A4 3.0 TDI, Audi A3 Sportback 2.0 TDI with S tronic transmission) travelling across the American continent from New York to Los Angeles, passing major cities like Chicago, Dallas and Las Vegas during the 13 daily stages, as well as natural wonders including the Rocky Mountains, Death Valley and the Grand Canyon. Audi e-tron The next phase of technology Audi is developing is the e-tron electric drive powertrain system. 
Audi has shown several concept cars, each with different levels of size and performance. The original e-tron concept, shown at the 2009 Frankfurt motor show, is based on the platform of the R8 and has been scheduled for limited production. Power is provided by electric motors at all four wheels. The second concept was shown at the 2010 Detroit Motor Show. Power is provided by two electric motors at the rear axle. This concept is also considered to be the direction for a future mid-engined gas-powered 2-seat performance coupe. The Audi A1 e-tron concept, based on the Audi A1 production model, is a hybrid vehicle with a range-extending Wankel rotary engine to provide power after the initial charge of the battery is depleted. It is the only concept of the three to have range-extending capability. The car is powered through the front wheels, always using electric power. It was displayed at the Auto Expo 2012 in New Delhi, India, from 5 January. Powered by a 1.4-litre engine, it can cover up to 54 km on a single charge. The e-tron was also shown in the 2013 blockbuster film Iron Man 3 and was driven by Tony Stark (Iron Man). In video games Audi has supported the European version of PlayStation Home, the PlayStation 3's online community-based service, by releasing a dedicated Home space. Audi is the first carmaker to develop such a space for Home. On 17 December 2009, Audi released two spaces: the Audi Home Terminal and the Audi Vertical Run. The Audi Home Terminal features an Audi TV channel delivering video content, an Internet browser feature, and a view of a city. The Audi Vertical Run is where users can access the mini-game Vertical Run, a futuristic mini-game featuring Audi's e-tron concept. Players collect energy and race for the highest possible speeds, and the fastest players earn a place in the Audi apartments located in a large tower in the centre of the Audi Space. 
In both the Home Terminal and Vertical Run spaces, there are teleports where users can teleport back and forth between the two spaces. Audi had stated that additional content would be added in 2010. On 31 March 2015, Sony shut down the PlayStation Home service, rendering all content for it inaccessible. See also DKW Horch Wanderer (company) Notes References External links Companies based in Baden-Württemberg Car manufacturers of Germany Companies based in Bavaria Companies based in Ingolstadt Companies formerly listed on the Frankfurt Stock Exchange Vehicle manufacturing companies established in 1909 Vehicle manufacturing companies disestablished in 1939 Vehicle manufacturing companies established in 1965 Re-established companies German brands Luxury motor vehicle manufacturers Companies based in Saxony Sports car manufacturers Volkswagen Group Car brands German companies established in 1909
849
https://en.wikipedia.org/wiki/Aircraft
Aircraft
An aircraft is a vehicle or machine that is able to fly by gaining support from the air. It counters the force of gravity by using either static lift or the dynamic lift of an airfoil, or in a few cases the downward thrust from jet engines. Common examples of aircraft include airplanes, helicopters, airships (including blimps), gliders, paramotors, and hot air balloons. The human activity that surrounds aircraft is called aviation. The science of aviation, including designing and building aircraft, is called aeronautics. Crewed aircraft are flown by an onboard pilot, but unmanned aerial vehicles may be remotely controlled or self-controlled by onboard computers. Aircraft may be classified by different criteria, such as lift type, aircraft propulsion, usage and others. History Flying model craft and stories of manned flight go back many centuries; however, the first manned ascent, and safe descent, in modern times took place in hot-air balloons developed in the 18th century. Each of the two World Wars led to great technical advances. Consequently, the history of aircraft can be divided into five eras: Pioneers of flight, from the earliest experiments to 1914. First World War, 1914 to 1918. Aviation between the World Wars, 1918 to 1939. Second World War, 1939 to 1945. Postwar era, also called the Jet Age, 1945 to the present day. Methods of lift Lighter than air – aerostats Aerostats use buoyancy to float in the air in much the same way that ships float on the water. They are characterized by one or more large cells or canopies, filled with a relatively low-density gas such as helium, hydrogen, or hot air, which is less dense than the surrounding air. When the weight of this gas is added to the weight of the aircraft structure, it adds up to the same weight as the air that the craft displaces. 
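The buoyancy balance described above follows Archimedes' principle: the net lift available to an aerostat is the weight of the displaced air minus the weight of the lifting gas in the envelope. A minimal sketch (the gas densities and envelope volume are illustrative assumptions, not figures from the text):

```python
# Archimedes' principle for an aerostat: net lift is the weight of the
# displaced air minus the weight of the lifting gas filling the envelope.
# Densities near sea level at 0 degrees C (assumed values for illustration).
RHO_AIR = 1.29     # kg/m^3
RHO_HELIUM = 0.18  # kg/m^3
G = 9.81           # m/s^2

def net_lift_newtons(volume_m3: float, gas_density: float) -> float:
    """Upward force available to carry structure and payload."""
    return (RHO_AIR - gas_density) * volume_m3 * G

# With these assumed densities, a 1000 m^3 helium envelope can support
# about 1110 kg of structure and payload before it stops being lighter
# than air.
payload_kg = net_lift_newtons(1000, RHO_HELIUM) / G
```

Hot-air balloons work the same way, except the density difference comes from heating the air inside the envelope rather than from using a different gas.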
Small hot-air balloons, called sky lanterns, were first invented in ancient China prior to the 3rd century BC and used primarily in cultural celebrations. They were only the second type of aircraft to fly, the first being kites, which were invented in ancient China over two thousand years ago (see Han Dynasty). A balloon was originally any aerostat, while the term airship was used for large, powered aircraft designs, usually fixed-wing, though none had yet been built. In 1919, Frederick Handley Page was reported as referring to "ships of the air," with smaller passenger types as "Air yachts." In the 1930s, large intercontinental flying boats were also sometimes referred to as "ships of the air" or "flying-ships". The advent of powered balloons, called dirigible balloons, and later of rigid hulls allowing a great increase in size, began to change the way these words were used. Huge powered aerostats, characterized by a rigid outer framework and separate aerodynamic skin surrounding the gas bags, were produced, the Zeppelins being the largest and most famous. There were still no fixed-wing aircraft or non-rigid balloons large enough to be called airships, so "airship" came to be synonymous with these aircraft. Then several accidents, such as the Hindenburg disaster in 1937, led to the demise of these airships. Nowadays a "balloon" is an unpowered aerostat and an "airship" is a powered one. A powered, steerable aerostat is called a dirigible. Sometimes this term is applied only to non-rigid balloons, and sometimes dirigible balloon is regarded as the definition of an airship (which may then be rigid or non-rigid). Non-rigid dirigibles are characterized by a moderately aerodynamic gasbag with stabilizing fins at the back. These soon became known as blimps. During World War II, this shape was widely adopted for tethered balloons; in windy weather, this both reduces the strain on the tether and stabilizes the balloon. 
The nickname blimp was adopted along with the shape. In modern times, any small dirigible or airship is called a blimp, though a blimp may be unpowered as well as powered. Heavier-than-air – aerodynes Heavier-than-air aircraft, such as airplanes, must find some way to push air or gas downwards so that a reaction occurs (by Newton's laws of motion) to push the aircraft upwards. This dynamic movement through the air is the origin of the term. There are two ways to produce dynamic upthrust — aerodynamic lift, and powered lift in the form of engine thrust. Aerodynamic lift involving wings is the most common, with fixed-wing aircraft being kept in the air by the forward movement of wings, and rotorcraft by spinning wing-shaped rotors sometimes called rotary wings. A wing is a flat, horizontal surface, usually shaped in cross-section as an aerofoil. To fly, air must flow over the wing and generate lift. A flexible wing is a wing made of fabric or thin sheet material, often stretched over a rigid frame. A kite is tethered to the ground and relies on the speed of the wind over its wings, which may be flexible or rigid, fixed, or rotary. With powered lift, the aircraft directs its engine thrust vertically downward. V/STOL aircraft, such as the Harrier Jump Jet and Lockheed Martin F-35B take off and land vertically using powered lift and transfer to aerodynamic lift in steady flight. A pure rocket is not usually regarded as an aerodyne because it does not depend on the air for its lift (and can even fly into space); however, many aerodynamic lift vehicles have been powered or assisted by rocket motors. Rocket-powered missiles that obtain aerodynamic lift at very high speed due to airflow over their bodies are a marginal case. Fixed-wing The forerunner of the fixed-wing aircraft is the kite. Whereas a fixed-wing aircraft relies on its forward speed to create airflow over the wings, a kite is tethered to the ground and relies on the wind blowing over its wings to provide lift. 
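Whether the airflow comes from forward motion (fixed wings) or from wind over a tether (kites), the aerodynamic lift described above is commonly estimated with the standard lift equation, L = ½ρv²SC_L. A small sketch with assumed, illustrative numbers (none of them are from the text):

```python
# Lift equation: L = 0.5 * rho * v^2 * S * CL, where
# rho: air density (kg/m^3), v: airspeed (m/s),
# S: wing area (m^2), CL: dimensionless lift coefficient.
def lift_newtons(rho: float, v: float, area: float, cl: float) -> float:
    return 0.5 * rho * v ** 2 * area * cl

# Illustrative light-aircraft numbers (assumed): sea-level density,
# 50 m/s airspeed, 16 m^2 of wing, lift coefficient 0.4.
L = lift_newtons(1.225, 50.0, 16.0, 0.4)

# In level flight this force must equal weight, so it supports a mass
# of L / g, just under 1000 kg with these numbers.
mass_supported = L / 9.81
```

The v² term is why a fixed-wing aircraft needs forward speed, and why a kite flies only when enough wind blows over its wings.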
Kites were the first kind of aircraft to fly and were invented in China around 500 BC. Much aerodynamic research was done with kites before test aircraft, wind tunnels, and computer modelling programs became available. The first heavier-than-air craft capable of controlled free-flight were gliders. A glider designed by George Cayley carried out the first true manned, controlled flight in 1853. The practical, powered, fixed-wing aircraft (the airplane or aeroplane) was invented by Wilbur and Orville Wright. Besides the method of propulsion, fixed-wing aircraft are in general characterized by their wing configuration. The most important wing characteristics are: Number of wings — monoplane, biplane, etc. Wing support — Braced or cantilever, rigid, or flexible. Wing planform — including aspect ratio, angle of sweep, and any variations along the span (including the important class of delta wings). Location of the horizontal stabilizer, if any. Dihedral angle — positive, zero, or negative (anhedral). A variable geometry aircraft can change its wing configuration during flight. A flying wing has no fuselage, though it may have small blisters or pods. The opposite of this is a lifting body, which has no wings, though it may have small stabilizing and control surfaces. Wing-in-ground-effect vehicles are generally not considered aircraft. They "fly" efficiently close to the surface of the ground or water, like conventional aircraft during takeoff. An example is the Russian ekranoplan nicknamed the "Caspian Sea Monster". Man-powered aircraft also rely on ground effect to remain airborne with minimal pilot power, but this is only because they are so underpowered—in fact, the airframe is capable of flying higher. Rotorcraft Rotorcraft, or rotary-wing aircraft, use a spinning rotor with aerofoil section blades (a rotary wing) to provide lift. Types include helicopters, autogyros, and various hybrids such as gyrodynes and compound rotorcraft. 
Helicopters have a rotor turned by an engine-driven shaft. The rotor pushes air downward to create lift. By tilting the rotor forward, the downward flow is tilted backward, producing thrust for forward flight. Some helicopters have more than one rotor and a few have rotors turned by gas jets at the tips. Autogyros have unpowered rotors, with a separate power plant to provide thrust. The rotor is tilted backward. As the autogyro moves forward, air blows upward across the rotor, making it spin. This spinning increases the speed of airflow over the rotor, to provide lift. Rotor kites are unpowered autogyros, which are towed to give them forward speed or tethered to a static anchor in high-wind for kited flight. Cyclogyros rotate their wings about a horizontal axis. Compound rotorcraft have wings that provide some or all of the lift in forward flight. They are nowadays classified as powered lift types and not as rotorcraft. Tiltrotor aircraft (such as the Bell Boeing V-22 Osprey), tiltwing, tail-sitter, and coleopter aircraft have their rotors/propellers horizontal for vertical flight and vertical for forward flight. Other methods of lift A lifting body is an aircraft body shaped to produce lift. If there are any wings, they are too small to provide significant lift and are used only for stability and control. Lifting bodies are not efficient: they suffer from high drag, and must also travel at high speed to generate enough lift to fly. Many of the research prototypes, such as the Martin Marietta X-24, which led up to the Space Shuttle, were lifting bodies, though the Space Shuttle is not, and some supersonic missiles obtain lift from the airflow over a tubular body. Powered lift types rely on engine-derived lift for vertical takeoff and landing (VTOL). Most types transition to fixed-wing lift for horizontal flight. Classes of powered lift types include VTOL jet aircraft (such as the Harrier Jump Jet) and tiltrotors, such as the Bell Boeing V-22 Osprey, among others. 
A few experimental designs rely entirely on engine thrust to provide lift throughout the whole flight, including personal fan-lift hover platforms and jetpacks. VTOL research designs include the Rolls-Royce Thrust Measuring Rig. The Flettner airplane uses a rotating cylinder in place of a fixed wing, obtaining lift from the Magnus effect. The ornithopter obtains thrust by flapping its wings. Size and speed extremes Size The smallest aircraft are toys/recreational items, and nano aircraft. The largest aircraft by dimensions and volume (as of 2016) is the British Airlander 10, a hybrid blimp with helicopter and fixed-wing features, reportedly capable of an airborne endurance of two weeks. The largest aircraft by weight, and the largest regular fixed-wing aircraft ever built, is the Antonov An-225 Mriya, a Ukrainian-built six-engine transport of the 1980s. It holds the world payload record, has flown outsize loads commercially, and is also the heaviest aircraft built to date. The largest military airplanes are the Ukrainian Antonov An-124 Ruslan (the world's second-largest airplane, also used as a civilian transport) and the American Lockheed C-5 Galaxy transport. The 8-engine, piston/propeller Hughes H-4 Hercules "Spruce Goose", an American World War II wooden flying boat transport with a greater wingspan (94 m/260 ft) than any current aircraft and a tail height equal to the tallest (Airbus A380-800 at 24.1 m/78 ft), flew only one short hop in the late 1940s and never flew out of ground effect. 
The largest civilian airplanes, apart from the above-noted An-225 and An-124, are the Airbus Beluga cargo transport derivative of the Airbus A300 jet airliner, the Boeing Dreamlifter cargo transport derivative of the Boeing 747 jet airliner (the 747-200B was, at its creation in the 1960s, the heaviest aircraft ever built), and the double-decker Airbus A380 "super-jumbo" jet airliner (the world's largest passenger airliner). Speeds The fastest recorded flight of an air-breathing powered aircraft was by the NASA X-43A Pegasus, a scramjet-powered, hypersonic, lifting-body experimental research aircraft, at Mach 9.6. The X-43A set that new mark, breaking its own world record of Mach 6.3 set in March 2004, on its third and final flight on 16 November 2004. Prior to the X-43A, the fastest recorded powered airplane flight (and still the record for the fastest manned, powered airplane and fastest manned non-spacecraft aircraft) was by the North American X-15A-2 rocket-powered airplane at Mach 6.72 on 3 October 1967. The fastest known production aircraft (other than rockets and missiles) currently or formerly operational (as of 2016) are: The fastest fixed-wing aircraft, and fastest glider, is the Space Shuttle, a rocket-glider hybrid, which has re-entered the atmosphere as a fixed-wing glider at more than Mach 25. The fastest military airplane ever built is the Lockheed SR-71 Blackbird, a U.S. reconnaissance jet fixed-wing aircraft, known to fly beyond Mach 3.3. On 28 July 1976, an SR-71 set the absolute speed and altitude records for an operational aircraft. At its retirement in January 1990, it was the fastest air-breathing aircraft and fastest jet aircraft in the world, a record still standing. 
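The Mach figures quoted in this section depend on the local speed of sound, which falls with temperature at altitude; a short conversion sketch using standard-atmosphere values (the specific numbers are assumptions for illustration, not from the text):

```python
# Convert a Mach number to km/h. The speed of sound depends on air
# temperature: roughly 340 m/s at sea level but only about 295 m/s in
# the cold stratosphere, where high-speed record flights take place.
SOUND_STRATOSPHERE_MS = 295.0  # m/s, standard atmosphere above ~11 km

def mach_to_kmh(mach: float,
                speed_of_sound_ms: float = SOUND_STRATOSPHERE_MS) -> float:
    return mach * speed_of_sound_ms * 3.6  # m/s -> km/h

# With the assumed stratospheric speed of sound, the SR-71's Mach 3.3
# works out to roughly 3500 km/h.
sr71_kmh = mach_to_kmh(3.3)
```

This is why a given Mach number corresponds to a lower ground speed at altitude than the same Mach number would at sea level.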
Note: Some sources refer to the above-mentioned X-15 as the "fastest military airplane" because it was partly a project of the U.S. Navy and Air Force; however, the X-15 was not used in actual military operations. The fastest current military aircraft are the Soviet/Russian Mikoyan-Gurevich MiG-25, capable of Mach 3.2 at the expense of engine damage, or Mach 2.83 normally, and the Russian Mikoyan MiG-31E (also capable of Mach 2.83 normally). Both are fighter-interceptor jet airplanes, in active operations as of 2016. The fastest civilian airplane and fastest passenger airliner ever built is the briefly operated Tupolev Tu-144 supersonic jet airliner (Mach 2.35; 1,600 mph; 2,587 km/h), which was believed to cruise at about Mach 2.2. The Tu-144 (officially operated from 1968 to 1978, ending after two crashes of the small fleet) was outlived by its rival, the Concorde (Mach 2.23), a French/British supersonic airliner known to cruise at Mach 2.02 (1,450 mph; 2,333 km/h at cruising altitude), operating from 1976 until the small Concorde fleet was grounded permanently in 2003, following the crash of one aircraft in 2000. The fastest civilian airplane currently flying is the Cessna Citation X, an American business jet capable of Mach 0.935. Its rival, the American Gulfstream G650 business jet, can reach Mach 0.925. The fastest airliner currently flying is the Boeing 747, quoted as being capable of cruising at over Mach 0.885. Previously, the fastest were the troubled, short-lived Soviet Tupolev Tu-144 SST (Mach 2.35) and the French/British Concorde, with a maximum speed of Mach 2.23 and a normal cruising speed of Mach 2. Before them, the fastest was the Convair 990 Coronado jet airliner of the 1960s. Propulsion Unpowered aircraft Gliders are heavier-than-air aircraft that do not employ propulsion once airborne. 
Take-off may be by launching forward and downward from a high location, or by pulling into the air on a tow-line, either by a ground-based winch or vehicle, or by a powered "tug" aircraft. For a glider to maintain its forward air speed and lift, it must descend in relation to the air (but not necessarily in relation to the ground). Many gliders can "soar", i.e., gain height from updrafts such as thermal currents. The first practical, controllable example was designed and built by the British scientist and pioneer George Cayley, whom many recognise as the first aeronautical engineer. Common examples of gliders are sailplanes, hang gliders and paragliders. Balloons drift with the wind, though normally the pilot can control the altitude, either by heating the air or by releasing ballast, giving some directional control (since the wind direction changes with altitude). A wing-shaped hybrid balloon can glide directionally when rising or falling; but a spherically shaped balloon does not have such directional control. Kites are aircraft that are tethered to the ground or other object (fixed or mobile) that maintains tension in the tether or kite line; they rely on virtual or real wind blowing over and under them to generate lift and drag. Kytoons are balloon-kite hybrids that are shaped and tethered to obtain kiting deflections, and can be lighter-than-air, neutrally buoyant, or heavier-than-air. Powered aircraft Powered aircraft have one or more onboard sources of mechanical power, typically aircraft engines although rubber and manpower have also been used. Most aircraft engines are either lightweight reciprocating engines or gas turbines. Engine fuel is stored in tanks, usually in the wings but larger aircraft also have additional fuel tanks in the fuselage. Propeller aircraft Propeller aircraft use one or more propellers (airscrews) to create thrust in a forward direction. 
The propeller is usually mounted in front of the power source in tractor configuration but can be mounted behind in pusher configuration. Variations of propeller layout include contra-rotating propellers and ducted fans. Many kinds of power plant have been used to drive propellers. Early airships used man power or steam engines. The more practical internal combustion piston engine was used for virtually all fixed-wing aircraft until World War II and is still used in many smaller aircraft. Some types use turbine engines to drive a propeller in the form of a turboprop or propfan. Human-powered flight has been achieved, but has not become a practical means of transport. Unmanned aircraft and models have also used power sources such as electric motors and rubber bands. Jet aircraft Jet aircraft use airbreathing jet engines, which take in air, burn fuel with it in a combustion chamber, and accelerate the exhaust rearwards to provide thrust. Different jet engine configurations include the turbojet and turbofan, sometimes with the addition of an afterburner. Those with no rotating turbomachinery include the pulsejet and ramjet. These mechanically simple engines produce no thrust when stationary, so the aircraft must be launched to flying speed using a catapult, like the V-1 flying bomb, or a rocket, for example. Other engine types include the motorjet and the dual-cycle Pratt & Whitney J58. Compared to engines using propellers, jet engines can provide much higher thrust, higher speeds and, above a certain speed, greater efficiency. They are also much more fuel-efficient than rockets. As a consequence nearly all large, high-speed or high-altitude aircraft use jet engines. Rotorcraft Some rotorcraft, such as helicopters, have a powered rotary wing or rotor, where the rotor disc can be angled slightly forward so that a proportion of its lift is directed forwards. The rotor may, like a propeller, be powered by a variety of methods such as a piston engine or turbine. 
Experiments have also used jet nozzles at the rotor blade tips. Other types of powered aircraft Rocket-powered aircraft have occasionally been experimented with, and the Messerschmitt Me 163 Komet fighter even saw action in the Second World War. Since then, they have been restricted to research aircraft, such as the North American X-15, which traveled up into space where air-breathing engines cannot work (rockets carry their own oxidant). Rockets have more often been used as a supplement to the main power plant, typically for the rocket-assisted take off of heavily loaded aircraft, but also to provide high-speed dash capability in some hybrid designs such as the Saunders-Roe SR.53. The ornithopter obtains thrust by flapping its wings. It has found practical use in a model hawk used to freeze prey animals into stillness so that they can be captured, and in toy birds. Design and construction Aircraft are designed according to many factors such as customer and manufacturer demand, safety protocols and physical and economic constraints. For many types of aircraft the design process is regulated by national airworthiness authorities. The key parts of an aircraft are generally divided into three categories: The structure comprises the main load-bearing elements and associated equipment. The propulsion system (if it is powered) comprises the power source and associated equipment, as described above. The avionics comprise the control, navigation and communication systems, usually electrical in nature. Structure The approach to structural design varies widely between different types of aircraft. Some, such as paragliders, comprise only flexible materials that act in tension and rely on aerodynamic pressure to hold their shape. A balloon similarly relies on internal gas pressure, but may have a rigid basket or gondola slung below it to carry its payload. 
Early aircraft, including airships, often employed flexible doped aircraft fabric covering to give a reasonably smooth aeroshell stretched over a rigid frame. Later aircraft employed semi-monocoque techniques, where the skin of the aircraft is stiff enough to share much of the flight loads. In a true monocoque design there is no internal structure left. With the recent emphasis on sustainability, hemp-based materials have attracted some attention: with a far smaller carbon footprint and a high reported strength-to-weight ratio, hemp could become more common in aircraft manufacturing in the future. The key structural parts of an aircraft depend on what type it is. Aerostats Lighter-than-air types are characterised by one or more gasbags, typically with a supporting structure of flexible cables or a rigid framework called its hull. Other elements such as engines or a gondola may also be attached to the supporting structure. Aerodynes Heavier-than-air types are characterised by one or more wings and a central fuselage. The fuselage typically also carries a tail or empennage for stability and control, and an undercarriage for takeoff and landing. Engines may be located on the fuselage or wings. On a fixed-wing aircraft the wings are rigidly attached to the fuselage, while on a rotorcraft the wings are attached to a rotating vertical shaft. Smaller designs sometimes use flexible materials for part or all of the structure, held in place either by a rigid frame or by air pressure. The fixed parts of the structure comprise the airframe. Avionics The avionics comprise the aircraft flight control systems and related equipment, including the cockpit instrumentation, navigation, radar, monitoring, and communications systems. Flight characteristics Flight envelope The flight envelope of an aircraft refers to its approved design capabilities in terms of airspeed, load factor and altitude. The term can also refer to other assessments of aircraft performance such as maneuverability. 
When an aircraft is abused, for instance by diving it at too high a speed, it is said to be flown outside the envelope, something considered foolhardy since it has been taken beyond the design limits which have been established by the manufacturer. Going beyond the envelope may have a known outcome such as flutter or entry to a non-recoverable spin (possible reasons for the boundary). Range The range is the distance an aircraft can fly between takeoff and landing, as limited by the time it can remain airborne. For a powered aircraft the time limit is determined by the fuel load and rate of consumption. For an unpowered aircraft, the maximum flight time is limited by factors such as weather conditions and pilot endurance. Many aircraft types are restricted to daylight hours, while balloons are limited by their supply of lifting gas. The range can be seen as the average ground speed multiplied by the maximum time in the air. The Airbus A350-900ULR is currently the longest-range airliner. Flight dynamics Flight dynamics is the science of air vehicle orientation and control in three dimensions. The three critical flight dynamics parameters are the angles of rotation around three axes which pass through the vehicle's center of gravity, known as pitch, roll, and yaw. Roll is a rotation about the longitudinal axis (equivalent to the rolling or heeling of a ship) giving an up-down movement of the wing tips measured by the roll or bank angle. Pitch is a rotation about the sideways horizontal axis giving an up-down movement of the aircraft nose measured by the angle of attack. Yaw is a rotation about the vertical axis giving a side-to-side movement of the nose known as sideslip. Flight dynamics is concerned with the stability and control of an aircraft's rotation about each of these axes. Stability An aircraft that is unstable tends to diverge from its intended flight path and so is difficult to fly. 
A very stable aircraft tends to stay on its flight path and is difficult to maneuver. Therefore, it is important for any design to achieve the desired degree of stability. Since the widespread use of digital computers, it is increasingly common for designs to be inherently unstable and rely on computerised control systems to provide artificial stability. A fixed wing is typically unstable in pitch, roll, and yaw. Pitch and yaw stabilities of conventional fixed wing designs require horizontal and vertical stabilisers, which act similarly to the feathers on an arrow. These stabilising surfaces bring the aerodynamic forces into equilibrium and stabilise the flight dynamics of pitch and yaw. They are usually mounted on the tail section (empennage), although in the canard layout the main wing, mounted aft of the foreplane, serves as the pitch stabiliser. Tandem wing and tailless aircraft rely on the same general rule to achieve stability, the aft surface being the stabilising one. A rotary wing is typically unstable in yaw, requiring a vertical stabiliser. A balloon is typically very stable in pitch and roll due to the way the payload is slung underneath the center of lift. Control Flight control surfaces enable the pilot to control an aircraft's flight attitude and are usually part of the wing or mounted on, or integral with, the associated stabilising surface. Their development was a critical advance in the history of aircraft, which had until that point been uncontrollable in flight. Aerospace engineers develop control systems for a vehicle's orientation (attitude) about its center of mass. The control systems include actuators, which exert forces in various directions, and generate rotational forces or moments about the aerodynamic center of the aircraft, and thus rotate the aircraft in pitch, roll, or yaw. 
For example, a pitching moment is a vertical force applied at a distance forward or aft from the aerodynamic center of the aircraft, causing the aircraft to pitch up or down. Control systems are also sometimes used to increase or decrease drag, for example to slow the aircraft to a safe speed for landing. The two main aerodynamic forces acting on any aircraft are lift supporting it in the air and drag opposing its motion. Control surfaces or other techniques may also be used to affect these forces directly, without inducing any rotation. Impacts of aircraft use Aircraft permit long distance, high speed travel and may be a more fuel efficient mode of transportation in some circumstances. Aircraft have environmental and climate impacts beyond fuel efficiency considerations, however. They are also relatively noisy compared to other forms of travel, and high-altitude aircraft generate contrails, which experimental evidence suggests may alter weather patterns. Uses for aircraft Aircraft are produced in several different types optimized for various uses: military aircraft, which include not just combat types but many types of supporting aircraft; civil aircraft, which include all non-military types; and experimental and model aircraft. Military A military aircraft is any aircraft that is operated by a legal or insurrectionary armed service of any type. Military aircraft can be either combat or non-combat: Combat aircraft are aircraft designed to destroy enemy equipment using their own armament. Combat aircraft divide broadly into fighters and bombers, with several in-between types, such as fighter-bombers and attack aircraft, including attack helicopters. Non-combat aircraft are not designed for combat as their primary function, but may carry weapons for self-defense. Non-combat roles include search and rescue, reconnaissance, observation, transport, training, and aerial refueling. These aircraft are often variants of civil aircraft. 
Most military aircraft are powered heavier-than-air types. Other types, such as gliders and balloons, have also been used as military aircraft; for example, balloons were used for observation during the American Civil War and World War I, and military gliders were used during World War II to land troops. Civil Civil aircraft divide into commercial and general types, although there is some overlap. Commercial aircraft include types designed for scheduled and charter airline flights, carrying passengers, mail and other cargo. The larger passenger-carrying types are the airliners, the largest of which are wide-body aircraft. Some of the smaller types are also used in general aviation, and some of the larger types are used as VIP aircraft. General aviation is a catch-all covering other kinds of private (where the pilot is not paid for time or expenses) and commercial use, and involving a wide range of aircraft types such as business jets (bizjets), trainers, homebuilts, gliders, warbirds and hot air balloons, to name a few. The vast majority of aircraft today are general aviation types. Experimental An experimental aircraft is one that has not been fully proven in flight, or that carries a Special Airworthiness Certificate, called an Experimental Certificate in United States parlance. This often implies that the aircraft is testing new aerospace technologies, though the term also refers to amateur-built and kit-built aircraft, many of which are based on proven designs. Model A model aircraft is a small unmanned type made to fly for fun, for static display, for aerodynamic research or for other purposes. A scale model is a replica of some larger design. 
See also Lists Early flying machines Flight altitude record List of aircraft List of civil aircraft List of fighter aircraft List of individual aircraft List of large aircraft List of aviation, aerospace and aeronautical terms Topics Aircraft hijacking Aircraft spotting Air traffic control Airport Flying car Personal air vehicle Powered parachute Spacecraft Spaceplane References External links History The Evolution of Modern Aircraft (NASA) Virtual Museum Smithsonian Air and Space Museum — Online collection with a particular focus on history of aircraft and spacecraft Amazing Early Flying Machines slideshow by Life magazine Information Airliners.net Aviation Dictionary Free aviation terms, phrases and jargons New Scientist's Aviation page
851
https://en.wikipedia.org/wiki/Alfred%20Nobel
Alfred Nobel
Alfred Bernhard Nobel (21 October 1833 – 10 December 1896) was a Swedish chemist, engineer, inventor, businessman, and philanthropist. He is best known for having bequeathed his fortune to establish the Nobel Prize, though he also made several important contributions to science, holding 355 patents in his lifetime. Nobel's most famous invention was dynamite, a safer and easier means of harnessing the explosive power of nitroglycerin; it was patented in 1867 and was soon used worldwide for mining and infrastructure development. Nobel displayed an early aptitude for science and learning, particularly in chemistry and languages; he became fluent in six languages and filed his first patent at age 24. He embarked on many business ventures with his family, most notably owning Bofors, an iron and steel producer that he developed into a major manufacturer of cannons and other armaments. After reading an erroneous obituary condemning him as a war profiteer, Nobel was inspired to bequeath his fortune to the Nobel Prize institution, which would annually recognize those who "conferred the greatest benefit to humankind". The synthetic element nobelium was named after him, and his name and legacy also survive in companies such as Dynamit Nobel and AkzoNobel, which descend from mergers with companies he founded. Nobel was elected a member of the Royal Swedish Academy of Sciences, which, pursuant to his will, would be responsible for choosing the Nobel laureates in physics and in chemistry. Personal life Early life and education Alfred Nobel was born in Stockholm, United Kingdoms of Sweden and Norway on 21 October 1833. He was the third son of Immanuel Nobel (1801–1872), an inventor and engineer, and Karolina Andriette Nobel (née Ahlsell 1805–1889). The couple married in 1827 and had eight children. The family was impoverished, and only Alfred and his three brothers survived beyond childhood. 
Through his father, Alfred Nobel was a descendant of the Swedish scientist Olaus Rudbeck (1630–1702), and the boy, in his turn, was interested in engineering, particularly explosives, learning the basic principles from his father at a young age. Alfred Nobel's interest in technology was inherited from his father, an alumnus of the Royal Institute of Technology in Stockholm. Following various business failures, Nobel's father moved to Saint Petersburg, Russia and grew successful there as a manufacturer of machine tools and explosives. He invented the veneer lathe (which made possible the production of modern plywood) and started work on the torpedo. In 1842, the family joined him in the city. Now prosperous, his parents were able to send Nobel to private tutors and the boy excelled in his studies, particularly in chemistry and languages, achieving fluency in English, French, German and Russian. For 18 months, from 1841 to 1842, Nobel went to the only school he ever attended as a child, in Stockholm. Nobel gained proficiency in Swedish, French, Russian, English, German, and Italian. He also developed sufficient literary skill to write poetry in English. His Nemesis is a prose tragedy in four acts about Beatrice Cenci. It was printed while he was dying, but the entire stock was destroyed immediately after his death except for three copies, as it was regarded as scandalous and blasphemous. It was published in Sweden in 2003 and has been translated into Slovenian and French. Religion Nobel was Lutheran and regularly attended the Church of Sweden Abroad during his Paris years, led by pastor Nathan Söderblom, who received the Nobel Peace Prize in 1930. He became an agnostic in youth and was an atheist later in life, though he still donated generously to the Church. Health and relationships Nobel travelled for much of his business life, maintaining companies in Europe and America while keeping a home in Paris from 1873 to 1891. 
He remained a solitary character, given to periods of depression. He remained unmarried, although his biographers note that he had at least three loves, the first in Russia with a girl named Alexandra who rejected his proposal. In 1876, Austro-Bohemian Countess Bertha Kinsky became his secretary, but she left him after a brief stay to marry her previous lover, Baron Arthur Gundaccar von Suttner. Her contact with Nobel was brief, yet she corresponded with him until his death in 1896, and probably influenced his decision to include a peace prize in his will. She was awarded the 1905 Nobel Peace Prize "for her sincere peace activities". Nobel's longest-lasting relationship was with Sofija Hess from Celje, whom he met in 1876. The liaison lasted for 18 years. Residences From 1865 to 1873, Alfred Nobel had his home in Krümmel, Hamburg; he afterwards moved to a house on the Avenue Malakoff in Paris. In 1894, when he acquired Bofors-Gullspång, the Björkborn Manor was included in the purchase, and he stayed at the manor house in Sweden during the summers. The manor house became his last residence in Sweden and has functioned as a museum since his death. Alfred Nobel died on 10 December 1896, in Sanremo, Italy, at his very last residence, Villa Nobel, overlooking the Mediterranean Sea. Scientific career As a young man, Nobel studied with chemist Nikolai Zinin; then, in 1850, went to Paris to further his studies. There he met Ascanio Sobrero, who had invented nitroglycerin three years before. Sobrero strongly opposed the use of nitroglycerin because it was unpredictable, exploding when subjected to variable heat or pressure. But Nobel became interested in finding a way to control and use nitroglycerin as a commercially usable explosive; it had much more power than gunpowder. 
In 1851 at age 18, he went to the United States for one year to study, working for a short period under Swedish-American inventor John Ericsson, who designed the American Civil War ironclad USS Monitor. Nobel filed his first patent, an English patent for a gas meter, in 1857, while his first Swedish patent, which he received in 1863, was on "ways to prepare gunpowder". The family factory produced armaments for the Crimean War (1853–1856), but had difficulty switching back to regular domestic production when the fighting ended and they filed for bankruptcy. In 1859, Nobel's father left his factory in the care of the second son, Ludvig Nobel (1831–1888), who greatly improved the business. Nobel and his parents returned to Sweden from Russia and Nobel devoted himself to the study of explosives, and especially to the safe manufacture and use of nitroglycerin. Nobel invented a detonator in 1863, and in 1865 designed the blasting cap. On 3 September 1864, a shed used for preparation of nitroglycerin exploded at the factory in Heleneborg, Stockholm, Sweden, killing five people, including Nobel's younger brother Emil. Fazed by the accident, Nobel founded the company Nitroglycerin Aktiebolaget AB in Vinterviken so that he could continue to work in a more isolated area. Nobel invented dynamite in 1867, a substance easier and safer to handle than the more unstable nitroglycerin. Dynamite was patented in the US and the UK and was used extensively in mining and the building of transport networks internationally. In 1875, Nobel invented gelignite, more stable and powerful than dynamite, and in 1887, patented ballistite, a predecessor of cordite. Nobel was elected a member of the Royal Swedish Academy of Sciences in 1884, the same institution that would later select laureates for two of the Nobel prizes, and he received an honorary doctorate from Uppsala University in 1893. 
Nobel's brothers Ludvig and Robert founded the oil company Branobel and became hugely rich in their own right. Nobel invested in these ventures and amassed great wealth through the development of the new oil regions. During his life, Nobel was issued 355 patents internationally, and by his death, his business had established more than 90 armaments factories, despite his apparently pacifist character. Inventions Nobel found that when nitroglycerin was incorporated in an absorbent inert substance like kieselguhr (diatomaceous earth) it became safer and more convenient to handle, and this mixture he patented in 1867 as "dynamite". Nobel demonstrated his explosive for the first time that year, at a quarry in Redhill, Surrey, England. In order to help reestablish his name and improve the image of his business from the earlier controversies associated with dangerous explosives, Nobel had also considered naming the highly powerful substance "Nobel's Safety Powder", but settled on "Dynamite" instead, referring to the Greek word for "power" (dynamis). Nobel later combined nitroglycerin with various nitrocellulose compounds, similar to collodion, but settled on a more efficient recipe combining another nitrate explosive, and obtained a transparent, jelly-like substance, which was a more powerful explosive than dynamite. Gelignite, or blasting gelatine, as it was named, was patented in 1876; and was followed by a host of similar combinations, modified by the addition of potassium nitrate and various other substances. Gelignite was more stable, transportable and conveniently formed to fit into bored holes, like those used in drilling and mining, than the previously used compounds. It was adopted as the standard technology for mining in the "Age of Engineering", bringing Nobel a great amount of financial success, though at a cost to his health. 
An offshoot of this research resulted in Nobel's invention of ballistite, the precursor of many modern smokeless powder explosives and still used as a rocket propellant. Nobel Prize In 1888, the death of his brother Ludvig caused several newspapers to publish obituaries of Alfred in error. One French newspaper condemned him for his invention of military explosives (not, as is commonly quoted, dynamite, which was mainly used for civilian applications), and the piece is said to have brought about his decision to leave a better legacy after his death. The obituary stated, "The merchant of death is dead", and went on to say, "Dr. Alfred Nobel, who became rich by finding ways to kill more people faster than ever before, died yesterday." Nobel read the obituary and was appalled at the idea that he would be remembered in this way. His decision to posthumously donate the majority of his wealth to found the Nobel Prize has been credited at least in part to his wanting to leave behind a better legacy. On 27 November 1895, at the Swedish-Norwegian Club in Paris, Nobel signed his last will and testament and set aside the bulk of his estate to establish the Nobel Prizes, to be awarded annually without distinction of nationality. After taxes and bequests to individuals, Nobel's will allocated 94% of his total assets, 31,225,000 Swedish kronor, to establish the five Nobel Prizes. This converted to £1,687,837 (GBP) at the time. In 2012, the capital was worth around SEK 3.1 billion (US$472 million, EUR 337 million), which is almost twice the amount of the initial capital, taking inflation into account. 
The first three of these prizes are awarded for eminence in physical science, in chemistry and in medical science or physiology; the fourth is for literary work "in an ideal direction" and the fifth prize is to be given to the person or society that renders the greatest service to the cause of international fraternity, in the suppression or reduction of standing armies, or in the establishment or furtherance of peace congresses. The formulation for the literary prize, being given for a work "in an ideal direction", is cryptic and has caused much confusion. For many years, the Swedish Academy interpreted "ideal" as "idealistic" and used it as a reason not to give the prize to important but less romantic authors, such as Henrik Ibsen and Leo Tolstoy. This interpretation has since been revised, and the prize has been awarded to, for example, Dario Fo and José Saramago, who do not belong to the camp of literary idealism. There was room for interpretation by the bodies he had named for deciding on the physical sciences and chemistry prizes, given that he had not consulted them before making the will. In his one-page testament, he stipulated that the money go to discoveries or inventions in the physical sciences and to discoveries or improvements in chemistry. He had opened the door to technological awards, but had not left instructions on how to deal with the distinction between science and technology. Since the deciding bodies he had chosen were more concerned with the former, the prizes went to scientists more often than engineers, technicians or other inventors. Sweden's central bank Sveriges Riksbank celebrated its 300th anniversary in 1968 by donating a large sum of money to the Nobel Foundation to be used to set up a sixth prize in the field of economics in honour of Alfred Nobel. 
In 2001, Alfred Nobel's great-great-nephew, Peter Nobel (born 1931), asked the Bank of Sweden to differentiate its award to economists given "in Alfred Nobel's memory" from the five other awards. This request added to the controversy over whether the Bank of Sweden Prize in Economic Sciences in Memory of Alfred Nobel is actually a legitimate "Nobel Prize". Death Nobel was accused of high treason against France for selling Ballistite to Italy, so he moved from Paris to Sanremo, Italy in 1891. On 10 December 1896, he suffered a stroke and died. He had left most of his wealth in trust, unbeknownst to his family, in order to fund the Nobel Prize awards. He is buried in Norra begravningsplatsen in Stockholm. Monuments and legacy The Monument to Alfred Nobel in Saint Petersburg is located along the Bolshaya Nevka River on Petrogradskaya Embankment. It was dedicated in 1991 to mark the 90th anniversary of the first Nobel Prize presentation. Diplomat Thomas Bertelman and Professor Arkady Melua initiated the creation of the monument in 1989, and Professor Melua provided funds for its establishment (J.S.Co. "Humanistica", 1990–1991). The abstract metal sculpture was designed by local artists Sergey Alipov and Pavel Shevchenko, and appears to be an explosion or branches of a tree. Petrogradskaya Embankment is the street where the Nobel family lived until 1859. Criticism of Nobel focuses on his leading role in weapons manufacturing and sales, and some question his motives in creating his prizes, suggesting they were intended to improve his reputation. See also Nobel Foundation References Further reading Schück, H., and Sohlman, R. (1929). The Life of Alfred Nobel. London: William Heinemann Ltd. Alfred Nobel US Patent No 78,317, dated 26 May 1868 Evlanoff, M. and Fluor, M. Alfred Nobel – The Loneliest Millionaire. Los Angeles, Ward Ritchie Press, 1969. Sohlman, R. The Legacy of Alfred Nobel, transl. Schubert E. 
London: The Bodley Head, 1983 (Swedish original, Ett Testamente, published in 1950). External links Alfred Nobel – Man behind the Prizes Biography at the Norwegian Nobel Institute Nobelprize.org Documents of Life and Activity of The Nobel Family. Under the editorship of Professor Arkady Melua. Series of books. "The Nobels in Baku" in Azerbaijan International, Vol 10.2 (Summer 2002), 56–59. The Nobel Prize in Postage Stamps A German branch or followup (German) Alfred Nobel and his unknown coworker 1833 births 1896 deaths Burials at Norra begravningsplatsen Members of the Royal Swedish Academy of Sciences Alfred Nobel Prize Engineers from Stockholm 19th-century Swedish businesspeople 19th-century Swedish scientists 19th-century Swedish engineers Swedish chemists Swedish philanthropists Explosives engineers
852
https://en.wikipedia.org/wiki/Alexander%20Graham%20Bell
Alexander Graham Bell
Alexander Graham Bell (born Alexander Bell; March 3, 1847 – August 2, 1922) was a Scottish-born inventor, scientist, and engineer who is credited with patenting the first practical telephone. He also co-founded the American Telephone and Telegraph Company (AT&T) in 1885. Bell's father, grandfather, and brother had all been associated with work on elocution and speech, and both his mother and wife were deaf, profoundly influencing Bell's life's work. His research on hearing and speech further led him to experiment with hearing devices, which eventually culminated in Bell being awarded the first U.S. patent for the telephone, on March 7, 1876. Bell considered his invention an intrusion on his real work as a scientist and refused to have a telephone in his study. Many other inventions marked Bell's later life, including groundbreaking work in optical telecommunications, hydrofoils, and aeronautics. Although Bell was not one of the 33 founders of the National Geographic Society, he had a strong influence on the magazine while serving as its second president from January 7, 1898, until 1903. Beyond his work in engineering, Bell had a deep interest in the emerging science of heredity. Early life Alexander Bell was born in Edinburgh, Scotland, on March 3, 1847. The family home was at South Charlotte Street, and has a stone inscription marking it as Alexander Graham Bell's birthplace. He had two brothers: Melville James Bell (1845–1870) and Edward Charles Bell (1848–1867), both of whom would die of tuberculosis. His father was Professor Alexander Melville Bell, a phonetician, and his mother was Eliza Grace Bell (née Symonds). Born as just "Alexander Bell", at age 10, he made a plea to his father to have a middle name like his two brothers. For his 11th birthday, his father acquiesced and allowed him to adopt the name "Graham", chosen out of respect for Alexander Graham, a Canadian being treated by his father who had become a family friend. 
To close relatives and friends he remained "Aleck". First invention As a child, young Bell displayed a curiosity about his world; he gathered botanical specimens and ran experiments at an early age. His best friend was Ben Herdman, a neighbour whose family operated a flour mill. At the age of 12, Bell built a homemade device that combined rotating paddles with sets of nail brushes, creating a simple dehusking machine that was put into operation at the mill and used steadily for a number of years. In return, Ben's father John Herdman gave both boys the run of a small workshop in which to "invent". From his early years, Bell showed a sensitive nature and a talent for art, poetry, and music that was encouraged by his mother. With no formal training, he mastered the piano and became the family's pianist. Despite being normally quiet and introspective, he revelled in mimicry and "voice tricks" akin to ventriloquism that continually entertained family guests during their occasional visits. Bell was also deeply affected by his mother's gradual deafness (she began to lose her hearing when he was 12), and learned a manual finger language so he could sit at her side and tap out silently the conversations swirling around the family parlour. He also developed a technique of speaking in clear, modulated tones directly into his mother's forehead wherein she would hear him with reasonable clarity. Bell's preoccupation with his mother's deafness led him to study acoustics. His family was long associated with the teaching of elocution: his grandfather, Alexander Bell, in London, his uncle in Dublin, and his father, in Edinburgh, were all elocutionists. His father published a variety of works on the subject, several of which are still well known, especially his The Standard Elocutionist (1860), which appeared in Edinburgh in 1868. The Standard Elocutionist appeared in 168 British editions and sold over a quarter of a million copies in the United States alone. 
In this treatise, his father explains his methods of how to instruct deaf-mutes (as they were then known) to articulate words and read other people's lip movements to decipher meaning. Bell's father taught him and his brothers not only to write Visible Speech but to identify any symbol and its accompanying sound. Bell became so proficient that he became a part of his father's public demonstrations and astounded audiences with his abilities. He could decipher Visible Speech representing virtually every language, including Latin, Scottish Gaelic, and even Sanskrit, accurately reciting written tracts without any prior knowledge of their pronunciation. Education As a young child, Bell, like his brothers, received his early schooling at home from his father. At an early age, he was enrolled at the Royal High School, Edinburgh, Scotland, which he left at the age of 15, having completed only the first four forms. His school record was undistinguished, marked by absenteeism and lacklustre grades. His main interest remained in the sciences, especially biology, while he treated other school subjects with indifference, to the dismay of his father. Upon leaving school, Bell travelled to London to live with his grandfather, Alexander Bell, on Harrington Square. During the year he spent with his grandfather, a love of learning was born, with long hours spent in serious discussion and study. The elder Bell took great efforts to have his young pupil learn to speak clearly and with conviction, the attributes that his pupil would need to become a teacher himself. At the age of 16, Bell secured a position as a "pupil-teacher" of elocution and music, in Weston House Academy at Elgin, Moray, Scotland. Although he was enrolled as a student in Latin and Greek, he instructed classes himself in return for board and £10 per session. The following year, he attended the University of Edinburgh, joining his older brother Melville who had enrolled there the previous year. 
In 1868, not long before he departed for Canada with his family, Bell completed his matriculation exams and was accepted for admission to University College London. First experiments with sound His father encouraged Bell's interest in speech and, in 1863, took his sons to see a unique automaton developed by Sir Charles Wheatstone based on the earlier work of Baron Wolfgang von Kempelen. The rudimentary "mechanical man" simulated a human voice. Bell was fascinated by the machine and after he obtained a copy of von Kempelen's book, published in German, and had laboriously translated it, he and his older brother Melville built their own automaton head. Their father, highly interested in their project, offered to pay for any supplies and spurred the boys on with the enticement of a "big prize" if they were successful. While his brother constructed the throat and larynx, Bell tackled the more difficult task of recreating a realistic skull. His efforts resulted in a remarkably lifelike head that could "speak", albeit only a few words. The boys would carefully adjust the "lips" and when a bellows forced air through the windpipe, a very recognizable "Mama" ensued, to the delight of neighbours who came to see the Bell invention. Intrigued by the results of the automaton, Bell continued to experiment with a live subject, the family's Skye Terrier, "Trouve". After he taught it to growl continuously, Bell would reach into its mouth and manipulate the dog's lips and vocal cords to produce a crude-sounding "Ow ah oo ga ma ma". With little convincing, visitors believed his dog could articulate "How are you, grandmama?" Indicative of his playful nature, his experiments convinced onlookers that they saw a "talking dog". These initial forays into experimentation with sound led Bell to undertake his first serious work on the transmission of sound, using tuning forks to explore resonance. 
At age 19, Bell wrote a report on his work and sent it to philologist Alexander Ellis, a colleague of his father. Ellis immediately wrote back indicating that the experiments were similar to existing work in Germany, and also lent Bell a copy of Hermann von Helmholtz's work, The Sensations of Tone as a Physiological Basis for the Theory of Music. Dismayed to find that groundbreaking work had already been undertaken by Helmholtz, who had conveyed vowel sounds by means of a similar tuning fork "contraption", Bell pored over the German scientist's book. Working from his own mistranslation of a French edition, Bell then fortuitously made a deduction that would be the underpinning of all his future work on transmitting sound, reporting: "Without knowing much about the subject, it seemed to me that if vowel sounds could be produced by electrical means, so could consonants, so could articulate speech." He also later remarked: "I thought that Helmholtz had done it ... and that my failure was due only to my ignorance of electricity. It was a valuable blunder ... If I had been able to read German in those days, I might never have commenced my experiments!" Family tragedy In 1865, when the Bell family moved to London, Bell returned to Weston House as an assistant master and, in his spare hours, continued experiments on sound using a minimum of laboratory equipment. Bell concentrated on experimenting with electricity to convey sound and later installed a telegraph wire from his room in Somerset College to that of a friend. Throughout late 1867, his health faltered mainly through exhaustion. His younger brother, Edward "Ted," was similarly bed-ridden, suffering from tuberculosis. While Bell recovered (by then referring to himself in correspondence as "A. G. Bell") and served the next year as an instructor at Somerset College, Bath, England, his brother's condition deteriorated. Edward would never recover. Upon his brother's death, Bell returned home in 1867. 
His older brother Melville had married and moved out. With aspirations to obtain a degree at University College London, Bell considered his next years as preparation for the degree examinations, devoting his spare time at his family's residence to studying. Helping his father in Visible Speech demonstrations and lectures brought Bell to Susanna E. Hull's private school for the deaf in South Kensington, London. His first two pupils were deaf-mute girls who made remarkable progress under his tutelage. While his older brother seemed to achieve success on many fronts, including opening his own elocution school, applying for a patent on an invention, and starting a family, Bell continued as a teacher. However, in May 1870, Melville died from complications due to tuberculosis, causing a family crisis. His father had also suffered a debilitating illness earlier in life and had been restored to health by a convalescence in Newfoundland. Bell's parents embarked upon a long-planned move when they realized that their remaining son was also sickly. Acting decisively, Alexander Melville Bell asked Bell to arrange for the sale of all the family property, conclude all of his brother's affairs (Bell took over his last student, curing a pronounced lisp), and join his father and mother in setting out for the "New World". Reluctantly, Bell also had to conclude a relationship with Marie Eccleston, who, as he had surmised, was not prepared to leave England with him.

Canada

In 1870, 23-year-old Bell travelled with his parents and his brother's widow, Caroline Margaret Ottaway, to Paris, Ontario, to stay with Thomas Henderson, a Baptist minister and family friend. The Bell family soon purchased a farm at Tutelo Heights (now called Tutela Heights), near Brantford, Ontario. The property consisted of an orchard, large farmhouse, stable, pigsty, hen-house, and a carriage house, which bordered the Grand River.
At the homestead, Bell set up his own workshop in the converted carriage house near what he called his "dreaming place", a large hollow nestled in trees at the back of the property above the river. Despite his frail condition upon arriving in Canada, Bell found the climate and environs to his liking, and rapidly improved. He continued his interest in the study of the human voice and when he discovered the Six Nations Reserve across the river at Onondaga, he learned the Mohawk language and translated its unwritten vocabulary into Visible Speech symbols. For his work, Bell was awarded the title of Honorary Chief and participated in a ceremony where he donned a Mohawk headdress and danced traditional dances. After setting up his workshop, Bell continued experiments based on Helmholtz's work with electricity and sound. He also modified a melodeon (a type of pump organ) so that it could transmit its music electrically over a distance. Once the family was settled in, both Bell and his father made plans to establish a teaching practice and in 1871, he accompanied his father to Montreal, where the elder Bell was offered a position to teach his System of Visible Speech.

Work with the deaf

Bell's father was invited by Sarah Fuller, principal of the Boston School for Deaf Mutes (which continues today as the public Horace Mann School for the Deaf), in Boston, Massachusetts, United States, to introduce the Visible Speech System by providing training for Fuller's instructors, but he declined the post in favour of his son. Travelling to Boston in April 1871, Bell proved successful in training the school's instructors. He was subsequently asked to repeat the programme at the American Asylum for Deaf-mutes in Hartford, Connecticut, and the Clarke School for the Deaf in Northampton, Massachusetts. Returning home to Brantford after six months abroad, Bell continued his experiments with his "harmonic telegraph".
The basic concept behind his device was that messages could be sent through a single wire if each message was transmitted at a different pitch, but work on both the transmitter and receiver was needed. Unsure of his future, he first contemplated returning to London to complete his studies, but decided to return to Boston as a teacher. His father helped him set up his private practice by contacting Gardiner Greene Hubbard, the president of the Clarke School for the Deaf, for a recommendation. Teaching his father's system, in October 1872, Alexander Bell opened his "School of Vocal Physiology and Mechanics of Speech" in Boston, which attracted a large number of deaf pupils, with his first class numbering 30 students. While he was working as a private tutor, one of his pupils was Helen Keller, who came to him as a young child unable to see, hear, or speak. She was later to say that Bell dedicated his life to the penetration of that "inhuman silence which separates and estranges". In 1893, Keller performed the sod-breaking ceremony for the construction of Bell's new Volta Bureau, dedicated to "the increase and diffusion of knowledge relating to the deaf". Throughout his lifetime, Bell sought to integrate the deaf and hard of hearing with the hearing world. To achieve complete assimilation in society, Bell encouraged speech therapy and lip reading as well as sign language. He outlined this in an 1898 paper detailing his belief that with resources and effort, the deaf could be taught to read lips and speak (an approach known as oralism), thus enabling their integration within the wider society from which many were often excluded. Owing to his efforts to balance oralism with the teaching of sign language, Bell is often viewed negatively by those embracing Deaf culture. Ironically, Bell's last words to his deaf wife, Mabel, were signed.

Continuing experimentation

In 1872, Bell became professor of Vocal Physiology and Elocution at the Boston University School of Oratory.
During this period, he alternated between Boston and Brantford, spending summers in his Canadian home. At Boston University, Bell was "swept up" by the excitement engendered by the many scientists and inventors residing in the city. He continued his research in sound and endeavored to find a way to transmit musical notes and articulate speech, but although absorbed by his experiments, he found it difficult to devote enough time to experimentation. While days and evenings were occupied by his teaching and private classes, Bell began to stay awake late into the night, running experiment after experiment in rented facilities at his boarding house. Keeping "night owl" hours, he worried that his work would be discovered and took great pains to lock up his notebooks and laboratory equipment. Bell had a specially made table where he could place his notes and equipment inside a locking cover. Worse still, his health deteriorated as he suffered severe headaches. Returning to Boston in fall 1873, Bell made a far-reaching decision to concentrate on his experiments in sound. Deciding to give up his lucrative private Boston practice, Bell retained only two students, six-year-old "Georgie" Sanders, deaf from birth, and 15-year-old Mabel Hubbard. Each pupil would play an important role in the next developments. George's father, Thomas Sanders, a wealthy businessman, offered Bell a place to stay in nearby Salem with Georgie's grandmother, complete with a room to "experiment". Although the offer was made by George's mother and followed the year-long arrangement in 1872 where her son and his nurse had moved to quarters next to Bell's boarding house, it was clear that Mr. Sanders was backing the proposal. The arrangement was for teacher and student to continue their work together, with free room and board thrown in. Mabel was a bright, attractive girl who was ten years Bell's junior but became the object of his affection. 
Having lost her hearing after a near-fatal bout of scarlet fever close to her fifth birthday, she had learned to read lips but her father, Gardiner Greene Hubbard, Bell's benefactor and personal friend, wanted her to work directly with her teacher.

The telephone

By 1874, Bell's initial work on the harmonic telegraph had entered a formative stage, with progress made both at his new Boston "laboratory" (a rented facility) and at his family home in Canada. While working that summer in Brantford, Bell experimented with a "phonautograph", a pen-like machine that could draw shapes of sound waves on smoked glass by tracing their vibrations. Bell thought it might be possible to generate undulating electrical currents that corresponded to sound waves. Bell also thought that multiple metal reeds tuned to different frequencies like a harp would be able to convert the undulating currents back into sound. But he had no working model to demonstrate the feasibility of these ideas. In 1874, telegraph message traffic was rapidly expanding and, in the words of Western Union President William Orton, had become "the nervous system of commerce". Orton had contracted with inventors Thomas Edison and Elisha Gray to find a way to send multiple telegraph messages on each telegraph line to avoid the great cost of constructing new lines. When Bell mentioned to Gardiner Hubbard and Thomas Sanders that he was working on a method of sending multiple tones on a telegraph wire using a multi-reed device, the two wealthy patrons began to financially support Bell's experiments. Patent matters would be handled by Hubbard's patent attorney, Anthony Pollok. In March 1875, Bell and Pollok visited the scientist Joseph Henry, who was then director of the Smithsonian Institution, and asked Henry's advice on the electrical multi-reed apparatus that Bell hoped would transmit the human voice by telegraph. Henry replied that Bell had "the germ of a great invention".
When Bell said that he did not have the necessary knowledge, Henry replied, "Get it!" That declaration greatly encouraged Bell to keep trying, even though he did not have the equipment needed to continue his experiments, nor the ability to create a working model of his ideas. However, a chance meeting in 1874 between Bell and Thomas A. Watson, an experienced electrical designer and mechanic at the electrical machine shop of Charles Williams, changed all that. With financial support from Sanders and Hubbard, Bell hired Thomas Watson as his assistant, and the two of them experimented with acoustic telegraphy. On June 2, 1875, Watson accidentally plucked one of the reeds and Bell, at the receiving end of the wire, heard the overtones of the reed; overtones that would be necessary for transmitting speech. That demonstrated to Bell that only one reed or armature was necessary, not multiple reeds. This led to the "gallows" sound-powered telephone, which could transmit indistinct, voice-like sounds, but not clear speech.

The race to the patent office

In 1875, Bell developed an acoustic telegraph and drew up a patent application for it. Since he had agreed to share U.S. profits with his investors Gardiner Hubbard and Thomas Sanders, Bell requested that an associate in Ontario, George Brown, attempt to patent it in Britain, instructing his lawyers to apply for a patent in the U.S. only after they received word from Britain (Britain would issue patents only for discoveries not previously patented elsewhere). Meanwhile, Elisha Gray was also experimenting with acoustic telegraphy and thought of a way to transmit speech using a water transmitter. On February 14, 1876, Gray filed a caveat with the U.S. Patent Office for a telephone design that used a water transmitter. That same morning, Bell's lawyer filed Bell's application with the patent office. There is considerable debate about who arrived first, and Gray later challenged the primacy of Bell's patent.
Bell was in Boston on February 14 and did not arrive in Washington until February 26. Bell's patent 174,465 was issued to Bell on March 7, 1876, by the U.S. Patent Office. Bell's patent covered "the method of, and apparatus for, transmitting vocal or other sounds telegraphically ... by causing electrical undulations, similar in form to the vibrations of the air accompanying the said vocal or other sound". Bell returned to Boston the same day and the next day resumed work, drawing in his notebook a diagram similar to that in Gray's patent caveat. On March 10, 1876, three days after his patent was issued, Bell succeeded in getting his telephone to work, using a liquid transmitter similar to Gray's design. Vibration of the diaphragm caused a needle to vibrate in the water, varying the electrical resistance in the circuit. When Bell spoke the sentence "Mr. Watson—Come here—I want to see you" into the liquid transmitter, Watson, listening at the receiving end in an adjoining room, heard the words clearly. Although Bell was, and still is, accused of stealing the telephone from Gray, Bell used Gray's water transmitter design only after Bell's patent had been granted, and only as a proof-of-concept scientific experiment, to prove to his own satisfaction that intelligible "articulate speech" (Bell's words) could be electrically transmitted. After March 1876, Bell focused on improving the electromagnetic telephone and never used Gray's liquid transmitter in public demonstrations or commercial use. The question of priority for the variable resistance feature of the telephone was raised by the examiner before he approved Bell's patent application. He told Bell that his claim for the variable resistance feature was also described in Gray's caveat. Bell pointed to a variable resistance device in his previous application, in which he described a cup of mercury, not water.
He had filed the mercury application at the patent office a year earlier, on February 25, 1875, long before Elisha Gray described the water device. In addition, Gray abandoned his caveat, and because he did not contest Bell's priority, the examiner approved Bell's patent on March 3, 1876. Gray had reinvented the variable resistance telephone, but Bell was the first to write down the idea and the first to test it in a telephone. The patent examiner, Zenas Fisk Wilber, later stated in an affidavit that he was an alcoholic who was much in debt to Bell's lawyer, Marcellus Bailey, with whom he had served in the Civil War. He claimed he showed Gray's patent caveat to Bailey. Wilber also claimed (after Bell arrived in Washington D.C. from Boston) that he showed Gray's caveat to Bell and that Bell paid him $100. Bell claimed they discussed the patent only in general terms, although in a letter to Gray, Bell admitted that he learned some of the technical details. Bell denied in an affidavit that he ever gave Wilber any money.

Later developments

On March 10, 1876, Bell used "the instrument" in Boston to call Thomas Watson, who was in another room but out of earshot. He said, "Mr. Watson, come here – I want to see you" and Watson soon appeared at his side. Continuing his experiments in Brantford, Bell brought home a working model of his telephone. On August 3, 1876, from the telegraph office in Brantford, Ontario, Bell sent a tentative telegram to the village of Mount Pleasant, indicating that he was ready. He made a telephone call via telegraph wires and faint voices were heard replying. The following night, he amazed guests as well as his family with a call between the Bell Homestead and the office of the Dominion Telegraph Company in Brantford along an improvised wire strung up along telegraph lines and fences, and laid through a tunnel. This time, guests at the household distinctly heard people in Brantford reading and singing.
The third test, on August 10, 1876, was made via the telegraph line between Brantford and Paris, Ontario. This test was said by many sources to be the "world's first long-distance call". The final test certainly proved that the telephone could work over long distances, at least as a one-way call. The first two-way (reciprocal) conversation over a line occurred between Cambridge and Boston (roughly 2.5 miles) on October 9, 1876. During that conversation, Bell was on Kilby Street in Boston and Watson was at the offices of the Walworth Manufacturing Company. Bell and his partners, Hubbard and Sanders, offered to sell the patent outright to Western Union for $100,000. The president of Western Union balked, countering that the telephone was nothing but a toy. Two years later, he told colleagues that if he could get the patent for $25 million he would consider it a bargain. By then, the Bell company no longer wanted to sell the patent. Bell's investors would become millionaires, while he fared well from residuals and at one point had assets of nearly one million dollars. Bell began a series of public demonstrations and lectures to introduce the new invention to the scientific community as well as the general public. A short time later, his demonstration of an early telephone prototype at the 1876 Centennial Exposition in Philadelphia brought the telephone to international attention. Influential visitors to the exhibition included Emperor Pedro II of Brazil. One of the judges at the Exhibition, Sir William Thomson (later, Lord Kelvin), a renowned Scottish scientist, described the telephone as "the greatest by far of all the marvels of the electric telegraph". On January 14, 1878, at Osborne House, on the Isle of Wight, Bell demonstrated the device to Queen Victoria, placing calls to Cowes, Southampton and London. These were the first publicly witnessed long-distance telephone calls in the UK.
The queen considered the process to be "quite extraordinary", although the sound was "rather faint". She later asked to buy the equipment that was used, but Bell offered to make "a set of telephones" specifically for her. The Bell Telephone Company was created in 1877, and by 1886, more than 150,000 people in the U.S. owned telephones. Bell Company engineers made numerous other improvements to the telephone, which emerged as one of the most successful products ever. In 1879, the Bell company acquired Edison's patents for the carbon microphone from Western Union. This made the telephone practical for longer distances, and it was no longer necessary to shout to be heard at the receiving telephone. Emperor Pedro II of Brazil was the first person to buy stock in Bell's company, the Bell Telephone Company. One of the first telephones in a private residence was installed in his palace in Petrópolis, his summer retreat from Rio de Janeiro. In January 1915, Bell made the first ceremonial transcontinental telephone call. Calling from the AT&T head office at 15 Dey Street in New York City, Bell was heard by Thomas Watson at 333 Grant Avenue in San Francisco. The New York Times reported on the call.

Competitors

As is sometimes common in scientific discoveries, simultaneous developments can occur, as evidenced by a number of inventors who were at work on the telephone. Over a period of 18 years, the Bell Telephone Company faced 587 court challenges to its patents, including five that went to the U.S. Supreme Court, but none was successful in establishing priority over the original Bell patent, and the Bell Telephone Company never lost a case that had proceeded to a final trial stage. Bell's laboratory notes and family letters were the key to establishing a long lineage to his experiments. The Bell company lawyers successfully fought off myriad lawsuits generated initially around the challenges by Elisha Gray and Amos Dolbear.
In personal correspondence to Bell, both Gray and Dolbear had acknowledged his prior work, which considerably weakened their later claims. On January 13, 1887, the U.S. Government moved to annul the patent issued to Bell on the grounds of fraud and misrepresentation. After a series of decisions and reversals, the Bell company won a decision in the Supreme Court, though a couple of the original claims from the lower court cases were left undecided. By the time that the trial wound its way through nine years of legal battles, the U.S. prosecuting attorney had died and the two Bell patents (No. 174,465 dated March 7, 1876, and No. 186,787 dated January 30, 1877) were no longer in effect, although the presiding judges agreed to continue the proceedings due to the case's importance as a precedent. With a change in administration and charges of conflict of interest (on both sides) arising from the original trial, the US Attorney General dropped the lawsuit on November 30, 1897, leaving several issues undecided on the merits. During a deposition filed for the 1887 trial, Italian inventor Antonio Meucci also claimed to have created the first working model of a telephone in Italy in 1834. In 1886, in the first of three cases in which he was involved, Meucci took the stand as a witness in the hope of establishing his invention's priority. Meucci's testimony in this case was disputed due to a lack of material evidence for his inventions, as his working models were purportedly lost at the laboratory of American District Telegraph (ADT) of New York, which was later incorporated as a subsidiary of Western Union in 1901. Meucci's work, like that of many other inventors of the period, was based on earlier acoustic principles, and despite evidence of earlier experiments, the final case involving Meucci was eventually dropped upon Meucci's death. However, due to the efforts of Congressman Vito Fossella, the U.S.
House of Representatives on June 11, 2002, stated that Meucci's "work in the invention of the telephone should be acknowledged". This did not put an end to the still-contentious issue. Some modern scholars do not agree with the claims that Bell's work on the telephone was influenced by Meucci's inventions. The value of the Bell patent was acknowledged throughout the world, and patent applications were made in most major countries, but when Bell delayed the German patent application, the electrical firm of Siemens & Halske set up a rival manufacturer of Bell telephones under their own patent. The Siemens company produced near-identical copies of the Bell telephone without having to pay royalties. The establishment of the International Bell Telephone Company in Brussels, Belgium, in 1880, as well as a series of agreements in other countries, eventually consolidated a global telephone operation. The strain put on Bell by his constant appearances in court, necessitated by the legal battles, eventually resulted in his resignation from the company.

Family life

On July 11, 1877, a few days after the Bell Telephone Company was established, Bell married Mabel Hubbard (1857–1923) at the Hubbard estate in Cambridge, Massachusetts. His wedding present to his bride was to turn over 1,487 of his 1,497 shares in the newly formed Bell Telephone Company. Shortly thereafter, the newlyweds embarked on a year-long honeymoon in Europe. During that excursion, Bell took a handmade model of his telephone with him, making it a "working holiday". The courtship had begun years earlier; however, Bell waited until he was more financially secure before marrying. Although the telephone appeared to be an "instant" success, it was not initially a profitable venture and Bell's main sources of income were from lectures until after 1897. One unusual request exacted by his fiancée was that he use "Alec" rather than the family's earlier familiar name of "Aleck".
From 1876, he would sign his name "Alec Bell". They had four children: Elsie May Bell (1878–1964), who married Gilbert Hovey Grosvenor of National Geographic fame; Marian Hubbard Bell (1880–1962), known as "Daisy", who married David Fairchild; and two sons who died in infancy (Edward in 1881 and Robert in 1883). The Bell family home was in Cambridge, Massachusetts, until 1880, when Bell's father-in-law bought a house in Washington, D.C.; in 1882 he bought a home in the same city for Bell's family, so they could be with him while he attended to the numerous court cases involving patent disputes. Bell was a British subject throughout his early life in Scotland and later in Canada until 1882, when he became a naturalized citizen of the United States. In 1915, he characterized his status as: "I am not one of those hyphenated Americans who claim allegiance to two countries." Despite this declaration, Bell has been proudly claimed as a "native son" by all three countries he resided in: the United States, Canada, and the United Kingdom. By 1885, a new summer retreat was contemplated. That summer, the Bells had a vacation on Cape Breton Island in Nova Scotia, spending time at the small village of Baddeck. Returning in 1886, Bell started building an estate on a point across from Baddeck, overlooking Bras d'Or Lake. By 1889, a large house, christened The Lodge, was completed, and two years later a larger complex of buildings, including a new laboratory, was begun that the Bells would name Beinn Bhreagh (Gaelic: Beautiful Mountain) after Bell's ancestral Scottish highlands. Bell also built the Bell Boatyard on the estate, employing up to 40 people building experimental craft as well as wartime lifeboats and workboats for the Royal Canadian Navy and pleasure craft for the Bell family. He was an enthusiastic boater, and Bell and his family sailed or rowed a long series of vessels on Bras d'Or Lake, ordering additional vessels from the H.W.
Embree and Sons boatyard in Port Hawkesbury, Nova Scotia. In his final, and some of his most productive, years, Bell split his residency between Washington, D.C., where he and his family initially resided for most of the year, and Beinn Bhreagh, where they spent increasing amounts of time. Until the end of his life, Bell and his family would alternate between the two homes, but Beinn Bhreagh would, over the next 30 years, become more than a summer home as Bell became so absorbed in his experiments that his annual stays lengthened. Both Mabel and Bell became immersed in the Baddeck community and were accepted by the villagers as "their own". The Bells were still in residence at Beinn Bhreagh when the Halifax Explosion occurred on December 6, 1917. Mabel and Bell mobilized the community to help victims in Halifax.

Later inventions

Although Alexander Graham Bell is most often associated with the invention of the telephone, his interests were extremely varied. According to one of his biographers, Charlotte Gray, Bell's work ranged "unfettered across the scientific landscape" and he often went to bed voraciously reading the Encyclopædia Britannica, scouring it for new areas of interest. The range of Bell's inventive genius is represented only in part by the 18 patents granted in his name alone and the 12 he shared with his collaborators. These included 14 for the telephone and telegraph, four for the photophone, one for the phonograph, five for aerial vehicles, four for "hydroairplanes", and two for selenium cells. Bell's inventions spanned a wide range of interests and included a metal jacket to assist in breathing, the audiometer to detect minor hearing problems, a device to locate icebergs, investigations on how to separate salt from seawater, and work on finding alternative fuels. Bell worked extensively in medical research and invented techniques for teaching speech to the deaf.
During his Volta Laboratory period, Bell and his associates considered impressing a magnetic field on a record as a means of reproducing sound. Although the trio briefly experimented with the concept, they could not develop a workable prototype. They abandoned the idea, never realizing they had glimpsed a basic principle which would one day find its application in the tape recorder, the hard disc and floppy disc drive, and other magnetic media. Bell's own home used a primitive form of air conditioning, in which fans blew currents of air across great blocks of ice. He also anticipated modern concerns with fuel shortages and industrial pollution. Methane gas, he reasoned, could be produced from the waste of farms and factories. At his Canadian estate in Nova Scotia, he experimented with composting toilets and devices to capture water from the atmosphere. In a magazine interview published shortly before his death, he reflected on the possibility of using solar panels to heat houses.

Photophone

Bell and his assistant Charles Sumner Tainter jointly invented a wireless telephone, named the photophone, which allowed for the transmission of both sounds and normal human conversations on a beam of light. Both men later became full associates in the Volta Laboratory Association. On June 21, 1880, Bell's assistant transmitted a wireless voice telephone message a considerable distance, from the roof of the Franklin School in Washington, D.C., to Bell at the window of his laboratory some distance away, 19 years before the first voice radio transmissions. Bell believed the photophone's principles were his life's "greatest achievement", telling a reporter shortly before his death that the photophone was "the greatest invention [I have] ever made, greater than the telephone". The photophone was a precursor to the fiber-optic communication systems which achieved popular worldwide usage in the 1980s.
Its master patent was issued in December 1880, many decades before the photophone's principles came into popular use.

Metal detector

Bell is also credited with developing one of the early versions of a metal detector through the use of an induction balance, after the shooting of U.S. President James A. Garfield in 1881. According to some accounts, the metal detector worked flawlessly in tests but did not find the bullet fired by assassin Charles Guiteau, partly because the metal bed frame on which the President was lying disturbed the instrument, resulting in static. Garfield's surgeons, led by self-appointed chief physician Doctor Willard Bliss, were skeptical of the device, and ignored Bell's requests to move the President to a bed not fitted with metal springs. Alternatively, although Bell had detected a slight sound on his first test, the bullet may have been lodged too deeply to be detected by the crude apparatus. Bell's own detailed account, presented to the American Association for the Advancement of Science in 1882, differs in several particulars from most of the many and varied versions now in circulation, by concluding that extraneous metal was not to blame for failure to locate the bullet. Perplexed by the peculiar results he had obtained during an examination of Garfield, Bell "proceeded to the Executive Mansion the next morning ... to ascertain from the surgeons whether they were perfectly sure that all metal had been removed from the neighborhood of the bed. It was then recollected that underneath the horse-hair mattress on which the President lay was another mattress composed of steel wires. Upon obtaining a duplicate, the mattress was found to consist of a sort of net of woven steel wires, with large meshes. The extent of the [area that produced a response from the detector] having been so small, as compared with the area of the bed, it seemed reasonable to conclude that the steel mattress had produced no detrimental effect."
In a footnote, Bell adds, "The death of President Garfield and the subsequent post-mortem examination, however, proved that the bullet was at too great a distance from the surface to have affected our apparatus."

Hydrofoils

The March 1906 Scientific American article by American pioneer William E. Meacham explained the basic principle of hydrofoils and hydroplanes. Bell considered the invention of the hydroplane a very significant achievement. Based on information gained from that article, he began to sketch concepts of what is now called a hydrofoil boat. Bell and assistant Frederick W. "Casey" Baldwin began hydrofoil experimentation in the summer of 1908 as a possible aid to airplane takeoff from water. Baldwin studied the work of the Italian inventor Enrico Forlanini and began testing models. This led him and Bell to the development of practical hydrofoil watercraft. During Bell's world tour of 1910–11, he and Baldwin met with Forlanini in France. They had rides in the Forlanini hydrofoil boat over Lake Maggiore. Baldwin described it as being as smooth as flying. On returning to Baddeck, a number of initial concepts were built as experimental models, including the Dhonnas Beag (Scottish Gaelic for little devil), the first self-propelled Bell-Baldwin hydrofoil. The experimental boats were essentially proof-of-concept prototypes that culminated in the more substantial HD-4, powered by Renault engines. A high top speed was achieved, with the hydrofoil exhibiting rapid acceleration, good stability, and steering, along with the ability to take waves without difficulty. In 1913, Dr. Bell hired Walter Pinaud, a Sydney yacht designer and builder as well as the proprietor of Pinaud's Yacht Yard in Westmount, Nova Scotia, to work on the pontoons of the HD-4. Pinaud soon took over the boatyard at Bell Laboratories on Beinn Bhreagh, Bell's estate near Baddeck, Nova Scotia. Pinaud's experience in boat-building enabled him to make useful design changes to the HD-4.
After the First World War, work began again on the HD-4. Bell's report to the U.S. Navy permitted him to obtain two engines in July 1919. On September 9, 1919, the HD-4 set a world marine speed record of , a record which stood for ten years. Aeronautics In 1891, Bell had begun experiments to develop motor-powered heavier-than-air aircraft. The AEA grew out of Bell's sharing his vision of flight with his wife, who advised him to seek "young" help, as Bell was then 60 years old. In 1898, Bell experimented with tetrahedral box kites and wings constructed of multiple compound tetrahedral kites covered in maroon silk. The tetrahedral wings were named Cygnet I, II, and III, and were flown both unmanned and manned (Cygnet I crashed during a flight carrying Selfridge) in the period from 1907 to 1912. Some of Bell's kites are on display at the Alexander Graham Bell National Historic Site. Bell was a supporter of aerospace engineering research through the Aerial Experiment Association (AEA), officially formed at Baddeck, Nova Scotia, in October 1907 at the suggestion of his wife Mabel and with her financial support after the sale of some of her real estate. The AEA was headed by Bell, and the founding members were four young men: American Glenn H. Curtiss, a motorcycle manufacturer who at the time held the title "world's fastest man", having ridden his self-constructed motor bicycle around in the shortest time, and who was later awarded the Scientific American Trophy for the first official one-kilometre flight in the Western hemisphere, later becoming a world-renowned airplane manufacturer; Lieutenant Thomas Selfridge, an official observer from the U.S. Federal government and one of the few people in the army who believed that aviation was the future; Frederick W. Baldwin, the first Canadian and first British subject to pilot a public flight in Hammondsport, New York; and J. A. D. 
McCurdy, with Baldwin and McCurdy being new engineering graduates from the University of Toronto. The AEA's work progressed to heavier-than-air machines, applying their knowledge of kites to gliders. Moving to Hammondsport, the group then designed and built the Red Wing, framed in bamboo and covered in red silk and powered by a small air-cooled engine. On March 12, 1908, over Keuka Lake, the biplane lifted off on the first public flight in North America. The innovations that were incorporated into this design included a cockpit enclosure and tail rudder (later variations on the original design would add ailerons as a means of control). One of the AEA's inventions, a practical wingtip form of the aileron, was to become a standard component on all aircraft. The White Wing and June Bug were to follow, and by the end of 1908, over 150 flights without mishap had been accomplished. However, the AEA had depleted its initial reserves and only a $15,000 grant from Mrs. Bell allowed it to continue with experiments. Lt. Selfridge had also become the first person killed in a powered heavier-than-air flight in a crash of the Wright Flyer at Fort Myer, Virginia, on September 17, 1908. Their final aircraft design, the Silver Dart, embodied all of the advancements found in the earlier machines. On February 23, 1909, Bell was present as the Silver Dart, flown by J. A. D. McCurdy from the frozen ice of Bras d'Or, made the first aircraft flight in Canada. Bell had worried that the flight was too dangerous and had arranged for a doctor to be on hand. With the successful flight, the AEA disbanded and the Silver Dart would revert to Baldwin and McCurdy, who began the Canadian Aerodrome Company and would later demonstrate the aircraft to the Canadian Army. Heredity and genetics Bell, along with many members of the scientific community at the time, took an interest in the popular science of heredity which grew out of the publication of Charles Darwin's book On the Origin of Species in 1859. 
On his estate in Nova Scotia, Bell conducted meticulously recorded breeding experiments with rams and ewes. Over the course of more than 30 years, Bell sought to produce a breed of sheep with multiple nipples that would bear twins. He specifically wanted to see if selective breeding could produce sheep with four functional nipples and enough milk for twin lambs. This interest in animal breeding caught the attention of scientists focused on the study of heredity and genetics in humans. In November 1883, Bell presented a paper at a meeting of the National Academy of Sciences titled "Upon the Formation of a Deaf Variety of the Human Race". The paper is a compilation of data on the hereditary aspects of deafness. Bell's research indicated that a hereditary tendency toward deafness, as evidenced by the possession of deaf relatives, was an important element in determining the production of deaf offspring. He noted that the proportion of deaf children born to deaf parents was many times greater than the proportion of deaf children born to the general population. In the paper, Bell delved into social commentary and discussed hypothetical public policies to bring an end to deafness. He also criticized educational practices that segregated deaf children rather than integrating them fully into mainstream classrooms. The paper did not propose sterilization of deaf people or a prohibition on intermarriage, noting that "We cannot dictate to men and women whom they should marry and natural selection no longer influences mankind to any great extent." A review of Bell's "Memoir upon the Formation of a Deaf Variety of the Human Race" appearing in an 1885 issue of the "American Annals of the Deaf and Dumb" states that "Dr. Bell does not advocate legislative interference with the marriages of the deaf for several reasons, one of which is that the results of such marriages have not yet been sufficiently investigated." 
The article goes on to say that "the editorial remarks based thereon did injustice to the author." The review's author concludes by saying "A wiser way to prevent the extension of hereditary deafness, it seems to us, would be to continue the investigations which Dr. Bell has so admirably begun until the laws of the transmission of the tendency to deafness are fully understood, and then by explaining those laws to the pupils of our schools to lead them to choose their partners in marriage in such a way that deaf-mute offspring will not be the result." Historians have noted that Bell explicitly opposed laws regulating marriage, and never mentioned sterilization in any of his writings. Even after Bell agreed to engage with scientists conducting eugenic research, he consistently refused to support public policy that limited the rights or privileges of the deaf. Bell's interest and research on heredity attracted the interest of Charles Davenport, a Harvard professor and head of the Cold Spring Harbor Laboratory. In 1906, Davenport, who was also the founder of the American Breeder's Association, approached Bell about joining a new committee on eugenics chaired by David Starr Jordan. In 1910, Davenport opened the Eugenics Record Office at Cold Spring Harbor. To give the organization scientific credibility, Davenport set up a Board of Scientific Directors, naming Bell as chairman. Other members of the board included Luther Burbank, Roswell H. Johnson, Vernon L. Kellogg, and William E. Castle. In 1921, a Second International Congress of Eugenics was held in New York at the American Museum of Natural History and chaired by Davenport. Although Bell did not present any research or speak as part of the proceedings, he was named honorary president as a means to attract other scientists to attend the event. A summary of the event notes that Bell was a "pioneering investigator in the field of human heredity". 
Death Bell died of complications arising from diabetes on August 2, 1922, at his private estate in Cape Breton, Nova Scotia, at age 75. Bell had also been afflicted with pernicious anemia. His last view of the land he had inhabited was by moonlight on his mountain estate at 2:00 a.m. While tending to him after his long illness, Mabel, his wife, whispered, "Don't leave me." By way of reply, Bell signed "no...", lost consciousness, and died shortly after. On learning of Bell's death, the Canadian Prime Minister, Mackenzie King, cabled Mrs. Bell, saying: Bell's coffin was constructed of Beinn Bhreagh pine by his laboratory staff, lined with the same red silk fabric used in his tetrahedral kite experiments. To help celebrate his life, his wife asked guests not to wear black (the traditional funeral color) while attending his service, during which soloist Jean MacDonald sang a verse of Robert Louis Stevenson's "Requiem": Upon the conclusion of Bell's funeral, for one minute at 6:25 p.m. Eastern Time, "every phone on the continent of North America was silenced in honor of the man who had given to mankind the means for direct communication at a distance". Alexander Graham Bell was buried atop Beinn Bhreagh mountain, on his estate where he had resided increasingly for the last 35 years of his life, overlooking Bras d'Or Lake. He was survived by his wife Mabel, his two daughters, Elsie May and Marian, and nine of his grandchildren. Legacy and honors Honors and tributes flowed to Bell in increasing numbers as his invention became ubiquitous and his personal fame grew. Bell received numerous honorary degrees from colleges and universities to the point that the requests almost became burdensome. During his life, he also received dozens of major awards, medals, and other tributes. 
These included statuary monuments to both him and the new form of communication his telephone created, including the Bell Telephone Memorial erected in his honor in Alexander Graham Bell Gardens in Brantford, Ontario, in 1917. A large number of Bell's writings, personal correspondence, notebooks, papers, and other documents reside in both the United States Library of Congress Manuscript Division (as the Alexander Graham Bell Family Papers), and at the Alexander Graham Bell Institute, Cape Breton University, Nova Scotia; major portions of which are available for online viewing. A number of historic sites and other marks commemorate Bell in North America and Europe, including the first telephone companies in the United States and Canada. Among the major sites are: The Alexander Graham Bell National Historic Site, maintained by Parks Canada, which incorporates the Alexander Graham Bell Museum, in Baddeck, Nova Scotia, close to the Bell estate Beinn Bhreagh The Bell Homestead National Historic Site, includes the Bell family home, "Melville House", and farm overlooking Brantford, Ontario and the Grand River. It was their first home in North America; Canada's first telephone company building, the "Henderson Home" of the late 1870s, a predecessor of the Bell Telephone Company of Canada (officially chartered in 1880). In 1969, the building was carefully moved to the historic Bell Homestead National Historic Site in Brantford, Ontario, and was refurbished to become a telephone museum. The Bell Homestead, the Henderson Home telephone museum, and the National Historic Site's reception centre are all maintained by the Bell Homestead Society; The Alexander Graham Bell Memorial Park, which features a broad neoclassical monument built in 1917 by public subscription. 
The monument depicts mankind's ability to span the globe through telecommunications; The Alexander Graham Bell Museum (opened in 1956), part of the Alexander Graham Bell National Historic Site which was completed in 1978 in Baddeck, Nova Scotia. Many of the museum's artifacts were donated by Bell's daughters; In 1880, Bell received the Volta Prize with a purse of 50,000 French francs (approximately US$ in today's dollars) for the invention of the telephone from the French government. Among the luminaries who judged were Victor Hugo and Alexandre Dumas, fils. The Volta Prize was conceived by Napoleon III in 1852, and named in honor of Alessandro Volta, with Bell becoming the second recipient of the grand prize in its history. Since Bell was becoming increasingly affluent, he used his prize money to create endowment funds (the 'Volta Fund') and institutions in and around the United States capital of Washington, D.C. These included the prestigious 'Volta Laboratory Association' (1880), also known as the Volta Laboratory and as the 'Alexander Graham Bell Laboratory', which eventually led to the Volta Bureau (1887) as a center for studies on deafness which is still in operation in Georgetown, Washington, D.C. The Volta Laboratory became an experimental facility devoted to scientific discovery, and the very next year it improved Edison's phonograph by substituting wax for tinfoil as the recording medium and incising the recording rather than indenting it, key upgrades that Edison himself later adopted. The laboratory was also the site where he and his associate invented his "proudest achievement", "the photophone", the "optical telephone" which presaged fibre-optic telecommunications, while the Volta Bureau would later evolve into the Alexander Graham Bell Association for the Deaf and Hard of Hearing (the AG Bell), a leading center for the research and pedagogy of deafness. 
In partnership with Gardiner Greene Hubbard, Bell helped establish the publication Science during the early 1880s. In 1898, Bell was elected as the second president of the National Geographic Society, serving until 1903, and was primarily responsible for the extensive use of illustrations, including photography, in the magazine. He also served for many years as a Regent of the Smithsonian Institution (1898–1922). The French government conferred on him the decoration of the Légion d'honneur (Legion of Honor); the Royal Society of Arts in London awarded him the Albert Medal in 1902; the University of Würzburg, Bavaria, granted him a PhD; and he was awarded the Franklin Institute's Elliott Cresson Medal in 1912. He was one of the founders of the American Institute of Electrical Engineers in 1884 and served as its president from 1891 to 1892. Bell was later awarded the AIEE's Edison Medal in 1914 "For meritorious achievement in the invention of the telephone". The bel (B) and the smaller decibel (dB) are logarithmic units for expressing power ratios, used for example in measuring sound pressure level (SPL); they were devised at Bell Labs and named in his honor. Since 1976, the IEEE's Alexander Graham Bell Medal has been awarded to honor outstanding contributions in the field of telecommunications. In 1936, the US Patent Office declared Bell first on its list of the country's greatest inventors, leading to the US Post Office issuing a commemorative stamp honoring Bell in 1940 as part of its 'Famous Americans Series'. The First Day of Issue ceremony was held on October 28 in Boston, Massachusetts, the city where Bell spent considerable time on research and working with the deaf. The Bell stamp became very popular and sold out in little time. The stamp became, and remains to this day, the most valuable one of the series. The 150th anniversary of Bell's birth in 1997 was marked by a special issue of commemorative £1 banknotes from the Royal Bank of Scotland. 
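The bel and decibel named after Bell express power ratios logarithmically: one bel is a tenfold increase in power, and a decibel is one tenth of a bel. A minimal illustrative sketch (the function name is my own, not from any source):

```python
import math

def power_ratio_to_db(p, p_ref):
    """Convert a power ratio p/p_ref to decibels (dB).

    The bel is the base-10 logarithm of a power ratio;
    one bel equals ten decibels, hence the factor of 10.
    """
    return 10 * math.log10(p / p_ref)

# A tenfold increase in power is exactly 1 bel = 10 dB.
print(power_ratio_to_db(10.0, 1.0))               # 10.0
# Doubling the power is roughly 3 dB.
print(round(power_ratio_to_db(2.0, 1.0), 2))      # 3.01
```

For amplitude quantities such as sound pressure, the conventional factor is 20 rather than 10, since power is proportional to the square of the amplitude.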
The illustrations on the reverse of the note include Bell's face in profile, his signature, and objects from Bell's life and career: users of the telephone over the ages; an audio wave signal; a diagram of a telephone receiver; geometric shapes from engineering structures; representations of sign language and the phonetic alphabet; the geese which helped him to understand flight; and the sheep which he studied to understand genetics. Additionally, the Government of Canada honored Bell in 1997 with a C$100 gold coin, also in tribute to the 150th anniversary of his birth, and with a silver dollar coin in 2009 in honor of the 100th anniversary of flight in Canada. That first flight was made by an airplane designed under Dr. Bell's tutelage, named the Silver Dart. Bell's image, and also those of his many inventions, have graced paper money, coinage, and postal stamps in numerous countries worldwide for decades. Alexander Graham Bell was ranked 57th among the 100 Greatest Britons (2002) in an official BBC nationwide poll, among the Top Ten Greatest Canadians (2004), and among the 100 Greatest Americans (2005). In 2006, Bell was also named one of the 10 greatest Scottish scientists in history after having been listed in the National Library of Scotland's 'Scottish Science Hall of Fame'. Bell's name is still widely known and used as part of the names of dozens of educational institutes, corporate namesakes, and street and place names around the world. Honorary degrees Alexander Graham Bell, who could not complete the university program of his youth, received at least a dozen honorary degrees from academic institutions, including eight honorary LL.D.s (Doctorate of Laws), two Ph.D.s, a D.Sc., and an M.D.: Gallaudet College (then named National Deaf-Mute College) in Washington, D.C. (Ph.D.) in 1880 University of Würzburg in Würzburg, Bavaria (Ph.D.) in 1882 Heidelberg University in Heidelberg, Germany (M.D.) 
in 1886 Harvard University in Cambridge, Massachusetts (LL.D.) in 1896 Illinois College, in Jacksonville, Illinois (LL.D.) in 1896, possibly 1881 Amherst College in Amherst, Massachusetts (LL.D.) in 1901 St. Andrew's University in St Andrews, Scotland (LL.D.) in 1902 University of Oxford in Oxford, England (D.Sc.) in 1906 University of Edinburgh in Edinburgh, Scotland (LL.D.) in 1906 George Washington University in Washington, D.C. (LL.D.) in 1913 Queen's University at Kingston in Kingston, Ontario, Canada (LL.D.) in 1908 Dartmouth College in Hanover, New Hampshire (LL.D.) in 1913, possibly 1914 Portrayal in film and television The 1939 film The Story of Alexander Graham Bell was based on his life and works. The Sound and the Silence (1992) was a TV film. Biography aired an episode Alexander Graham Bell: Voice of Invention on August 6, 1996. Eyewitness No. 90 A Great Inventor Is Remembered, a 1957 NFB short about Bell. See also Alexander Graham Bell Association for the Deaf and Hard of Hearing Alexander Graham Bell National Historic Site Bell Boatyard Bell Homestead National Historic Site Bell Telephone Memorial Berliner, Emile Bourseul, Charles IEEE Alexander Graham Bell Medal John Peirce, submitted telephone ideas to Bell Manzetti, Innocenzo Meucci, Antonio Oriental Telephone Company People on Scottish banknotes Pioneers, a Volunteer Network Reis, Philipp The Story of Alexander Graham Bell, a 1939 movie of his life The Telephone Cases Volta Laboratory and Bureau William Francis Channing, submitted telephone ideas to Bell Further reading Mullett, Mary B. The Story of A Famous Inventor. New York: Rogers and Fowle, 1921. Walters, Eric. The Hydrofoil Mystery. Toronto, Ontario, Canada: Puffin Books, 1999. Winzer, Margret A. The History Of Special Education: From Isolation To Integration. Washington, D.C.: Gallaudet University Press, 1993. 
External links Alexander and Mabel Bell Legacy Foundation Alexander Graham Bell Institute at Cape Breton University Bell Telephone Memorial, Brantford, Ontario Bell Homestead National Historic Site, Brantford, Ontario Alexander Graham Bell National Historic Site of Canada, Baddeck, Nova Scotia Alexander Graham Bell Family Papers at the Library of Congress Biography at the Dictionary of Canadian Biography Online Science.ca profile: Alexander Graham Bell Alexander Graham Bell's notebooks at the Internet Archive "Téléphone et photophone : les contributions indirectes de Graham Bell à l'idée de la vision à distance par l'électricité" at the Histoire de la télévision Multimedia Alexander Graham Bell at The Biography Channel Shaping The Future, from the Heritage Minutes and Radio Minutes collection at HistoricaCanada.ca (1:31 audio drama, Adobe Flash required) 
854
https://en.wikipedia.org/wiki/Anatolia
Anatolia
Anatolia, also known as Asia Minor, is a large peninsula in Western Asia and the westernmost protrusion of the Asian continent. It constitutes the major part of modern-day Turkey. The region is bounded by the Turkish Straits to the northwest, the Black Sea to the north, the Armenian Highlands to the east, the Mediterranean Sea to the south, and the Aegean Sea to the west. The Sea of Marmara forms a connection between the Black and Aegean seas through the Bosporus and Dardanelles straits and separates Anatolia from Thrace on the Balkan peninsula of Southeast Europe. The eastern border of Anatolia has been held to be a line between the Gulf of Alexandretta and the Black Sea, bounded by the Armenian Highlands to the east and Mesopotamia to the southeast. By this definition Anatolia comprises approximately the western two-thirds of the Asian part of Turkey. Today, Anatolia is sometimes considered to be synonymous with Asian Turkey, thereby including the western part of the Armenian Highlands and northern Mesopotamia; its eastern and southern borders are coterminous with Turkey's borders. The ancient Anatolian peoples spoke the now-extinct Anatolian languages of the Indo-European language family, which were largely replaced by the Greek language during classical antiquity as well as during the Hellenistic, Roman, and Byzantine periods. The major Anatolian languages included Hittite, Luwian, and Lydian, while other, poorly attested local languages included Phrygian and Mysian. Hurro-Urartian languages were spoken in the southeastern kingdom of Mitanni, while Galatian, a Celtic language, was spoken in Galatia, central Anatolia. The Turkification of Anatolia began under the rule of the Seljuk Empire in the late 11th century and it continued under the rule of the Ottoman Empire between the late 13th and the early 20th century and it has continued under the rule of today's Republic of Turkey. 
However, various non-Turkic languages continue to be spoken by minorities in Anatolia today, including Kurdish, Neo-Aramaic, Armenian, Arabic, Laz, Georgian and Greek. Other ancient peoples in the region included Galatians, Hurrians, Assyrians, Hattians, Cimmerians, as well as Ionian, Dorian, and Aeolic Greeks. Geography Traditionally, Anatolia is considered to extend in the east to an indefinite line running from the Gulf of Alexandretta to the Black Sea, coterminous with the Anatolian Plateau. This traditional geographical definition is used, for example, in the latest edition of Merriam-Webster's Geographical Dictionary. Under this definition, Anatolia is bounded to the east by the Armenian Highlands, and the Euphrates before that river bends to the southeast to enter Mesopotamia. To the southeast, it is bounded by the ranges that separate it from the Orontes valley in Syria and the Mesopotamian plain. Following the Armenian genocide, Western Armenia was renamed the Eastern Anatolia Region by the newly established Turkish government. In 1941, with the First Geography Congress which divided Turkey into seven geographical regions based on differences in climate and landscape, the eastern provinces of Turkey were placed into the Eastern Anatolia Region, which largely corresponds to the historical region of Western Armenia (named as such after the division of Greater Armenia between the Roman/Byzantine Empire (Western Armenia) and Sassanid Persia (Eastern Armenia) in 387 AD). Vazken Davidian terms the expanded use of "Anatolia" to apply to territory in eastern Turkey that was formerly referred to as Armenia (which had a sizeable Armenian population before the Armenian genocide) an "ahistorical imposition" and notes that a growing body of literature is uncomfortable with referring to the Ottoman East as "Eastern Anatolia." The highest mountain in the Eastern Anatolia Region (also the highest peak in the Armenian Highlands) is Mount Ararat (5123 m). 
The Euphrates, Araxes, Karasu and Murat rivers connect the Armenian Highlands to the South Caucasus and the Upper Euphrates Valley. Along with the Çoruh, these rivers are the longest in the Eastern Anatolia Region. Etymology The English-language name Anatolia derives from the Greek Ἀνατολή (Anatolḗ), meaning "the East" and designating (from a Greek point of view) eastern regions in general. The Greek word refers to the direction where the sun rises, coming from ἀνατέλλω anatello '(I) rise up,' comparable to terms in other languages such as "levant" from Latin levo 'to rise,' "orient" from Latin orior 'to arise, to originate,' Hebrew מִזְרָח mizraḥ 'east' from זָרַח zaraḥ 'to rise, to shine,' Aramaic מִדְנָח midnaḥ from דְּנַח denaḥ 'to rise, to shine.' The use of Anatolian designations has varied over time, perhaps originally referring to the Aeolian, Ionian and Dorian colonies situated along the eastern coasts of the Aegean Sea, but also encompassing eastern regions in general. Such use of Anatolian designations was employed during the reign of Roman Emperor Diocletian (284–305), who created the Diocese of the East, known in Greek as the Eastern (Ανατολής / Anatolian) Diocese, but completely unrelated to the regions of Asia Minor. In their widest territorial scope, Anatolian designations were employed during the reign of Roman Emperor Constantine I (306–337), who created the Praetorian prefecture of the East, known in Greek as the Eastern (Ανατολής / Anatolian) Prefecture, encompassing all eastern regions of the Late Roman Empire and spanning from Thrace to Egypt. Only after the loss of other eastern regions during the 7th century and the reduction of Byzantine eastern domains to Asia Minor did that region become the only remaining part of the Byzantine East, and thus it was commonly referred to (in Greek) as the Eastern (Ανατολής / Anatolian) part of the Empire. 
At the same time, the Anatolic Theme (Ἀνατολικὸν θέμα / "the Eastern theme") was created, as a province (theme) covering the western and central parts of Turkey's present-day Central Anatolia Region, centered around Iconium, but ruled from the city of Amorium. The Latinized form "Anatolia", with its -ia ending, is probably a Medieval Latin innovation. The modern Turkish form Anadolu derives directly from the Greek name Aνατολή (Anatolḗ). The Russian male name Anatoly, the French Anatole and plain Anatol, all stemming from saints Anatolius of Laodicea (d. 283) and Anatolius of Constantinople (d. 458; the first Patriarch of Constantinople), share the same linguistic origin. Names The oldest known name for any region within Anatolia is related to its central area, known as the "Land of Hatti" – a designation that was initially used for the land of ancient Hattians, but later became the most common name for the entire territory under the rule of ancient Hittites. The first recorded name the Greeks used for the Anatolian peninsula, though not particularly popular at the time, was Ἀσία (Asía), perhaps from an Akkadian expression for the "sunrise" or possibly echoing the name of the Assuwa league in western Anatolia. The Romans used it as the name of their province, comprising the west of the peninsula plus the nearby Aegean Islands. As the name "Asia" broadened its scope to apply to the vaster region east of the Mediterranean, some Greeks in Late Antiquity came to use the name Asia Minor (Μικρὰ Ἀσία, Mikrà Asía), meaning "Lesser Asia", to refer to present-day Anatolia, whereas the administration of the Empire preferred the description Ἀνατολή (Anatolḗ "the East"). The endonym Ῥωμανία (Rōmanía "the land of the Romans, i.e. the Eastern Roman Empire") was understood as another name for the province by the invading Seljuq Turks, who founded a Sultanate of Rûm in 1077. Thus (land of the) Rûm became another name for Anatolia. 
By the 12th century Europeans had started referring to Anatolia as Turchia. During the era of the Ottoman Empire, mapmakers outside the Empire referred to the mountainous plateau in eastern Anatolia as Armenia. Other contemporary sources called the same area Kurdistan. Geographers have variously used the terms East Anatolian Plateau and Armenian Plateau to refer to the region, although the territory encompassed by each term largely overlaps with the other. According to archaeologist Lori Khatchadourian, this difference in terminology "primarily result[s] from the shifting political fortunes and cultural trajectories of the region since the nineteenth century." Turkey's First Geography Congress in 1941 created two geographical regions of Turkey to the east of the Gulf of Iskenderun-Black Sea line, the Eastern Anatolia Region and the Southeastern Anatolia Region, the former largely corresponding to the western part of the Armenian Highlands, the latter to the northern part of the Mesopotamian plain. According to Richard Hovannisian, this changing of toponyms was "necessary to obscure all evidence" of the Armenian presence as part of the policy of Armenian genocide denial embarked upon by the newly established Turkish government and what Hovannisian calls its "foreign collaborators." History Prehistoric Anatolia Human habitation in Anatolia dates back to the Paleolithic. Neolithic settlements include Çatalhöyük, Çayönü, Nevali Cori, Aşıklı Höyük, Boncuklu Höyük, Hacilar, Göbekli Tepe, Norşuntepe, Kosk, and Mersin. Çatalhöyük (7000 BCE) is considered the most advanced of these. Neolithic Anatolia has been proposed as the homeland of the Indo-European language family, although linguists tend to favour a later origin in the steppes north of the Black Sea. However, it is clear that the Anatolian languages, the earliest attested branch of Indo-European, have been spoken in Anatolia since at least the 19th century BCE. 
Ancient Anatolia The earliest historical data related to Anatolia appear during the Bronze Age and continue throughout the Iron Age. The most ancient period in the history of Anatolia spans from the emergence of ancient Hattians, up to the conquest of Anatolia by the Achaemenid Empire in the 6th century BCE. Hattians and Hurrians The earliest historically attested populations of Anatolia were the Hattians in central Anatolia, and Hurrians further to the east. The Hattians were an indigenous people, whose main center was the city of Hattush. The affiliation of the Hattian language remains unclear, while the Hurrian language belongs to the distinctive family of Hurro-Urartian languages. All of those languages are extinct; relationships with indigenous languages of the Caucasus have been proposed, but are not generally accepted. The region became famous for exporting raw materials. Organized trade between Anatolia and Mesopotamia started to emerge during the period of the Akkadian Empire, and was continued and intensified during the period of the Old Assyrian Empire, between the 21st and the 18th centuries BCE. Assyrian traders were bringing tin and textiles in exchange for copper, silver or gold. Cuneiform records, dated to circa the 20th century BCE, found in Anatolia at the Assyrian colony of Kanesh, use an advanced system of trading computations and credit lines. Hittite Anatolia (18th–12th century BCE) Unlike the Akkadians and Assyrians, whose Anatolian trading posts were peripheral to their core lands in Mesopotamia, the Hittites were centered at Hattusa (modern Boğazkale) in north-central Anatolia by the 17th century BCE. They were speakers of an Indo-European language, the Hittite language, or nesili (the language of Nesa) in Hittite. The Hittites originated from local ancient cultures that grew in Anatolia, in addition to the arrival of Indo-European languages. 
Attested for the first time in the Assyrian tablets of Nesa around 2000 BCE, they conquered Hattusa in the 18th century BCE, imposing themselves over Hattian- and Hurrian-speaking populations. According to the widely accepted Kurgan theory on the Proto-Indo-European homeland, however, the Hittites (along with the other Indo-European ancient Anatolians) were themselves relatively recent immigrants to Anatolia from the north. However, they did not necessarily displace the population genetically; they assimilated into the former peoples' culture, preserving the Hittite language. The Hittites adopted the Mesopotamian cuneiform script. In the Late Bronze Age, the Hittite New Kingdom (c. 1650 BCE) was founded, becoming an empire in the 14th century BCE after the conquest of Kizzuwatna in the south-east and the defeat of the Assuwa league in western Anatolia. The empire reached its height in the 13th century BCE, controlling much of Asia Minor, northwestern Syria, and northwest upper Mesopotamia. However, the Hittite advance toward the Black Sea coast was halted by the semi-nomadic pastoralist and tribal Kaskians, a non-Indo-European people who had earlier displaced the Palaic-speaking Indo-Europeans. Much of the history of the Hittite Empire concerned war with the rival empires of Egypt, Assyria and the Mitanni. The Egyptians eventually withdrew from the region after failing to gain the upper hand over the Hittites and becoming wary of the power of Assyria, which had destroyed the Mitanni Empire. The Assyrians and Hittites were then left to battle over control of eastern and southern Anatolia and colonial territories in Syria. The Assyrians had better success than the Egyptians, annexing much Hittite (and Hurrian) territory in these regions. 
Post-Hittite Anatolia (12th–6th century BCE) After 1180 BCE, during the Late Bronze Age collapse, the Hittite empire disintegrated into several independent Syro-Hittite states, subsequent to losing much territory to the Middle Assyrian Empire and being finally overrun by the Phrygians, another Indo-European people who are believed to have migrated from the Balkans. The Phrygian expansion into southeast Anatolia was eventually halted by the Assyrians, who controlled that region. Luwians Another Indo-European people, the Luwians, rose to prominence in central and western Anatolia circa 2000 BCE. Their language belonged to the same linguistic branch as Hittite. The general consensus amongst scholars is that Luwian was spoken across a large area of western Anatolia, including (possibly) Wilusa (Troy), the Seha River Land (to be identified with the Hermos and/or Kaikos valley), and the kingdom of Mira-Kuwaliya with its core territory of the Maeander valley. From the 9th century BCE, Luwian regions coalesced into a number of states such as Lydia, Caria, and Lycia, all of which had Hellenic influence. Arameans Arameans encroached over the borders of south-central Anatolia in the century or so after the fall of the Hittite empire, and some of the Syro-Hittite states in this region became an amalgam of Hittites and Arameans. Neo-Assyrian Empire From the 10th to late 7th centuries BCE, much of Anatolia (particularly the southeastern regions) fell to the Neo-Assyrian Empire, including all of the Syro-Hittite states, Tabal, the Kingdom of Commagene, the Cimmerians and Scythians, and swathes of Cappadocia. The Neo-Assyrian empire collapsed due to a bitter series of civil wars followed by a combined attack by Medes, Persians, Scythians and their own Babylonian relations. The last Assyrian city to fall was Harran in southeast Anatolia. 
This city was the birthplace of the last king of Babylon, the Assyrian Nabonidus and his son and regent Belshazzar. Much of the region then fell to the short-lived Iran-based Median Empire, with the Babylonians and Scythians briefly appropriating some territory. Cimmerian and Scythian invasions From the late 8th century BCE, a new wave of Indo-European-speaking raiders entered northern and northeast Anatolia: the Cimmerians and Scythians. The Cimmerians overran Phrygia and the Scythians threatened to do the same to Urartu and Lydia, before both were finally checked by the Assyrians. Early Greek presence The north-western coast of Anatolia was inhabited by Greeks of the Achaean/Mycenaean culture from the 20th century BCE, related to the Greeks of southeastern Europe and the Aegean. Beginning with the Bronze Age collapse at the end of the 2nd millennium BCE, the west coast of Anatolia was settled by Ionian Greeks, usurping the area of the related but earlier Mycenaean Greeks. Over several centuries, numerous Ancient Greek city-states were established on the coasts of Anatolia. Greeks started Western philosophy on the western coast of Anatolia (Pre-Socratic philosophy). Classical Anatolia In classical antiquity, Anatolia was described by Herodotus and later historians as divided into regions that were diverse in culture, language and religious practices. The northern regions included Bithynia, Paphlagonia and Pontus; to the west were Mysia, Lydia and Caria; and Lycia, Pamphylia and Cilicia belonged to the southern shore. There were also several inland regions: Phrygia, Cappadocia, Pisidia and Galatia. Languages spoken included the late surviving Anatolic languages Isaurian and Pisidian, Greek in Western and coastal regions, Phrygian spoken until the 7th century CE, local variants of Thracian in the Northwest, the Galatian variant of Gaulish in Galatia until the 6th century CE, Cappadocian and Armenian in the East, and Kartvelian languages in the Northeast. 
Anatolia is known as the birthplace of minted coinage (as opposed to unminted coinage, which first appears in Mesopotamia at a much earlier date) as a medium of exchange, some time in the 7th century BCE in Lydia. The use of minted coins continued to flourish during the Greek and Roman eras. During the 6th century BCE, all of Anatolia was conquered by the Persian Achaemenid Empire, the Persians having usurped the Medes as the dominant dynasty in Iran. In 499 BCE, the Ionian city-states on the west coast of Anatolia rebelled against Persian rule. The Ionian Revolt, as it became known, though quelled, initiated the Greco-Persian Wars, which ended in a Greek victory in 449 BCE, and the Ionian cities regained their independence. By the Peace of Antalcidas (387 BCE), which ended the Corinthian War, Persia regained control over Ionia. In 334 BCE, the Macedonian Greek king Alexander the Great conquered the peninsula from the Achaemenid Persian Empire. Alexander's conquest opened up the interior of Asia Minor to Greek settlement and influence. Following the death of Alexander and the breakup of his empire, Anatolia was ruled by a series of Hellenistic kingdoms, such as the Attalids of Pergamum and the Seleucids, the latter controlling most of Anatolia. A period of peaceful Hellenization followed, such that the local Anatolian languages had been supplanted by Greek by the 1st century BCE. In 133 BCE the last Attalid king bequeathed his kingdom to the Roman Republic, and western and central Anatolia came under Roman control, but Hellenistic culture remained predominant. Further annexations by Rome, in particular of the Kingdom of Pontus by Pompey, brought all of Anatolia under Roman control, except for the eastern frontier with the Parthian Empire, which remained unstable for centuries, causing a series of wars, culminating in the Roman-Parthian Wars. Early Christian Period After the division of the Roman Empire, Anatolia became part of the East Roman, or Byzantine Empire. 
Anatolia was one of the first places where Christianity spread, so that by the 4th century CE, western and central Anatolia were overwhelmingly Christian and Greek-speaking. For the next 600 years, while Imperial possessions in Europe were subjected to barbarian invasions, Anatolia would be the center of the Hellenic world. It was one of the wealthiest and most densely populated places in the Late Roman Empire. Anatolia's wealth grew during the 4th and 5th centuries thanks, in part, to the Pilgrim's Road that ran through the peninsula. Literary evidence about the rural landscape stems from the hagiographies of the 6th-century Nicholas of Sion and the 7th-century Theodore of Sykeon. Large urban centers included Ephesus, Pergamum, Sardis and Aphrodisias. Scholars continue to debate the cause of urban decline in the 6th and 7th centuries, variously attributing it to the Plague of Justinian (541), the 7th-century Persian incursion, and the Arab conquest of the Levant. In the ninth and tenth centuries a resurgent Byzantine Empire regained its lost territories, including even long-lost territory such as Armenia and Syria (ancient Aram). Medieval Period In the 10 years following the Battle of Manzikert in 1071, the Seljuk Turks from Central Asia migrated over large areas of Anatolia, with particular concentrations around the northwestern rim. The Turkish language and the Islamic religion were gradually introduced as a result of the Seljuk conquest, and this period marks the start of Anatolia's slow transition from predominantly Christian and Greek-speaking, to predominantly Muslim and Turkish-speaking (although ethnic groups such as Armenians, Greeks, and Assyrians remained numerous and retained Christianity and their native languages). In the following century, the Byzantines managed to reassert their control in western and northern Anatolia. 
Control of Anatolia was then split between the Byzantine Empire and the Seljuk Sultanate of Rûm, with the Byzantine holdings gradually being reduced. In 1255, the Mongols swept through eastern and central Anatolia, and would remain until 1335. The Ilkhanate garrison was stationed near Ankara. After the decline of the Ilkhanate from 1335 to 1353, the Mongol Empire's legacy in the region was the Uyghur Eretna Dynasty, which was overthrown by Kadi Burhan al-Din in 1381. By the end of the 14th century, most of Anatolia was controlled by various Anatolian beyliks. Smyrna fell in 1330, and the last Byzantine stronghold in Anatolia, Philadelphia, fell in 1390. The Turkmen Beyliks were under the control of the Mongols, at least nominally, through declining Seljuk sultans. The Beyliks did not mint coins in the names of their own leaders while they remained under the suzerainty of the Mongol Ilkhanids. The Osmanli ruler Osman I was the first Turkish ruler to mint coins in his own name, in the 1320s; they bear the legend "Minted by Osman son of Ertugrul". Since the minting of coins was a prerogative accorded in Islamic practice only to a sovereign, it can be considered that the Osmanli, or Ottoman Turks, had become formally independent from the Mongol Khans. Ottoman Empire Among the Turkish leaders, the Ottomans emerged as a great power under Osman I and his son Orhan I. The Anatolian beyliks were successively absorbed into the rising Ottoman Empire during the 15th century. It is not well understood how the Osmanlı, or Ottoman Turks, came to dominate their neighbours, as the history of medieval Anatolia is still little known. The Ottomans completed the conquest of the peninsula in 1517 with the taking of Halicarnassus (modern Bodrum) from the Knights of Saint John. 
Modern times With the acceleration of the decline of the Ottoman Empire in the early 19th century, and as a result of the expansionist policies of the Russian Empire in the Caucasus, many Muslim nations and groups in that region, mainly Circassians, Tatars, Azeris, Lezgis, Chechens and several Turkic groups left their homelands and settled in Anatolia. As the Ottoman Empire further shrank in the Balkan regions and then fragmented during the Balkan Wars, many of the non-Christian populations of its former possessions, mainly Balkan Muslims (Bosnian Muslims, Albanians, Turks, Muslim Bulgarians and Greek Muslims such as the Vallahades from Greek Macedonia), were resettled in various parts of Anatolia, mostly in formerly Christian villages. A continuous reverse migration had occurred since the early 19th century, when Greeks from Anatolia, Constantinople and the Pontus area migrated toward the newly independent Kingdom of Greece, and also towards the United States, the southern part of the Russian Empire, Latin America, and the rest of Europe. Following the Russo-Persian Treaty of Turkmenchay (1828) and the incorporation of Eastern Armenia into the Russian Empire, another migration involved the large Armenian population of Anatolia, which recorded significant migration rates from Western Armenia (Eastern Anatolia) toward the Russian Empire, especially toward its newly established Armenian provinces. Anatolia remained multi-ethnic until the early 20th century (see the rise of nationalism under the Ottoman Empire). During World War I, the Armenian genocide, the Greek genocide (especially in Pontus), and the Assyrian genocide almost entirely removed the ancient indigenous communities of Armenian, Greek, and Assyrian populations in Anatolia and surrounding regions. Following the Greco-Turkish War of 1919–1922, most remaining ethnic Anatolian Greeks were forced out during the 1923 population exchange between Greece and Turkey. 
Of the remainder, most have left Turkey since then, leaving fewer than 5,000 Greeks in Anatolia today. Geology Anatolia's terrain is structurally complex. A central massif composed of uplifted blocks and downfolded troughs, covered by recent deposits and giving the appearance of a plateau with rough terrain, is wedged between two folded mountain ranges that converge in the east. True lowland is confined to a few narrow coastal strips along the Aegean, Mediterranean, and the Black Sea coasts. Flat or gently sloping land is rare and largely confined to the deltas of the Kızıl River, the coastal plains of Çukurova and the valley floors of the Gediz River and the Büyük Menderes River as well as some interior high plains in Anatolia, mainly around Lake Tuz (Salt Lake) and the Konya Basin (Konya Ovasi). There are two mountain ranges in southern Anatolia: the Taurus and the Zagros mountains. Climate Anatolia has a varied range of climates. The central plateau is characterized by a continental climate, with hot summers and cold snowy winters. The south and west coasts enjoy a typical Mediterranean climate, with mild rainy winters, and warm dry summers. The Black Sea and Marmara coasts have a temperate oceanic climate, with cool foggy summers and much rainfall throughout the year. Ecoregions There is a diverse number of plant and animal communities. The mountains and coastal plain of northern Anatolia experience a humid and mild climate. There are temperate broadleaf, mixed and coniferous forests. The central and eastern plateau, with its drier continental climate, has deciduous forests and forest steppes. Western and southern Anatolia, which have a Mediterranean climate, contain Mediterranean forests, woodlands, and scrub ecoregions. Euxine-Colchic deciduous forests: These temperate broadleaf and mixed forests extend across northern Anatolia, lying between the mountains of northern Anatolia and the Black Sea. 
They include the enclaves of temperate rainforest lying along the southeastern coast of the Black Sea in eastern Turkey and Georgia. Northern Anatolian conifer and deciduous forests: These forests occupy the mountains of northern Anatolia, running east and west between the coastal Euxine-Colchic forests and the drier, continental climate forests of central and eastern Anatolia. Central Anatolian deciduous forests: These forests of deciduous oaks and evergreen pines cover the plateau of central Anatolia. Central Anatolian steppe: These dry grasslands cover the drier valleys and surround the saline lakes of central Anatolia, and include halophytic (salt tolerant) plant communities. Eastern Anatolian deciduous forests: This ecoregion occupies the plateau of eastern Anatolia. The drier and more continental climate is beneficial for steppe-forests dominated by deciduous oaks, with areas of shrubland, montane forest, and valley forest. Anatolian conifer and deciduous mixed forests: These forests occupy the western, Mediterranean-climate portion of the Anatolian plateau. Pine forests and mixed pine and oak woodlands and shrublands are predominant. Aegean and Western Turkey sclerophyllous and mixed forests: These Mediterranean-climate forests occupy the coastal lowlands and valleys of western Anatolia bordering the Aegean Sea. The ecoregion has forests of Turkish pine (Pinus brutia), oak forests and woodlands, and maquis shrubland of Turkish pine and evergreen sclerophyllous trees and shrubs, including Olive (Olea europaea), Strawberry Tree (Arbutus unedo), Arbutus andrachne, Kermes Oak (Quercus coccifera), and Bay Laurel (Laurus nobilis). Southern Anatolian montane conifer and deciduous forests: These mountain forests occupy the Mediterranean-climate Taurus Mountains of southern Anatolia. Conifer forests are predominant, chiefly Anatolian black pine (Pinus nigra), Cedar of Lebanon (Cedrus libani), Taurus fir (Abies cilicica), and juniper (Juniperus foetidissima and J. 
excelsa). Broadleaf trees include oaks, hornbeam, and maples. Eastern Mediterranean conifer-sclerophyllous-broadleaf forests: This ecoregion occupies the coastal strip of southern Anatolia between the Taurus Mountains and the Mediterranean Sea. Plant communities include broadleaf sclerophyllous maquis shrublands, forests of Aleppo Pine (Pinus halepensis) and Turkish Pine (Pinus brutia), and dry oak (Quercus spp.) woodlands and steppes. Demographics See also Aeolis Anatolian hypothesis Anatolianism Anatolian leopard Anatolian Plate Anatolian Shepherd Ancient kingdoms of Anatolia Antigonid dynasty Doris (Asia Minor) Empire of Nicaea Empire of Trebizond Gordium Lycaonia Midas Miletus Myra Pentarchy Pontic Greeks Rumi Saint Anatolia Saint John Saint Nicholas Saint Paul Seleucid Empire Seven churches of Asia Seven Sleepers Tarsus Troad Turkic migration Notes References Citations Sources Further reading Akat, Uücel, Neşe Özgünel, and Aynur Durukan. 1991. Anatolia: A World Heritage. Ankara: Kültür Bakanliǧi. Brewster, Harry. 1993. Classical Anatolia: The Glory of Hellenism. London: I.B. Tauris. Donbaz, Veysel, and Şemsi Güner. 1995. The Royal Roads of Anatolia. Istanbul: Dünya. Dusinberre, Elspeth R. M. 2013. Empire, Authority, and Autonomy In Achaemenid Anatolia. Cambridge: Cambridge University Press. Gates, Charles, Jacques Morin, and Thomas Zimmermann. 2009. Sacred Landscapes In Anatolia and Neighboring Regions. Oxford: Archaeopress. Mikasa, Takahito, ed. 1999. Essays On Ancient Anatolia. Wiesbaden: Harrassowitz. Takaoğlu, Turan. 2004. Ethnoarchaeological Investigations In Rural Anatolia. İstanbul: Ege Yayınları. Taracha, Piotr. 2009. Religions of Second Millennium Anatolia. Wiesbaden: Harrassowitz. Taymaz, Tuncay, Y. Yilmaz, and Yildirim Dilek. 2007. The Geodynamics of the Aegean and Anatolia. London: Geological Society. 
856
https://en.wikipedia.org/wiki/Apple%20Inc.
Apple Inc.
Apple Inc. is an American multinational technology company that specializes in consumer electronics, software and online services. Apple is the largest information technology company by revenue (totaling in 2021) and, as of January 2021, it is the world's most valuable company, the fourth-largest personal computer vendor by unit sales and second-largest mobile phone manufacturer. It is one of the Big Five American information technology companies, alongside Alphabet, Amazon, Meta, and Microsoft. Apple was founded as Apple Computer Company on April 1, 1976, by Steve Jobs, Steve Wozniak and Ronald Wayne to develop and sell Wozniak's Apple I personal computer. It was incorporated by Jobs and Wozniak as Apple Computer, Inc. in 1977, and the company's next computer, the Apple II, became a best seller. Apple went public in 1980, to instant financial success. The company went on to develop new computers featuring innovative graphical user interfaces, including the original Macintosh, announced in a critically acclaimed advertisement, "1984", directed by Ridley Scott. By 1985, the high cost of its products and power struggles between executives caused problems. Wozniak stepped back from Apple amicably, while Jobs resigned to found NeXT, taking some Apple employees with him. As the market for personal computers expanded and evolved throughout the 1990s, Apple lost considerable market share to the lower-priced duopoly of the Microsoft Windows operating system on Intel-powered PC clones (also known as "Wintel"). In 1997, weeks away from bankruptcy, the company bought NeXT to resolve Apple's unsuccessful operating system strategy and entice Jobs back to the company. 
Over the next decade, Jobs guided Apple back to profitability through a number of tactics including introducing the iMac, iPod, iPhone and iPad to critical acclaim, launching memorable advertising campaigns, opening the Apple Store retail chain, and acquiring numerous companies to broaden the company's product portfolio. Jobs resigned in 2011 for health reasons, and died two months later. He was succeeded as CEO by Tim Cook. Apple became the first publicly traded U.S. company to be valued at over $1 trillion in August 2018, then $2 trillion in August 2020, and most recently $3 trillion in January 2022. The company receives criticism regarding the labor practices of its contractors, its environmental practices, and its business ethics, including anti-competitive practices and materials sourcing. The company enjoys a high level of brand loyalty, and is ranked as one of the world's most valuable brands. History 1976–1980: Founding and incorporation Apple Computer Company was founded on April 1, 1976, by Steve Jobs, Steve Wozniak, and Ronald Wayne as a business partnership. The company's first product was the Apple I, a computer designed and hand-built entirely by Wozniak. To finance its creation, Jobs sold his only motorized means of transportation, a VW Bus, for a few hundred dollars, and Wozniak sold his HP-65 calculator for . Wozniak debuted the first prototype Apple I at the Homebrew Computer Club in July 1976. The Apple I was sold as a motherboard with CPU, RAM, and basic textual-video chips—a base kit concept which would not yet be marketed as a complete personal computer. It went on sale soon after debut for . Wozniak later said he was unaware of the coincidental mark of the beast in the number 666, and that he came up with the price because he liked "repeating digits". Apple Computer, Inc. 
was incorporated on January 3, 1977, without Wayne, who had left and sold his share of the company back to Jobs and Wozniak for $800 only twelve days after having co-founded Apple. Multimillionaire Mike Markkula provided essential business expertise and funding of to Jobs and Wozniak during the incorporation of Apple. During the first five years of operations, revenues grew exponentially, doubling about every four months. Between September 1977 and September 1980, yearly sales grew from $775,000 to $118 million, an average annual growth rate of 533%. The Apple II, also invented by Wozniak, was introduced on April 16, 1977, at the first West Coast Computer Faire. It differed from its major rivals, the TRS-80 and Commodore PET, because of its character cell-based color graphics and open architecture. While the Apple I and early Apple II models used ordinary audio cassette tapes as storage devices, they were superseded by the introduction of a -inch floppy disk drive and interface called the Disk II in 1978. The Apple II was chosen to be the desktop platform for the first "killer application" of the business world: VisiCalc, a spreadsheet program released in 1979. VisiCalc created a business market for the Apple II and gave home users an additional reason to buy an Apple II: compatibility with the office. Before VisiCalc, Apple had been a distant third place competitor to Commodore and Tandy. By the end of the 1970s, Apple had become the leading computer manufacturer in the United States. On December 12, 1980, Apple (ticker symbol "AAPL") went public selling 4.6 million shares at $22 per share ($.39 per share when adjusting for stock splits ), generating over $100 million, which was more capital than any IPO since Ford Motor Company in 1956. By the end of the day, 300 millionaires were created, from a stock price of $29 per share and a market cap of $1.778 billion. 
1980–1990: Success with Macintosh A critical moment in the company's history came in December 1979 when Jobs and several Apple employees, including human–computer interface expert Jef Raskin, visited Xerox PARC to see a demonstration of the Xerox Alto, a computer using a graphical user interface. Xerox granted Apple engineers three days of access to the PARC facilities in return for the option to buy 100,000 shares (5.6 million split-adjusted shares) of Apple at the pre-IPO price of $10 a share. After the demonstration, Jobs was immediately convinced that all future computers would use a graphical user interface, and development of a GUI began for the Apple Lisa, named after Jobs's daughter. The Lisa division would be plagued by infighting, and in 1982 Jobs was pushed off the project. The Lisa launched in 1983 and became the first personal computer sold to the public with a GUI, but was a commercial failure due to its high price and limited software titles. Jobs, angered by being pushed off the Lisa team, took over the company's Macintosh division. Wozniak and Raskin had envisioned the Macintosh as a low-cost computer with a text-based interface like the Apple II, but a plane crash in 1981 forced Wozniak to step back from the project. Jobs quickly redefined the Macintosh as a graphical system that would be cheaper than the Lisa, undercutting his former division. Jobs was also hostile to the Apple II division, which, at the time, generated most of the company's revenue. In 1984, Apple launched the Macintosh, the first personal computer to be sold without a programming language. Its debut was signified by "1984", a $1.5 million television advertisement directed by Ridley Scott that aired during the third quarter of Super Bowl XVIII on January 22, 1984. This is now hailed as a watershed event for Apple's success and was called a "masterpiece" by CNN and one of the greatest TV advertisements of all time by TV Guide. 
The advertisement created great interest in the original Macintosh, and sales were initially good, but began to taper off dramatically after the first three months as reviews started to come in. Jobs had made the decision to equip the original Macintosh with 128 kilobytes of RAM, attempting to reach a price point, which limited its speed and the software that could be used. The Macintosh would eventually ship for , a price panned by critics in light of its slow performance. In early 1985, this sales slump triggered a power struggle between Steve Jobs and CEO John Sculley, who had been hired away from Pepsi two years earlier by Jobs using the famous line, "Do you want to sell sugar water for the rest of your life or come with me and change the world?" Sculley decided to remove Jobs as the head of the Macintosh division, with unanimous support from the Apple board of directors. The board of directors instructed Sculley to contain Jobs and his ability to launch expensive forays into untested products. Rather than submit to Sculley's direction, Jobs attempted to oust him from his leadership role at Apple. Informed by Jean-Louis Gassée, Sculley found out that Jobs had been attempting to organize a boardroom coup and called an emergency meeting at which Apple's executive staff sided with Sculley and stripped Jobs of all operational duties. Jobs resigned from Apple in September 1985 and took a number of Apple employees with him to found NeXT. Wozniak had also quit his active employment at Apple earlier in 1985 to pursue other ventures, expressing his frustration with Apple's treatment of the Apple II division and stating that the company had "been going in the wrong direction for the last five years". Despite Wozniak's grievances, he officially remained employed by Apple, and to this day continues to work for the company as a representative, receiving a stipend estimated to be $120,000 per year for this role. 
Both Jobs and Wozniak remained Apple shareholders after their departures. After the departures of Jobs and Wozniak, Sculley worked to improve the Macintosh in 1985 by quadrupling the RAM and introducing the LaserWriter, the first reasonably priced PostScript laser printer. PageMaker, an early desktop publishing application taking advantage of the PostScript language, was also released by Aldus Corporation in July 1985. It has been suggested that the combination of Macintosh, LaserWriter and PageMaker was responsible for the creation of the desktop publishing market. This dominant position in the desktop publishing market allowed the company to focus on higher price points, the so-called "high-right policy" named for the position on a chart of price vs. profits. Newer models selling at higher price points offered higher profit margin, and appeared to have no effect on total sales as power users snapped up every increase in speed. Although some worried about pricing themselves out of the market, the high-right policy was in full force by the mid-1980s, notably due to Jean-Louis Gassée's mantra of "fifty-five or die", referring to the 55% profit margins of the Macintosh II. This policy began to backfire in the last years of the decade as desktop publishing programs appeared on PC clones that offered some or much of the same functionality of the Macintosh, but at far lower price points. The company lost its dominant position in the desktop publishing market and estranged many of its original consumer customer base who could no longer afford their high-priced products. The Christmas season of 1989 was the first in the company's history to have declining sales, which led to a 20% drop in Apple's stock price. During this period, the relationship between Sculley and Gassée deteriorated, leading Sculley to effectively demote Gassée in January 1990 by appointing Michael Spindler as the chief operating officer. Gassée left the company later that year. 
1990–1997: Decline and restructuring The company pivoted strategy and in October 1990 introduced three lower-cost models, the Macintosh Classic, the Macintosh LC, and the Macintosh IIsi, all of which saw significant sales due to pent-up demand. In 1991, Apple introduced the hugely successful PowerBook with a design that set the current shape for almost all modern laptops. The same year, Apple introduced System 7, a major upgrade to the Macintosh operating system, adding color to the interface and introducing new networking capabilities. The success of the lower-cost Macs and PowerBook brought increasing revenue. For some time, Apple was doing incredibly well, introducing fresh new products and generating increasing profits in the process. The magazine MacAddict named the period between 1989 and 1991 as the "first golden age" of the Macintosh. The success of Apple's lower-cost consumer models, especially the LC, also led to the cannibalization of their higher-priced machines. To address this, management introduced several new brands, selling largely identical machines at different price points, aimed at different markets: the high-end Quadra models, the mid-range Centris line, and the consumer-marketed Performa series. This led to significant market confusion, as customers did not understand the difference between models. The early 1990s also saw the discontinuation of the Apple II series, which was expensive to produce, and the company felt was still taking sales away from lower-cost Macintosh models. After the launch of the LC, Apple began encouraging developers to create applications for Macintosh rather than Apple II, and authorized salespersons to direct consumers towards Macintosh and away from Apple II. The Apple IIe was discontinued in 1993. Throughout this period, Microsoft continued to gain market share with its Windows graphical user interface that it sold to manufacturers of generally less expensive PC clones. 
While the Macintosh was more expensive, it offered a more tightly integrated user experience, but the company struggled to make the case to consumers. Apple also experimented with a number of other unsuccessful consumer-targeted products during the 1990s, including digital cameras, portable CD audio players, speakers, video game consoles, the eWorld online service, and TV appliances. Most notably, enormous resources were invested in the problem-plagued Newton tablet division, based on John Sculley's unrealistic market forecasts. Apple relied on high profit margins and never developed a clear response to Windows; instead, it sued Microsoft for using a GUI similar to the Apple Lisa in Apple Computer, Inc. v. Microsoft Corp. The lawsuit dragged on for years before it was finally dismissed. The major product flops and the rapid loss of market share to Windows sullied Apple's reputation, and in 1993 Sculley was replaced as CEO by Michael Spindler. With Spindler at the helm, Apple, IBM, and Motorola formed the AIM alliance in 1994 with the goal of creating a new computing platform (the PowerPC Reference Platform; PReP), which would use IBM and Motorola hardware coupled with Apple software. The AIM alliance hoped that PReP's performance and Apple's software would leave the PC far behind and thus counter the dominance of Windows. The same year, Apple introduced the Power Macintosh, the first of many Apple computers to use Motorola's PowerPC processor. In the wake of the alliance, Apple opened up to the idea of allowing Motorola and other companies to build Macintosh clones. Over the next two years, 75 distinct Macintosh clone models were introduced. However, by 1996 Apple executives were worried that the clones were cannibalizing sales of their own high-end computers, where profit margins were highest. In 1996, Spindler was replaced by Gil Amelio as CEO.
Hired for his reputation as a corporate rehabilitator, Amelio made deep changes, including extensive layoffs and cost-cutting. This period was also marked by numerous failed attempts to modernize the Macintosh operating system (Mac OS). The original Macintosh operating system (System 1) was not built for multitasking (running several applications at once). The company attempted to correct this by introducing cooperative multitasking in System 5, but still felt it needed a more modern approach. This led to the Pink project in 1988, A/UX that same year, Copland in 1994, and the attempted purchase of BeOS in 1996. Talks with Be stalled when the CEO, former Apple executive Jean-Louis Gassée, demanded $300 million instead of the $125 million Apple wanted to pay. Only weeks away from bankruptcy, Apple's board decided NeXTSTEP was a better choice for its next operating system and purchased NeXT in late 1996 for $429 million, bringing back Apple co-founder Steve Jobs. 1997–2007: Return to profitability The NeXT acquisition was finalized on February 9, 1997, and the board brought Jobs back to Apple as an advisor. On July 9, 1997, Jobs staged a boardroom coup that resulted in Amelio's resignation after overseeing a three-year record-low stock price and crippling financial losses. The board named Jobs as interim CEO, and he immediately began a review of the company's products. Jobs would order 70% of the company's products to be cancelled, resulting in the loss of 3,000 jobs, and taking Apple back to the core of its computer offerings. The next month, in August 1997, Steve Jobs convinced Microsoft to make a $150 million investment in Apple and a commitment to continue developing software for the Mac. The investment was seen as an "antitrust insurance policy" for Microsoft, which had recently settled with the Department of Justice over anti-competitive practices.
Jobs also ended the Mac clone deals and in September 1997 purchased the largest clone maker, Power Computing. On November 10, 1997, Apple introduced the Apple Store website, which was tied to a new build-to-order manufacturing strategy that had been successfully used by PC manufacturer Dell. The moves paid off for Jobs: at the end of his first year as CEO, the company turned a $309 million profit. On May 6, 1998, Apple introduced a new all-in-one computer reminiscent of the original Macintosh: the iMac. The iMac was a huge success for Apple, selling 800,000 units in its first five months, and ushered in major shifts in the industry by abandoning legacy technologies like the 3½-inch diskette, being an early adopter of the USB connector, and coming pre-installed with internet connectivity (the "i" in iMac) via Ethernet and a dial-up modem. The device also had a striking teardrop shape and translucent materials, designed by Jonathan Ive, who, although hired by Amelio, would go on to work collaboratively with Jobs for the next decade to chart a new course for the design of Apple's products. A little more than a year later, on July 21, 1999, Apple introduced the iBook, a laptop for consumers. It was the culmination of a strategy established by Jobs to produce only four products: refined versions of the Power Macintosh G3 desktop and PowerBook G3 laptop for professionals, along with the iMac desktop and iBook laptop for consumers. Jobs felt the small product line allowed for a greater focus on quality and innovation. At around the same time, Apple also completed numerous acquisitions to create a portfolio of digital media production software for both professionals and consumers. Apple acquired Macromedia's Key Grip digital video editing software project, which was renamed Final Cut Pro when it was launched on the retail market in April 1999. The development of Key Grip also led to Apple's release of the consumer video-editing product iMovie in October 1999.
Next, in April 2000, Apple acquired the German company Astarte, developer of the DVD authoring software DVDirector, which Apple would sell as the professional-oriented DVD Studio Pro software product; it used the same technology to create iDVD for the consumer market. In 2000, Apple purchased the SoundJam MP audio player software from Casady & Greene. Apple renamed the program iTunes, while simplifying the user interface and adding the ability to burn CDs. 2001 would be a pivotal year for Apple, with the company making three announcements that would change its course. The first came on March 24, 2001, when Apple released its new modern operating system, Mac OS X, after numerous failed modernization attempts in the early 1990s and several years of development. Mac OS X was based on NeXTSTEP, OPENSTEP, and BSD Unix, with Apple aiming to combine the stability, reliability, and security of Unix with the ease of use afforded by an overhauled user interface, heavily influenced by NeXTSTEP. To aid users in migrating from Mac OS 9, the new operating system allowed the use of OS 9 applications within Mac OS X via the Classic Environment. In May 2001, the company opened its first two Apple Store retail locations in Virginia and California, offering an improved presentation of the company's products. At the time, many speculated that the stores would fail, but they went on to become highly successful, and the first of more than 500 stores around the world. On October 23, 2001, Apple debuted the iPod portable digital audio player. The product, which was first sold on November 10, 2001, was phenomenally successful, with over 100 million units sold within six years. In 2003, Apple's iTunes Store was introduced. The service offered music downloads for $0.99 a song and integration with the iPod.
The iTunes Store quickly became the market leader in online music services, with over five billion downloads by June 19, 2008. Two years later, the iTunes Store was the world's largest music retailer. In 2002, Apple purchased Nothing Real for their advanced digital compositing application Shake, as well as Emagic for the music productivity application Logic. The purchase of Emagic made Apple the first computer manufacturer to own a music software company. The acquisition was followed by the development of Apple's consumer-level GarageBand application. The release of iPhoto in the same year completed the iLife suite. At the Worldwide Developers Conference keynote address on June 6, 2005, Jobs announced that Apple would move away from PowerPC processors, and the Mac would transition to Intel processors in 2006. On January 10, 2006, the new MacBook Pro and iMac became the first Apple computers to use Intel's Core Duo CPU. By August 7, 2006, Apple had made the transition to Intel chips for the entire Mac product line, over one year sooner than announced. The Power Mac, iBook, and PowerBook brands were retired during the transition; the Mac Pro, MacBook, and MacBook Pro became their respective successors. On April 29, 2009, The Wall Street Journal reported that Apple was building its own team of engineers to design microchips. Apple also introduced Boot Camp in 2006 to help users install Windows XP or Windows Vista on their Intel Macs alongside Mac OS X. Apple's success during this period was evident in its stock price. Between early 2003 and 2006, the price of Apple's stock increased more than tenfold, from around $6 per share (split-adjusted) to over $80. When Apple surpassed Dell's market cap in January 2006, Jobs sent an email to Apple employees saying Dell's CEO Michael Dell should eat his words. Nine years prior, Dell had said that if he ran Apple he would "shut it down and give the money back to the shareholders".
2007–2011: Success with mobile devices During his keynote speech at the Macworld Expo on January 9, 2007, Jobs announced that Apple Computer, Inc. would thereafter be known as "Apple Inc.", because the company had shifted its emphasis from computers to consumer electronics. This event also saw the announcement of the iPhone and the Apple TV. The company sold 270,000 iPhone units during the first 30 hours of sales, and the device was called "a game changer for the industry". In an article posted on Apple's website on February 6, 2007, Jobs wrote that Apple would be willing to sell music on the iTunes Store without digital rights management (DRM), thereby allowing tracks to be played on third-party players, if record labels would agree to drop the technology. On April 2, 2007, Apple and EMI jointly announced the removal of DRM technology from EMI's catalog in the iTunes Store, effective in May 2007. Other record labels eventually followed suit, and Apple published a press release in January 2009 announcing that all songs on the iTunes Store were available without FairPlay DRM. In July 2008, Apple launched the App Store to sell third-party applications for the iPhone and iPod Touch. Within a month, the store sold 60 million applications and registered an average daily revenue of $1 million, with Jobs speculating in August 2008 that the App Store could become a billion-dollar business for Apple. By October 2008, Apple was the third-largest mobile handset supplier in the world due to the popularity of the iPhone. On January 14, 2009, Jobs announced in an internal memo that he would be taking a six-month medical leave of absence from Apple until the end of June 2009 and would spend the time focusing on his health.
In the email, Jobs stated that "the curiosity over my personal health continues to be a distraction not only for me and my family, but everyone else at Apple as well", and explained that the break would allow the company "to focus on delivering extraordinary products". Though Jobs was absent, Apple recorded its best non-holiday quarter (Q1 FY 2009) during the recession, with revenue of $8.16 billion and profit of $1.21 billion. After years of speculation and multiple rumored "leaks", Apple unveiled a large-screen, tablet-like media device known as the iPad on January 27, 2010. The iPad ran the same touch-based operating system as the iPhone, and all iPhone apps were compatible with the iPad. This gave the iPad a large app catalog at launch, despite the very short development time available before its release. Later that year, on April 3, 2010, the iPad was launched in the US. It sold more than 300,000 units on its first day, and 500,000 by the end of the first week. In May of the same year, Apple's market cap exceeded that of competitor Microsoft for the first time since 1989. In June 2010, Apple released the iPhone 4, which introduced video calling using FaceTime, multitasking, and a new uninsulated stainless steel design that acted as the phone's antenna. Later that year, Apple again refreshed its iPod line of MP3 players by introducing a multi-touch iPod Nano, an iPod Touch with FaceTime, and an iPod Shuffle that brought back the clickwheel buttons of earlier generations. It also introduced the smaller, cheaper second-generation Apple TV, which allowed renting of movies and shows. On January 17, 2011, Jobs announced in an internal Apple memo that he would take another medical leave of absence for an indefinite period to allow him to focus on his health. Chief Operating Officer Tim Cook assumed Jobs's day-to-day operations at Apple, although Jobs would still remain "involved in major strategic decisions". Apple became the most valuable consumer-facing brand in the world.
In June 2011, Jobs surprisingly took the stage and unveiled iCloud, an online storage and syncing service for music, photos, files, and software which replaced MobileMe, Apple's previous attempt at content syncing. This would be the last product launch Jobs would attend before his death. On August 24, 2011, Jobs resigned his position as CEO of Apple. He was replaced by Cook, and Jobs became Apple's chairman. Apple did not have a chairman at the time and instead had two co-lead directors, Andrea Jung and Arthur D. Levinson, who continued with those titles until Levinson replaced Jobs as chairman of the board in November after Jobs's death. 2011–present: Post–Jobs era, Tim Cook's leadership On October 5, 2011, Steve Jobs died, marking the end of an era for Apple. The first major product announcement by Apple following Jobs's passing occurred on January 19, 2012, when Apple's Phil Schiller introduced iBooks Textbooks for iOS and iBooks Author for Mac OS X in New York City. Jobs had stated in his biography that he wanted to reinvent the textbook industry and education. From 2011 to 2012, Apple released the iPhone 4S and iPhone 5, which featured improved cameras, an intelligent software assistant named Siri, and cloud-synced data with iCloud; the third and fourth generation iPads, which featured Retina displays; and the iPad Mini, which featured a 7.9-inch screen in contrast to the iPad's 9.7-inch screen. These launches were successful, with the iPhone 5 (released September 21, 2012) becoming Apple's biggest iPhone launch with over two million pre-orders, and sales of three million iPads in three days following the launch of the iPad Mini and fourth generation iPad (released November 3, 2012). Apple also released a third-generation 13-inch MacBook Pro with a Retina display and new iMac and Mac Mini computers. On August 20, 2012, Apple's rising stock price increased the company's market capitalization to a then-record $624 billion.
This beat the non-inflation-adjusted record for market capitalization previously set by Microsoft in 1999. On August 24, 2012, a US jury ruled that Samsung should pay Apple $1.05 billion (£665m) in damages in an intellectual property lawsuit. Samsung appealed the damages award, which the court reduced by $450 million; the court also granted Samsung's request for a new trial. On November 10, 2012, Apple confirmed a global settlement that dismissed all existing lawsuits between Apple and HTC up to that date, in favor of a ten-year license agreement for current and future patents between the two companies. It was predicted that Apple would make $280 million a year from this deal with HTC. In May 2014, the company confirmed its intent to acquire Dr. Dre and Jimmy Iovine's audio company Beats Electronics—producer of the "Beats by Dr. Dre" line of headphones and speaker products, and operator of the music streaming service Beats Music—for $3 billion, and to sell their products through Apple's retail outlets and resellers. Iovine believed that Beats had always "belonged" with Apple, as the company modeled itself after Apple's "unmatched ability to marry culture and technology." The acquisition was the largest purchase in Apple's history. During a press event on September 9, 2014, Apple introduced a smartwatch, the Apple Watch. Initially, Apple marketed the device as a fashion accessory and a complement to the iPhone that would allow people to look at their smartphones less. Over time, the company has focused on developing health- and fitness-oriented features on the watch, in an effort to compete with dedicated activity trackers. In January 2016, it was announced that one billion Apple devices were in active use worldwide. On June 6, 2016, Fortune released its Fortune 500 list of companies ranked by revenue. Based on the trailing fiscal year (2015), Apple was the top tech company on the list, ranking third overall with $233 billion in revenue.
This represented a move up of two spots from the previous year's list. In June 2017, Apple announced the HomePod, a smart speaker intended to compete with Sonos, Google Home, and Amazon Echo. Towards the end of the year, TechCrunch reported that Apple was acquiring Shazam, a company that had introduced its products at WWDC and specialized in music, TV, film and advertising recognition. The acquisition was confirmed a few days later, reportedly costing Apple $400 million, with media reports noting that the purchase looked like a move to acquire data and tools to bolster the Apple Music streaming service. The purchase was approved by the European Union in September 2018. Also in June 2017, Apple appointed Jamie Erlicht and Zack Van Amburg to head the newly formed worldwide video unit. In November 2017, Apple announced it was branching out into original scripted programming: a drama series starring Jennifer Aniston and Reese Witherspoon, and a reboot of the anthology series Amazing Stories with Steven Spielberg. In June 2018, Apple signed the Writers Guild of America's minimum basic agreement and signed Oprah Winfrey to a multi-year content partnership. Additional partnerships for original series include Sesame Workshop and DHX Media and its subsidiary Peanuts Worldwide, as well as a partnership with A24 to create original films. On August 19, 2020, Apple's share price briefly topped $467.77, making Apple the first US company with a market capitalization of $2 trillion. During its annual WWDC keynote speech on June 22, 2020, Apple announced it would move away from Intel processors, and the Mac would transition to processors developed in-house. The announcement was expected by industry analysts, and it has been noted that Macs featuring Apple's processors would allow for significant increases in performance over current Intel-based models.
On November 10, 2020, the MacBook Air, MacBook Pro, and the Mac Mini became the first Mac devices powered by an Apple-designed processor, the Apple M1. Products Macintosh Macintosh, commonly known as Mac, is Apple's line of personal computers that use the company's proprietary macOS operating system. Personal computers were Apple's original business line, but they account for only about 10 percent of the company's revenue. The company is in the process of switching Mac computers from Intel processors to Apple silicon, a custom-designed system on a chip platform. There are five Macintosh computer families in production: the iMac, a consumer all-in-one desktop computer introduced in 1998; the Mac Mini, a consumer sub-desktop computer introduced in 2005; the MacBook Pro, a professional notebook introduced in 2006; the Mac Pro, a professional workstation introduced in 2006; and the MacBook Air, a consumer ultra-thin notebook introduced in 2008. Apple also sells a variety of accessories for Macs, including the Pro Display XDR, Magic Mouse, Magic Trackpad, and Magic Keyboard. The company also develops several pieces of software that are included in the purchase price of a Mac, including the Safari web browser, the iMovie video editor, the GarageBand audio editor and the iWork productivity suite. Additionally, the company sells several professional software applications, including the Final Cut Pro video editor, Motion for video animations, the Logic Pro audio editor, MainStage for live audio production, and Compressor for media compression and encoding. iPhone iPhone is Apple's line of smartphones that use the company's proprietary iOS operating system, derived from macOS. The first-generation iPhone was announced by then-Apple CEO Steve Jobs on January 9, 2007. Since then, Apple has annually released new iPhone models and iOS updates.
The iPhone has a user interface built around a multi-touch screen, which at the time of its introduction was described as "revolutionary" and a "game-changer" for the mobile phone industry. The device has been credited with popularizing the smartphone and slate form factor, and with creating a large market for smartphone apps, or "app economy". iOS is one of the two largest smartphone platforms in the world alongside Android. The iPhone has generated large profits for the company, and is credited with helping to make Apple one of the world's most valuable publicly traded companies. The iPhone accounts for more than half of the company's revenue. In all, 33 iPhone models have been produced, with five smartphone families in production: the iPhone 13, iPhone 13 Pro, iPhone 12, iPhone SE (2nd generation), and iPhone 11. iPad iPad is Apple's line of tablet computers that use the company's proprietary iPadOS operating system, derived from macOS and iOS. The first-generation iPad was announced on January 27, 2010. The iPad took the multi-touch user interface first introduced in the iPhone, and adapted it to a larger screen, marketed for interaction with multimedia formats including newspapers, books, photos, videos, music, documents, video games, and most existing iPhone apps. Earlier generations of the iPad used the same iOS operating system as the company's smartphones before iPadOS was split off in 2019. Apple has sold more than 500 million iPads, though sales peaked in 2013. However, the iPad remains the most popular tablet computer by sales, and accounted for nine percent of the company's revenue. In recent years, Apple has started offering more powerful versions of the device, with the current iPad Pro sharing the same Apple silicon as Macintosh computers, along with a smaller version of the device called iPad mini, and an upgraded version called iPad Air.
There are four iPad families in production: the iPad (9th generation), iPad mini (6th generation), iPad Pro (5th generation), and iPad Air (4th generation). Wearables, Home and Accessories Apple also makes several other products that it categorizes as "Wearables, Home and Accessories." These products include the AirPods line of wireless headphones, Apple TV digital media players, Apple Watch smartwatches, Beats headphones, HomePod Mini smart speakers, and the iPod touch, the last remaining device in Apple's successful line of iPod portable media players. This broad line of products comprises about 11% of the company's revenues. Services Apple also offers a broad line of services that it earns revenue from, including advertising in the App Store and Apple News app, the AppleCare+ extended warranty plan, the iCloud+ cloud-based data storage service, payment services through the Apple Card credit card and the Apple Pay processing platform, and digital content services including Apple Books, Apple Fitness+, Apple Music, Apple News+, Apple TV+, and the iTunes Store. Services comprise about 19% of the company's revenue. Many of the services have been launched since 2019, when Apple announced it would be making a concerted effort to expand its service revenues. Corporate identity Logo According to Steve Jobs, the company's name was inspired by his visit to an apple farm while on a fruitarian diet. Jobs thought the name "Apple" was "fun, spirited and not intimidating". Apple's first logo, designed by Ron Wayne, depicts Sir Isaac Newton sitting under an apple tree. It was almost immediately replaced by Rob Janoff's "rainbow Apple", the now-familiar rainbow-colored silhouette of an apple with a bite taken out of it. Janoff presented Jobs with several different monochromatic themes for the "bitten" logo, and Jobs immediately took a liking to it. However, Jobs insisted that the logo be colorized to humanize the company.
The logo was designed with a bite so that it would not be confused with a cherry. The colored stripes were conceived to make the logo more accessible, and to represent the fact the Apple II could generate graphics in color. This logo is often erroneously referred to as a tribute to Alan Turing, with the bite mark a reference to his method of suicide. Both Janoff and Apple deny any homage to Turing in the design of the logo. On August 27, 1999 (the year following the introduction of the iMac G3), Apple officially dropped the rainbow scheme and began to use monochromatic logos nearly identical in shape to the previous rainbow incarnation. An Aqua-themed version of the monochrome logo was used from 1998 to 2003, and a glass-themed version was used from 2007 to 2013. Steve Jobs and Steve Wozniak were fans of the Beatles, but Apple Inc. had name and logo trademark issues with Apple Corps Ltd., a multimedia company started by the Beatles in 1968. This resulted in a series of lawsuits and tension between the two companies. These issues ended with the settling of their lawsuit in 2007. Advertising Apple's first slogan, "Byte into an Apple", was coined in the late 1970s. From 1997 to 2002, the slogan "Think Different" was used in advertising campaigns, and is still closely associated with Apple. Apple also has slogans for specific product lines — for example, "iThink, therefore iMac" was used in 1998 to promote the iMac, and "Say hello to iPhone" has been used in iPhone advertisements. "Hello" was also used to introduce the original Macintosh, Newton, iMac ("hello (again)"), and iPod. From the introduction of the Macintosh in 1984, with the 1984 Super Bowl advertisement to the more modern Get a Mac adverts, Apple has been recognized for its efforts towards effective advertising and marketing for its products. However, claims made by later campaigns were criticized, particularly the 2005 Power Mac ads. 
Apple's product advertisements gained a lot of attention as a result of their eye-popping graphics and catchy tunes. Musicians who benefited from an improved profile as a result of their songs being included in Apple advertisements include Canadian singer Feist with the song "1234" and Yael Naïm with the song "New Soul". Brand loyalty Apple customers gained a reputation for devotion and loyalty early in the company's history; in 1984, BYTE remarked on this devotion. Apple evangelists were actively engaged by the company at one time, but this was after the phenomenon had already been firmly established. Apple evangelist Guy Kawasaki has called the brand fanaticism "something that was stumbled upon," while Ive explained in 2014 that "People have an incredibly personal relationship" with Apple's products. Apple Store openings and new product releases can draw crowds of hundreds, with some waiting in line as much as a day before the opening. The opening of New York City's Apple Fifth Avenue store in 2006 was highly attended, and had visitors from Europe who flew in for the event. In June 2017, a newlywed couple took their wedding photos inside the then-recently opened Orchard Road Apple Store in Singapore. The high level of brand loyalty has also been criticized and ridiculed, with detractors applying the epithet "Apple fanboy" and mocking the lengthy lines before a product launch. An internal memo leaked in 2015 suggested the company planned to discourage long lines and direct customers to purchase its products on its website. Fortune magazine named Apple the most admired company in the United States in 2008, and in the world from 2008 to 2012. On September 30, 2013, Apple surpassed Coca-Cola to become the world's most valuable brand in the Omnicom Group's "Best Global Brands" report. Boston Consulting Group has ranked Apple as the world's most innovative brand every year since 2005. The New York Times in 1985 stated that "Apple above all else is a marketing company".
John Sculley agreed, telling The Guardian newspaper in 1997 that "People talk about technology, but Apple was a marketing company. It was the marketing company of the decade." Research in 2002 by NetRatings indicated that the average Apple consumer was usually more affluent and better educated than other PC company consumers, and suggested that this correlation could stem from the fact that on average Apple Inc. products were more expensive than other PC products. There are 1.65 billion Apple products in active use. Headquarters and major facilities Apple Inc.'s world corporate headquarters are located in Cupertino, in the middle of California's Silicon Valley, at Apple Park, a massive circular groundscraper building. The building opened in April 2017 and houses more than 12,000 employees. Apple co-founder Steve Jobs wanted Apple Park to look less like a business park and more like a nature refuge, and personally appeared before the Cupertino City Council in June 2011 to make the proposal, in his final public appearance before his death. Apple also operates from the Apple Campus (also known by its address, 1 Infinite Loop), a grouping of six buildings in Cupertino located to the west of Apple Park. The Apple Campus was the company's headquarters from its opening in 1993, until the opening of Apple Park in 2017. The buildings, located at 1–6 Infinite Loop, are arranged in a circular pattern around a central green space, in a design that has been compared to that of a university. In addition to Apple Park and the Apple Campus, Apple occupies an additional thirty office buildings scattered throughout the city of Cupertino, including three buildings that also served as prior headquarters: "Stephens Creek Three" (1977–1978), "Bandley One" (1978–1982), and "Mariani One" (1982–1993).
In total, Apple occupies almost 40% of the available office space in the city. Apple's headquarters for Europe, the Middle East and Africa (EMEA) are located in Cork, in the south of Ireland, at a site known as the Hollyhill campus. The facility, which opened in 1980, houses 5,500 people and was Apple's first location outside of the United States. Apple's international sales and distribution arms operate out of the campus in Cork. Apple has two campuses near Austin, Texas: one, opened in 2014, houses 500 engineers who work on Apple silicon, and another, opened in 2021, houses 6,000 people working in technical support, supply chain management, online store curation, and Apple Maps data management. The company also has several other locations in Boulder, Colorado; Culver City, California; Herzliya, Israel; London; New York; Pittsburgh; San Diego; and Seattle that each employ hundreds of people. Stores The first Apple Stores were opened as two locations in May 2001 by then-CEO Steve Jobs, after years of unsuccessful store-within-a-store concepts. Seeing a need for improved retail presentation of the company's products, he began an effort in 1997 to revamp the retail program to build a better relationship with consumers, and hired Ron Johnson in 2000. Jobs also relaunched Apple's online store in 1997. The media initially speculated that the stores would fail, but they proved highly successful, surpassing the sales numbers of competing nearby stores and within three years reaching US$1 billion in annual sales, becoming the fastest retailer in history to do so. Over the years, Apple has expanded the number of retail locations and its geographical coverage, with 499 stores across 22 countries worldwide. Strong product sales have placed Apple among the top-tier retail stores, with sales over $16 billion globally in 2011.
In May 2016, Angela Ahrendts, Apple's then Senior Vice President of Retail, unveiled a significantly redesigned Apple Store in Union Square, San Francisco, featuring large glass doors for the entry, open spaces, and re-branded rooms. In addition to purchasing products, consumers can get advice and help from "Creative Pros" – individuals with specialized knowledge of creative arts; get product support in a tree-lined Genius Grove; and attend sessions, conferences, and community events, with Ahrendts commenting that the goal is to make Apple Stores into "town squares", places where people naturally meet up and spend time. The new design will be applied to all Apple Stores worldwide, a process that has seen stores temporarily relocate or close. Many Apple Stores are located inside shopping malls, but Apple has built several stand-alone "flagship" stores in high-profile locations. It has been granted design patents and received architectural awards for its stores' designs and construction, specifically for its use of glass staircases and cubes. The success of Apple Stores has had significant influence over other consumer electronics retailers, who have lost traffic, control, and profits due to a perceived higher quality of service and products at Apple Stores. Apple's notable brand loyalty among consumers causes long lines of hundreds of people at new Apple Store openings or product releases. Due to the popularity of the brand, Apple receives a large number of job applications, many of which come from young workers. Although Apple Store employees receive above-average pay, are offered money toward education and health care, and receive product discounts, there are limited or no paths of career advancement. A May 2016 report with an anonymous retail employee highlighted a hostile work environment with harassment from customers, intense internal criticism, and a lack of significant bonuses for securing major business contracts.
Due to the COVID-19 pandemic, Apple closed its stores outside China until March 27, 2020. Despite the stores being closed, hourly workers continued to be paid. Workers across the company were allowed to work remotely if their jobs permitted it. On March 24, 2020, in a memo, Senior Vice President of People and Retail Deirdre O'Brien announced that some of its retail stores were expected to reopen at the beginning of April.

Corporate affairs

Corporate culture

Apple is one of several highly successful companies founded in the 1970s that bucked the traditional notions of corporate culture. Jobs often walked around the office barefoot even after Apple became a Fortune 500 company. By the time of the "1984" television advertisement, Apple's informal culture had become a key trait that differentiated it from its competitors. According to a 2011 report in Fortune, this has resulted in a corporate culture more akin to that of a startup than of a multinational corporation. In a 2017 interview, Wozniak credited watching Star Trek and attending Star Trek conventions in his youth as a source of inspiration for his co-founding of Apple. As the company has grown and been led by a series of differently opinionated chief executives, it has arguably lost some of its original character. Nonetheless, it has maintained a reputation for fostering individuality and excellence that reliably attracts talented workers, particularly after Jobs returned to the company. Numerous Apple employees have stated that projects without Jobs's involvement often took longer than projects with it. To recognize the best of its employees, Apple created the Apple Fellows program, which awards individuals who make extraordinary technical or leadership contributions to personal computing while at the company. The Apple Fellowship has so far been awarded to individuals including Bill Atkinson, Steve Capps, Rod Holt, Alan Kay, Guy Kawasaki, Al Alcorn, Don Norman, Rich Page, Steve Wozniak, and Phil Schiller.
At Apple, employees are intended to be specialists who are not exposed to functions outside their area of expertise. Jobs saw this as a means of having "best-in-class" employees in every role. For instance, Ron Johnson – Senior Vice President of Retail Operations until November 1, 2011 – was responsible for site selection, in-store service, and store layout, yet had no control of the inventory in his stores; inventory was instead managed by Tim Cook, who had a background in supply-chain management. Apple is known for strictly enforcing accountability. Each project has a "directly responsible individual", or "DRI" in Apple jargon. As an example, when iOS senior vice president Scott Forstall refused to sign Apple's official apology for numerous errors in the redesigned Maps app, he was forced to resign. Unlike other major U.S. companies, Apple provides a relatively simple compensation policy for executives that does not include perks enjoyed by other CEOs, such as country club fees or private use of company aircraft. The company typically grants stock options to executives every other year. In 2015, Apple had 110,000 full-time employees. This increased to 116,000 full-time employees the next year, a notably slower rate of hiring, largely due to the company's first revenue decline. Apple does not specify how many of its employees work in retail, though its 2014 SEC filing put the number at approximately half of its employee base. In September 2017, Apple announced that it had over 123,000 full-time employees. Apple has a strong culture of corporate secrecy, and has an anti-leak Global Security team that recruits from the National Security Agency, the Federal Bureau of Investigation, and the United States Secret Service. In December 2017, Glassdoor said Apple was the 48th best place to work, having originally entered the rankings at number 19 in 2009, peaking at number 10 in 2012, and falling in subsequent years.
Lack of innovation

An editorial in The Verge in September 2016 by technology journalist Thomas Ricker explored the public's perceived lack of innovation at Apple in recent years, specifically stating that Samsung has "matched and even surpassed Apple in terms of smartphone industrial design" and citing the belief that Apple is incapable of producing another breakthrough moment in technology with its products. He goes on to write that the criticism focuses on individual pieces of hardware rather than the ecosystem as a whole, stating "Yes, iteration is boring. But it's also how Apple does business. [...] It enters a new market and then refines and refines and continues refining until it yields a success". He acknowledges that people are wishing for the "excitement of revolution", but argues that people want "the comfort that comes with harmony". Furthermore, he writes that "a device is only the starting point of an experience that will ultimately be ruled by the ecosystem in which it was spawned", referring to how decent hardware products can still fail without a proper ecosystem (specifically mentioning that the Walkman did not have an ecosystem to keep users from leaving once something better came along), and to how Apple devices in different hardware segments are able to communicate and cooperate through the iCloud cloud service, with features including Universal Clipboard (in which text copied on one device can be pasted on a different device) as well as inter-connected device functionality such as Auto Unlock (in which an Apple Watch can unlock a Mac in close proximity). He argues that Apple's ecosystem is its greatest innovation. The Wall Street Journal reported in June 2017 that Apple's increased reliance on Siri, its virtual personal assistant, has raised questions about how much Apple can actually accomplish in terms of functionality.
Whereas Google and Amazon make use of big data and analyze customer information to personalize results, Apple has a strong pro-privacy stance, intentionally not retaining user data. "Siri is a textbook of leading on something in tech and then losing an edge despite having all the money and the talent and sitting in Silicon Valley", Holger Mueller, a technology analyst, told the Journal. The report further claims that development on Siri has suffered due to team members and executives leaving the company for competitors, a lack of ambitious goals, and shifting strategies. Though Apple switched Siri's functions to machine learning and algorithms, which dramatically cut its error rate, the company reportedly still failed to anticipate the popularity of Amazon's Echo, which features the Alexa personal assistant. Improvements to Siri stalled, executives clashed, and there were disagreements over the restrictions imposed on third-party app interactions. While Apple acquired an England-based startup specializing in conversational assistants, Google's Assistant had already become capable of helping users select Wi-Fi networks by voice, and Siri was lagging in functionality. In December 2017, two articles from The Verge and ZDNet described what had been a particularly devastating week for Apple's macOS and iOS software platforms. The former had experienced a severe security vulnerability, in which Macs running the then-latest macOS High Sierra software were vulnerable to a bug that let anyone gain administrator privileges by entering "root" as the username in system prompts, leaving the password field empty, and clicking "unlock" twice, gaining full access. The bug was publicly disclosed on Twitter, rather than through proper bug bounty programs. Apple released a security fix within a day and issued an apology, stating that "regrettably we stumbled" in regards to the security of the latest updates.
After installing the security patch, however, file sharing was broken for users, and Apple released a support document with instructions to separately fix that issue. Though Apple publicly stated the promise of "auditing our development processes to help prevent this from happening again", users who installed the security update while running the older 10.13.0 version of the High Sierra operating system, rather than the then-newest 10.13.1 release, found that the "root" security vulnerability was re-introduced, and that it persisted even after fully updating their systems. On iOS, a date bug caused iOS devices that received local app notifications at 12:15 a.m. on December 2, 2017, to repeatedly restart. Users were recommended to turn off notifications for their apps. Apple quickly released an update, issued overnight Cupertino, California time and outside of its usual software release window, with one of the headlining features of the update needing to be delayed for a few days. The combined problems of the week on both macOS and iOS caused The Verge's Tom Warren to call it a "nightmare" for Apple's software engineers and to describe it as a significant lapse in Apple's ability to protect its more than 1 billion devices. ZDNet's Adrian Kingsley-Hughes wrote that "it's hard to not come away from the last week with the feeling that Apple is slipping". Kingsley-Hughes also concluded his piece by referencing an earlier article, in which he wrote that "As much as I don't want to bring up the tired old 'Apple wouldn't have done this under Steve Jobs's watch' trope, a lot of what's happening at Apple lately is different from what they came to expect under Jobs. Not to say that things didn't go wrong under his watch, but product announcements and launches felt a lot tighter for sure, as did the overall quality of what Apple was releasing."
He did, however, also acknowledge that such failures "may indeed have happened" with Jobs in charge, though he returned to his previous praise for Jobs's demands of quality, stating "it's almost guaranteed that given his personality that heads would have rolled, which limits future failures".

Manufacturing and assembling

The company's manufacturing, procurement, and logistics enable it to execute massive product launches without having to maintain large, profit-sapping inventories. In 2011, Apple's profit margins were 40 percent, compared with between 10 and 20 percent for most other hardware companies. Cook's catchphrase to describe his focus on the company's operational arm is: "Nobody wants to buy sour milk". In May 2017, the company announced a $1 billion funding project for "advanced manufacturing" in the United States, and subsequently invested $200 million in Corning Inc., the manufacturer of the toughened Gorilla Glass used in its iPhone devices. The following December, Apple's chief operating officer, Jeff Williams, told CNBC that the "$1 billion" amount was "absolutely not" the final limit on its spending, elaborating that "We're not thinking in terms of a fund limit. ... We're thinking about, where are the opportunities across the U.S. to help nurture companies that are making the advanced technology — and the advanced manufacturing that goes with that — that quite frankly is essential to our innovation". As of 2021, Apple uses components from 43 different countries. The majority of assembly is done by the Taiwanese original design manufacturer firms Foxconn, Pegatron, Wistron, and Compal Electronics, mostly in factories located in China, but also in Brazil and India. During the Mac's early history, Apple generally refused to adopt prevailing industry standards for hardware, instead creating its own. This trend was largely reversed in the late 1990s, beginning with Apple's adoption of the PCI bus in the 7500/8500/9500 Power Macs.
Apple has since joined industry standards groups to influence the future direction of technology standards such as USB, AGP, HyperTransport, Wi-Fi, NVMe, PCIe, and others in its products. FireWire is an Apple-originated standard that was widely adopted across the industry after it was standardized as IEEE 1394, and it is a legally mandated port in all cable TV boxes in the United States. Apple has gradually expanded its efforts to get its products into the Indian market. In July 2012, during a conference call with investors, CEO Tim Cook said that he "[loves] India", but that Apple saw larger opportunities outside the region. India's requirement that 30% of products sold be manufactured in the country was described as something that "really adds cost to getting product to market". In May 2016, Apple opened an iOS app development center in Bangalore and a maps development office for 4,000 staff in Hyderabad. In March, The Wall Street Journal reported that Apple would begin manufacturing iPhone models in India "over the next two months", and in May, the Journal wrote that an Apple manufacturer had begun production of the iPhone SE in the country, while Apple told CNBC that the manufacturing was for a "small number" of units. In April 2019, Apple initiated manufacturing of the iPhone 7 at its Bengaluru facility, keeping in mind demand from local customers even as the company sought more incentives from the government of India. At the beginning of 2020, Tim Cook announced that Apple planned to open its first physical outlet in India in 2021, with an online store to be launched by the end of the year.

Labor practices

The company advertised its products as being made in America until the late 1990s; however, as a result of outsourcing initiatives in the 2000s, almost all of its manufacturing is now handled abroad.
According to a report by The New York Times, Apple insiders "believe the vast scale of overseas factories, as well as the flexibility, diligence and industrial skills of foreign workers, have so outpaced their American counterparts that "Made in the USA" is no longer a viable option for most Apple products". In 2006, one complex of factories in Shenzhen, China, that assembled the iPod and other items had over 200,000 workers living and working within it. Employees regularly worked more than 60 hours per week and made around $100 per month. A little over half of the workers' earnings was required to pay for rent and food from the company. Apple immediately launched an investigation after the 2006 media report, and worked with its manufacturers to ensure acceptable working conditions. In 2007, Apple started yearly audits of all its suppliers regarding workers' rights, slowly raising standards and pruning suppliers that did not comply. Yearly progress reports have been published since 2008. In 2011, Apple admitted that its suppliers' child labor practices in China had worsened. The Foxconn suicides occurred between January and November 2010, when 18 Foxconn (Chinese: 富士康) employees attempted suicide, resulting in 14 deaths; the company was the world's largest contract electronics manufacturer, for clients including Apple, at the time. The suicides drew media attention, and employment practices at Foxconn were investigated by Apple. Apple issued a public statement about the suicides through company spokesperson Steven Dowling. The statement was released after the results from the company's probe into its suppliers' labor practices were published in early 2010. Foxconn was not specifically named in the report, but Apple identified a series of serious violations of labor laws, including Apple's own rules, and found that some child labor existed in a number of factories. Apple committed to the implementation of changes following the suicides.
Also in 2010, workers in China planned to sue iPhone contractors over poisoning by a cleaner used to clean LCD screens. One worker claimed that he and his coworkers had not been informed of possible occupational illnesses. After a high suicide rate in a Foxconn facility in China making iPads and iPhones, albeit a rate lower than that of China as a whole, workers were forced to sign a legally binding document guaranteeing that they would not kill themselves. Workers in factories producing Apple products have also been exposed to hexane, a neurotoxin that is a cheaper alternative to alcohol for cleaning the products. A 2014 BBC investigation found that excessive hours and other problems persisted, despite Apple's promise to reform factory practice after the 2010 Foxconn suicides. The Pegatron factory was once again the subject of review, as reporters gained access to the working conditions inside through recruitment as employees. While the BBC maintained that the experiences of its reporters showed that labor violations had continued since 2010, Apple publicly disagreed with the BBC, stating: "We are aware of no other company doing as much as Apple to ensure fair and safe working conditions". In December 2014, the Institute for Global Labour and Human Rights published a report which documented inhumane conditions for the 15,000 workers at a Zhen Ding Technology factory in Shenzhen, China, which serves as a major supplier of circuit boards for Apple's iPhone and iPad. According to the report, workers are pressured into 65-hour work weeks which leave them so exhausted that they often sleep during lunch breaks. They are also made to reside in "primitive, dark and filthy dorms" where they sleep "on plywood, with six to ten workers in each crowded room". Omnipresent security personnel also routinely harass and beat the workers.
In 2019, there were reports stating that some of Foxconn's managers had used rejected parts to build iPhones and that Apple was investigating the issue.

Environmental practices and initiatives

Apple Energy

Apple Energy, LLC is a wholly owned subsidiary of Apple Inc. that sells solar energy. Apple's solar farms in California and Nevada have been declared to provide 217.9 megawatts of solar generation capacity. In addition to the company's solar energy production, Apple has received regulatory approval to construct a landfill gas energy plant in North Carolina. Apple will use the methane emissions to generate electricity. Apple's North Carolina data center is already powered entirely with energy from renewable sources.

Energy and resources

Following a Greenpeace protest, Apple released a statement on April 17, 2012, committing to ending its use of coal and shifting to 100% renewable clean energy. By 2013, Apple was using 100% renewable energy to power its data centers. Overall, 75% of the company's power came from clean renewable sources. In 2010, Climate Counts, a nonprofit organization dedicated to directing consumers toward the greenest companies, gave Apple a score of 52 points out of a possible 100, which put Apple in its top category, "Striding". This was an increase from May 2008, when Climate Counts gave Apple only 11 points out of 100, which placed the company last among electronics companies; at that time, Climate Counts also labeled Apple with a "stuck icon", adding that Apple was "a choice to avoid for the climate-conscious consumer". In May 2015, Greenpeace evaluated the state of the Green Internet and commended Apple on its environmental practices, saying, "Apple's commitment to renewable energy has helped set a new bar for the industry, illustrating in very concrete terms that a 100% renewable Internet is within its reach, and providing several models of intervention for other companies that want to build a sustainable Internet."
Apple states that 100% of its U.S. operations run on renewable energy, 100% of Apple's data centers run on renewable energy, and 93% of Apple's global operations run on renewable energy. However, the facilities are connected to the local grid, which usually contains a mix of fossil and renewable sources, so Apple carbon-offsets its electricity use. The Electronic Product Environmental Assessment Tool (EPEAT) allows consumers to see the effect a product has on the environment. Each product receives a Gold, Silver, or Bronze rank depending on its efficiency and sustainability. Every Apple tablet, notebook, desktop computer, and display that EPEAT ranks achieves a Gold rating, the highest possible. Although Apple's data centers recycle water 35 times, the increased activity in retail, corporate, and data centers also increased the amount of water used in 2015. During an event on March 21, 2016, Apple provided a status update on its environmental initiative to be 100% renewable in all of its worldwide operations. Lisa P. Jackson, Apple's vice president of Environment, Policy and Social Initiatives, who reports directly to CEO Tim Cook, announced that 93% of Apple's worldwide operations were powered with renewable energy. Also featured were the company's efforts to use sustainable paper in its product packaging; 99% of all paper used by Apple in product packaging comes from post-consumer recycled paper or sustainably managed forests, as the company continues its move to all-paper packaging for all of its products. Apple, working in partnership with The Conservation Fund, has preserved 36,000 acres of working forests in Maine and North Carolina. Another partnership announced is with the World Wildlife Fund to preserve up to of forests in China.
Also featured was the company's installation of a 40 MW solar power plant in the Sichuan province of China that was tailor-made to coexist with the indigenous yaks that eat hay produced on the land: the panels are raised several feet off the ground so the yaks and their feed would be unharmed grazing beneath the array. This installation alone compensates for more than all of the energy used in Apple's stores and offices in the whole of China, negating the company's energy carbon footprint in the country. In Singapore, Apple has worked with the Singaporean government to cover the rooftops of 800 buildings in the city-state with solar panels, allowing Apple's Singapore operations to be run on 100% renewable energy. Also introduced was Liam, an advanced robotic disassembler and sorter designed by Apple engineers in California specifically for recycling outdated or broken iPhones; it reuses and recycles parts from traded-in products. Apple announced on August 16, 2016, that Lens Technology, one of its major suppliers in China, had committed to power all its glass production for Apple with 100 percent renewable energy by 2018. The commitment is a large step in Apple's efforts to help manufacturers lower their carbon footprint in China. Apple also announced that all 14 of its final assembly sites in China are now compliant with UL's Zero Waste to Landfill validation. The standard, which started in January 2015, certifies that all manufacturing waste is reused, recycled, composted, or converted into energy (when necessary). Since the program began, nearly 140,000 metric tons of waste have been diverted from landfills. On July 21, 2020, Apple announced its plan to become carbon neutral across its entire business, manufacturing supply chain, and product life cycle by 2030.
Over the next 10 years, Apple will try to lower emissions with a series of innovative actions, including low-carbon product design, expanded energy efficiency, renewable energy, process and material innovations, and carbon removal. In April 2021, Apple said that it had started a $200 million fund in order to combat climate change by removing 1 million metric tons of carbon dioxide from the atmosphere each year.

Toxins

Following further campaigns by Greenpeace, in 2008, Apple became the first electronics manufacturer to fully eliminate all polyvinyl chloride (PVC) and brominated flame retardants (BFRs) in its complete product line. In June 2007, Apple began replacing the cold cathode fluorescent lamp (CCFL) backlit LCD displays in its computers with mercury-free LED-backlit LCD displays and arsenic-free glass, starting with the upgraded MacBook Pro. Apple offers comprehensive and transparent information about the CO2e emissions, materials, and electrical usage of every product it currently produces or has sold in the past (for which it has enough data to produce the report) in a portfolio on its homepage, allowing consumers to make informed purchasing decisions about the products on offer. In June 2009, Apple's iPhone 3GS was free of PVC, arsenic, and BFRs. All Apple products now have mercury-free LED-backlit LCD displays, arsenic-free glass, and non-PVC cables. All Apple products have EPEAT Gold status and beat the latest Energy Star guidelines in each product's respective regulatory category. In November 2011, Apple was featured in Greenpeace's Guide to Greener Electronics, which ranks electronics manufacturers on sustainability, climate and energy policy, and how "green" their products are. The company ranked fourth of fifteen electronics companies (moving up five places from the previous year) with a score of 4.6/10.
Greenpeace praised Apple's sustainability, noting that the company exceeded its 70% global recycling goal in 2010. Apple continues to score well on the products rating, with all Apple products now being free of PVC plastic and BFRs. However, the guide criticizes Apple on the energy criteria for not seeking external verification of its greenhouse gas emissions data and for not setting out any targets to reduce emissions. In January 2012, Apple requested that its cable maker, Volex, begin producing halogen-free USB and power cables.

Green bonds

In February 2016, Apple issued a US$1.5 billion green bond (climate bond), the first ever of its kind by a U.S. tech company. The green bond proceeds are dedicated to the financing of environmental projects.

Racial justice and equality initiatives

In June 2020, Apple committed $100 million to its Racial Equity and Justice Initiative (REJI), and in January 2021 announced various projects as part of the initiative.

Finance

Apple is the world's largest information technology company by revenue, the world's largest technology company by total assets, and the world's second-largest mobile phone manufacturer after Samsung. In its fiscal year ending in September 2011, Apple Inc. reported a total of $108 billion in annual revenues, a significant increase from its 2010 revenues of $65 billion, and nearly $82 billion in cash reserves. On March 19, 2012, Apple announced plans for a $2.65-per-share dividend beginning in the fourth quarter of 2012, per approval by its board of directors. The company's worldwide annual revenue in 2013 totaled $170 billion. In May 2013, Apple entered the top ten of the Fortune 500 list of companies for the first time, rising 11 places above its 2012 ranking to take the sixth position. Apple has around US$234 billion of cash and marketable securities, of which 90% is located outside the United States for tax purposes.
Apple amassed 65% of all profits made by the eight largest worldwide smartphone manufacturers in the first quarter of 2014, according to a report by Canaccord Genuity. In the first quarter of 2015, the company garnered 92% of all earnings. On April 30, 2017, The Wall Street Journal reported that Apple had cash reserves of $250 billion, officially confirmed by Apple as specifically $256.8 billion a few days later. Apple was the largest publicly traded corporation in the world by market capitalization. On August 2, 2018, Apple became the first publicly traded U.S. company to reach a $1 trillion market value. Apple was ranked No. 4 on the 2018 Fortune 500 rankings of the largest United States corporations by total revenue.

Tax practices

Apple has created subsidiaries in low-tax places such as Ireland, the Netherlands, Luxembourg, and the British Virgin Islands to cut the taxes it pays around the world. According to The New York Times, in the 1980s Apple was among the first tech companies to designate overseas salespeople in high-tax countries in a manner that allowed the company to sell on behalf of low-tax subsidiaries on other continents, sidestepping income taxes. In the late 1980s, Apple was a pioneer of an accounting technique known as the "Double Irish with a Dutch sandwich", which reduces taxes by routing profits through Irish subsidiaries and the Netherlands and then to the Caribbean. British Conservative Party Member of Parliament Charlie Elphicke published research on October 30, 2012, which showed that some multinational companies, including Apple Inc., were making billions of pounds of profit in the UK but were paying an effective tax rate to the UK Treasury of only 3 percent, well below the standard corporation tax rate. He followed this research by calling on the Chancellor of the Exchequer, George Osborne, to force these multinationals, which also included Google and The Coca-Cola Company, to state the effective rate of tax they pay on their UK revenues.
Elphicke also said that government contracts should be withheld from multinationals who do not pay their fair share of UK tax. Apple Inc. claims to be the single largest taxpayer to the Department of the Treasury of the United States of America, with an effective tax rate of approximately 26% as of the second quarter of the Apple fiscal year 2016. In an interview with the German newspaper FAZ in October 2017, Tim Cook stated that Apple is the biggest taxpayer worldwide. In 2015, Reuters reported that Apple had earnings abroad of $54.4 billion which were untaxed by the IRS of the United States. Under U.S. tax law governed by the IRC, corporations do not pay income tax on overseas profits unless the profits are repatriated into the United States, and as such Apple argues that, to benefit its shareholders, it will leave the profits overseas until a repatriation holiday or comprehensive tax reform takes place in the United States. On July 12, 2016, the Central Statistics Office of Ireland announced that 2015 Irish GDP had grown by 26.3%, and 2015 Irish GNP had grown by 18.7%. The figures attracted international scorn, and were labelled by Nobel Prize-winning economist Paul Krugman as "leprechaun economics". It was not until 2018 that Irish economists could definitively prove that the 2015 growth was due to Apple restructuring its controversial double Irish subsidiaries (Apple Sales International), which Apple converted into a new Irish capital allowances for intangible assets tax scheme (expiring in January 2020). The affair required the Central Bank of Ireland to create a new measure of Irish economic growth, Modified GNI*, to replace Irish GDP, given the distortion of Apple's tax schemes. Irish GDP is 143% of Irish Modified GNI*. On August 30, 2016, after a two-year investigation, the EU Competition Commissioner concluded that Apple had received "illegal state aid" from Ireland.
The EU ordered Apple to pay 13 billion euros ($14.5 billion), plus interest, in unpaid Irish taxes for 2004–2014, the largest tax fine in history. The Commission found that Apple had benefited from a private Irish Revenue Commissioners tax ruling regarding its double Irish tax structure, Apple Sales International (ASI). Instead of using two companies for its double Irish structure, Apple was given a ruling to split ASI into two internal "branches". The Chancellor of Austria, Christian Kern, put this decision into perspective by stating that "every Viennese cafe, every sausage stand pays more tax in Austria than a multinational corporation". Apple subsequently agreed to start paying the €13 billion in back taxes to the Irish government; the repayments are held in an escrow account while Apple and the Irish government continue their appeals in EU courts. On July 15, 2020, the EU General Court annulled the European Commission's decision in the Apple state aid case: Apple will not have to repay €13 billion to Ireland.

Board of directors

The following individuals sit on the board of Apple Inc.:
Arthur D. Levinson (chairman)
Tim Cook (executive director and CEO)
James A. Bell (non-executive director)
Al Gore (non-executive director)
Andrea Jung (non-executive director)
Ronald Sugar (non-executive director)
Susan Wagner (non-executive director)

Executive management

The management of Apple Inc. includes:
Tim Cook (chief executive officer)
Jeff Williams (chief operating officer)
Luca Maestri (senior vice president and chief financial officer)
Katherine L. Adams (senior vice president and general counsel)
Eddy Cue (senior vice president – Internet Software and Services)
Craig Federighi (senior vice president – Software Engineering)
John Giannandrea (senior vice president – Machine Learning and AI Strategy)
Deirdre O'Brien (senior vice president – Retail + People)
John Ternus (senior vice president – Hardware Engineering)
Greg Joswiak (senior vice president – Worldwide Marketing)
Johny Srouji (senior vice president – Hardware Technologies)
Sabih Khan (senior vice president – Operations)
Lisa P. Jackson (vice president – Environment, Policy, and Social Initiatives)
Isabel Ge Mahe (vice president and managing director – Greater China)
Tor Myhren (vice president – Marketing Communications)
Adrian Perica (vice president – Corporate Development)

List of chief executives

Michael Scott (1977–1981)
Mike Markkula (1981–1983)
John Sculley (1983–1993)
Michael Spindler (1993–1996)
Gil Amelio (1996–1997)
Steve Jobs (1997–2011)
Tim Cook (2011–present)

List of chairmen

The role of chairman of the board has not always been in use, notably between 1981 and 1985 and between 1997 and 2011.
Mike Markkula (1977–1981)
Steve Jobs (1985)
Mike Markkula (1985–1993); second term
John Sculley (1993)
Mike Markkula (1993–1997); third term
Steve Jobs (2011); second term
Arthur D. Levinson (2011–present)

Litigation

Apple has been a participant in various legal proceedings and claims since it began operation. In particular, Apple is known for and promotes itself as actively and aggressively enforcing its intellectual property interests. Some litigation examples include Apple v. Samsung, Apple v. Microsoft, Motorola Mobility v. Apple Inc., and Apple Corps v. Apple Computer. Apple has also had to defend itself on numerous occasions against charges of violating intellectual property rights. Most of these charges have been dismissed in the courts, often because the plaintiffs were shell companies known as patent trolls, with no evidence of actual use of the patents in question.
On December 21, 2016, Nokia announced that it had filed suit against Apple in the U.S. and Germany, claiming that the latter's products infringe on Nokia's patents. Most recently, in November 2017, the United States International Trade Commission announced an investigation into allegations of patent infringement regarding Apple's remote desktop technology; Aqua Connect, a company that builds remote desktop software, has claimed that Apple infringed on two of its patents.

Privacy stance

Apple has a notable pro-privacy stance, actively making privacy-conscious features and settings part of its conferences, promotional campaigns, and public image. With its iOS 8 mobile operating system in 2014, the company started encrypting all contents of iOS devices through users' passcodes, making it impossible at the time for the company to provide customer data to law enforcement requests seeking such information. As cloud storage solutions rose in popularity, Apple in 2016 began performing deep learning scans for facial data in photos on the user's local device and encrypting the content before uploading it to Apple's iCloud storage system. It also introduced "differential privacy", a way to collect crowdsourced data from many users while keeping individual users anonymous, in a system that Wired described as "trying to learn as much as possible about a group while learning as little as possible about any individual in it". Users are explicitly asked if they want to participate, and can actively opt in or opt out. With Apple's release of an update to iOS 14, Apple required all developers of iPhone, iPad, and iPod touch applications to ask iPhone users directly for permission to track them.
The feature, titled "App Tracking Transparency", received heavy criticism from Facebook, whose primary business model revolves around tracking users' data and sharing such data with advertisers so users can see more relevant ads, a technique commonly known as targeted advertising. Despite Facebook's measures, including purchasing full-page newspaper advertisements protesting App Tracking Transparency, Apple released the update in mid-spring 2021. A study by Verizon subsidiary Flurry Analytics reported that only 4% of iOS users in the United States, and 12% worldwide, had opted into tracking. However, Apple aids law enforcement in criminal investigations by providing iCloud backups of users' devices, and the company's commitment to privacy has been questioned by its efforts to promote biometric authentication technology in its newer iPhone models, which do not have the same level of constitutional privacy protection as a passcode in the United States. Prior to the release of iOS 15, Apple announced new efforts at combating child sexual abuse material on iOS and Mac platforms. Parents of minor iMessage users can now be alerted if their child sends or receives nude photographs. Additionally, on-device hashing would take place on media destined for upload to iCloud, and hashes would be compared to a list of known abusive images provided by law enforcement; if enough matches were found, Apple would be alerted and authorities informed. The new features received praise from law enforcement and victims' rights advocates; however, privacy advocates, including the Electronic Frontier Foundation, condemned the new features as invasive and highly prone to abuse by authoritarian governments.

Charitable causes

Apple is a partner of (PRODUCT)RED, a fundraising campaign for AIDS charity. In November 2014, Apple arranged for all App Store revenue in a two-week period to go to the fundraiser, generating more than US$20 million, and in March 2017, it released an iPhone 7 with a red color finish.
Apple contributes financially to fundraisers in times of natural disasters. In November 2012, it donated $2.5 million to the American Red Cross to aid relief efforts after Hurricane Sandy, and in 2017 it donated $5 million to relief efforts for both Hurricane Irma and Hurricane Harvey, as well as for the 2017 Central Mexico earthquake. The company has also used its iTunes platform to encourage donations in the wake of environmental disasters and humanitarian crises, such as the 2010 Haiti earthquake, the 2011 Japan earthquake, Typhoon Haiyan in the Philippines in November 2013, and the 2015 European migrant crisis. Apple emphasizes that it does not incur any processing or other fees for iTunes donations, sending 100% of the payments directly to relief efforts, though it also acknowledges that the Red Cross does not receive any personal information on the users donating and that the payments may not be tax deductible. On April 14, 2016, Apple and the World Wide Fund for Nature (WWF) announced a partnership to "help protect life on our planet". Apple released a special page in the iTunes App Store, Apps for Earth. Under the arrangement, Apple committed that through April 24, WWF would receive 100% of the proceeds from the participating applications, from both purchases of paid apps and in-app purchases. Apple and WWF's Apps for Earth campaign raised more than $8 million in total proceeds to support WWF's conservation work. WWF announced the results at WWDC 2016 in San Francisco. During the COVID-19 pandemic, Apple's CEO Cook announced that the company would donate "millions" of masks to health workers in the United States and Europe. On January 13, 2021, Apple announced a $100 million "Racial Equity and Justice Initiative" to help combat institutional racism worldwide.
Criticism and controversies

Apple has been criticized for alleged unethical business practices such as anti-competitive behavior, rash litigation, dubious tax tactics, production methods involving the use of sweatshop labor, customer service issues involving allegedly misleading warranties and insufficient data security, and its products' environmental footprint. Apple has also received criticism for its willingness to work and conduct business with nations such as China and Russia, engaging in practices that have been criticized by human rights groups. Critics have claimed that Apple products combine stolen or purchased designs that Apple claims are its original creations. It has been criticized for its alleged collaboration with the U.S. surveillance program PRISM; the company denied any collaboration.

Products and services

Apple's issues regarding music over the years include those with the European Union regarding iTunes, trouble over updating the Spotify app on Apple devices, and collusion with record labels. In 2018–19, Apple faced criticism for its failure to approve NVIDIA web drivers for GPUs installed on legacy Mac Pro machines (up to the mid-2012 5,1 running macOS Mojave 10.14). Without access to Apple-approved NVIDIA web drivers, Apple users faced replacing their NVIDIA cards with graphics cards produced by supported brands (such as the AMD Radeon), from a list of recommendations provided by Apple to its consumers. In June 2019, Apple issued a recall for its 2015 MacBook Pro Retina 15" following reports of batteries catching fire. The recall affected 432,000 units, and Apple was criticized for the long waiting periods consumers experienced, sometimes extending up to three weeks for replacements to arrive; the company also did not provide alternative replacements or repair options.
In July 2019, following a campaign by the "right to repair" movement challenging Apple's tech repair restrictions on devices, the FTC held a workshop to establish the framework of a future nationwide right-to-repair rule. The movement argues that Apple is preventing consumers from legitimately fixing their devices at local repair shops, to the detriment of consumers. On November 19, 2020, it was announced that Apple would pay out $113 million related to lawsuits stemming from iPhone battery problems and subsequent performance slowdowns. Apple continues to face litigation related to the performance throttling of iPhone 6 and 7 devices, an action that Apple argued was done to balance the functionality of the software with the impacts of a chemically aged battery. On January 25, 2021, Apple was hit with another lawsuit from an Italian consumer group, with more groups to follow, despite the rationale for the throttling. On November 30, 2020, the Italian antitrust authority AGCM fined Apple $12 million for misleading trade practices. AGCM stated that Apple's claims of the iPhone's water resistance were not true, as the phones could only resist water up to 4 meters deep in ideal laboratory conditions and not in regular circumstances. The authority added that Apple provided no assistance to customers with water-damaged phones, which it said constituted an aggressive trade practice.

Privacy

Ireland's Data Protection Commission also launched a privacy investigation to examine whether Apple complied with the EU's GDPR law, following an investigation into how the company processes personal data with targeted ads on its platform. In December 2019, a report found that the iPhone 11 Pro continues tracking location and collecting user data even after users have disabled location services. In response, an Apple engineer said the Location Services icon "appears for system services that do not have a switch in settings".
Antitrust

The United States Department of Justice began a review of Big Tech firms in 2019 to establish whether they could be unlawfully stifling competition in a broad antitrust probe. On March 16, 2020, France fined Apple €1.1 billion for colluding with two wholesalers to stifle competition and keep prices high by handicapping independent resellers. The arrangement created aligned prices for Apple products such as iPads and personal computers for about half the French retail market. According to the French regulators, the abuses occurred between 2005 and 2017 but were first discovered after a complaint by an independent reseller, eBizcuss, in 2012. On August 13, 2020, Epic Games, the maker of the popular game Fortnite, sued Apple and Google after its hugely popular video game was removed from their app stores. The suits came after both Apple and Google blocked the game after it introduced a direct payment system, effectively shutting the tech titans out from collecting fees. In September 2020, Epic Games founded the Coalition for App Fairness together with thirteen other companies, which aims for better conditions for the inclusion of apps in the app stores. Later, in December 2020, Facebook agreed to assist Epic in its legal case against Apple, planning to support the company by providing materials and documents to Epic. Facebook stated, however, that it would not participate directly in the lawsuit, although it did commit to helping with the discovery of evidence relating to the 2021 trial. In the months prior to their agreement, Facebook had been feuding with Apple over the prices of paid apps as well as privacy rule changes.
Head of ad products for Facebook Dan Levy commented on the full-page ads placed by Facebook in various newspapers in December 2020, saying that "this is not really about privacy for them, this is about an attack on personalized ads and the consequences it's going to have on small-business owners".

Politics

In January 2020, US President Donald Trump and attorney general William P. Barr criticized Apple for refusing to unlock two iPhones of a Saudi national, Mohammed Saeed Alshamrani, who shot and killed three American sailors and injured eight others at Naval Air Station Pensacola. The shooting was declared an "act of terrorism" by the FBI, but Apple denied the request to crack the phones to reveal possible terrorist information, citing its data privacy policy. In early September 2020, Apple shareholders increased pressure on the company to publicly commit "to respect freedom of expression as a human right", and Apple subsequently committed to freedom of expression and information in its human rights policy document, which it said is based on the guidelines of the United Nations on business and human rights. In 2021, Apple complied with a request by the Chinese government to ban a Quran app from its devices and platforms. The request occurred in the context of the Chinese government's ongoing mass repression of Muslims, particularly Uyghurs, in Xinjiang, which some have labeled a genocide. In December 2021, The Information reported that CEO Tim Cook had negotiated in 2016 a five-year agreement with the Chinese government, motivated in part by a desire to allay regulatory issues that had harmed the company's business in China. The agreement entailed promised investments totaling $275 billion. In September 2021, Apple removed from its App Store an app created by Alexei Navalny meant to coordinate protest voting during the 2021 Russian legislative election.
The Russian government had threatened to arrest individual Apple employees working in the country unless Apple complied.

Patents

In January 2022, Ericsson sued Apple over royalty payments for 5G technology.

See also

List of Apple Inc. media events
Pixar
https://en.wikipedia.org/wiki/Aberdeenshire
Aberdeenshire
Aberdeenshire is one of the 32 council areas of Scotland. It takes its name from the County of Aberdeen, which has substantially different boundaries. The Aberdeenshire Council area includes all of the area of the historic counties of Aberdeenshire and Kincardineshire (except the area making up the City of Aberdeen), as well as part of Banffshire. The county boundaries are officially used for a few purposes, namely land registration and lieutenancy. Aberdeenshire Council is headquartered at Woodhill House, in Aberdeen, making it the only Scottish council whose headquarters are located outside its jurisdiction. Aberdeen itself forms a different council area (Aberdeen City). Aberdeenshire borders Angus and Perth and Kinross to the south, Highland and Moray to the west, and Aberdeen City to the east. Traditionally, it has been economically dependent upon the primary sector (agriculture, fishing, and forestry) and related processing industries. Over the last 40 years, the development of the oil and gas industry and associated service sector has broadened Aberdeenshire's economic base and contributed to a rapid population growth of some 50% since 1975. Its land represents 8% of Scotland's overall territory.

History

Aberdeenshire has a rich prehistoric and historic heritage. It is the locus of a large number of Neolithic and Bronze Age archaeological sites, including Longman Hill, Kempstone Hill, Catto Long Barrow and Cairn Lee. The area was settled in the Bronze Age by the Beaker culture, who arrived from the south around 2000–1800 BC. Stone circles and cairns were constructed predominantly in this era. In the Iron Age, hill forts were built. Around the 1st century AD, the Taexali people, who have left little history, were believed to have resided along the coast. The Picts were the next documented inhabitants of the area, no later than 800–900 AD.
The Romans also were in the area during this period, as they left signs at Kintore. Christianity influenced the inhabitants early on, and there were Celtic monasteries at Old Deer and Monymusk. Since medieval times there have been a number of traditional paths that crossed the Mounth (a spur of mountainous land that extends from the higher inland range to the North Sea slightly north of Stonehaven) through present-day Aberdeenshire from the Scottish Lowlands to the Highlands. Some of the most well-known and historically important trackways are the Causey Mounth and Elsick Mounth. Aberdeenshire played an important role in the fighting between the Scottish clans. Clan MacBeth and the Clan Canmore were two of the larger clans. Macbeth fell at Lumphanan in 1057. During the Anglo-Norman penetration, other families arrived, such as the House of Balliol, Clan Bruce, and Clan Cumming (Comyn). When the fighting amongst these newcomers resulted in the Scottish Wars of Independence, the English king Edward I travelled across the area twice, in 1296 and 1303. In 1307, Robert the Bruce was victorious near Inverurie. Along with his victory came new families, namely the Forbeses and the Gordons. These new families set the stage for the rivalries of the 14th and 15th centuries. This rivalry grew worse during and after the Protestant Reformation, when religion was another reason for conflict between the clans. The Gordon family adhered to Catholicism and the Forbeses to Protestantism. Aberdeenshire was the historic seat of Clan Dempster. Three universities were founded in the area prior to the 17th century: King's College in Old Aberdeen (1494), Marischal College in Aberdeen (1593), and the University of Fraserburgh (1597). After the end of the Revolution of 1688, an extended peaceful period was interrupted only by fleeting events such as the Rising of 1715 and the Rising of 1745.
The latter resulted in the end of the ascendancy of Episcopalianism and the feudal power of landowners, and an era of increased agricultural and industrial progress began. During the 17th century, Aberdeenshire was the location of more fighting, centred on the Marquess of Montrose and the English Civil Wars. This period also saw increased wealth due to the increase in trade with Germany, Poland, and the Low Countries. The present council area is named after the historic county of Aberdeenshire, which has different boundaries and was abandoned as an administrative area in 1975 under the Local Government (Scotland) Act 1973. It was replaced by Grampian Regional Council and five district councils: Banff and Buchan, Gordon, Kincardine and Deeside, Moray and the City of Aberdeen. Local government functions were shared between the two levels. In 1996, under the Local Government etc. (Scotland) Act 1994, the Banff and Buchan District, Gordon District and Kincardine and Deeside District were merged to form the present Aberdeenshire Council area. Moray and the City of Aberdeen were made their own council areas. The present Aberdeenshire Council area consists of all of the historic counties of Aberdeenshire and Kincardineshire (except the area of those two counties making up the City of Aberdeen), as well as north-east portions of Banffshire.

Demographics

The population of the council area has risen by over 50% since 1971 and represents 4.7% of Scotland's total. Aberdeenshire's population has increased by 9.1% since 2001, while Scotland's total population grew by 3.8%. The census lists a relatively high proportion of under-16s and slightly fewer people of working age compared with the Scottish average. Aberdeenshire is one of the most homogeneous/indigenous regions of the UK. In 2011, 82.2% of residents identified as 'White Scottish', followed by 12.3% who are 'White British', whilst ethnic minorities constitute only 0.9% of the population.
The largest ethnic minority group are Asian Scottish/British at 0.8%. In addition to the English language, 48.8% of residents reported being able to speak and understand the Scots language. The fourteen biggest settlements in Aberdeenshire (with 2011 population estimates) are:

Peterhead (17,790)
Fraserburgh (12,540)
Inverurie (11,529)
Westhill (11,220)
Stonehaven (10,820)
Ellon (9,910)
Portlethen (7,327)
Banchory (7,111)
Turriff (4,804)
Kintore (4,476)
Huntly (4,461)
Banff (3,931)
Kemnay (3,830)
Macduff (3,711)

Economy

Aberdeenshire's Gross Domestic Product (GDP) is estimated at £3,496M (2011), representing 5.2% of the Scottish total. Aberdeenshire's economy is closely linked to Aberdeen City's (GDP £7,906M), and in 2011, the region as a whole was calculated to contribute 16.8% of Scotland's GDP. Between 2012 and 2014, the combined Aberdeenshire and Aberdeen City economy was forecast to grow by 8.6%, the highest growth rate of any local council area in the UK and above the Scottish rate of 4.8%. A significant proportion of Aberdeenshire's working residents commute to Aberdeen City for work, varying from 11.5% from Fraserburgh to 65% from Westhill. Average gross weekly earnings (for full-time employees employed in workplaces in Aberdeenshire in 2011) are £572.60. This is lower than the Scottish average by £2.10 and a fall of 2.6% on the 2010 figure. The average gross weekly pay of people resident in Aberdeenshire is much higher, at £741.90, as many people commute out of Aberdeenshire, principally into Aberdeen City. Total employment (excluding farm data) in Aberdeenshire is estimated at 93,700 employees (Business Register and Employment Survey 2009). The majority of employees work within the service sector, predominantly in public administration, education and health. Almost 19% of employment is within the public sector. Aberdeenshire's economy remains closely linked to Aberdeen City's and the North Sea oil industry, with many employees in oil-related jobs.
The average monthly unemployment (claimant count) rate for Aberdeenshire in 2011 was 1.5%. This is lower than the average rate of Aberdeen City (2.3%), Scotland (4.2%) and the UK (3.8%).

Major industries

Energy – There is significant energy-related infrastructure, presence and expertise in Aberdeenshire. Peterhead is an important centre for the energy industry. Peterhead Port, which includes an extensive new quay with an adjacent lay-down area at Smith Quay, is a major support location for North Sea oil and gas exploration and production and the fast-growing global sub-sea sector. The gas terminal at St Fergus handles around 15% of the UK's natural gas requirements, and the Peterhead power station is looking to host Britain's first carbon capture and storage power generation project. There are numerous offshore wind turbines near the coast.

Fishing – Aberdeenshire is Scotland's foremost fishing area. In 2010, catches landed at Aberdeenshire's ports accounted for over half the total fish landings of Scotland and almost 45% of those in the UK. Peterhead and Fraserburgh ports, alongside Aberdeen City, provide much of the employment in these sectors. The River Dee is also rich in salmon.

Agriculture – Aberdeenshire is rich in arable land, with an estimated 9,000 people employed in the sector, and is best known for rearing livestock, mainly cattle. Sheep are important in the higher ground.

Tourism – This sector continues to grow, with a range of sights to be seen in the area. From the lively Cairngorm mountain range to the bustling fishing ports on the north-east coast, Aberdeenshire samples a bit of everything. Aberdeenshire also has a rugged coastline, many sandy beaches and is a hot spot for tourist activity throughout the year. Almost 1.3 million tourists visited the region in 2011 – up 3% on the previous year. Whisky distilling is still a practised art in the area.
Governance and politics

The council has 70 councillors, elected in 19 multi-member wards by single transferable vote. The council is the first in Scotland to have councillors form an Alba Party political group: these councillors are Leigh Wilson, Alastair Bews and Brian Topping. The council's revenue budget for 2012/13 totals approximately £548 million. The Education, Learning and Leisure Service takes the largest share of the budget (52.3%), followed by Housing and Social Work (24.3%), Infrastructure Services (15.9%), joint boards (such as fire and police) and miscellaneous services (7.9%), and Trading Activities (0.4%). 21.5% of the revenue is raised locally through the Council Tax. The average Band D Council Tax is £1,141 (2012/13), unchanged from the previous year. The current chief executive of the council is Jim Savege and the elected council leader is Jim Gifford. Aberdeenshire also has a provost, Councillor Bill Howatson. The council has devolved power to six area committees: Banff and Buchan; Buchan; Formartine; Garioch; Marr; and Kincardine and Mearns. Each area committee takes decisions on local issues such as planning applications, and the split is meant to reflect the diverse circumstances of each area. In the 2014 Scottish independence referendum, 60.36% of voters in Aberdeenshire voted for the Union, while 39.64% opted for independence.

Notable features

The following significant structures or places are within Aberdeenshire:

Balmoral Castle, Scottish Highland residence of the British royal family.
Bennachie
Burn O'Vat
Cairness House
Cairngorms National Park
Corgarff Castle
Crathes Castle
Causey Mounth, an ancient road
Drum Castle
Dunnottar Castle
Fetteresso Castle
Fowlsheugh Nature Reserve
Haddo House
Herscha Hill
Huntly Castle
Kildrummy Castle
Loch of Strathbeg
Lochnagar
Monboddo House
Muchalls Castle
Pitfour estate
Portlethen Moss
Raedykes Roman Camp
River Dee
River Don
Sands of Forvie Nature Reserve
Slains Castles, Old and New
Stonehaven Tolbooth
Ythan Estuary Nature Reserve

Hydrology and climate

There are numerous rivers and burns in Aberdeenshire, including Cowie Water, Carron Water, Burn of Muchalls, River Dee, River Don, River Ury, River Ythan, Water of Feugh, Burn of Myrehouse, Laeca Burn and Luther Water. Numerous bays and estuaries are found along the seacoast of Aberdeenshire, including Banff Bay, Ythan Estuary, Stonehaven Bay and Thornyhive Bay. Aberdeenshire has a marine west coast climate on the Köppen climate classification. Aberdeenshire is in the rain shadow of the Grampians and therefore has a generally dry climate for a maritime region. Summers are mild and winters are typically cold in Aberdeenshire; coastal temperatures are moderated by the North Sea such that coastal areas are typically cooler in the summer and warmer in winter than inland locations. Coastal areas are also subject to haar, or coastal fog.

Notable residents

John Skinner (1721–1807), author, poet and ecclesiastic, penned the famous verse "Tullochgorum".
Hugh Mercer (1726–1777), born in the manse of Pitsligo Kirk, near Rosehearty, brigadier general of the Continental Army during the American Revolution.
Alexander Garden (1730–1791), born in Birse, noted naturalist and physician. He moved to North America in 1754, and discovered two species of lizards. He was a Loyalist during the American Revolutionary War, which led to the confiscation of his property and his banishment in 1782.
The gardenia flower is named in his honour.
John Kemp (1763–1812), born in Auchlossan, was a noted educator at Columbia University who is said to have influenced DeWitt Clinton's opinions and policies.
George MacDonald (1824–1905), author, poet, and theologian, born and raised in Huntly.
Dame Evelyn Glennie, DBE (born 19 July 1965), born and raised in Ellon, is a virtuoso percussionist and the first full-time solo percussionist in 20th-century Western society. She is very highly regarded in the Scottish musical community, and has proven that her profound deafness does not inhibit her musical talent or day-to-day life.
Evan Duthie (born 2000), an award-winning DJ and producer.
Peter Nicol, MBE (born 5 April 1973 in Inverurie), a former professional squash player who represented first Scotland and then England in international squash.
Peter Shepherd (1841–1879), Surgeon Major, Royal Army Medical Corps.
Johanna Basford (born 1983), illustrator and textile designer.

External links

Aberdeenshire Council
Aberdeenshire Tourist Guide
Aberdeenshire Libraries Service
Aberdeenshire Museums Service
Peterhead and Buchan Tourism Web Site
Aberdeenshire Arts
Aberdeenshire Sports Council
859
https://en.wikipedia.org/wiki/Aztlan%20Underground
Aztlan Underground
Aztlan Underground is a band from Los Angeles, California, that combines hip hop, punk rock, jazz, and electronic music with Chicano and Native American themes and indigenous instrumentation. They are often cited as progenitors of Chicano rap. Background The band traces its roots to the late-1980s hardcore scene in the Eastside of Los Angeles. They have played rapcore, with elements of punk, hip hop, rock, funk, jazz, indigenous music, and spoken word. Indigenous drums, flutes, and rattles are also commonly used in their music. Their lyrics often address the family and economic issues faced by the Chicano community, and they have been noted as activists for that community. As an example of the politically active and culturally important artists in Los Angeles in the 1990s, Aztlan Underground appeared on Culture Clash on Fox in 1993 and was part of Breaking Out, a pay-per-view concert, in 1998. The band was featured in the independent films Algun Dia and Frontierland in the 1990s, and on the upcoming Studio 49. The band has been mentioned or featured in various newspapers and magazines: the Vancouver Sun, New Times, BLU Magazine (an underground hip hop magazine), BAM Magazine, La Banda Elastica Magazine, and the Los Angeles Times calendar section. The band is also the subject of a chapter in the book It's Not About a Salary, by Brian Cross. Aztlan Underground remains active in the community, lending their voice to annual events such as The Farce of July, and the recent movement to recognize Indigenous People's Day in Los Angeles and beyond. In addition to forming their own label, Xicano Records and Film, Aztlan Underground were signed to the Basque record label Esan Ozenki in 1999, which enabled them to tour Spain extensively and perform in France and Portugal. Aztlan Underground have also performed in Canada, Australia, and Venezuela.
The band has been recognized for their music with nominations in the New Times 1998 "Best Latin Influenced" category, the BAM Magazine 1999 "Best Rock en Español" category, and the LA Weekly 1999 "Best Hip Hop" category. The release of their eponymous third album on August 29, 2009 was met with positive reviews and earned the band four Native American Music Award (NAMMY) nominations in 2010. Discography Decolonize Year:1995 "Teteu Innan" "Killing Season" "Lost Souls" "My Blood Is Red" "Natural Enemy" "Sacred Circle" "Blood On Your Hands" "Interlude" "Aug 2 the 9" "Indigena" "Lyrical Drive By" Sub-Verses Year:1998 "Permiso" "They Move In Silence" "No Soy Animal" "Killing Season" "Blood On Your Hands" "Reality Check" "Lemon Pledge" "Revolution" "Preachers of the Blind State" "Lyrical Drive-By" "Nahui Ollin" "How to Catch a Bullet" "Ik Otik" "Obsolete Man" "Decolonize" "War Flowers" Aztlan Underground Year: 2009 "Moztlitta" "Be God" "Light Shines" "Prey" "In the Field" "9 10 11 12" "Smell the Dead" "Sprung" "Medicine" "Acabando" "Crescent Moon" See also Chicano rap Native American hip hop Rapcore Chicano rock References External links Myspace link Facebook page Native American rappers American rappers of Mexican descent Musical groups from Los Angeles Rapcore groups West Coast hip hop musicians
863
https://en.wikipedia.org/wiki/American%20Civil%20War
American Civil War
The American Civil War (April 12, 1861 – May 9, 1865; also known by other names) was a civil war in the United States between the Union (states that remained loyal to the federal union, or "the North") and the Confederacy (states that voted to secede, or "the South"). The central cause of the war was the status of slavery, especially the expansion of slavery into territories acquired as a result of the Louisiana Purchase and the Mexican–American War. On the eve of the Civil War in 1860, four million of the 32 million Americans (~13%) were enslaved black people, almost all in the South. The practice of slavery in the United States was one of the key political issues of the 19th century. Decades of political unrest over slavery led up to the Civil War. Disunion came after Abraham Lincoln won the 1860 United States presidential election on an anti-slavery expansion platform. An initial seven southern slave states declared their secession from the country to form the Confederacy. Confederate forces seized federal forts within territory they claimed. The last-minute Crittenden Compromise tried to avert conflict but failed; both sides prepared for war. Fighting broke out in April 1861 when the Confederate army began the Battle of Fort Sumter in South Carolina, just over a month after the first inauguration of Abraham Lincoln. The Confederacy grew to control at least a majority of territory in eleven states (out of the 34 U.S. states in February 1861), and asserted claims to two more. Both sides raised large volunteer and conscription armies. Four years of intense combat, mostly in the South, ensued. During 1861–1862 in the war's Western Theater, the Union made significant permanent gains, though in the war's Eastern Theater the conflict was inconclusive. On January 1, 1863, Lincoln issued the Emancipation Proclamation, which made ending slavery a war goal, declaring all persons held as slaves in states in rebellion "forever free."
To the west, the Union destroyed the Confederate river navy by the summer of 1862, then much of its western armies, and seized New Orleans. The successful 1863 Union siege of Vicksburg split the Confederacy in two at the Mississippi River. In 1863, Confederate General Robert E. Lee's incursion north ended at the Battle of Gettysburg. Western successes led to General Ulysses S. Grant's command of all Union armies in 1864. Enforcing an ever-tightening naval blockade of Confederate ports, the Union marshaled resources and manpower to attack the Confederacy from all directions. This led to the fall of Atlanta in 1864 to Union General William Tecumseh Sherman and his march to the sea. The last significant battles raged around the ten-month Siege of Petersburg, gateway to the Confederate capital of Richmond. The Civil War effectively ended on April 9, 1865, when Confederate General Lee surrendered to Union General Grant at the Battle of Appomattox Court House, after Lee had abandoned Petersburg and Richmond. Confederate generals throughout the Confederate army followed suit. The conclusion of the American Civil War lacks a clean end date: land forces continued surrendering until June 23. By the end of the war, much of the South's infrastructure was destroyed, especially its railroads. The Confederacy collapsed, slavery was abolished, and four million enslaved black people were freed. The war-torn nation then entered the Reconstruction era in a partially successful attempt to rebuild the country and grant civil rights to freed slaves. The Civil War is one of the most studied and written-about episodes in the history of the United States. It remains the subject of cultural and historiographical debate. Of particular interest is the persisting myth of the Lost Cause of the Confederacy. The American Civil War was among the earliest wars to use industrial warfare. Railroads, the telegraph, steamships, the ironclad warship, and mass-produced weapons saw wide use.
In total the war left between 620,000 and 750,000 soldiers dead, along with an undetermined number of civilian casualties. President Lincoln was assassinated just five days after Lee's surrender. The Civil War remains the deadliest military conflict in American history. The technology and brutality of the Civil War foreshadowed the coming World Wars. Causes of secession The causes of secession were complex and have been controversial since the war began, but most academic scholars identify slavery as the central cause of the war. The issue has been further complicated by historical revisionists, who have tried to offer a variety of reasons for the war. Slavery was the central source of escalating political tension in the 1850s. The Republican Party was determined to prevent any spread of slavery to the territories, which, after they were admitted as states, would give the North greater representation in Congress and the Electoral College. Many Southern leaders had threatened secession if the Republican candidate, Lincoln, won the 1860 election. After Lincoln won, many Southern leaders felt that disunion was their only option, fearing that the loss of representation would hamper their ability to promote pro-slavery acts and policies. In his second inaugural address, Lincoln said that "slaves constituted a peculiar and powerful interest. All knew that this interest was, somehow, the cause of the war. To strengthen, perpetuate, and extend this interest was the object for which the insurgents would rend the Union, even by war; while the government claimed no right to do more than to restrict the territorial enlargement of it." Slavery Slavery was the main cause of disunion. Slavery had been a controversial issue during the framing of the Constitution but had been left unsettled. The issue of slavery had confounded the nation since its inception, and increasingly separated the United States into a slaveholding South and a free North. 
The issue was exacerbated by the rapid territorial expansion of the country, which repeatedly brought to the fore the question of whether new territory should be slaveholding or free. The issue had dominated politics for decades leading up to the war. Key attempts to solve the issue included the Missouri Compromise and the Compromise of 1850, but these only postponed an inevitable showdown over slavery. The motivations of the average person were not inherently those of their faction; some Northern soldiers were even indifferent on the subject of slavery, but a general pattern can be established. Confederate soldiers fought the war primarily to protect a Southern society of which slavery was an integral part. From the anti-slavery perspective, the issue was primarily whether slavery was an anachronistic evil incompatible with republicanism. The strategy of the anti-slavery forces was containment—to stop the expansion of slavery and thereby put it on a path to ultimate extinction. The slaveholding interests in the South denounced this strategy as infringing upon their constitutional rights. Southern whites believed that the emancipation of slaves would destroy the South's economy, due to the large amount of capital invested in slaves and fears of integrating the ex-slave black population. In particular, many Southerners feared a repeat of the 1804 Haiti massacre (also known as "the horrors of Santo Domingo"), in which former slaves systematically murdered most of what was left of the country's white population — including men, women, children, and even many sympathetic to abolition — after the successful slave revolt in Haiti. Historian Thomas Fleming points to the historical phrase "a disease in the public mind" used by critics of this idea and proposes it contributed to segregation in the Jim Crow era following emancipation. These fears were exacerbated by the 1859 attempt of John Brown to instigate an armed slave rebellion in the South.
Abolitionists The abolitionists – those advocating the end of slavery – were very active in the decades leading up to the Civil War. They traced their philosophical roots back to the Puritans, who strongly believed that slavery was morally wrong. One of the early Puritan writings on this subject was The Selling of Joseph, by Samuel Sewall in 1700. In it, Sewall condemned slavery and the slave trade and refuted many of the era's typical justifications for slavery. The American Revolution and the cause of liberty added tremendous impetus to the abolitionist cause. Slavery, which had been around for thousands of years, was considered normal and was not a significant issue of public debate prior to the Revolution. The Revolution changed that and made it into an issue that had to be addressed. As a result, during and shortly after the Revolution, the northern states quickly started outlawing slavery. Even in southern states, laws were changed to limit slavery and facilitate manumission. The amount of indentured servitude dropped dramatically throughout the country. An Act Prohibiting Importation of Slaves sailed through Congress with little opposition. President Thomas Jefferson supported it, and it went into effect on January 1, 1808. Benjamin Franklin and James Madison each helped found manumission societies. Influenced by the Revolution, many slave owners freed their slaves, but some, such as George Washington, did so only in their wills. The number of free blacks as a proportion of the black population in the upper South increased from less than 1 percent to nearly 10 percent between 1790 and 1810 as a result of these actions. The establishment of the Northwest Territory as "free soil" – no slavery – by Manasseh Cutler and Rufus Putnam (who both came from Puritan New England) would also prove crucial. This territory (which became the states of Ohio, Michigan, Indiana, Illinois, Wisconsin and part of Minnesota) doubled the size of the United States. 
In the decades leading up to the Civil War, abolitionists, such as Theodore Parker, Ralph Waldo Emerson, Henry David Thoreau and Frederick Douglass, repeatedly used the Puritan heritage of the country to bolster their cause. The most radical anti-slavery newspaper, The Liberator, invoked the Puritans and Puritan values over a thousand times. Parker, in urging New England Congressmen to support the abolition of slavery, wrote that "The son of the Puritan ... is sent to Congress to stand up for Truth and Right...." Literature served as a means to spread the message to common folks. Key works included Twelve Years a Slave, the Narrative of the Life of Frederick Douglass, American Slavery as It Is, and the most important: Uncle Tom's Cabin, the best-selling book of the 19th century aside from the Bible. By 1840 more than 15,000 people were members of abolitionist societies in the United States. Abolitionism in the United States became a popular expression of moralism, and led directly to the Civil War. In churches, conventions and newspapers, reformers promoted an absolute and immediate rejection of slavery. Support for abolition among the religious was not universal though. As the war approached, even the main denominations split along political lines, forming rival southern and northern churches. In 1845, for example, Baptists split into the Northern Baptists and Southern Baptists over the issue of slavery. Abolitionist sentiment was not strictly religious or moral in origin. The Whig Party became increasingly opposed to slavery because they saw it as inherently against the ideals of capitalism and the free market. Whig leader William H. Seward (who would serve in Lincoln's cabinet) proclaimed that there was an "irrepressible conflict" between slavery and free labor, and that slavery had left the South backward and undeveloped. As the Whig party dissolved in the 1850s, the mantle of abolition fell to its newly formed successor, the Republican Party. 
Territorial crisis Manifest destiny heightened the conflict over slavery, as each new territory acquired had to face the thorny question of whether to allow or disallow the "peculiar institution". Between 1803 and 1854, the United States achieved a vast expansion of territory through purchase, negotiation, and conquest. At first, the new states carved out of these territories entering the union were apportioned equally between slave and free states. Pro- and anti-slavery forces collided over the territories west of the Mississippi. The Mexican–American War and its aftermath was a key territorial event in the leadup to the war. As the Treaty of Guadalupe Hidalgo finalized the conquest of northern Mexico west to California in 1848, slaveholding interests looked forward to expanding into these lands and perhaps Cuba and Central America as well. Prophetically, Ralph Waldo Emerson wrote that "Mexico will poison us", referring to the ensuing divisions around whether the newly conquered lands would end up slave or free. Northern "free soil" interests vigorously sought to curtail any further expansion of slave territory. The Compromise of 1850 over California balanced a free-soil state with stronger fugitive slave laws for a political settlement after four years of strife in the 1840s. But the states admitted following California were all free: Minnesota (1858), Oregon (1859), and Kansas (1861). In the Southern states, the question of the territorial expansion of slavery westward again became explosive. Both the South and the North drew the same conclusion: "The power to decide the question of slavery for the territories was the power to determine the future of slavery itself." By 1860, four doctrines had emerged to answer the question of federal control in the territories, and they all claimed they were sanctioned by the Constitution, implicitly or explicitly. 
The first of these "conservative" theories, represented by the Constitutional Union Party, argued that the Missouri Compromise apportionment of territory north for free soil and south for slavery should become a Constitutional mandate. The Crittenden Compromise of 1860 was an expression of this view. The second doctrine of Congressional preeminence, championed by Abraham Lincoln and the Republican Party, insisted that the Constitution did not bind legislators to a policy of balance—that slavery could be excluded in a territory as it was done in the Northwest Ordinance of 1787 at the discretion of Congress; thus Congress could restrict human bondage, but never establish it. The ill-fated Wilmot Proviso announced this position in 1846. The Proviso was a pivotal moment in national politics, as it was the first time slavery had become a major congressional issue based on sectionalism, instead of party lines. Its bipartisan support by northern Democrats and Whigs, and bipartisan opposition by southerners was a dark omen of coming divisions. Senator Stephen A. Douglas proclaimed the third doctrine: territorial or "popular" sovereignty, which asserted that the settlers in a territory had the same rights as states in the Union to establish or disestablish slavery as a purely local matter. The Kansas–Nebraska Act of 1854 legislated this doctrine. In the Kansas Territory, years of pro and anti-slavery violence and political conflict erupted; the U.S. House of Representatives voted to admit Kansas as a free state in early 1860, but its admission did not pass the Senate until January 1861, after the departure of Southern senators. The fourth doctrine was advocated by Mississippi Senator Jefferson Davis, one of state sovereignty ("states' rights"), also known as the "Calhoun doctrine", named after the South Carolinian political theorist and statesman John C. Calhoun. 
Rejecting the arguments for federal authority or self-government, state sovereignty would empower states to promote the expansion of slavery as part of the federal union under the U.S. Constitution. "States' rights" was an ideology formulated and applied as a means of advancing slave state interests through federal authority. As historian Thomas L. Krannawitter points out, the "Southern demand for federal slave protection represented a demand for an unprecedented expansion of Federal power." These four doctrines comprised the dominant ideologies presented to the American public on the matters of slavery, the territories, and the U.S. Constitution before the 1860 presidential election. States' rights A long-running dispute over the origin of the Civil War is to what extent states' rights triggered the conflict. The consensus among historians is that the Civil War was not fought over states' rights. But the issue is frequently referenced in popular accounts of the war and has much traction among Southerners. The South argued that just as each state had decided to join the Union, a state had the right to secede—leave the Union—at any time. Northerners (including pro-slavery President Buchanan) rejected that notion as opposed to the will of the Founding Fathers, who said they were setting up a perpetual union. Historian James McPherson points out that even if Confederates genuinely fought over states' rights, it boiled down to states' right to slavery. McPherson has also written on states' rights and other non-slavery explanations. Before the Civil War, the Southern states used federal powers in enforcing and extending slavery at the national level, with the Fugitive Slave Act of 1850 and the Dred Scott v. Sandford decision. The faction that pushed for secession often infringed on states' rights. Because of the overrepresentation of pro-slavery factions in the federal government, many Northerners, even non-abolitionists, feared the Slave Power conspiracy.
Some Northern states resisted the enforcement of the Fugitive Slave Act. Historian Eric Foner stated the act "could hardly have been designed to arouse greater opposition in the North. It overrode numerous state and local laws and legal procedures and 'commanded' individual citizens to assist, when called upon, in capturing runaways." He continues, "It certainly did not reveal, on the part of slaveholders, sensitivity to states’ rights." According to historian Paul Finkelman "the southern states mostly complained that the northern states were asserting their states’ rights and that the national government was not powerful enough to counter these northern claims." The Confederate constitution also "federally" required slavery to be legal in all Confederate states and claimed territories. Sectionalism Sectionalism resulted from the different economies, social structure, customs, and political values of the North and South. Regional tensions came to a head during the War of 1812, resulting in the Hartford Convention, which manifested Northern dissatisfaction with a foreign trade embargo that affected the industrial North disproportionately, the Three-Fifths Compromise, dilution of Northern power by new states, and a succession of Southern presidents. Sectionalism increased steadily between 1800 and 1860 as the North, which phased slavery out of existence, industrialized, urbanized, and built prosperous farms, while the deep South concentrated on plantation agriculture based on slave labor, together with subsistence agriculture for poor whites. In the 1840s and 1850s, the issue of accepting slavery (in the guise of rejecting slave-owning bishops and missionaries) split the nation's largest religious denominations (the Methodist, Baptist, and Presbyterian churches) into separate Northern and Southern denominations. Historians have debated whether economic differences between the mainly industrial North and the mainly agricultural South helped cause the war. 
Most historians now disagree with the economic determinism of historian Charles A. Beard in the 1920s, and emphasize that Northern and Southern economies were largely complementary. While socially different, the sections economically benefited each other. Protectionism Owners of slaves preferred low-cost manual labor with no mechanization. Northern manufacturing interests supported tariffs and protectionism while Southern planters demanded free trade. The Democrats in Congress, controlled by Southerners, wrote the tariff laws in the 1830s, 1840s, and 1850s, and kept reducing rates so that the 1857 rates were the lowest since 1816. The Republicans called for an increase in tariffs in the 1860 election. The increases were only enacted in 1861 after Southerners resigned their seats in Congress. The tariff issue was a Northern grievance. However, neo-Confederate writers have claimed it as a Southern grievance. In 1860–61 none of the groups that proposed compromises to head off secession raised the tariff issue. Pamphleteers North and South rarely mentioned the tariff. Nationalism and honor Nationalism was a powerful force in the early 19th century, with famous spokesmen such as Andrew Jackson and Daniel Webster. While practically all Northerners supported the Union, Southerners were split between those loyal to the entirety of the United States (called "Southern Unionists") and those loyal primarily to the Southern region and then the Confederacy. Perceived insults to Southern collective honor included the enormous popularity of Uncle Tom's Cabin, and the actions of abolitionist John Brown in trying to incite a rebellion of slaves in 1859. While the South moved towards a Southern nationalism, leaders in the North were also becoming more nationally minded, and they rejected any notion of splitting the Union. The Republican national electoral platform of 1860 warned that Republicans regarded disunion as treason and would not tolerate it. 
The South ignored the warnings; Southerners did not realize how ardently the North would fight to hold the Union together. Lincoln's election The election of Abraham Lincoln in November 1860 was the final trigger for secession. Efforts at compromise, including the Corwin Amendment and the Crittenden Compromise, failed. Southern leaders feared that Lincoln would stop the expansion of slavery and put it on a course toward extinction. When Lincoln won the presidential election in 1860, the South lost any hope of compromise. Jefferson Davis claimed that all the cotton states would secede from the Union. The Confederacy was formed of seven states of the Deep South: Alabama, Florida, Georgia, Louisiana, Mississippi, South Carolina, and Texas in January and February 1861. They wrote the Confederate Constitution, which provided greater states' rights than the Constitution of the United States. Until elections were held, Davis was the provisional president. Lincoln was inaugurated on March 4, 1861. According to Lincoln, the American people had shown that they had been successful in establishing and administering a republic, but a third challenge faced the nation: maintaining a republic based on the people's vote, in the face of an attempt to destroy it. Outbreak of the war Secession crisis The election of Lincoln provoked the legislature of South Carolina to call a state convention to consider secession. Before the war, South Carolina did more than any other Southern state to advance the notion that a state had the right to nullify federal laws, and even to secede from the United States. The convention unanimously voted to secede on December 20, 1860, and adopted a secession declaration. It argued for states' rights for slave owners in the South, but contained a complaint about states' rights in the North in the form of opposition to the Fugitive Slave Act, claiming that Northern states were not fulfilling their federal obligations under the Constitution. 
The "cotton states" of Mississippi, Florida, Alabama, Georgia, Louisiana, and Texas followed suit, seceding in January and February 1861. Among the ordinances of secession passed by the individual states, those of three—Texas, Alabama, and Virginia—specifically mentioned the plight of the "slaveholding states" at the hands of Northern abolitionists. The rest made no mention of the slavery issue and were often brief announcements of the dissolution of ties by the legislatures. However, at least four states—South Carolina, Mississippi, Georgia, and Texas—also passed lengthy and detailed explanations of their causes for secession, all of which laid the blame squarely on the movement to abolish slavery and that movement's influence over the politics of the Northern states. The Southern states believed slaveholding was a constitutional right because of the Fugitive Slave Clause of the Constitution. These states agreed to form a new federal government, the Confederate States of America, on February 4, 1861. They took control of federal forts and other properties within their boundaries with little resistance from outgoing President James Buchanan, whose term ended on March 4, 1861. Buchanan said that the Dred Scott decision was proof that the South had no reason for secession, and that the Union "was intended to be perpetual", but that "The power by force of arms to compel a State to remain in the Union" was not among the "enumerated powers granted to Congress". One-quarter of the U.S. Army—the entire garrison in Texas—was surrendered in February 1861 to state forces by its commanding general, David E. Twiggs, who then joined the Confederacy. As Southerners resigned their seats in the Senate and the House, Republicans were able to pass projects that had been blocked by Southern senators before the war.
These included the Morrill Tariff, land grant colleges (the Morrill Act), a Homestead Act, a transcontinental railroad (the Pacific Railroad Acts), the National Bank Act, the authorization of United States Notes by the Legal Tender Act of 1862, and the ending of slavery in the District of Columbia. The Revenue Act of 1861 introduced the income tax to help finance the war. In December 1860, the Crittenden Compromise was proposed to re-establish the Missouri Compromise line by constitutionally banning slavery in territories to the north of the line while guaranteeing it to the south. The adoption of this compromise likely would have prevented the secession of the Southern states, but Lincoln and the Republicans rejected it. Lincoln stated that any compromise that would extend slavery would in time bring down the Union. A pre-war February Peace Conference of 1861 met in Washington, proposing a solution similar to that of the Crittenden compromise; it was rejected by Congress. The Republicans proposed an alternative compromise to not interfere with slavery where it existed but the South regarded it as insufficient. Nonetheless, the remaining eight slave states rejected pleas to join the Confederacy following a two-to-one no-vote in Virginia's First Secessionist Convention on April 4, 1861. On March 4, 1861, Abraham Lincoln was sworn in as president. In his inaugural address, he argued that the Constitution was a more perfect union than the earlier Articles of Confederation and Perpetual Union, that it was a binding contract, and called any secession "legally void". He had no intent to invade Southern states, nor did he intend to end slavery where it existed, but said that he would use force to maintain possession of Federal property, including forts, arsenals, mints, and customhouses that had been seized by the Southern states. The government would make no move to recover post offices, and if resisted, mail delivery would end at state lines. 
Where popular conditions did not allow peaceful enforcement of Federal law, U.S. marshals and judges would be withdrawn. No mention was made of bullion lost from U.S. mints in Louisiana, Georgia, and North Carolina. He stated that it would be U.S. policy to only collect import duties at its ports; there could be no serious injury to the South to justify the armed revolution during his administration. His speech closed with a plea for restoration of the bonds of union, famously calling on "the mystic chords of memory" binding the two regions. The Davis government of the new Confederacy sent three delegates to Washington to negotiate a peace treaty with the United States of America. Lincoln rejected any negotiations with Confederate agents because he claimed the Confederacy was not a legitimate government, and that making any treaty with it would be tantamount to recognition of it as a sovereign government. Lincoln instead attempted to negotiate directly with the governors of individual seceded states, whose administrations he continued to recognize. Complicating Lincoln's attempts to defuse the crisis were the actions of the new Secretary of State, William Seward. Seward had been Lincoln's main rival for the Republican presidential nomination. Shocked and deeply embittered by this defeat, Seward only agreed to support Lincoln's candidacy after he was guaranteed the executive office which was considered at that time to be by far the most powerful and important after the presidency itself. Even in the early stages of Lincoln's presidency Seward still held little regard for the new chief executive due to his perceived inexperience, and therefore viewed himself as the de facto head of government or "prime minister" behind the throne of Lincoln. In this role, Seward attempted to engage in unauthorized and indirect negotiations that failed. 
However, President Lincoln was determined to hold all remaining Union-occupied forts in the Confederacy: Fort Monroe in Virginia, Fort Pickens, Fort Jefferson and Fort Taylor in Florida, and Fort Sumter – located at the cockpit of secession in Charleston, South Carolina. Battle of Fort Sumter Fort Sumter is located in the middle of the harbor of Charleston, South Carolina. Its garrison had recently moved there to avoid incidents with local militias in the streets of the city. Lincoln told its commander, Major Robert Anderson, to hold on until fired upon. Confederate president Jefferson Davis ordered the surrender of the fort. Anderson gave a conditional reply, which the Confederate government rejected, and Davis ordered General P. G. T. Beauregard to attack the fort before a relief expedition could arrive. He bombarded Fort Sumter on April 12–13, forcing its capitulation. The attack on Fort Sumter enormously invigorated the North to the defense of American nationalism. On April 15, 1861, Lincoln called on all the states to send forces to recapture the fort and other federal properties. The scale of the rebellion appeared to be small, so he called for only 75,000 volunteers for 90 days. In western Missouri, local secessionists seized Liberty Arsenal. On May 3, 1861, Lincoln called for an additional 42,000 volunteers for a period of three years. Shortly after this, Virginia, Tennessee, Arkansas, and North Carolina seceded and joined the Confederacy. To reward Virginia, the Confederate capital was moved to Richmond. Attitude of the border states Maryland, Delaware, Missouri, and Kentucky were slave states that had divided loyalties to Northern and Southern businesses and family members. Some men enlisted in the Union Army and others in the Confederate Army. West Virginia separated from Virginia and was admitted to the Union on June 20, 1863. Maryland's territory surrounded the United States' capital of Washington, D.C., and could cut it off from the North. 
It had numerous anti-Lincoln officials who tolerated anti-army rioting in Baltimore and the burning of bridges, both aimed at hindering the passage of troops to the South. Maryland's legislature voted overwhelmingly (53–13) to stay in the Union, but also rejected hostilities with its southern neighbors, voting to close Maryland's rail lines to prevent them from being used for war. Lincoln responded by establishing martial law and unilaterally suspending habeas corpus in Maryland, along with sending in militia units from the North. Lincoln rapidly took control of Maryland and the District of Columbia by arresting many prominent figures, including one-third of the members of the Maryland General Assembly on the day it reconvened. All were held without trial, ignoring a ruling by U.S. Supreme Court Chief Justice Roger Taney, a Maryland native, that only Congress (and not the president) could suspend habeas corpus (Ex parte Merryman). Federal troops imprisoned a prominent Baltimore newspaper editor, Frank Key Howard, Francis Scott Key's grandson, after he criticized Lincoln in an editorial for ignoring the Chief Justice's ruling. In Missouri, an elected convention on secession voted decisively to remain within the Union. When pro-Confederate Governor Claiborne F. Jackson called out the state militia, it was attacked by federal forces under General Nathaniel Lyon, who chased the governor and the rest of the State Guard to the southwestern corner of the state (see also: Missouri secession). In the resulting vacuum, the convention on secession reconvened and took power as the Unionist provisional government of Missouri. Kentucky did not secede; for a time, it declared itself neutral. When Confederate forces entered the state in September 1861, neutrality ended and the state reaffirmed its Union status while maintaining slavery. 
During a brief invasion by Confederate forces in 1861, Confederate sympathizers organized a secession convention, formed the shadow Confederate Government of Kentucky, inaugurated a governor, and gained recognition from the Confederacy. Its jurisdiction extended only as far as Confederate battle lines in the Commonwealth, and it went into exile for good after October 1862. After Virginia's secession, a Unionist government in Wheeling asked 48 counties to vote on an ordinance to create a new state on October 24, 1861. With a voter turnout of 34 percent, 96 percent of ballots approved the statehood ordinance. Twenty-four secessionist counties were included in the new state, and the ensuing guerrilla war engaged about 40,000 Federal troops for much of the war. Congress admitted West Virginia to the Union on June 20, 1863. West Virginia provided about 20,000–22,000 soldiers to both the Confederacy and the Union. A Unionist secession attempt occurred in East Tennessee, but was suppressed by the Confederacy, which arrested over 3,000 men suspected of being loyal to the Union. They were held without trial. General features of the war The Civil War was a contest marked by the ferocity and frequency of battle. Over four years, 237 named battles were fought, as were many more minor actions and skirmishes, which were often characterized by their bitter intensity and high casualties. In his book The American Civil War, John Keegan writes that "The American Civil War was to prove one of the most ferocious wars ever fought". In many cases, without geographic objectives, the only target for each side was the enemy's soldier. Mobilization As the first seven states began organizing a Confederacy in Montgomery, the entire U.S. army numbered 16,000. However, Northern governors had begun to mobilize their militias. The Confederate Congress authorized the new nation up to 100,000 troops, sent by governors, as early as February. 
By May, Jefferson Davis was pushing for 100,000 men under arms for one year or the duration, and that was answered in kind by the U.S. Congress. In the first year of the war, both sides had far more volunteers than they could effectively train and equip. After the initial enthusiasm faded, reliance on the cohort of young men who came of age every year and wanted to join was not enough. Both sides used a draft law—conscription—as a device to encourage or force volunteering; relatively few were drafted and served. The Confederacy passed a draft law in April 1862 for young men aged 18 to 35; overseers of slaves, government officials, and clergymen were exempt. The U.S. Congress followed in July, authorizing a militia draft within a state when it could not meet its quota with volunteers. European immigrants joined the Union Army in large numbers, including 177,000 born in Germany and 144,000 born in Ireland. When the Emancipation Proclamation went into effect in January 1863, ex-slaves were energetically recruited by the states and used to meet the state quotas. States and local communities offered higher and higher cash bonuses for white volunteers. Congress tightened the law in March 1863. Men selected in the draft could provide substitutes or, until mid-1864, pay commutation money. Many eligibles pooled their money to cover the cost of anyone drafted. Families used the substitute provision to select which man should go into the army and which should stay home. There was much evasion and overt resistance to the draft, especially in Catholic areas. The draft riot in New York City in July 1863 involved Irish immigrants who had been signed up as citizens to swell the vote of the city's Democratic political machine, not realizing it made them liable for the draft. Of the 168,649 men procured for the Union through the draft, 117,986 were substitutes, leaving only 50,663 who had their services conscripted. In both the North and South, the draft laws were highly unpopular. 
In the North, some 120,000 men evaded conscription, many of them fleeing to Canada, and another 280,000 soldiers deserted during the war. At least 100,000 Southerners deserted, or about 10 percent; Southern desertion was high because, according to one historian writing in 1991, the highly localized Southern identity meant that many Southern men had little investment in the outcome of the war, with individual soldiers caring more about the fate of their local area than any grand ideal. In the North, "bounty jumpers" enlisted to get the generous bonus, deserted, then went back to a second recruiting station under a different name to sign up again for a second bonus; 141 were caught and executed. From a tiny frontier force in 1860, the Union and Confederate armies had grown into the "largest and most efficient armies in the world" within a few years. Some European observers at the time dismissed them as amateur and unprofessional, but British historian John Keegan concluded that each outmatched the French, Prussian, and Russian armies of the time, and without the Atlantic, would have threatened any of them with defeat. Prisoners At the start of the Civil War, a system of paroles operated. Captives agreed not to fight until they were officially exchanged. Meanwhile, they were held in camps run by their army. They were paid, but they were not allowed to perform any military duties. The system of exchanges collapsed in 1863 when the Confederacy refused to exchange black prisoners. After that, about 56,000 of the 409,000 POWs died in prisons during the war, accounting for nearly 10 percent of the conflict's fatalities. Women Historian Elizabeth D. Leonard writes that, according to various estimates, between five hundred and one thousand women enlisted as soldiers on both sides of the war, disguised as men. Women also served as spies, resistance activists, nurses, and hospital personnel. 
Women served on the Union hospital ship Red Rover and nursed Union and Confederate troops at field hospitals. Mary Edwards Walker, the only woman ever to receive the Medal of Honor, served in the Union Army and was given the medal for her efforts to treat the wounded during the war. Her name was deleted from the Army Medal of Honor Roll in 1917 (along with over 900 other male recipients); however, it was restored in 1977. Naval tactics The small U.S. Navy of 1861 was rapidly enlarged; by 1865 it numbered 6,000 officers and 45,000 sailors, with 671 vessels totaling 510,396 tons. Its mission was to blockade Confederate ports, take control of the river system, defend against Confederate raiders on the high seas, and be ready for a possible war with the British Royal Navy. Meanwhile, the main riverine war was fought in the West, where a series of major rivers gave access to the Confederate heartland. The U.S. Navy eventually gained control of the Red, Tennessee, Cumberland, Mississippi, and Ohio rivers. In the East, the Navy shelled Confederate forts and provided support for coastal army operations. Modern navy evolves The Civil War occurred during the early stages of the industrial revolution. Many naval innovations emerged during this time, most notably the advent of the ironclad warship. It began when the Confederacy, knowing it had to meet or match the Union's naval superiority, responded to the Union blockade by building or converting more than 130 vessels, including twenty-six ironclads and floating batteries. Only half of these saw active service. Many were equipped with ram bows, creating "ram fever" among Union squadrons wherever they threatened. But in the face of overwhelming Union superiority and the Union's ironclad warships, they were unsuccessful. In addition to ocean-going warships coming up the Mississippi, the Union Navy used timberclads, tinclads, and armored gunboats. Shipyards at Cairo, Illinois, and St. 
Louis built new boats or modified steamboats for action. The Confederacy experimented with the submarine H. L. Hunley, which did not work satisfactorily, and with building an ironclad ship, CSS Virginia, which was based on rebuilding the sunken Union ship USS Merrimack. On its first foray, on March 8, 1862, Virginia inflicted significant damage to the Union's wooden fleet, but the next day the first Union ironclad, USS Monitor, arrived to challenge it in the Chesapeake Bay. The resulting three-hour Battle of Hampton Roads was a draw, but it proved that ironclads were effective warships. Not long after the battle, the Confederacy was forced to scuttle the Virginia to prevent its capture, while the Union built many copies of the Monitor. Lacking the technology and infrastructure to build effective warships, the Confederacy attempted to obtain warships from Great Britain. However, this failed, because Great Britain had no interest in selling warships to a nation that was at war with a far stronger enemy, and doing so could sour relations with the U.S. Union blockade By early 1861, General Winfield Scott had devised the Anaconda Plan to win the war with as little bloodshed as possible. Scott argued that a Union blockade of the main ports would weaken the Confederate economy. Lincoln adopted parts of the plan, but he overruled Scott's caution about 90-day volunteers. Public opinion, however, demanded an immediate attack by the army to capture Richmond. In April 1861, Lincoln announced the Union blockade of all Southern ports; commercial ships could not get insurance and regular traffic ended. The South blundered in embargoing cotton exports in 1861 before the blockade was effective; by the time they realized the mistake, it was too late. "King Cotton" was dead, as the South could export less than 10 percent of its cotton. The blockade shut down the ten Confederate seaports with railheads that moved almost all the cotton, especially New Orleans, Mobile, and Charleston. 
By June 1861, warships were stationed off the principal Southern ports, and a year later nearly 300 ships were in service. Blockade runners The Confederates began the war short on military supplies and in desperate need of large quantities of arms which the agrarian South could not provide. Arms manufacturers in the industrial North were restricted by an arms embargo that kept shipments of arms from going to the South and ended all existing and future contracts. The Confederacy subsequently looked to foreign sources for its enormous military needs and sought out financiers and companies like S. Isaac, Campbell & Company and the London Armoury Company in Britain, which acted as purchasing agents connecting the Confederacy with Britain's many arms manufacturers and ultimately became its main source of arms. To get the arms safely to the Confederacy, British investors built small, fast, steam-driven blockade runners that traded arms and supplies brought in from Britain through Bermuda, Cuba, and the Bahamas in return for high-priced cotton. Many of the ships were lightweight and designed for speed and could only carry a relatively small amount of cotton back to England. When the Union Navy seized a blockade runner, the ship and cargo were condemned as a prize of war and sold, with the proceeds given to the Navy sailors; the captured crewmen were mostly British, and they were released. Economic impact The Southern economy nearly collapsed during the war. There were multiple reasons for this: the severe deterioration of food supplies, especially in cities, the failure of Southern railroads, the loss of control of the main rivers, foraging by Northern armies, and the seizure of animals and crops by Confederate armies. 
Most historians agree that the blockade was a major factor in ruining the Confederate economy; however, Wise argues that the blockade runners provided just enough of a lifeline to allow Lee to continue fighting for additional months, thanks to fresh supplies of 400,000 rifles, lead, blankets, and boots that the homefront economy could no longer supply. Surdam argues that the blockade was a powerful weapon that eventually ruined the Southern economy, at the cost of few lives in combat. Practically the entire Confederate cotton crop was useless (although some was sold to Union traders), costing the Confederacy its main source of income. Critical imports were scarce and the coastal trade was largely ended as well. The measure of the blockade's success was not the few ships that slipped through, but the thousands that never tried it. Merchant ships owned in Europe could not get insurance and were too slow to evade the blockade, so they stopped calling at Confederate ports. To fight an offensive war, the Confederacy purchased ships in Britain, converted them to warships, and raided American merchant ships in the Atlantic and Pacific oceans. Insurance rates skyrocketed and the American flag virtually disappeared from international waters. However, the same ships were reflagged with European flags and continued unmolested. After the war ended, the U.S. government demanded that Britain compensate them for the damage done by the raiders outfitted in British ports. Britain acquiesced, paying the U.S. $15.5 million in 1872 after international arbitration of the Alabama Claims. Diplomacy Although the Confederacy hoped that Britain and France would join them against the Union, this was never likely, and so they instead tried to bring the British and French governments in as mediators. The Union, under Lincoln and Secretary of State William H. Seward, worked to block this and threatened war if any country officially recognized the existence of the Confederate States of America. 
In 1861, Southerners voluntarily embargoed cotton shipments, hoping to start an economic depression in Europe that would force Britain to enter the war to get cotton, but this did not work. Worse, Europe turned to Egypt and India for cotton, which they found superior, hindering the South's recovery after the war. Cotton diplomacy proved a failure as Europe had a surplus of cotton, while the 1860–62 crop failures in Europe made the North's grain exports of critical importance. It also helped to turn European opinion further away from the Confederacy. It was said that "King Corn was more powerful than King Cotton", as U.S. grain went from a quarter of the British import trade to almost half. Meanwhile, the war created employment for arms makers, ironworkers, and the ships that transported weapons. Lincoln's administration initially failed to appeal to European public opinion. At first, diplomats explained that the United States was not committed to the ending of slavery, and instead repeated legalistic arguments about the unconstitutionality of secession. Confederate representatives, on the other hand, started off much more successfully, by ignoring slavery and instead focusing on their struggle for liberty, their commitment to free trade, and the essential role of cotton in the European economy. The European aristocracy was "absolutely gleeful in pronouncing the American debacle as proof that the entire experiment in popular government had failed. European government leaders welcomed the fragmentation of the ascendant American Republic." However, there was still a European public with liberal sensibilities, to which the U.S. sought to appeal by building connections with the international press. As early as 1861, many Union diplomats such as Carl Schurz realized that emphasizing the war against slavery was the Union's most effective moral asset in the struggle for public opinion in Europe. 
Seward was concerned that an overly radical case for reunification would distress the European merchants with cotton interests; even so, Seward supported a widespread campaign of public diplomacy. U.S. minister to Britain Charles Francis Adams proved particularly adept and convinced Britain not to openly challenge the Union blockade. The Confederacy purchased several warships from commercial shipbuilders in Britain, including CSS Alabama, CSS Florida, and CSS Shenandoah, among others. The most famous, CSS Alabama, did considerable damage and led to serious postwar disputes. However, public opinion against slavery created a political liability for British politicians, as the anti-slavery movement was powerful in Britain. War loomed in late 1861 between the U.S. and Britain over the Trent affair, involving the U.S. Navy's boarding of the British ship RMS Trent and seizure of two Confederate diplomats. However, London and Washington were able to smooth over the problem after Lincoln released the two. Prince Albert had left his deathbed to issue diplomatic instructions to Lord Lyons during the affair, and his counsel was heeded owing to the respect he enjoyed within the government. As a result, the British response to the United States was toned down and helped avert British involvement in the war. In 1862, the British government considered mediating between the Union and Confederacy, though even such an offer would have risked war with the United States. British Prime Minister Lord Palmerston reportedly read Uncle Tom's Cabin three times while weighing his decision. The Union victory in the Battle of Antietam caused the British to delay this decision. The Emancipation Proclamation over time would reinforce the political liability of supporting the Confederacy. Realizing that Washington could not intervene in Mexico as long as the Confederacy controlled Texas, France invaded Mexico in 1861. 
Washington repeatedly protested France's violation of the Monroe Doctrine. Despite sympathy for the Confederacy, France's seizure of Mexico ultimately deterred it from war with the Union. Confederate offers late in the war to end slavery in return for diplomatic recognition were not seriously considered by London or Paris. After 1863, the Polish revolt against Russia further distracted the European powers and ensured that they would remain neutral. Russia supported the Union, largely because it believed that the U.S. served as a counterbalance to its geopolitical rival, the United Kingdom. In 1863, the Russian Navy's Baltic and Pacific fleets wintered in the American ports of New York and San Francisco, respectively. Eastern theater The Eastern theater refers to the military operations east of the Appalachian Mountains, including the states of Virginia, West Virginia, Maryland, and Pennsylvania, the District of Columbia, and the coastal fortifications and seaports of North Carolina. Background Army of the Potomac Maj. Gen. George B. McClellan took command of the Union Army of the Potomac on July 26, 1861 (he was briefly general-in-chief of all the Union armies, but was subsequently relieved of that post in favor of Maj. Gen. Henry W. Halleck), and the war began in earnest in 1862. The 1862 Union strategy called for simultaneous advances along four axes: McClellan would lead the main thrust in Virginia towards Richmond. Ohio forces would advance through Kentucky into Tennessee. The Missouri Department would drive south along the Mississippi River. The westernmost attack would originate from Kansas. Army of Northern Virginia The primary Confederate force in the Eastern theater was the Army of Northern Virginia. The Army originated as the (Confederate) Army of the Potomac, which was organized on June 20, 1861, from all operational forces in northern Virginia. On July 20 and 21, the Army of the Shenandoah and forces from the District of Harpers Ferry were added. 
Units from the Army of the Northwest were merged into the Army of the Potomac between March 14 and May 17, 1862. The Army of the Potomac was renamed Army of Northern Virginia on March 14. The Army of the Peninsula was merged into it on April 12, 1862. When Virginia declared its secession in April 1861, Robert E. Lee chose to follow his home state, despite his desire for the country to remain intact and an offer of a senior Union command. Lee's biographer, Douglas S. Freeman, asserts that the army received its final name from Lee when he issued orders assuming command on June 1, 1862. However, Freeman does admit that Lee corresponded with General Joseph E. Johnston, his predecessor in army command, before that date and referred to Johnston's command as the Army of Northern Virginia. Part of the confusion results from the fact that Johnston commanded the Department of Northern Virginia (as of October 22, 1861) and the name Army of Northern Virginia can be seen as an informal consequence of its parent department's name. Jefferson Davis and Johnston did not adopt the name, but it is clear that the organization of units as of March 14 was the same organization that Lee received on June 1, and thus it is generally referred to today as the Army of Northern Virginia, even if that is correct only in retrospect. On July 4 at Harper's Ferry, Colonel Thomas J. Jackson assigned Jeb Stuart to command all the cavalry companies of the Army of the Shenandoah. He eventually commanded the Army of Northern Virginia's cavalry. Battles In one of the first highly visible battles, in July 1861, a march by Union troops under the command of Maj. Gen. Irvin McDowell on the Confederate forces led by Gen. P. G. T. Beauregard near Washington was repulsed at the First Battle of Bull Run (also known as First Manassas). The Union had the upper hand at first, nearly pushing Confederate forces holding a defensive position into a rout, but Confederate reinforcements under Joseph E. 
Johnston arrived from the Shenandoah Valley by railroad, and the course of the battle quickly changed. A brigade of Virginians under the relatively unknown brigadier general from the Virginia Military Institute, Thomas J. Jackson, stood its ground, which resulted in Jackson receiving his famous nickname, "Stonewall". Upon the strong urging of President Lincoln to begin offensive operations, McClellan attacked Virginia in the spring of 1862 by way of the peninsula between the York River and James River, southeast of Richmond. McClellan's army reached the gates of Richmond in the Peninsula Campaign. Also in the spring of 1862, in the Shenandoah Valley, Stonewall Jackson led his Valley Campaign. Employing audacity and rapid, unpredictable movements on interior lines, Jackson's 17,000 men marched 646 miles (1,040 km) in 48 days and won several minor battles as they successfully engaged three Union armies (52,000 men), including those of Nathaniel P. Banks and John C. Frémont, preventing them from reinforcing the Union offensive against Richmond. The swiftness of Jackson's men earned them the nickname of "foot cavalry". Johnston halted McClellan's advance at the Battle of Seven Pines, but he was wounded in the battle, and Robert E. Lee assumed his position of command. General Lee and top subordinates James Longstreet and Stonewall Jackson defeated McClellan in the Seven Days Battles and forced his retreat. The Northern Virginia Campaign, which included the Second Battle of Bull Run, ended in yet another victory for the South. McClellan resisted General-in-Chief Halleck's orders to send reinforcements to John Pope's Union Army of Virginia, which made it easier for Lee's Confederates to defeat twice the number of combined enemy troops. Emboldened by Second Bull Run, the Confederacy made its first invasion of the North with the Maryland Campaign. General Lee led 45,000 men of the Army of Northern Virginia across the Potomac River into Maryland on September 5. 
Lincoln then restored Pope's troops to McClellan. McClellan and Lee fought at the Battle of Antietam near Sharpsburg, Maryland, on September 17, 1862, the bloodiest single day in United States military history. Lee's army, checked at last, returned to Virginia before McClellan could destroy it. Antietam is considered a Union victory because it halted Lee's invasion of the North and provided an opportunity for Lincoln to announce his Emancipation Proclamation. When the cautious McClellan failed to follow up on Antietam, he was replaced by Maj. Gen. Ambrose Burnside. Burnside was soon defeated at the Battle of Fredericksburg on December 13, 1862, when more than 12,000 Union soldiers were killed or wounded during repeated futile frontal assaults against Marye's Heights. After the battle, Burnside was replaced by Maj. Gen. Joseph Hooker. Hooker, too, proved unable to defeat Lee's army; despite outnumbering the Confederates by more than two to one, his Chancellorsville Campaign proved ineffective and he was humiliated in the Battle of Chancellorsville in May 1863. Chancellorsville is known as Lee's "perfect battle" because his risky decision to divide his army in the presence of a much larger enemy force resulted in a significant Confederate victory. Gen. Stonewall Jackson was shot in the arm by accidental friendly fire during the battle and subsequently died of complications. Lee famously said: "He has lost his left arm, but I have lost my right arm." The fiercest fighting of the battle—and the second bloodiest day of the Civil War—occurred on May 3 as Lee launched multiple attacks against the Union position at Chancellorsville. That same day, John Sedgwick advanced across the Rappahannock River, defeated the small Confederate force at Marye's Heights in the Second Battle of Fredericksburg, and then moved to the west. The Confederates fought a successful delaying action at the Battle of Salem Church. Gen. Hooker was replaced by Maj. Gen. 
George Meade during Lee's second invasion of the North, in June. Meade defeated Lee at the Battle of Gettysburg (July 1 to 3, 1863). This was the bloodiest battle of the war and has been called the war's turning point. Pickett's Charge on July 3 is often considered the high-water mark of the Confederacy because it signaled the collapse of serious Confederate threats of victory. Lee's army suffered 28,000 casualties (versus Meade's 23,000). Western theater The Western theater refers to military operations between the Appalachian Mountains and the Mississippi River, including the states of Alabama, Georgia, Florida, Mississippi, North Carolina, Kentucky, South Carolina and Tennessee, as well as parts of Louisiana. Background Army of the Tennessee and Army of the Cumberland The primary Union forces in the Western theater were the Army of the Tennessee and the Army of the Cumberland, named for the two rivers, the Tennessee River and Cumberland River. After Meade's inconclusive fall campaign, Lincoln turned to the Western Theater for new leadership. At the same time, the Confederate stronghold of Vicksburg surrendered, giving the Union control of the Mississippi River, permanently isolating the western Confederacy, and producing the new leader Lincoln needed, Ulysses S. Grant. Army of Tennessee The primary Confederate force in the Western theater was the Army of Tennessee. The army was formed on November 20, 1862, when General Braxton Bragg renamed the former Army of Mississippi. While the Confederate forces had numerous successes in the Eastern Theater, they were defeated many times in the West. Battles The Union's key strategist and tactician in the West was Ulysses S. Grant, whose victories at Forts Henry (February 6, 1862) and Donelson (February 11–16, 1862) earned him the nickname "Unconditional Surrender" Grant and gave the Union control of the Tennessee and Cumberland Rivers. 
Nathan Bedford Forrest rallied nearly 4,000 Confederate troops and led them to escape across the Cumberland. Nashville and central Tennessee thus fell to the Union, leading to attrition of local food supplies and livestock and a breakdown in social organization. Leonidas Polk's invasion of Columbus ended Kentucky's policy of neutrality and turned it against the Confederacy. Grant used river transport and Andrew Foote's gunboats of the Western Flotilla to threaten the Confederacy's "Gibraltar of the West" at Columbus, Kentucky. Although rebuffed at Belmont, Grant cut off Columbus. The Confederates, lacking gunboats of their own, were forced to retreat, and the Union took control of western Kentucky and opened Tennessee in March 1862. At the Battle of Shiloh (Pittsburg Landing), in Tennessee in April 1862, the Confederates made a surprise attack that pushed Union forces against the river as night fell. Overnight, the Navy landed additional reinforcements, and Grant counter-attacked. Grant and the Union won a decisive victory—the first battle with the high casualty rates that would repeat over and over. The Confederates lost Albert Sidney Johnston, considered their finest general before the emergence of Lee. One of the early Union objectives in the war was the capture of the Mississippi River, to cut the Confederacy in half. The Mississippi River was opened to Union traffic to the southern border of Tennessee with the taking of Island No. 10 and New Madrid, Missouri, and then Memphis, Tennessee. In April 1862, the Union Navy captured New Orleans. "The key to the river was New Orleans, the South's largest port [and] greatest industrial center." U.S. Naval forces under Farragut ran past Confederate defenses south of New Orleans. Confederate forces abandoned the city, giving the Union a critical anchor in the Deep South and allowing Union forces to begin moving up the Mississippi. 
Memphis fell to Union forces on June 6, 1862, and became a key base for further advances south along the Mississippi River. Only the fortress city of Vicksburg, Mississippi, prevented Union control of the entire river. Bragg's second invasion of Kentucky in the Confederate Heartland Offensive included initial successes such as Kirby Smith's triumph at the Battle of Richmond and the capture of the Kentucky capital of Frankfort on September 3, 1862. However, the campaign ended with a meaningless victory over Maj. Gen. Don Carlos Buell at the Battle of Perryville. Bragg was forced to end his attempt at invading Kentucky and retreat due to lack of logistical support and lack of infantry recruits for the Confederacy in that state. Bragg was narrowly defeated by Maj. Gen. William Rosecrans at the Battle of Stones River in Tennessee, the culmination of the Stones River Campaign. Naval forces assisted Grant in the long, complex Vicksburg Campaign that resulted in the Confederates surrendering after the Siege of Vicksburg in July 1863, which cemented Union control of the Mississippi River and is considered one of the turning points of the war. The one clear Confederate victory in the West was the Battle of Chickamauga. After Rosecrans' successful Tullahoma Campaign, Bragg, reinforced by Lt. Gen. James Longstreet's corps (from Lee's army in the east), defeated Rosecrans, despite the heroic defensive stand of Maj. Gen. George Henry Thomas. Rosecrans retreated to Chattanooga, which Bragg then besieged in the Chattanooga Campaign. Grant marched to the relief of Rosecrans and defeated Bragg at the Third Battle of Chattanooga, eventually causing Longstreet to abandon his Knoxville Campaign and driving Confederate forces out of Tennessee and opening a route to Atlanta and the heart of the Confederacy.
Trans-Mississippi theater Background The Trans-Mississippi theater refers to military operations west of the Mississippi River, encompassing most of Missouri, Arkansas, most of Louisiana, and Indian Territory (now Oklahoma). The Trans-Mississippi District was formed by the Confederate Army to better coordinate Ben McCulloch's command of troops in Arkansas and Louisiana, Sterling Price's Missouri State Guard, as well as the portion of Earl Van Dorn's command that included the Indian Territory and excluded the Army of the West. The Union's command was the Trans-Mississippi Division, or the Military Division of West Mississippi. Battles The first battle of the Trans-Mississippi theater was the Battle of Wilson's Creek (August 1861). The Confederates were driven from Missouri early in the war as a result of the Battle of Pea Ridge. Extensive guerrilla warfare characterized the trans-Mississippi region, as the Confederacy lacked the troops and the logistics to support regular armies that could challenge Union control. Roving Confederate bands such as Quantrill's Raiders terrorized the countryside, striking both military installations and civilian settlements. The "Sons of Liberty" and "Order of the American Knights" attacked pro-Union people, elected officeholders, and unarmed uniformed soldiers. These partisans could not be entirely driven out of the state of Missouri until an entire regular Union infantry division was engaged. By 1864, these violent activities harmed the nationwide anti-war movement organizing against the re-election of Lincoln. Missouri not only stayed in the Union; Lincoln also took 70 percent of its vote for re-election. Numerous small-scale military actions south and west of Missouri sought to control Indian Territory and New Mexico Territory for the Union. The Battle of Glorieta Pass was the decisive battle of the New Mexico Campaign.
The Union repulsed Confederate incursions into New Mexico in 1862, and the exiled Arizona government withdrew into Texas. In the Indian Territory, civil war broke out within tribes. About 12,000 Indian warriors fought for the Confederacy and smaller numbers for the Union. The most prominent Cherokee was Brigadier General Stand Watie, the last Confederate general to surrender. After the fall of Vicksburg in July 1863, General Kirby Smith in Texas was informed by Jefferson Davis that he could expect no further help from east of the Mississippi River. Although he lacked the resources to beat Union armies, he built up a formidable arsenal at Tyler, along with his own Kirby Smithdom economy, a virtual "independent fiefdom" in Texas, including railroad construction and international smuggling. The Union, in turn, did not directly engage him. Its 1864 Red River Campaign to take Shreveport, Louisiana, was a failure and Texas remained in Confederate hands throughout the war. Lower Seaboard theater Background The Lower Seaboard theater refers to military and naval operations that occurred near the coastal areas of the Southeast (Alabama, Florida, Louisiana, Mississippi, South Carolina, and Texas) as well as the southern part of the Mississippi River (Port Hudson and south). Union Naval activities were dictated by the Anaconda Plan. Battles One of the earliest battles of the war was fought at Port Royal Sound (November 1861), south of Charleston. Much of the war along the South Carolina coast concentrated on capturing Charleston. In attempting to capture Charleston, the Union military tried two approaches: by land over James or Morris Islands or through the harbor. However, the Confederates were able to drive back each Union attack. One of the most famous of the land attacks was the Second Battle of Fort Wagner, in which the 54th Massachusetts Infantry took part. The Union suffered a serious defeat in this battle, losing 1,515 men while the Confederates lost only 174.
Fort Pulaski on the Georgia coast was an early target for the Union navy. Following the capture of Port Royal, an expedition was organized with engineer troops under the command of Captain Quincy A. Gillmore, forcing a Confederate surrender. The Union army occupied the fort for the rest of the war after repairing it. In April 1862, a Union naval task force commanded by Commander David D. Porter attacked Forts Jackson and St. Philip, which guarded the river approach to New Orleans from the south. While part of the fleet bombarded the forts, other vessels forced a break in the obstructions in the river and enabled the rest of the fleet to steam upriver to the city. A Union army force commanded by Major General Benjamin Butler landed near the forts and forced their surrender. Butler's controversial command of New Orleans earned him the nickname "Beast." The following year, the Union Army of the Gulf commanded by Major General Nathaniel P. Banks laid siege to Port Hudson for nearly eight weeks, the longest siege in US military history. The Confederates attempted to defend with the Bayou Teche Campaign but surrendered after Vicksburg. These two surrenders gave the Union control over the entire Mississippi. Several small skirmishes were fought in Florida, but no major battles. The biggest was the Battle of Olustee in early 1864. Pacific Coast theater The Pacific Coast theater refers to military operations on the Pacific Ocean and in the states and Territories west of the Continental Divide. Conquest of Virginia At the beginning of 1864, Lincoln made Grant commander of all Union armies. Grant made his headquarters with the Army of the Potomac and put Maj. Gen. William Tecumseh Sherman in command of most of the western armies. Grant understood the concept of total war and believed, along with Lincoln and Sherman, that only the utter defeat of Confederate forces and their economic base would end the war. 
This was total war not in killing civilians but rather in taking provisions and forage and destroying homes, farms, and railroads, that Grant said "would otherwise have gone to the support of secession and rebellion. This policy I believe exercised a material influence in hastening the end." Grant devised a coordinated strategy that would strike at the entire Confederacy from multiple directions. Generals George Meade and Benjamin Butler were ordered to move against Lee near Richmond, General Franz Sigel (and later Philip Sheridan) was to attack the Shenandoah Valley, General Sherman was to capture Atlanta and march to the sea (the Atlantic Ocean), Generals George Crook and William W. Averell were to operate against railroad supply lines in West Virginia, and Maj. Gen. Nathaniel P. Banks was to capture Mobile, Alabama. Grant's Overland Campaign Grant's army set out on the Overland Campaign intending to draw Lee into a defense of Richmond, where they would attempt to pin down and destroy the Confederate army. The Union army first attempted to maneuver past Lee and fought several battles, notably at the Wilderness, Spotsylvania, and Cold Harbor. These battles resulted in heavy losses on both sides and forced Lee's Confederates to fall back repeatedly. At the Battle of Yellow Tavern, the Confederates lost Jeb Stuart. An attempt to outflank Lee from the south failed under Butler, who was trapped inside the Bermuda Hundred river bend. Each battle resulted in setbacks for the Union that mirrored what they had suffered under prior generals, though, unlike those prior generals, Grant fought on rather than retreat. Grant was tenacious and kept pressing Lee's Army of Northern Virginia back to Richmond. While Lee was preparing for an attack on Richmond, Grant unexpectedly turned south to cross the James River and began the protracted Siege of Petersburg, where the two armies engaged in trench warfare for over nine months.
Sheridan's Valley Campaign Grant finally found a commander, General Philip Sheridan, aggressive enough to prevail in the Valley Campaigns of 1864. Union forces under Franz Sigel had initially been repelled at the Battle of New Market by former U.S. vice president and Confederate Gen. John C. Breckinridge. The Battle of New Market was the Confederacy's last major victory of the war and included a charge by teenage VMI cadets. Sheridan defeated Maj. Gen. Jubal A. Early in a series of battles, including a final decisive defeat at the Battle of Cedar Creek. Sheridan then proceeded to destroy the agricultural base of the Shenandoah Valley, a strategy similar to the tactics Sherman later employed in Georgia. Sherman's March to the Sea Meanwhile, Sherman maneuvered from Chattanooga to Atlanta, defeating Confederate Generals Joseph E. Johnston and John Bell Hood along the way. The fall of Atlanta on September 2, 1864, guaranteed the reelection of Lincoln as president. Hood left the Atlanta area to swing around and menace Sherman's supply lines and invade Tennessee in the Franklin–Nashville Campaign. Union Maj. Gen. John Schofield defeated Hood at the Battle of Franklin, and George H. Thomas dealt Hood a massive defeat at the Battle of Nashville, effectively destroying Hood's army. Leaving Atlanta, and his base of supplies, Sherman's army marched with an unknown destination, laying waste to about 20 percent of the farms in Georgia in his "March to the Sea". He reached the Atlantic Ocean at Savannah, Georgia, in December 1864. Sherman's army was followed by thousands of freed slaves; there were no major battles along the March. Sherman turned north through South Carolina and North Carolina to approach the Confederate Virginia lines from the south, increasing the pressure on Lee's army. The Waterloo of the Confederacy Lee's army, thinned by desertion and casualties, was now much smaller than Grant's.
One last Confederate attempt to break the Union hold on Petersburg failed at the decisive Battle of Five Forks (sometimes called "the Waterloo of the Confederacy") on April 1. This meant that the Union now controlled the entire perimeter surrounding Richmond-Petersburg, completely cutting it off from the Confederacy. Realizing that the capital was now lost, Lee decided to evacuate his army. The Confederate capital fell to the Union XXV Corps, composed of black troops. The remaining Confederate units fled west after a defeat at Sayler's Creek. Confederacy surrenders Initially, Lee did not intend to surrender but planned to regroup at the village of Appomattox Court House, where supplies were to be waiting, and then continue the war. Grant chased Lee and got in front of him so that when Lee's army reached Appomattox Court House, they were surrounded. After an initial battle, Lee decided that the fight was now hopeless and surrendered his Army of Northern Virginia on April 9, 1865, at the McLean House. In an untraditional gesture and as a sign of Grant's respect and anticipation of peacefully restoring Confederate states to the Union, Lee was permitted to keep his sword and his horse, Traveller. His men were paroled, and a chain of Confederate surrenders began. On April 14, 1865, President Lincoln was shot by John Wilkes Booth, a Confederate sympathizer. Lincoln died early the next morning. Lincoln's vice president, Andrew Johnson, was unharmed because his would-be assassin, George Atzerodt, lost his nerve; Johnson was promptly sworn in as president. Meanwhile, Confederate forces across the South surrendered as news of Lee's surrender reached them. On April 26, 1865, the same day Boston Corbett killed Booth at a tobacco barn, General Joseph E. Johnston surrendered nearly 90,000 men of the Army of Tennessee to Major General William Tecumseh Sherman at Bennett Place near present-day Durham, North Carolina.
It proved to be the largest surrender of Confederate forces. On May 4, all remaining Confederate forces in Alabama and Mississippi surrendered. President Johnson officially declared an end to the insurrection on May 9, 1865; Confederate president Jefferson Davis was captured the following day. On June 2, Kirby Smith officially surrendered his troops in the Trans-Mississippi Department. On June 23, Cherokee leader Stand Watie became the last Confederate general to surrender his forces. The final Confederate surrender was by the crew of the CSS Shenandoah on November 6, 1865, bringing all hostilities of the four-year war to a close. Home fronts Union victory and aftermath Explaining the Union victory The causes of the war, the reasons for its outcome, and even the name of the war itself are subjects of lingering contention today. The North and West grew rich while the once-rich South became poor for a century. The national political power of the slaveowners and rich Southerners ended. Historians are less sure about the results of the postwar Reconstruction, especially regarding the second-class citizenship of the freedmen and their poverty. Historians have debated whether the Confederacy could have won the war. Most scholars, including James McPherson, argue that Confederate victory was at least possible. McPherson argues that the North's advantage in population and resources made Northern victory likely but not guaranteed. He also argues that if the Confederacy had fought using unconventional tactics, they would have more easily been able to hold out long enough to exhaust the Union. Confederates did not need to invade and hold enemy territory to win but only needed to fight a defensive war to convince the North that the cost of winning was too high. The North needed to conquer and hold vast stretches of enemy territory and defeat Confederate armies to win.
Lincoln was not a military dictator and could continue to fight the war only as long as the American public supported a continuation of the war. The Confederacy sought to win independence by outlasting Lincoln; however, after Atlanta fell and Lincoln defeated McClellan in the election of 1864, all hope for a political victory for the South ended. At that point, Lincoln had secured the support of the Republicans, War Democrats, the border states, emancipated slaves, and the neutrality of Britain and France. By defeating the Democrats and McClellan, he also defeated the Copperheads and their peace platform. Some scholars argue that the Union held an insurmountable long-term advantage over the Confederacy in industrial strength and population. Confederate actions, they argue, only delayed defeat. Civil War historian Shelby Foote expressed this view succinctly: "I think that the North fought that war with one hand behind its back .... If there had been more Southern victories, and a lot more, the North simply would have brought that other hand out from behind its back. I don't think the South ever had a chance to win that War." A minority view among historians is that the Confederacy lost because, as E. Merton Coulter put it, "people did not will hard enough and long enough to win." However, most historians reject the argument. McPherson, after reading thousands of letters written by Confederate soldiers, found strong patriotism that continued to the end; they truly believed they were fighting for freedom and liberty. Even as the Confederacy was visibly collapsing in 1864–65, he says most Confederate soldiers were fighting hard. Historian Gary Gallagher cites General Sherman who in early 1864 commented, "The devils seem to have a determination that cannot but be admired." Despite their loss of slaves and wealth, with starvation looming, Sherman continued, "yet I see no sign of let-up—some few deserters—plenty tired of war, but the masses determined to fight it out." 
Also important were Lincoln's eloquence in rationalizing the national purpose and his skill in keeping the border states committed to the Union cause. The Emancipation Proclamation was an effective use of the President's war powers. The Confederate government failed in its attempt to get Europe involved in the war militarily, particularly Britain and France. Southern leaders needed to get European powers to help break up the blockade the Union had created around the Southern ports and cities. Lincoln's naval blockade was 95% effective at stopping trade goods; as a result, imports and exports to the South declined significantly. The abundance of European cotton and Britain's hostility to the institution of slavery, along with Lincoln's Atlantic and Gulf of Mexico naval blockades, severely decreased any chance that either Britain or France would enter the war. Historian Don Doyle has argued that the Union victory had a major impact on the course of world history. The Union victory energized popular democratic forces. A Confederate victory, on the other hand, would have meant a new birth of slavery, not freedom. Historian Fergus Bordewich makes a similar argument, following Doyle. Scholars have debated what the effects of the war were on political and economic power in the South. The prevailing view is that the southern planter elite retained its powerful position in the South. However, a 2017 study challenges this, noting that while some Southern elites retained their economic status, the turmoil of the 1860s created greater opportunities for economic mobility in the South than in the North. Casualties The war resulted in at least 1,030,000 casualties (3 percent of the population), including about 620,000 soldier deaths—two-thirds by disease—and 50,000 civilians. Binghamton University historian J. David Hacker believes the number of soldier deaths was approximately 750,000, 20 percent higher than traditionally estimated, and possibly as high as 850,000.
A novel way of calculating casualties by looking at the deviation of the death rate of men of fighting age from the norm through analysis of census data found that at least 627,000 and at most 888,000 people, but most likely 761,000 people, died through the war. As historian McPherson notes, the war's "cost in American lives was as great as in all of the nation's other wars combined through Vietnam" (referring to the Vietnam War). Based on 1860 census figures, 8 percent of all white men aged 13 to 43 died in the war, including 6 percent in the North and 18 percent in the South. About 56,000 soldiers died in prison camps during the War. An estimated 60,000 men lost limbs in the war. Of the 359,528 Union army dead, amounting to 15 percent of the over two million who served:
110,070 were killed in action (67,000) or died of wounds (43,000)
199,790 died of disease (75 percent of which was due to the war; the remainder would have occurred in civilian life anyway)
24,866 died in Confederate prison camps
9,058 were killed by accidents or drowning
15,741 other/unknown deaths
In addition there were 4,523 deaths in the Navy (2,112 in battle) and 460 in the Marines (148 in battle). Black troops made up 10 percent of the Union death toll; they amounted to 15 percent of disease deaths but less than 3 percent of those killed in battle. Losses among African Americans were high. In the last year and a half and from all reported casualties, approximately 20 percent of all African Americans enrolled in the military lost their lives during the Civil War. Notably, their mortality rate was significantly higher than that of white soldiers. While 15.2% of United States Volunteers and just 8.6% of white Regular Army troops died, 20.5% of United States Colored Troops died. Confederate records compiled by historian William F. Fox list 74,524 killed and died of wounds and 59,292 died of disease.
Including Confederate estimates of battle losses where no records exist would bring the Confederate death toll to 94,000 killed and died of wounds. However, this excludes the 30,000 deaths of Confederate troops in prisons, which would raise the minimum number of deaths to 290,000. The United States National Park Service uses the following figures in its official tally of war losses:
Union: 853,838
110,100 killed in action
224,580 disease deaths
275,154 wounded in action
211,411 captured (including 30,192 who died as POWs)
Confederate: 914,660
94,000 killed in action
164,000 disease deaths
194,026 wounded in action
462,634 captured (including 31,000 who died as POWs)
While the figures of 360,000 army deaths for the Union and 260,000 for the Confederacy remained commonly cited, they are incomplete. In addition to many Confederate records being missing, partly as a result of Confederate widows not reporting deaths due to being ineligible for benefits, both armies only counted troops who died during their service and not the tens of thousands who died of wounds or diseases after being discharged. This often happened only a few days or weeks later. Francis Amasa Walker, superintendent of the 1870 census, used census and surgeon general data to estimate a minimum of 500,000 Union military deaths and 350,000 Confederate military deaths, for a total death toll of 850,000 soldiers. While Walker's estimates were originally dismissed because of the 1870 census's undercounting, it was later found that the census was only off by 6.5% and that the data Walker used would be roughly accurate. Analyzing the number of dead by using census data to calculate the deviation of the death rate of men of fighting age from the norm suggests that at least 627,000 and at most 888,000, but most likely 761,000 soldiers, died in the war. This would break down to approximately 350,000 Confederate and 411,000 Union military deaths, going by the proportion of Union to Confederate battle losses.
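The census-deviation method described above is, at bottom, an excess-mortality calculation: project the expected number of peacetime deaths among men of fighting age from pre-war death rates, then attribute the excess of observed deaths to the war. A minimal sketch of that arithmetic, using made-up round numbers rather than the studies' actual census inputs:

```python
# Sketch of the excess-mortality idea behind the census-based estimates.
# All inputs are hypothetical round numbers for illustration only; the
# published figures (627,000-888,000) come from far more careful work.

def excess_deaths(population, annual_baseline_rate, observed_deaths, years):
    """Deaths above the projected peacetime norm over the given period."""
    expected = population * annual_baseline_rate * years
    return observed_deaths - expected

# Hypothetical: 7,000,000 men of fighting age, a 1.2% annual peacetime
# death rate, and 1,100,000 observed deaths across the four war years.
war_attributable = excess_deaths(7_000_000, 0.012, 1_100_000, 4)
print(f"Illustrative war-attributable deaths: {war_attributable:,.0f}")
```

The sketch shows only the shape of the calculation; the published range rather than a single figure reflects the uncertainty in the baseline rate and in the underlying census counts.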
Deaths among former slaves have proven much harder to estimate, due to the lack of reliable census data at the time, though they were known to be considerable, as former slaves were set free or escaped in massive numbers in an area where the Union army did not have sufficient shelter, doctors, or food for them. University of Connecticut Professor James Downs states that tens to hundreds of thousands of slaves died during the war from disease, starvation, or exposure and that if these deaths are counted in the war's total, the death toll would exceed 1 million. Losses were far higher than during the recent defeat of Mexico, which saw roughly thirteen thousand American deaths, including fewer than two thousand killed in battle, between 1846 and 1848. One reason for the high number of battle deaths during the war was the continued use of tactics similar to those of the Napoleonic Wars at the turn of the century, such as charging. With the advent of more accurate rifled barrels, Minié balls, and (near the end of the war for the Union army) repeating firearms such as the Spencer Repeating Rifle and the Henry Repeating Rifle, soldiers were mowed down when standing in lines in the open. This led to the adoption of trench warfare, a style of fighting that defined much of World War I. Emancipation Abolishing slavery was not a Union war goal from the outset, but it quickly became one. Lincoln's initial claims were that preserving the Union was the central goal of the war. In contrast, the South saw itself as fighting to preserve slavery. While not all Southerners saw themselves as fighting for slavery, most of the officers and over a third of the rank and file in Lee's army had close family ties to slavery. To Northerners, in contrast, the motivation was primarily to preserve the Union, not to abolish slavery.
However, as the war dragged on it became clear that slavery was the central factor of the conflict, and that emancipation was (to quote the Emancipation Proclamation) "a fit and necessary war measure for suppressing [the] rebellion". Lincoln and his cabinet made ending slavery a war goal, culminating in the Emancipation Proclamation. Lincoln's decision to issue the Emancipation Proclamation angered both Peace Democrats ("Copperheads") and War Democrats, but energized most Republicans. By warning that free blacks would flood the North, Democrats made gains in the 1862 elections, but they did not gain control of Congress. The Republicans' counterargument that slavery was the mainstay of the enemy steadily gained support, with the Democrats losing decisively in the 1863 elections in the northern state of Ohio when they tried to resurrect anti-black sentiment. Emancipation Proclamation Slavery for the Confederacy's 3.5 million blacks effectively ended in each area when Union armies arrived; they were nearly all freed by the Emancipation Proclamation. The last Confederate slaves were freed on June 19, 1865, celebrated as the modern holiday of Juneteenth. Slaves in the border states and those located in some former Confederate territory occupied before the Emancipation Proclamation were freed by state action or (on December 6, 1865) by the Thirteenth Amendment. The Emancipation Proclamation enabled African Americans, both free blacks and escaped slaves, to join the Union Army. About 190,000 volunteered, further enhancing the numerical advantage the Union armies enjoyed over the Confederates, who did not dare emulate the equivalent manpower source for fear of fundamentally undermining the legitimacy of slavery. During the Civil War, sentiment concerning slaves, enslavement and emancipation in the United States was divided.
Lincoln's fears of making slavery a war issue were based on a harsh reality: abolition did not enjoy wide support in the west, the territories, and the border states. In 1861, Lincoln worried that premature attempts at emancipation would mean the loss of the border states, and that "to lose Kentucky is nearly the same as to lose the whole game." Copperheads and some War Democrats opposed emancipation, although the latter eventually accepted it as part of the total war needed to save the Union. At first, Lincoln reversed attempts at emancipation by Secretary of War Simon Cameron and Generals John C. Frémont (in Missouri) and David Hunter (in South Carolina, Georgia and Florida) to keep the loyalty of the border states and the War Democrats. Lincoln warned the border states that a more radical type of emancipation would happen if his gradual plan based on compensated emancipation and voluntary colonization was rejected. But only the District of Columbia accepted Lincoln's gradual plan, which was enacted by Congress. When Lincoln told his cabinet about his proposed emancipation proclamation, Seward advised Lincoln to wait for a victory before issuing it, as to do otherwise would seem like "our last shriek on the retreat". Lincoln laid the groundwork for public support in an open letter published in response to Horace Greeley's "The Prayer of Twenty Millions." He also laid the groundwork at a meeting at the White House with five African American representatives on August 14, 1862. Arranging for a reporter to be present, he urged his visitors to agree to the voluntary colonization of black people, apparently to make his forthcoming preliminary emancipation proclamation more palatable to racist white people. A Union victory in the Battle of Antietam on September 17, 1862, provided Lincoln with an opportunity to issue the preliminary Emancipation Proclamation, and the subsequent War Governors' Conference added support for the proclamation. 
Lincoln issued his preliminary Emancipation Proclamation on September 22, 1862, and his final Emancipation Proclamation on January 1, 1863. In his letter to Albert G. Hodges, Lincoln explained his belief that "If slavery is not wrong, nothing is wrong .... And yet I have never understood that the Presidency conferred upon me an unrestricted right to act officially upon this judgment and feeling .... I claim not to have controlled events, but confess plainly that events have controlled me." Lincoln's moderate approach succeeded in inducing the border states to remain in the Union and War Democrats to support the Union. The border states (Kentucky, Missouri, Maryland, Delaware) and Union-controlled regions around New Orleans, Norfolk, and elsewhere, were not covered by the Emancipation Proclamation. All abolished slavery on their own, except Kentucky and Delaware. Still, the proclamation did not enjoy universal support. It caused much unrest in what were then considered western states, where racist sentiments led to a great fear of abolition. There was some concern that the proclamation would lead to the secession of western states, and its issuance prompted the stationing of Union troops in Illinois in case of rebellion. Since the Emancipation Proclamation was based on the President's war powers, it applied only in territory held by Confederates at the time. However, the Proclamation became a symbol of the Union's growing commitment to add emancipation to the Union's definition of liberty. The Emancipation Proclamation greatly reduced the Confederacy's hope of being recognized or otherwise aided by Britain or France. By late 1864, Lincoln was playing a leading role in getting Congress to vote for the Thirteenth Amendment, which made emancipation universal and permanent unless it was repealed by another constitutional amendment. Reconstruction The war had utterly devastated the South and posed serious questions of how the South would be reintegrated into the Union.
The war destroyed much of the wealth that had existed in the South. All accumulated investment in Confederate bonds was forfeit; most banks and railroads were bankrupt. The income per person in the South dropped to less than 40 percent of that of the North, a condition that lasted until well into the 20th century. Southern influence in the U.S. federal government, previously considerable, was greatly diminished until the latter half of the 20th century. Reconstruction began during the war, with the Emancipation Proclamation of January 1, 1863, and it continued until 1877. It comprised multiple complex methods to resolve the outstanding issues of the war's aftermath, the most important of which were the three "Reconstruction Amendments" to the Constitution: the 13th outlawing slavery (1865), the 14th guaranteeing citizenship to former slaves (1868) and the 15th ensuring voting rights to former slaves (1870). From the Union perspective, the goals of Reconstruction were to consolidate the Union victory on the battlefield by reuniting the Union; to guarantee a "republican form of government" for the ex-Confederate states, and to permanently end slavery—and prevent semi-slavery status. President Johnson took a lenient approach and saw the achievement of the main war goals as realized in 1865 when each ex-rebel state repudiated secession and ratified the Thirteenth Amendment. Radical Republicans demanded proof that Confederate nationalism was dead and that the slaves were truly free. They came to the fore after the 1866 elections and undid much of Johnson's work. In 1872, the "Liberal Republicans" argued that the war goals had been achieved and that Reconstruction should end. They ran a presidential ticket in 1872 but were decisively defeated. In 1874, Democrats, primarily Southern, took control of Congress and opposed further reconstruction. The Compromise of 1877 closed with a national consensus that the Civil War had finally ended.
With the withdrawal of federal troops, however, whites retook control of every Southern legislature, and the Jim Crow era of disenfranchisement and legal segregation was ushered in. The Civil War would have a huge impact on American politics in the years to come. Many veterans on both sides were subsequently elected to political office, including five U.S. Presidents: General Ulysses Grant, Rutherford B. Hayes, James Garfield, Benjamin Harrison, and William McKinley. Memory and historiography The Civil War is one of the central events in American collective memory. There are innumerable statues, commemorations, books, and archival collections. The memory includes the home front, military affairs, the treatment of soldiers, both living and dead, in the war's aftermath, depictions of the war in literature and art, evaluations of heroes and villains, and considerations of the moral and political lessons of the war. The last theme includes moral evaluations of racism and slavery, heroism in combat and heroism behind the lines, and issues of democracy and minority rights, as well as the notion of an "Empire of Liberty" influencing the world. Professional historians have paid much more attention to the causes of the war than to the war itself. Military history has largely developed outside academia, leading to a proliferation of studies by non-scholars who nevertheless are familiar with the primary sources and pay close attention to battles and campaigns and who write for the general public. Bruce Catton and Shelby Foote are among the best known. Practically every major figure in the war, both North and South, has had a serious biographical study. Lost Cause The memory of the war in the white South crystallized in the myth of the "Lost Cause": that the Confederate cause was just and heroic. The myth shaped regional identity and race relations for generations. Alan T. 
Nolan notes that the Lost Cause was expressly a rationalization, a cover-up to vindicate the name and fame of those in rebellion. Some claims revolve around the insignificance of slavery; some appeals highlight cultural differences between North and South; the military conflict by Confederate actors is idealized; in any case, secession was said to be lawful. Nolan argues that the adoption of the Lost Cause perspective facilitated the reunification of the North and the South while excusing the "virulent racism" of the 19th century, sacrificing black American progress to white man's reunification. He also deems the Lost Cause "a caricature of the truth. This caricature wholly misrepresents and distorts the facts of the matter" in every instance. The Lost Cause myth was formalized by Charles A. Beard and Mary R. Beard, whose The Rise of American Civilization (1927) spawned "Beardian historiography". The Beards downplayed slavery, abolitionism, and issues of morality. Though this interpretation was abandoned by the Beards in the 1940s, and by historians generally by the 1950s, Beardian themes still echo among Lost Cause writers. Battlefield preservation The first efforts at Civil War battlefield preservation and memorialization came during the war itself with the establishment of National Cemeteries at Gettysburg, Mill Springs and Chattanooga. Soldiers began erecting markers on battlefields beginning with the First Battle of Bull Run in July 1861, but the oldest surviving monument is the Hazen Brigade Monument near Murfreesboro, Tennessee, built in the summer of 1863 by soldiers in Union Col. William B. Hazen's brigade to mark the spot where they buried their dead following the Battle of Stones River. 
In the 1890s, the United States government established five Civil War battlefield parks under the jurisdiction of the War Department, beginning with the creation of the Chickamauga and Chattanooga National Military Park in Tennessee and the Antietam National Battlefield in Maryland in 1890. The Shiloh National Military Park was established in 1894, followed by the Gettysburg National Military Park in 1895 and Vicksburg National Military Park in 1899. In 1933, these five parks and other national monuments were transferred to the jurisdiction of the National Park Service. Chief among modern efforts to preserve Civil War sites has been the American Battlefield Trust, which has preserved land at more than 130 battlefields in 24 states. The five major Civil War battlefield parks operated by the National Park Service (Gettysburg, Antietam, Shiloh, Chickamauga/Chattanooga and Vicksburg) had a combined 3.1 million visitors in 2018, down 70% from 10.2 million in 1970. Civil War commemoration The American Civil War has been commemorated in many capacities, ranging from the reenactment of battles to the erection of statues and memorial halls, the production of films, and the issuance of stamps and coins with Civil War themes, all of which helped to shape public memory. Such commemoration occurred in greater proportions around the war's 100th and 150th anniversaries. Hollywood's take on the war has been especially influential in shaping public memory, as seen in such film classics as The Birth of a Nation (1915), Gone with the Wind (1939), and Lincoln (2012). Ken Burns's PBS television series The Civil War (1990) is especially well remembered, though criticized for its historical inaccuracies. Technological significance Numerous technological innovations during the Civil War had a great impact on 19th-century science. The Civil War was one of the earliest examples of an "industrial war", in which technological might is used to achieve military supremacy. 
New inventions, such as the train and telegraph, delivered soldiers, supplies and messages at a time when horses were considered to be the fastest way to travel. It was also in this war that aerial warfare, in the form of reconnaissance balloons, was first used. It saw the first action involving steam-powered ironclad warships in naval warfare history. Repeating firearms such as the Henry rifle, Spencer rifle, Colt revolving rifle, Triplett & Scott carbine and others, first appeared during the Civil War; they were a revolutionary invention that would soon replace muzzle-loading and single-shot firearms in warfare. The war also saw the first appearances of rapid-firing weapons and machine guns such as the Agar gun and the Gatling gun. In works of culture and art The Civil War is one of the most studied events in American history, and the collection of cultural works around it is enormous. This section gives an abbreviated overview of the most notable works. Literature When Lilacs Last in the Dooryard Bloom'd and O Captain! My Captain! 
(1865) by Walt Whitman, famous eulogies to Lincoln Battle-Pieces and Aspects of the War (1866) poetry by Herman Melville The Rise and Fall of the Confederate Government (1881) by Jefferson Davis The Private History of a Campaign That Failed (1885) by Mark Twain Texar's Revenge, or, North Against South (1887) by Jules Verne An Occurrence at Owl Creek Bridge (1890) by Ambrose Bierce The Red Badge of Courage (1895) by Stephen Crane Gone with the Wind (1936) by Margaret Mitchell North and South (1982) by John Jakes Film The Birth of a Nation (1915, US) The General (1926, US) Operator 13 (1934, US) Gone with the Wind (1939, US) The Red Badge of Courage (1951, US) The Horse Soldiers (1959, US) Shenandoah (1965, US) The Good, the Bad and the Ugly (1966, Italy-Spain-FRG) The Beguiled (1971, US) The Outlaw Josey Wales (1976, US) Glory (1989, US) The Civil War (1990, US) Gettysburg (1993, US) The Last Outlaw (1993, US) Cold Mountain (2003, US) Gods and Generals (2003, US) North and South (miniseries) Lincoln (2012, US) 12 Years a Slave (2013, US) Free State of Jones (2016, US) Music Dixie Battle Cry of Freedom Battle Hymn of the Republic The Bonnie Blue Flag John Brown's Body When Johnny Comes Marching Home Marching Through Georgia The Night They Drove Old Dixie Down Video games North & South (1989, FR) Sid Meier's Gettysburg! (1997, US) Sid Meier's Antietam! 
(1999, US) American Conquest: Divided Nation (2006, US) Forge of Freedom: The American Civil War (2006, US) The History Channel: Civil War – A Nation Divided (2006, US) Ageod's American Civil War (2007, US/FR) History Civil War: Secret Missions (2008, US) Call of Juarez: Bound in Blood (2009, US) Darkest of Days (2009, US) Victoria II: A House Divided (2011, US) Ageod's American Civil War II (2013, US/FR) Ultimate General: Gettysburg (2014, UKR) Ultimate General: Civil War (2016, UKR) See also General reference American Civil War Corps Badges List of American Civil War battles List of costliest American Civil War land battles List of weapons in the American Civil War Second American Civil War Union Presidency of Abraham Lincoln Uniform of the Union Army Confederacy Central Confederacy Uniforms of the Confederate States Armed Forces Ethnic articles African Americans in the American Civil War German Americans in the American Civil War Irish Americans in the American Civil War Italian Americans in the American Civil War Native Americans in the American Civil War Topical articles Commemoration of the American Civil War Commemoration of the American Civil War on postage stamps Dorothea Dix Education of freed people during the Civil War Spies in the American Civil War Infantry in the American Civil War List of ships captured in the 19th century#American Civil War Slavery during the American Civil War National articles Canada in the American Civil War Foreign enlistment in the American Civil War Prussia in the American Civil War United Kingdom in the American Civil War State articles :Category:American Civil War by state Memorials List of Confederate monuments and memorials List of memorials and monuments at Arlington National Cemetery List of memorials to Jefferson Davis List of memorials to Robert E. 
Lee List of memorials to Stonewall Jackson List of monuments erected by the United Daughters of the Confederacy List of monuments of the Gettysburg Battlefield List of Union Civil War monuments and memorials Memorials to Abraham Lincoln Removal of Confederate monuments and memorials Other modern civil wars in the world Boxer Rebellion Chinese Civil War Finnish Civil War Mexican Revolution Russian Civil War Spanish Civil War Taiping Rebellion References Notes Citations Bibliography Beringer, Richard E., Archer Jones, and Herman Hattaway, Why the South Lost the Civil War (1986), influential analysis of factors; an abridged version is The Elements of Confederate Defeat: Nationalism, War Aims, and Religion (1988) Gallagher, Gary W. (2011). The Union War. Cambridge, Massachusetts: Harvard University Press. . Gara, Larry. 1964. The Fugitive Slave Law: A Double Paradox in Essays on the Civil War and Reconstruction, New York: Holt, Rinehart and Winston, 1970 (originally published in Civil War History, X, No. 3, September 1964) Nevins, Allan. Ordeal of the Union, an 8-volume set (1947–1971). the most detailed political, economic and military narrative; by Pulitzer Prize-winner 1. Fruits of Manifest Destiny, 1847–1852 online; 2. A House Dividing, 1852–1857; 3. Douglas, Buchanan, and Party Chaos, 1857–1859; 4. Prologue to Civil War, 1859–1861; vols 5–8 have the series title War for the Union; 5. The Improvised War, 1861–1862; 6. online; War Becomes Revolution, 1862–1863; 7. The Organized War, 1863–1864; 8. The Organized War to Victory, 1864–1865 Sheehan-Dean, Aaron. A Companion to the U.S. Civil War 2 vol. (April 2014) Wiley-Blackwell, New York . 1232pp; 64 Topical chapters by scholars and experts; emphasis on historiography. Stoker, Donald. The Grand Design: Strategy and the U.S. 
Civil War (2010) excerpt Borrow book at: archive.org Further reading Bibliography of the American Civil War Bibliography of American Civil War naval history External links West Point Atlas of Civil War Battles Civil War photos at the National Archives View images from the Civil War Photographs Collection at the Library of Congress American Battlefield Trust – A non-profit land preservation and educational organization with two divisions, the Civil War Trust and the Revolutionary War Trust, dedicated to preserving America's battlefields through land acquisitions. Civil War Era Digital Collection at Gettysburg College – This collection contains digital images of political cartoons, personal papers, pamphlets, maps, paintings and photographs from the Civil War Era held in Special Collections at Gettysburg College. Civil War 150 – Washington Post interactive website on the 150th Anniversary of the American Civil War. Civil War in the American South – An Association of Southeastern Research Libraries (ASERL) portal with links to almost 9,000 digitized Civil War-era items—books, pamphlets, broadsides, letters, maps, personal papers, and manuscripts—held at ASERL member libraries The Civil War – site with 7,000 pages, including the complete run of Harper's Weekly newspapers from the Civil War "American Civil War" maps at the Persuasive Cartography, The PJ Mode Collection, Cornell University Library Civil War Manuscripts at Shapell Manuscript Foundation Statements of each state as to why they were seceding Rebellions against the United States Conflicts in 1861 Conflicts in 1862 Conflicts in 1863 Conflicts in 1864 Conflicts in 1865 19th-century conflicts Civil War 1860s in the United States Wars of independence Internal wars of the United States 1860s conflicts
https://en.wikipedia.org/wiki/Andy%20Warhol
Andy Warhol
Andy Warhol (; born Andrew Warhola Jr.; August 6, 1928 – February 22, 1987) was an American artist, film director, and producer who was a leading figure in the visual art movement known as pop art. His works explore the relationship between artistic expression, advertising, and celebrity culture that flourished by the 1960s, and span a variety of media, including painting, silkscreening, photography, film, and sculpture. Some of his best known works include the silkscreen paintings Campbell's Soup Cans (1962) and Marilyn Diptych (1962), the experimental films Empire (1964) and Chelsea Girls (1966), and the multimedia events known as the Exploding Plastic Inevitable (1966–67). Born and raised in Pittsburgh, Warhol initially pursued a successful career as a commercial illustrator. After exhibiting his work in several galleries in the late 1950s, he began to receive recognition as an influential and controversial artist. His New York studio, The Factory, became a well-known gathering place that brought together distinguished intellectuals, drag queens, playwrights, Bohemian street people, Hollywood celebrities, and wealthy patrons. He promoted a collection of personalities known as Warhol superstars, and is credited with inspiring the widely used expression "15 minutes of fame". In the late 1960s he managed and produced the experimental rock band The Velvet Underground and founded Interview magazine. He authored numerous books, including The Philosophy of Andy Warhol and Popism: The Warhol Sixties. He lived openly as a gay man before the gay liberation movement. In June 1968, he was almost killed by radical feminist Valerie Solanas, who shot him inside his studio. After gallbladder surgery, Warhol died of cardiac arrhythmia in February 1987 at the age of 58 in New York. Warhol has been the subject of numerous retrospective exhibitions, books, and feature and documentary films. 
The Andy Warhol Museum in his native city of Pittsburgh, which holds an extensive permanent collection of art and archives, is the largest museum in the United States dedicated to a single artist. A 2009 article in The Economist described Warhol as the "bellwether of the art market". Many of his creations are very collectible and highly valuable. The highest price ever paid for a Warhol painting is $105 million for a 1963 serigraph titled Silver Car Crash (Double Disaster). His works include some of the most expensive paintings ever sold. Biography Early life and beginnings (1928–1949) Warhol was born on August 6, 1928, in Pittsburgh, Pennsylvania. He was the fourth child of Ondrej Warhola (Americanized as Andrew Warhola, Sr., 1889–1942) and Julia (née Zavacká, 1892–1972), whose first child was born in their homeland of Austria-Hungary and died before their move to the U.S. His parents were working-class Lemko emigrants from Mikó, Austria-Hungary (now called Miková, located in today's northeastern Slovakia). Warhol's father emigrated to the United States in 1914, and his mother joined him in 1921, after the death of Warhol's grandparents. Warhol's father worked in a coal mine. The family lived at 55 Beelen Street and later at 3252 Dawson Street in the Oakland neighborhood of Pittsburgh. The family was Ruthenian Catholic and attended St. John Chrysostom Byzantine Catholic Church. Andy Warhol had two elder brothers—Pavol (Paul), the eldest, was born before the family emigrated; Ján was born in Pittsburgh. Pavol's son, James Warhola, became a successful children's book illustrator. In third grade, Warhol had Sydenham's chorea (also known as St. Vitus' Dance), a nervous system disease that causes involuntary movements of the extremities and is believed to be a complication of scarlet fever, which causes skin pigmentation blotchiness. At times when he was confined to bed, he drew, listened to the radio and collected pictures of movie stars around his bed. 
Warhol later described this period as very important in the development of his personality, skill-set and preferences. When Warhol was 13, his father died in an accident. Warhol graduated from Schenley High School in 1945, and as a teenager he also won a Scholastic Art and Writing Award. After graduating from high school, he intended to study art education at the University of Pittsburgh in the hope of becoming an art teacher, but his plans changed and he enrolled in the Carnegie Institute of Technology, now Carnegie Mellon University, in Pittsburgh, where he studied commercial art. During his time there, Warhol joined the campus Modern Dance Club and Beaux Arts Society. He also served as art director of the student art magazine, Cano, illustrating a cover in 1948 and a full-page interior illustration in 1949. These are believed to be his first two published artworks. Warhol earned a Bachelor of Fine Arts in pictorial design in 1949. Later that year, he moved to New York City and began a career in magazine illustration and advertising. 1950s Warhol's early career was dedicated to commercial and advertising art; his first commission was to draw shoes for Glamour magazine in the late 1940s. In the 1950s, Warhol worked as a designer for shoe manufacturer Israel Miller. While working in the shoe industry, Warhol developed his "blotted line" technique, applying ink to paper and then blotting the ink while still wet, which was akin to a printmaking process on the most rudimentary scale. His use of tracing paper and ink allowed him to repeat the basic image and also to create endless variations on the theme. In 1952, Warhol had his first solo show at the Hugo Gallery in New York, and although that show was not well received, by 1956 he was included in his first group exhibition at the Museum of Modern Art, New York. 
Warhol's "whimsical" ink drawings of shoe advertisements figured in some of his earliest showings at the Bodley Gallery in New York in 1957. Warhol habitually used the expedient of tracing photographs projected with an epidiascope. Using prints by Edward Wallowitch, his "first boyfriend," the photographs would undergo a subtle transformation during Warhol's often cursory tracing of contours and hatching of shadows. Warhol used Wallowitch's photograph Young Man Smoking a Cigarette (c.1956), for a 1958 design for a book cover he submitted to Simon and Schuster for the Walter Ross pulp novel The Immortal, and later used others for his series of paintings. With the rapid expansion of the record industry, RCA Records hired Warhol, along with another freelance artist, Sid Maurer, to design album covers and promotional materials. 1960s Warhol was an early adopter of the silk screen printmaking process as a technique for making paintings. In 1962, Warhol was taught silk screen printmaking techniques by Max Arthur Cohn at his graphic arts business in Manhattan. In his book Popism: The Warhol Sixties, Warhol writes: "When you do something exactly wrong, you always turn up something." In May 1962, Warhol was featured in an article in Time magazine with his painting Big Campbell's Soup Can with Can Opener (Vegetable) (1962), which initiated his most sustained motif, the Campbell's soup can. That painting became Warhol's first to be shown in a museum when it was exhibited at the Wadsworth Atheneum in Hartford in July 1962. On July 9, 1962, Warhol's exhibition opened at the Ferus Gallery in Los Angeles with Campbell's Soup Cans, marking his West Coast debut of pop art. In November 1962, Warhol had an exhibition at Eleanor Ward's Stable Gallery in New York. The exhibit included the works Gold Marilyn, eight of the classic “Marilyn” series also named "Flavor Marilyns", Marilyn Diptych, 100 Soup Cans, 100 Coke Bottles, and 100 Dollar Bills. 
The Flavor Marilyns were selected from a group of fourteen canvases in the sub-series, each measuring 20″ x 16″. Some of the canvases were named after various Life Savers candy flavors, including Cherry Marilyn, Lemon Marilyn, Mint, Lavender, Grape or Licorice Marilyn. The others are identified by their background colors. Gold Marilyn was bought by the architect Philip Johnson and donated to the Museum of Modern Art. At the exhibit, Warhol met poet John Giorno, who would star in Warhol's first film, Sleep, in 1964. It was during the 1960s that Warhol began to make paintings of iconic American objects such as dollar bills, mushroom clouds, electric chairs, Campbell's soup cans, and Coca-Cola bottles; celebrities such as Marilyn Monroe, Elvis Presley, Marlon Brando, Troy Donahue, Muhammad Ali, and Elizabeth Taylor; as well as newspaper headlines or photographs of police dogs attacking African-American protesters during the Birmingham campaign in the civil rights movement. During these years, he founded his studio, "The Factory", and gathered about him a wide range of artists, writers, musicians, and underground celebrities. His work became popular and controversial. In December 1962, New York City's Museum of Modern Art hosted a symposium on pop art, during which artists such as Warhol were attacked for "capitulating" to consumerism. Critics were appalled by Warhol's open acceptance of market culture, which set the tone for his reception. Warhol had his second exhibition at the Stable Gallery in the spring of 1964, which featured sculptures of commercial boxes stacked and scattered throughout the space to resemble a warehouse. For the exhibition, Warhol custom ordered wooden boxes and silkscreened graphics onto them. The sculptures—Brillo Box, Del Monte Peach Box, Heinz Tomato Ketchup Box, Kellogg's Cornflakes Box, Campbell's Tomato Juice Box, and Mott's Apple Juice Box—sold for $200 to $400 depending on the size of the box. 
A pivotal event was The American Supermarket exhibition at Paul Bianchini's Upper East Side gallery in the fall of 1964. The show was presented as a typical small supermarket environment, except that everything in it—the produce, canned goods, meat, and posters on the wall—was created by prominent pop artists of the time, among them sculptor Claes Oldenburg, Mary Inman and Bob Watts. Warhol designed a $12 paper shopping bag—plain white with a red Campbell's soup can. His painting of a can of Campbell's soup cost $1,500, while each autographed can sold for $6.50, or three for $18. The exhibit was one of the first mass events that directly confronted the general public with both pop art and the perennial question of what art is. As an advertising illustrator in the 1950s, Warhol used assistants to increase his productivity. Collaboration would remain a defining (and controversial) aspect of his working methods throughout his career; this was particularly true in the 1960s. One of the most important collaborators during this period was Gerard Malanga. Malanga assisted the artist with the production of silkscreens, films, sculpture, and other works at "The Factory", Warhol's aluminum foil-and-silver-paint-lined studio on 47th Street (later moved to Broadway). Other members of Warhol's Factory crowd included Freddie Herko, Ondine, Ronald Tavel, Mary Woronov, Billy Name, and Brigid Berlin (from whom he apparently got the idea to tape-record his phone conversations). During the 1960s, Warhol also groomed a retinue of bohemian and counterculture eccentrics upon whom he bestowed the designation "superstars", including Nico, Joe Dallesandro, Edie Sedgwick, Viva, Ultra Violet, Holly Woodlawn, Jackie Curtis, and Candy Darling. These people all participated in the Factory films, and some—like Berlin—remained friends with Warhol until his death. 
Important figures in the New York underground art/cinema world, such as writer John Giorno and film-maker Jack Smith, also appear in Warhol films (many premiering at the New Andy Warhol Garrick Theatre and 55th Street Playhouse) of the 1960s, revealing Warhol's connections to a diverse range of artistic scenes during this time. Less well known was his support of and collaboration with several teenagers during this era who would achieve prominence later in life, including writer David Dalton, photographer Stephen Shore and artist Bibbe Hansen (mother of pop musician Beck). Attempted murder: 1968 On June 3, 1968, radical feminist writer Valerie Solanas shot Warhol and Mario Amaya, an art critic and curator, at Warhol's studio, The Factory. Before the shooting, Solanas had been a marginal figure in the Factory scene. In 1967, she had authored the SCUM Manifesto, a separatist feminist tract that advocated the elimination of men, and she appeared in the 1968 Warhol film I, a Man. Earlier on the day of the attack, Solanas had been turned away from the Factory after asking for the return of a script she had given to Warhol. The script had apparently been misplaced. Amaya received only minor injuries and was released from the hospital later the same day. Warhol was seriously wounded by the attack and barely survived. He suffered physical effects for the rest of his life, including being required to wear a surgical corset. The shooting had a profound effect on Warhol's life and art. Solanas was arrested the day after the assault, after turning herself in to police. By way of explanation, she said that Warhol "had too much control over my life". She was subsequently diagnosed with paranoid schizophrenia and eventually sentenced to three years under the control of the Department of Corrections. After the shooting, the Factory scene heavily increased its security, and for many the "Factory 60s" ended ("The superstars from the old Factory days didn't come around to the new Factory much"). 
In 1969, Warhol and British journalist John Wilcock founded Interview magazine. 1970s Warhol had a retrospective exhibition at the Whitney Museum of American Art in 1971. His famous portrait of Chinese Communist leader Mao Zedong was created in 1973. In 1975, he published The Philosophy of Andy Warhol, in which he expressed the idea: "Making money is art, and working is art and good business is the best art." Compared to the success and scandal of Warhol's work in the 1960s, the 1970s were a much quieter decade, as he became more entrepreneurial. He socialized at various nightspots in New York City, including Max's Kansas City and, later in the 1970s, Studio 54. He was generally regarded as quiet, shy, and a meticulous observer. Art critic Robert Hughes called him "the white mole of Union Square". In 1977, Warhol was commissioned by art collector Richard Weisman to create Athletes, ten portraits of the leading athletes of the day. According to Bob Colacello, Warhol devoted much of his time to rounding up new, rich patrons for portrait commissions—including Shah of Iran Mohammad Reza Pahlavi, his wife Empress Farah Pahlavi, his sister Princess Ashraf Pahlavi, Mick Jagger, Liza Minnelli, John Lennon, Diana Ross, and Brigitte Bardot. In 1979, reviewers disliked his exhibits of portraits of 1970s personalities and celebrities, calling them superficial, facile and commercial, with no depth or indication of the significance of the subjects. That year, Warhol and his longtime friend Stuart Pivar also founded the New York Academy of Art. 
1980s Warhol had a re-emergence of critical and financial success in the 1980s, partially due to his affiliation and friendships with a number of prolific younger artists who were dominating the "bull market" of 1980s New York art: Jean-Michel Basquiat, Julian Schnabel, David Salle and other so-called Neo-Expressionists, as well as members of the Transavantgarde movement in Europe, including Francesco Clemente and Enzo Cucchi. Warhol also earned street credibility when graffiti artist Fab Five Freddy paid homage to him by painting an entire train with Campbell's soup cans. Warhol was also criticized for becoming merely a "business artist". Critics panned his 1980 exhibition Ten Portraits of Jews of the Twentieth Century at the Jewish Museum in Manhattan, which Warhol—who was uninterested in Judaism and Jews—had described in his diary: "They're going to sell." In hindsight, however, some critics have come to view Warhol's superficiality and commerciality as "the most brilliant mirror of our times," contending that "Warhol had captured something irresistible about the zeitgeist of American culture in the 1970s." Warhol also had an appreciation for intense Hollywood glamour. He once said: "I love Los Angeles. I love Hollywood. They're so beautiful. Everything's plastic, but I love plastic. I want to be plastic." Warhol occasionally walked the fashion runways and did product endorsements, represented by the Zoli Agency and later Ford Models. Before the 1984 Sarajevo Winter Olympics, he teamed with 15 other artists, including David Hockney and Cy Twombly, and contributed a Speed Skater print to the Art and Sport collection. The Speed Skater was used for the official Sarajevo Winter Olympics poster. In 1984, Vanity Fair commissioned Warhol to produce a portrait of Prince to accompany an article that celebrated the success of Purple Rain and its accompanying movie. 
Referencing the many celebrity portraits produced by Warhol across his career, Orange Prince (1984) was created using a similar composition to the Marilyn "Flavors" series from 1962, which was among Warhol's first celebrity portraits. Prince is depicted in a pop color palette commonly used by Warhol, in bright orange with highlights of bright green and blue. The facial features and hair are screen-printed in black over the orange background. In September 1985, Warhol's joint exhibition with Basquiat, Paintings, opened to negative reviews at the Tony Shafrazi Gallery. That month, despite apprehension from Warhol, his silkscreen series Reigning Queens was shown at the Leo Castelli Gallery. In The Andy Warhol Diaries, Warhol wrote, "They were supposed to be only for Europe—nobody here cares about royalty and it'll be another bad review." In January 1987, Warhol traveled to Milan for the opening of his last exhibition, Last Supper, at the Palazzo delle Stelline. The next month, on February 17, 1987, Warhol and jazz musician Miles Davis modeled for Koshin Satoh's fashion show at the Tunnel in New York City. Death Warhol died in Manhattan at 6:32 a.m. on February 22, 1987, at age 58. According to news reports, he had been making a good recovery from gallbladder surgery at New York Hospital before dying in his sleep from a sudden post-operative irregular heartbeat. Prior to his diagnosis and operation, Warhol delayed having his recurring gallbladder problems checked, as he was afraid to enter hospitals and see doctors. His family sued the hospital for inadequate care, saying that the arrhythmia was caused by improper care and water intoxication. The malpractice case was quickly settled out of court; Warhol's family received an undisclosed sum of money. 
Shortly before Warhol's death, doctors had expected him to survive the surgery, though a re-evaluation of the case about thirty years after his death found many indications that the surgery was in fact riskier than originally thought. It was widely reported at the time that Warhol had died following "routine" surgery, but considering factors such as his age, a family history of gallbladder problems, his previous gunshot wound, and his medical state in the weeks leading up to the procedure, the risk of death following the surgery appears to have been significant. Warhol's brothers took his body back to Pittsburgh, where an open-coffin wake was held at the Thomas P. Kunsak Funeral Home. The solid bronze casket had gold-plated rails and white upholstery. Warhol was dressed in a black cashmere suit, a paisley tie, a platinum wig, and sunglasses, and was laid out holding a small prayer book and a red rose. The funeral liturgy was held at Holy Ghost Byzantine Catholic Church on Pittsburgh's North Side. The eulogy was given by Monsignor Peter Tay; Yoko Ono and John Richardson also spoke. The coffin was covered with white roses and asparagus ferns. After the liturgy, the coffin was driven to St. John the Baptist Byzantine Catholic Cemetery in Bethel Park, a suburb south of Pittsburgh. At the grave, the priest said a brief prayer and sprinkled holy water on the casket. Before the coffin was lowered, Paige Powell, Warhol's friend and the advertising director of Interview, dropped a copy of the magazine, an Interview T-shirt, and a bottle of the Estée Lauder perfume "Beautiful" into the grave. Warhol was buried next to his mother and father. A memorial service was held for Warhol at St. Patrick's Cathedral in Manhattan on April 1, 1987.
Art works Paintings By the beginning of the 1960s, pop art was an experimental form that several artists were independently adopting; some of these pioneers, such as Roy Lichtenstein, would later become synonymous with the movement. Warhol, who would become famous as the "Pope of Pop", turned to this new style, in which popular subjects could be part of the artist's palette. His early paintings show images taken from cartoons and advertisements, hand-painted with paint drips that emulated the style of successful abstract expressionists such as Willem de Kooning. Warhol's first pop art paintings were displayed in April 1961, serving as the backdrop for the window display of the New York department store Bonwit Teller, the same stage his Pop Art contemporaries Jasper Johns, James Rosenquist and Robert Rauschenberg had once graced. It was the gallerist Muriel Latow who came up with the ideas for both the soup cans and Warhol's dollar paintings. On November 23, 1961, Warhol wrote Latow a check for $50 which, according to the 2009 Warhol biography Pop, The Genius of Warhol, was payment for coming up with the idea of the soup cans as subject matter. For his first major exhibition, Warhol painted his famous cans of Campbell's soup, which he claimed to have had for lunch for most of his life. From these beginnings, he developed his later style and subjects. Instead of working on a signature subject matter, as he had started out to do, he worked more and more on a signature style, slowly eliminating the handmade from the artistic process. Warhol frequently used silk-screening; his later drawings were traced from slide projections. At the height of his fame as a painter, Warhol had several assistants who produced his silk-screen multiples, following his directions to make different versions and variations.
Warhol produced both comic and serious works; his subject could be a soup can or an electric chair. Warhol used the same techniques—silkscreens, reproduced serially, and often painted with bright colors—whether he painted celebrities, everyday objects, or images of suicide, car crashes, and disasters, as in the 1962–63 Death and Disaster series. In 1979, Warhol was commissioned to paint a BMW M1 Group 4 racing version for the fourth installment of the BMW Art Car project. He had initially been asked to paint a BMW 320i in 1978, but the car model was changed and it did not qualify for the race that year. Warhol was the first artist to paint directly onto the automobile himself instead of letting technicians transfer a scale-model design to the car. Reportedly, it took him only 23 minutes to paint the entire car. Race car drivers Hervé Poulain, Manfred Winkelhock and Marcel Mignot drove the car at the 1979 24 Hours of Le Mans. Some of Warhol's work, as well as his own personality, has been described as Keatonesque. Warhol has been described as playing dumb to the media, and he sometimes refused to explain his work. He suggested that all one needs to know about his work is "already there 'on the surface'". His Rorschach inkblots are intended as pop comments on art and what art could be. His cow wallpaper (literally, wallpaper with a cow motif) and his oxidation paintings (canvases prepared with copper paint that was then oxidized with urine) are also noteworthy in this context. Equally noteworthy is the way these works—and their means of production—mirrored the atmosphere at Warhol's New York "Factory". Biographer Bob Colacello has provided details on the production of the "piss paintings". Warhol's 1982 portrait of Basquiat, Jean-Michel Basquiat, is a silkscreen over an oxidized copper "piss painting". After many years of silkscreen, oxidation, photography, and other techniques, Warhol returned to painting with a brush in hand. In 1983, Warhol began collaborating with Basquiat and Clemente.
Warhol and Basquiat created a series of more than 50 large collaborative works between 1984 and 1985. Despite criticism when these were first shown, Warhol called some of them "masterpieces," and they were influential for his later work. In 1984, Warhol was commissioned by collector and gallerist Alexander Iolas to produce work based on Leonardo da Vinci's The Last Supper for an exhibition at the old refectory of the Palazzo delle Stelline in Milan, opposite Santa Maria delle Grazie, where Leonardo's mural can be seen. Warhol exceeded the demands of the commission and produced nearly 100 variations on the theme, mostly silkscreens and paintings, among them a collaborative sculpture with Basquiat, the Ten Punching Bags (Last Supper). The Milan exhibition, which opened in January 1987 with a set of 22 silkscreens, was the last for both the artist and the gallerist. The Last Supper series was seen by some as "arguably his greatest," but by others as "wishy-washy, religiose" and "spiritless". It is the largest series of religious-themed works by any U.S. artist. Artist Maurizio Cattelan has said that it is difficult to separate daily encounters from the art of Andy Warhol: "That's probably the greatest thing about Warhol: the way he penetrated and summarized our world, to the point that distinguishing between him and our everyday life is basically impossible, and in any case useless." Warhol was an inspiration for Cattelan's magazine and photography compilations, such as Permanent Food, Charley, and Toilet Paper. In the period just before his death, Warhol was working on Cars, a series of paintings for Mercedes-Benz. Art market The value of Andy Warhol's work has followed a generally upward trajectory since his death in 1987. In 2014, his works brought in $569 million at auction, accounting for more than a sixth of the global art market. However, there have been some dips.
According to art dealer Dominique Lévy, "The Warhol trade moves something like a seesaw being pulled uphill: it rises and falls, but each new high and low is above the last one." She attributes this to the consistent influx of new collectors intrigued by Warhol. "At different moments, you've had different groups of collectors entering the Warhol market, and that resulted in peaks in demand, then satisfaction and a slow down," before the process repeats with another demographic or the next generation. In 1998, Orange Marilyn (1964), a depiction of Marilyn Monroe, sold for $17.3 million, which at the time set a new record as the highest price paid for a Warhol artwork. In 2007, one of Warhol's 1963 paintings of Elizabeth Taylor, Liz (Colored Liz), which was owned by actor Hugh Grant, sold for $23.7 million at Christie's. In 2007, Stefan Edlis and Gael Neeson sold Warhol's Turquoise Marilyn (1964) to financier Steven A. Cohen for $80 million. In May 2007, Green Car Crash (1963) sold for $71.1 million and Lemon Marilyn (1962) sold for $28 million at Christie's post-war and contemporary art auction. In 2007, Large Campbell's Soup Can (1964) was sold at a Sotheby's auction to a South American collector for $7.4 million. In November 2009, 200 One Dollar Bills (1962) sold at Sotheby's for $43.8 million. In 2008, Eight Elvises (1963) was sold by Annibale Berlingieri for $100 million to a private buyer. The work depicts Elvis Presley in a gunslinger pose; it was first exhibited in 1963 at the Ferus Gallery in Los Angeles. Warhol made 22 versions of the Double Elvis, nine of which are held in museums. In May 2012, Double Elvis (Ferus Type) sold at auction at Sotheby's for $37 million. In November 2014, Triple Elvis (Ferus Type) sold for $81.9 million at Christie's. In May 2010, a purple self-portrait of Warhol from 1986 that was owned by fashion designer Tom Ford sold for $32.6 million at Sotheby's.
In November 2010, Men in Her Life (1962), based on Elizabeth Taylor, sold for $63.4 million at Phillips de Pury and Coca-Cola (4) (1962) sold for $35.3 million at Sotheby's. In May 2011, Warhol's first self-portrait, from 1963–64, sold for $38.4 million and a red self-portrait from 1986 sold for $27.5 million at Christie's. In May 2011, Liz #5 (Early Colored Liz) sold for $26.9 million at Phillips. In November 2013, Warhol's rarely seen 1963 diptych, Silver Car Crash (Double Disaster), sold at Sotheby's for $105.4 million, a new record for the artist. In November 2013, Coca-Cola (3) (1962) sold for $57.3 million at Christie's. In May 2014, White Marilyn (1962) sold for $41 million at Christie's. In November 2014, Four Marlons (1964), which depicts Marlon Brando, sold for $69.6 million at Christie's. In May 2015, Silver Liz (diptych), painted in 1963–65, sold for $28 million and Colored Mona Lisa (1963) sold for $56.2 million at Christie's. In May 2017, Warhol's 1962 painting Big Campbell's Soup Can With Can Opener (Vegetable) sold for $27.5 million at Christie's. Collectors Among Warhol's early collectors and influential supporters were Emily and Burton Tremaine. Of the more than 15 artworks they purchased, Marilyn Diptych (now at Tate Modern, London) and A Boy for Meg (now at the National Gallery of Art in Washington, DC) were bought directly out of Warhol's studio in 1962. One Christmas, Warhol left a small Head of Marilyn Monroe by the Tremaines' door at their New York apartment in gratitude for their support and encouragement. Works Filmography Warhol attended the 1962 premiere of La Monte Young's static composition Trio for Strings and subsequently created his famous series of static films. Filmmaker Jonas Mekas, who accompanied Warhol to the Trio premiere, claims Warhol's static films were directly inspired by the performance.
Between 1963 and 1968, he made more than 60 films, plus some 500 short black-and-white "screen test" portraits of Factory visitors. One of his most famous films, Sleep, monitors poet John Giorno sleeping for six hours. The 35-minute film Blow Job is one continuous shot of the face of DeVeren Bookwalter supposedly receiving oral sex from filmmaker Willard Maas, although the camera never tilts down to confirm this. Another, Empire (1964), consists of eight hours of footage of the Empire State Building in New York City at dusk. The film Eat consists of a man eating a mushroom for 45 minutes. Batman Dracula is a 1964 film that Warhol produced and directed without the permission of DC Comics; it was screened only at his art exhibits. A fan of the Batman series, Warhol made the movie as an "homage" to the series, and it is considered the first appearance of a blatantly campy Batman. The film was long thought to have been lost, until scenes from the picture were shown at some length in the 2006 documentary Jack Smith and the Destruction of Atlantis. Warhol's 1965 film Vinyl is an adaptation of Anthony Burgess' popular dystopian novel A Clockwork Orange. Others record improvised encounters between Factory regulars such as Brigid Berlin, Viva, Edie Sedgwick, Candy Darling, Holly Woodlawn, Ondine, Nico, and Jackie Curtis. Legendary underground artist Jack Smith appears in the film Camp. Warhol's most popular and critically successful film was Chelsea Girls (1966). The film was highly innovative in that it consisted of two 16 mm films projected simultaneously, with two different stories shown in tandem. From the projection booth, the sound would be raised for one film to elucidate that "story" while it was lowered for the other. The multiplication of images evoked Warhol's seminal silk-screen works of the early 1960s.
Warhol was a fan of filmmaker Radley Metzger's work and commented that Metzger's film The Lickerish Quartet was "an outrageously kinky masterpiece". Blue Movie, a film in which Warhol superstar Viva makes love in bed with Louis Waldon, another Warhol superstar, was Warhol's last film as director. A seminal work of the Golden Age of Porn, it was controversial at the time for its frank approach to a sexual encounter. Blue Movie was publicly screened in New York City in 2005, for the first time in more than 30 years. In the wake of the 1968 shooting, a reclusive Warhol relinquished his personal involvement in filmmaking. His acolyte and assistant director, Paul Morrissey, took over the filmmaking chores for the Factory collective, steering Warhol-branded cinema towards more mainstream, narrative-based, B-movie exploitation fare with Flesh, Trash, and Heat. All of these films, including the later Andy Warhol's Dracula and Andy Warhol's Frankenstein, were far more mainstream than anything Warhol as a director had attempted. These latter "Warhol" films starred Joe Dallesandro—more of a Morrissey star than a true Warhol superstar. In the early 1970s, most of the films directed by Warhol were pulled out of circulation by Warhol and the people around him who ran his business. After Warhol's death, the films were slowly restored by the Whitney Museum and are occasionally projected at museums and film festivals. Few of the Warhol-directed films are available on video or DVD. Music In the mid-1960s, Warhol adopted the band the Velvet Underground, making them a crucial element of the Exploding Plastic Inevitable multimedia performance art show. Warhol, with Paul Morrissey, acted as the band's manager, introducing them to Nico (who would perform with the band at Warhol's request). While managing the Velvet Underground, Warhol had them dress in all black to perform in front of movies that he was also presenting.
In 1966, he "produced" their first album, The Velvet Underground & Nico, as well as providing its album art; his actual participation in the album's production amounted to simply paying for the studio time. After the band's first album, Warhol and band leader Lou Reed started to disagree more about the direction the band should take, and their artistic friendship ended. In 1989, after Warhol's death, Reed and John Cale reunited for the first time since 1972 to write, perform, record and release the concept album Songs for Drella, a tribute to Warhol. In October 2019, an audio tape of publicly unknown music by Reed, based on Warhol's 1975 book The Philosophy of Andy Warhol: From A to B and Back Again, was reported to have been discovered in an archive at the Andy Warhol Museum in Pittsburgh. Warhol designed many album covers for various artists, starting with the photographic cover of John Wallowitch's debut album, This Is John Wallowitch!!! (1964). He designed the cover art for the Rolling Stones' albums Sticky Fingers (1971) and Love You Live (1977), and the John Cale albums The Academy in Peril (1972) and Honi Soit (1981). One of Warhol's last works was a portrait of Aretha Franklin for the cover of her 1986 gold album Aretha. Warhol strongly influenced the new wave/punk rock band Devo, as well as David Bowie, who recorded a song called "Andy Warhol" for his 1971 album Hunky Dory. Lou Reed wrote the song "Andy's Chest" about Valerie Solanas, the woman who shot Warhol in 1968; he recorded it with the Velvet Underground, and this version was released on the VU album in 1985. Bowie later played Warhol in the 1996 movie Basquiat, and recalled how meeting Warhol in real life helped him in the role. The band Triumph also wrote a song about Andy Warhol, "Stranger in a Strange Land", from their 1984 album Thunder Seven.
Books and print Beginning in the early 1950s, Warhol produced several unbound portfolios of his work. The first of several bound self-published books by Warhol was 25 Cats Name Sam and One Blue Pussy, printed in 1954 by Seymour Berlin on Arches brand watermarked paper, using Warhol's blotted line technique for the lithographs. The original edition was limited to 190 numbered, hand-colored copies, using Dr. Martin's ink washes. Most of these were given by Warhol as gifts to clients and friends. Copy No. 4, inscribed "Jerry" on the front cover and given to Geraldine Stutz, was used for a facsimile printing in 1987, and the original was auctioned in May 2006 for US$35,000 by Doyle New York. Other self-published books by Warhol include A Gold Book, Wild Raspberries, and Holy Cats. Warhol's book A La Recherche du Shoe Perdu (1955) marked his "transition from commercial to gallery artist". (The title is a play on words by Warhol on the title of French author Marcel Proust's À la recherche du temps perdu.) After gaining fame, Warhol "wrote" several books that were commercially published: a, A Novel (1968) is a literal transcription—containing spelling errors and phonetically written background noise and mumbling—of audio recordings of Ondine and several of Andy Warhol's friends hanging out at the Factory, talking and going out. The Philosophy of Andy Warhol (From A to B & Back Again) (1975)—according to Pat Hackett's introduction to The Andy Warhol Diaries, Hackett did the transcriptions and text for the book based on daily phone conversations, sometimes (when Warhol was traveling) using audio cassettes that Warhol gave her. These cassettes contained conversations with Brigid Berlin (also known as Brigid Polk) and former Interview magazine editor Bob Colacello. Popism: The Warhol Sixties (1980), authored by Warhol and Pat Hackett, is a retrospective view of the 1960s and the role of pop art.
The Andy Warhol Diaries (1989), edited by Pat Hackett, is a diary dictated by Warhol to Hackett in daily phone conversations. Warhol started the diary to keep track of his expenses after being audited, although it soon evolved to include his personal and cultural observations. Warhol created the fashion magazine Interview, which is still published today. The loopy title script on the cover is thought to be either his own handwriting or that of his mother, Julia Warhola, who often did text work for his early commercial pieces. Other media Although Andy Warhol is best known for his paintings and films, he produced works in many other media. Drawing: Warhol started his career as a commercial illustrator, producing drawings in "blotted-ink" style for advertisements and magazine articles. Best known of these early works are his drawings of shoes. Some of his personal drawings were self-published in small booklets, such as Yum, Yum, Yum (about food), Ho, Ho, Ho (about Christmas) and Shoes, Shoes, Shoes. His most artistically acclaimed book of drawings is probably A Gold Book, compiled of sensitive drawings of young men. A Gold Book is so named because of the gold leaf that decorates its pages. In April 2012, a sketch of 1930s singer Rudy Vallee, claimed to have been drawn by Andy Warhol, was found at a Las Vegas garage sale. The image was said to have been drawn when Warhol was nine or ten years old; various authorities have challenged its authenticity. Sculpture: Warhol's most famous sculpture is probably his Brillo Boxes, silkscreened ink on wood replicas of the large, branded cardboard boxes used to hold 24 packages of Brillo soap pads. The original Brillo design was by commercial artist James Harvey. Warhol's sculpture was part of a series of "grocery carton" works that also included Heinz ketchup and Campbell's tomato juice cases. Other famous works include the Silver Clouds—helium-filled, silver mylar, pillow-shaped balloons.
A Silver Cloud was included in the traveling exhibition Air Art (1968–1969), curated by Willoughby Sharp. Clouds was also adapted by Warhol for avant-garde choreographer Merce Cunningham's dance piece RainForest (1968). Audio: At one point Warhol carried a portable recorder with him wherever he went, taping everything everybody said and did. He referred to this device as his "wife". Some of these tapes were the basis for his literary work. Another audio work of Warhol's was his Invisible Sculpture, a presentation in which burglar alarms would go off when anyone entered the room. Warhol's cooperation with the musicians of the Velvet Underground was driven by an expressed desire to become a music producer. Time Capsules: In 1973, Warhol began saving ephemera from his daily life—correspondence, newspapers, souvenirs, childhood objects, even used plane tickets and food—which he sealed in plain cardboard boxes dubbed Time Capsules. By the time of his death, the collection had grown to include 600 individually dated "capsules". The boxes are now housed at the Andy Warhol Museum. Television: Andy Warhol dreamed of a television special about a favorite subject of his, nothing, which he would call The Nothing Special. Later in his career he did create two cable television shows, Andy Warhol's TV in 1982 and Andy Warhol's Fifteen Minutes (based on his famous "fifteen minutes of fame" quotation) for MTV in 1986. Besides his own shows, he regularly made guest appearances on other programs, including The Love Boat, in which a Midwestern wife (Marion Ross) fears Andy Warhol will reveal to her husband (Tom Bosley, who starred alongside Ross in the sitcom Happy Days) her secret past as a Warhol superstar named Marina del Rey. Warhol also produced a TV commercial for Schrafft's Restaurants in New York City, for an ice cream dessert appropriately titled the "Underground Sundae".
Fashion: Warhol is quoted as saying: "I'd rather buy a dress and put it up on the wall, than put a painting, wouldn't you?" One of his best-known superstars, Edie Sedgwick, aspired to be a fashion designer, and his good friend Halston was a famous one. Warhol's work in fashion includes silkscreened dresses, a short sub-career as a catwalk model, and books on fashion as well as paintings with fashion (shoes) as a subject. Warhol himself has been described as a modern dandy, whose authority "rested more on presence than on words". Performance Art: Warhol and his friends staged theatrical multimedia happenings at parties and public venues, combining music, film, slide projections and even Gerard Malanga in an S&M outfit cracking a whip. The Exploding Plastic Inevitable in 1966 was the culmination of this area of his work. Theater: Warhol's play Andy Warhol's Pork opened on May 5, 1971, at LaMama theater in New York for a two-week run and was brought to the Roundhouse in London for a longer run in August 1971. Pork was based on tape-recorded conversations between Brigid Berlin and Warhol, during which Berlin would play for Warhol tapes she had made of phone conversations between herself and her mother, socialite Honey Berlin. The play featured Jayne County as "Vulva" and Cherry Vanilla as "Amanda Pork". In 1974, Andy Warhol also produced the stage musical Man on the Moon, which was written by John Phillips of the Mamas and the Papas. Photography: To produce his silkscreens, Warhol made photographs or had them made by his friends and assistants. These pictures were mostly taken with a specific model of Polaroid camera, the Big Shot, that Polaroid kept in production especially for Warhol. This photographic approach to painting and his snapshot method of taking pictures have had a great effect on artistic photography. Warhol was an accomplished photographer and took an enormous number of photographs of Factory visitors and friends, a collection later acquired by Stanford University.
Music: In 1963, Warhol founded The Druds, a short-lived avant-garde noise music band that featured prominent members of the New York proto-conceptual art and minimal art community. Computer: Warhol used Amiga computers to generate digital art, including You Are the One, which he helped design and build with Amiga, Inc. He also displayed the difference between slow fill and fast fill on live TV with Debbie Harry as a model. Personal life Sexuality Warhol was homosexual. In 1980, he told an interviewer that he was still a virgin. Biographer Bob Colacello, who was present at the interview, felt it was probably true and that what little sex he had was probably "a mixture of voyeurism and masturbation—to use [Andy's] word abstract". Warhol's assertion of virginity would seem to be contradicted by his hospital treatment in 1960 for condylomata, a sexually transmitted disease. It has also been contradicted by his lovers, including Warhol muse BillyBoy, who has said they had sex to orgasm: "When he wasn't being Andy Warhol and when you were just alone with him he was an incredibly generous and very kind person. What seduced me was the Andy Warhol who I saw alone. In fact when I was with him in public he kind of got on my nerves....I'd say: 'You're just obnoxious, I can't bear you.'" Billy Name also denied that Warhol was only a voyeur, saying: "He was the essence of sexuality. It permeated everything. Andy exuded it, along with his great artistic creativity....It brought a joy to the whole art world in New York." "But his personality was so vulnerable that it became a defense to put up the blank front." Warhol's lovers included John Giorno, Billy Name, Charles Lisanby, and Jon Gould. His boyfriend of 12 years was Jed Johnson, whom he met in 1968, and who later achieved fame as an interior designer. 
The fact that Warhol's homosexuality influenced his work and shaped his relationship to the art world is a major subject of scholarship on the artist and is an issue that Warhol himself addressed in interviews, in conversation with his contemporaries, and in his publications (e.g., Popism: The Warhol Sixties). Throughout his career, Warhol produced erotic photography and drawings of male nudes. Many of his most famous works (portraits of Liza Minnelli, Judy Garland, and Elizabeth Taylor, and films such as Blow Job, My Hustler and Lonesome Cowboys) draw from gay underground culture or openly explore the complexity of sexuality and desire. As has been addressed by a range of scholars, many of his films premiered in gay porn theaters, including the New Andy Warhol Garrick Theatre and 55th Street Playhouse, in the late 1960s. The first works that Warhol submitted to a fine art gallery, homoerotic drawings of male nudes, were rejected for being too openly gay. In Popism, furthermore, the artist recalls a conversation with the filmmaker Emile de Antonio about the difficulty Warhol had being accepted socially by the then-more-famous (but closeted) gay artists Jasper Johns and Robert Rauschenberg. De Antonio explained that Warhol was "too swish and that upsets them". In response to this, Warhol writes, "There was nothing I could say to that. It was all too true. So I decided I just wasn't going to care, because those were all the things that I didn't want to change anyway, that I didn't think I 'should' want to change ... Other people could change their attitudes but not me". In exploring Warhol's biography, many turn to this period—the late 1950s and early 1960s—as a key moment in the development of his persona.
Some have suggested that his frequent refusal to comment on his work, to speak about himself (confining himself in interviews to responses like "Um, no" and "Um, yes", and often allowing others to speak for him)—and even the evolution of his pop style—can be traced to the years when Warhol was first dismissed by the inner circles of the New York art world. Religious beliefs Warhol was a practicing Ruthenian Catholic. He regularly volunteered at homeless shelters in New York City, particularly during the busier times of the year, and described himself as a religious person. Many of Warhol's later works depicted religious subjects, including two series, Details of Renaissance Paintings (1984) and The Last Supper (1986). In addition, a body of religious-themed works was found posthumously in his estate. During his life, Warhol regularly attended Liturgy, and the priest at Warhol's church, Saint Vincent Ferrer, said that the artist went there almost daily, although he was not observed taking Communion or going to Confession and sat or knelt in the pews at the back. The priest thought he was afraid of being recognized; Warhol said he was self-conscious about being seen in a Roman Rite church crossing himself "in the Orthodox way" (right to left instead of the reverse). His art is noticeably influenced by the Eastern Christian tradition which was so evident in his places of worship. Warhol's brother has described the artist as "really religious, but he didn't want people to know about that because [it was] private". Despite the private nature of his faith, in Warhol's eulogy John Richardson depicted it as devout: "To my certain knowledge, he was responsible for at least one conversion. He took considerable pride in financing his nephew's studies for the priesthood". Collections Warhol was an avid collector. His friends referred to his numerous collections, which filled not only his four-story townhouse, but also a nearby storage unit, as "Andy's Stuff". 
The true extent of his collections was not discovered until after his death, when The Andy Warhol Museum in Pittsburgh took in 641 boxes of his "Stuff". Warhol's collections included Coca-Cola memorabilia and 19th-century paintings, along with airplane menus, unpaid invoices, pizza dough, pornographic pulp novels, newspapers, stamps, supermarket flyers, and cookie jars, among other eccentricities. They also included significant works of art, such as George Bellows's Miss Bentham. One of his main collections was his wigs. Warhol owned more than 40 and felt very protective of his hairpieces, which were sewn by a New York wig-maker from hair imported from Italy. In 1985, a girl snatched Warhol's wig off his head. In his diary entry for that day, Warhol wrote: "I don't know what held me back from pushing her over the balcony." In 1960, he had bought a drawing of a light bulb by Jasper Johns. Another item found in Warhol's boxes at the museum in Pittsburgh was a mummified human foot from Ancient Egypt. The curator of anthropology at Carnegie Museum of Natural History felt that Warhol most likely found it at a flea market. Andy Warhol also collected many books, with more than 1,200 titles in his personal collection. Of these, 139 titles have been publicly identified through a 1988 Sotheby's auction catalog, The Andy Warhol Collection, and can be viewed online. His book collection reflects his eclectic taste and interests, and includes books written by and about some of his acquaintances and friends. Some of the titles in his collection include The Two Mrs. Grenvilles: A Novel by Dominick Dunne, Artists in Uniform by Max Eastman, Andrews' Diseases of the Skin: Clinical Dermatology by George Clinton Andrews, D.V. by Diana Vreeland, Blood of a Poet by Jean Cocteau, Watercolours by Francesco Clemente, Little World, Hello! by Jimmy Savo, Hidden Faces by Salvador Dalí, and The Dinah Shore Cookbook by Dinah Shore. Legacy In 2002, the U.S.
Postal Service issued a 37-cent stamp commemorating Warhol. Designed by Richard Sheaff of Scottsdale, Arizona, the stamp was unveiled at a ceremony at The Andy Warhol Museum and features Warhol's painting "Self-Portrait, 1964". In March 2011, a chrome statue of Andy Warhol and his Polaroid camera was unveiled at Union Square in New York City. A crater on Mercury was named after Warhol in 2012. Warhol Foundation Warhol's will dictated that his entire estate—with the exception of a few modest legacies to family members—would go to create a foundation dedicated to the "advancement of the visual arts". Warhol had so many possessions that it took Sotheby's nine days to auction his estate after his death; the auction grossed more than US$20 million. In 1987, in accordance with Warhol's will, the Andy Warhol Foundation for the Visual Arts was established. The foundation serves as the estate of Andy Warhol, but also has a mission "to foster innovative artistic expression and the creative process" and is "focused primarily on supporting work of a challenging and often experimental nature". The Artists Rights Society is the U.S. copyright representative for the Andy Warhol Foundation for the Visual Arts for all Warhol works with the exception of Warhol film stills. The U.S. copyright representative for Warhol film stills is the Warhol Museum in Pittsburgh. Additionally, the Andy Warhol Foundation for the Visual Arts has agreements in place for its image archive. All digital images of Warhol are exclusively managed by Corbis, while all transparency images of Warhol are managed by Art Resource. The Andy Warhol Foundation released its 20th Anniversary Annual Report as a three-volume set in 2007: Vol. I, 1987–2007; Vol. II, Grants & Exhibitions; and Vol. III, Legacy Program. The Foundation is in the process of compiling its catalogue raisonné of paintings and sculptures in volumes covering blocks of years of the artist's career. Volumes IV and V were released in 2019. 
The subsequent volumes are still in the process of being compiled. The Foundation remains one of the largest grant-giving organizations for the visual arts in the U.S. Many of Warhol's works and possessions are on display at the Andy Warhol Museum in Pittsburgh. The foundation donated more than 3,000 works of art to the museum. In pop culture Warhol founded Interview magazine, a stage for celebrities he "endorsed" and a business staffed by his friends. He collaborated with others on all of his books (some of which were written with Pat Hackett). One might even say that he produced people (as in the Warholian "Superstar" and the Warholian portrait). Warhol endorsed products, appeared in commercials, and made frequent celebrity guest appearances on television shows and in films (he appeared in everything from The Love Boat to Saturday Night Live and the Richard Pryor movie Dynamite Chicken). In this respect Warhol was a fan of "Art Business" and "Business Art"—he, in fact, wrote about his interest in thinking about art as business in The Philosophy of Andy Warhol from A to B and Back Again. Films Warhol appeared as himself in the film Cocaine Cowboys (1979) and in the film Tootsie (1982). After his death, Warhol was portrayed by Crispin Glover in Oliver Stone's film The Doors (1991), by David Bowie in Julian Schnabel's film Basquiat (1996), and by Jared Harris in Mary Harron's film I Shot Andy Warhol (1996). Warhol appears as a character in Michael Daugherty's opera Jackie O (1997). Actor Mark Bringleson makes a brief cameo as Warhol in Austin Powers: International Man of Mystery (1997). Many films by the avant-garde filmmaker Jonas Mekas capture moments of Warhol's life. Sean Gregory Sullivan depicted Warhol in the film 54 (1998). Guy Pearce portrayed Warhol in the film Factory Girl (2007), about Edie Sedgwick's life. Actor Greg Travis portrays Warhol in a brief scene from the film Watchmen (2009). 
In the movie Highway to Hell, a group of Andy Warhols are part of the Good Intentions Paving Company, where good-intentioned souls are ground into pavement. In the film Men in Black 3 (2012), Andy Warhol turns out to really be undercover MIB Agent W (played by Bill Hader). Warhol is throwing a party at The Factory in 1969, where he is sought out by MIB Agents K and J (J from the future). Agent W is desperate to end his undercover job ("I'm so out of ideas I'm painting soup cans and bananas, for Christ sakes!", "You gotta fake my death, okay? I can't listen to sitar music anymore." and "I can't tell the women from the men."). Andy Warhol (portrayed by Tom Meeten) is one of the main characters of the 2012 British television show Noel Fielding's Luxury Comedy. The character is portrayed as having robot-like mannerisms. In the 2017 feature The Billionaire Boys Club, Cary Elwes portrays Warhol in a film based on the true story of Ron Levin (portrayed by Kevin Spacey), a friend of Warhol's who was murdered in 1986. In September 2016, it was announced that Jared Leto would portray the title character in Warhol, an upcoming American biographical drama film produced by Michael De Luca and written by Terence Winter, based on the book Warhol: The Biography by Victor Bockris. Documentaries The documentary Absolut Warhola (2001) was produced by Polish director Stanislaw Mucha, featuring Warhol's parents' family and hometown in Slovakia. Andy Warhol: A Documentary Film (2006) is a reverential, four-hour movie by Ric Burns that won a Peabody Award in 2006. Andy Warhol: Double Denied (2006) is a 52-minute movie by Ian Yentob about the difficulties authenticating Warhol's work. Andy Warhol's People Factory (2008), a three-part television documentary directed by Catherine Shorr, features interviews with several of Warhol's associates. Television Warhol appeared as a recurring character in the TV series Vinyl, played by John Cameron Mitchell. 
Warhol was portrayed by Evan Peters in the American Horror Story: Cult episode "Valerie Solanas Died for Your Sins: Scumbag". The episode depicts the attempted assassination of Warhol by Valerie Solanas (Lena Dunham). In early 1969, Andy Warhol was commissioned by Braniff International to appear in two television commercials to promote the luxury airline's "When You Got It – Flaunt It" campaign. The campaign was created by the advertising agency Lois Holland Calloway, which was led by George Lois, creator of a famed series of Esquire magazine covers. The first commercial series involved pairing unlikely people who shared the fact that they both flew Braniff Airways. Warhol was paired with boxing legend Sonny Liston. The odd pairing worked, as did the others, which featured unlikely fellow travelers such as painter Salvador Dalí and baseball legend Whitey Ford. Two additional commercials for Braniff featured famous persons entering a Braniff jet and being greeted by a Braniff hostess while declaring their enthusiasm for flying Braniff. Warhol was featured in the first of these commercials, which were also produced by Lois and released in the summer of 1969. Lois has incorrectly stated that he was commissioned by Braniff in 1967; at that time, Madison Avenue advertising doyenne Mary Wells Lawrence, who was married to Braniff's chairman and president Harding Lawrence, was representing the Dallas-based carrier. Lois succeeded the Wells Rich Greene Agency on December 1, 1968. The rights to Warhol's films for Braniff and his signed contracts are owned by a private trust and are administered by the Braniff Airways Foundation in Dallas, Texas. Books A biography of Andy Warhol written by art critic Blake Gopnik was published in 2020 under the title Warhol. 
See also Andy Warhol Art Authentication Board Andy Warhol Bridge, Pittsburgh, PA LGBT culture in New York City List of LGBT people from New York City Moon Museum Painting the Century: 101 Portrait Masterpieces 1900–2000 References Further reading "A symposium on Pop Art". Arts Magazine, April 1963, pp. 36–45. The symposium was held in 1962, at The Museum of Modern Art, and published in this issue the following year. Celant, Germano. Andy Warhol: A Factory. Kunstmuseum Wolfsburg, 1999. Doyle, Jennifer, Jonathan Flatley, and José Esteban Muñoz, eds (1996). Pop Out: Queer Warhol. Durham: Duke University Press. Duncan Fallowell, 20th Century Characters, ch. Andy Lives (London, Vintage, 1994) James, David E., "Andy Warhol: The Producer as Author", in Allegories of Cinema: American Film in the 1960s (1989), pp. 58–84. Princeton: Princeton University Press. Krauss, Rosalind E. "Warhol's Abstract Spectacle". In Abstraction, Gesture, Ecriture: Paintings from the Daros Collection. New York: Scalo, 1999, pp. 123–33. Lippard, Lucy R., Pop Art, Thames and Hudson, 1970 (1985 reprint). Scherman, Tony, & David Dalton, POP: The Genius of Andy Warhol, New York, NY: HarperCollins, 2009. Suarez, Juan Antonio (1996). Bike Boys, Drag Queens, & Superstars: Avant-Garde, Mass Culture, and Gay Identities in the 1960s Underground Cinema. Indianapolis: Indiana University Press. 
External links Andy Warhol at the National Gallery of Art Warhol Foundation in New York City Andy Warhol Collection in Pittsburgh The work of Andy Warhol spoken about by David Cronenberg Warholstars: Andy Warhol Films, Art and Superstars Warhol & The Computer Andy Warhol Andy Warhol at the Jewish Museum A Piece of Work podcast, WNYC Studios/MoMA, Tavi Gevinson and Abbi Jacobson discuss Andy Warhol's Campbell's Soup Cans Andy Warhol's Personal Book Shelf 1928 births 1987 deaths 20th-century American musicians 20th-century American painters American male painters 20th-century American photographers 20th-century American writers Album-cover and concert-poster artists American cinematographers American contemporary artists American Eastern Catholics American experimental filmmakers American film producers American portrait painters American people of Lemko descent American pop artists American printmakers American male screenwriters American shooting survivors American socialites Artists from New York (state) Artists from Pittsburgh Burials in Pennsylvania Carnegie Mellon University College of Fine Arts alumni Catholics from Pennsylvania Censorship in the arts Fashion illustrators Film directors from New York (state) Film directors from Pennsylvania Gay artists American gay writers Hypochondriacs LGBT photographers from the United States LGBT Roman Catholics LGBT people from New York (state) LGBT people from Pennsylvania LGBT producers Photographers from New York (state) American portrait photographers Postmodern artists Ruthenian Greek Catholics Schenley High School alumni The Velvet Underground Warhola family Writers from New York (state) Writers from Pittsburgh Experiments in Art and Technology collaborating artists People associated with The Factory 20th-century American male writers 20th-century American screenwriters Google Doodles LGBT film directors
868
https://en.wikipedia.org/wiki/Alp%20Arslan
Alp Arslan
Alp Arslan (honorific in Turkic meaning "Heroic or Great Lion"; Arabic epithet: Diyā ad-Dunyā wa ad-Dīn Adud ad-Dawlah Abu Shujā' Muhammad Ālp Ārslan ibn Dawūd; 20 January 1029 – 24 November 1072), real name Muhammad bin Dawud Chaghri, was the second Sultan of the Seljuk Empire and great-grandson of Seljuk, the eponymous founder of the dynasty. He greatly expanded the Seljuk territory and consolidated his power, defeating rivals to the south and northwest; his victory over the Byzantines at the Battle of Manzikert in 1071 ushered in the Turkoman settlement of Anatolia. For his military prowess and fighting skills, he obtained the name Alp Arslan, which means "Heroic Lion" in Turkish. Early life Alp Arslan was the son of Chaghri and nephew of Tughril, the founding Sultans of the Seljuk Empire. His grandfather was Mikail, who in turn was the son of the warlord Seljuk. He was the father of numerous children, including Malik-Shah I and Tutush I. It is unclear who the mother or mothers of his children were. He was known to have been married at least twice. His wives included the widow of his uncle Tughril, a Kara-Khanid princess known as Aka Khatun, and the daughter or niece of Bagrat IV of Georgia (who would later marry his vizier, Nizam al-Mulk). One of Seljuk's other sons was the Turkic chieftain Arslan Isra'il, whose son, Kutalmish, contested his nephew's succession to the sultanate. Alp Arslan's younger brothers Suleiman ibn Chaghri and Qavurt were his rivals. Kilij Arslan, the son and successor of Suleiman ibn Kutalmish (Kutalmish's son, who would later become Sultan of Rûm), was a major opponent of the Franks during the First Crusade and the Crusade of 1101. Early career Alp Arslan accompanied his uncle Tughril on campaigns in the south against the Fatimids while his father Chaghri remained in Khorasan. Upon Alp Arslan's return to Khorasan, he began his work in administration at his father's suggestion. 
While there, his father introduced him to Nizam al-Mulk, one of the most eminent statesmen in early Muslim history and Alp Arslan's future vizier. After the death of his father, Alp Arslan succeeded him as governor of Khorasan in 1059. His uncle Tughril died in 1063, having designated Suleiman, Arslan's infant brother, as his successor. Arslan and his uncle Kutalmish both contested this succession, which was resolved at the Battle of Damghan in 1063. Arslan defeated Kutalmish for the throne and succeeded on 27 April 1064 as sultan of the Seljuk Empire, thus becoming sole monarch of Persia from the river Oxus to the Tigris. In consolidating his empire and subduing contending factions, Arslan was ably assisted by Nizam al-Mulk, and the two are credited with helping to stabilize the empire after the death of Tughril. With peace and security established in his dominions, Arslan convoked an assembly of the states and in 1066, he declared his son Malik Shah I his heir and successor. With the hope of capturing Caesarea Mazaca, the capital of Cappadocia, he placed himself at the head of the Turkoman cavalry, crossed the Euphrates, and invaded the city. Along with Nizam al-Mulk, he then marched into Armenia and Georgia, which he conquered in 1064. After a siege of 25 days, the Seljuks captured Ani, the capital city of Armenia. An account of the sack and massacres in Ani is given by the historian Sibt ibn al-Jawzi, who quotes an eyewitness account of the devastation. Byzantine struggle En route to fight the Fatimids in Syria in 1068, Alp Arslan invaded the Byzantine Empire. The Emperor Romanos IV Diogenes, assuming command in person, met the invaders in Cilicia. In three arduous campaigns, the Turks were defeated in detail and driven across the Euphrates in 1070. The first two campaigns were conducted by the emperor himself, while the third was directed by Manuel Comnenos, great-uncle of Emperor Manuel Comnenos. 
During this time, Arslan gained the allegiance of Rashid al-Dawla Mahmud, the Mirdasid emir of Aleppo. In 1071, Romanos again took the field and advanced into Armenia with possibly 30,000 men, including a contingent of Cuman Turks as well as contingents of Franks and Normans, under Ursel de Baieul. Alp Arslan, who had moved his troops south to fight the Fatimids, quickly reversed course to meet the Byzantines. At Manzikert, on the Murat River, north of Lake Van, the two forces fought the Battle of Manzikert. The Cuman mercenaries among the Byzantine forces immediately defected to the Turkic side. Seeing this, "the Western mercenaries rode off and took no part in the battle." To be exact, Romanos was betrayed by general Andronikos Doukas, son of the Caesar (Romanos's stepson), who pronounced him dead and rode off with a large part of the Byzantine forces at a critical moment. The Byzantines were totally routed. Emperor Romanos IV was himself taken prisoner and conducted into the presence of Alp Arslan. After a ritual humiliation, Arslan treated him with generosity. After peace terms were agreed to, Arslan dismissed the Emperor, loaded with presents and respectfully attended by a military guard. A famous conversation is said to have taken place after Romanos was brought as a prisoner before the Sultan. Alp Arslan's victories changed the balance in Near Asia completely in favour of the Seljuq Turks and Sunni Muslims. While the Byzantine Empire was to continue for nearly four more centuries, the victory at Manzikert signalled the beginning of Turkmen ascendancy in Anatolia. The victory at Manzikert became so popular among the Turks that later every noble family in Anatolia claimed to have had an ancestor who had fought on that day. Most historians, including Edward Gibbon, date the defeat at Manzikert as the beginning of the end of the Eastern Roman Empire. State organization Alp Arslan's strength lay in the military realm. 
Domestic affairs were handled by his able vizier, Nizam al-Mulk, the founder of the administrative organization that characterized and strengthened the sultanate during the reigns of Alp Arslan and his son, Malik Shah. Military fiefs, governed by Seljuq princes, were established to provide support for the soldiery and to accommodate the nomadic Turks to the established Anatolian agricultural scene. This type of military fiefdom enabled the nomadic Turks to draw on the resources of the sedentary Persians, Turks, and other established cultures within the Seljuq realm, and allowed Alp Arslan to field a huge standing army without depending on tribute from conquest to pay his soldiers. He not only had enough food from his subjects to maintain his military, but the taxes collected from traders and merchants added to his coffers sufficiently to fund his continuous wars. Suleiman ibn Qutalmish was the son of the contender for Arslan's throne; he was appointed governor of the north-western provinces and assigned to completing the invasion of Anatolia. An explanation for this choice can only be conjectured from Ibn al-Athir's account of the battle between Alp-Arslan and Kutalmish, in which he writes that Alp-Arslan wept for the latter's death and greatly mourned the loss of his kinsman. Death After Manzikert, the dominion of Alp Arslan extended over much of western Asia. He soon prepared to march for the conquest of Turkestan, the original seat of his ancestors. With a powerful army he advanced to the banks of the Oxus. Before he could pass the river with safety, however, it was necessary to subdue certain fortresses, one of which was for several days vigorously defended by the Kurdish rebel, Yusuf al-Kharezmi or Yusuf al-Harani. Perhaps over-eager to press on against his Qarakhanid enemy, Alp Arslan gained the governor's submission by promising the rebel ‘perpetual ownership of his lands’. 
When Yusuf al-Harani was brought before him, the Sultan ordered that he be shot, but before the archers could raise their bows Yusuf seized a knife and threw himself at Alp Arslan, striking three blows before being slain. Four days later, on 24 November 1072, Alp Arslan died and was buried at Merv, having designated his 18-year-old son Malik Shah as his successor. Family One of his wives was Safariyya Khatun. She had a daughter, Sifri Khatun, who in 1071–72 married the Abbasid Caliph Al-Muqtadi. Safariyya died in Isfahan in 1073–74. Another of his wives was Akka Khatun. She had formerly been the wife of Sultan Tughril. Alp Arslan married her after Tughril's death in 1063. Another of his wives was Shah Khatun. She was the daughter of Qadir Khan Yusuf, and had formerly been married to the Ghaznavid Mas'ud. Another of his wives was the daughter of the Georgian king Bagrat. They married in 1067–68. He divorced her soon after, and married her to Fadlun. His sons were Malik-Shah I, Tutush I, Tekish, and Arslan Arghun. One of his daughters married the son of the Kurd Surkhab, son of Bard, in 1068. Another daughter, Zulaikha Khatun, was married to Muslim, son of Quraish, in 1086–87. Another daughter, Aisha Khatun, married Shams al-Mulk Nasr, son of Ibrahim Khan Tamghach. Legacy Alp Arslan's conquest of Anatolia from the Byzantines is also seen as one of the pivotal precursors to the launch of the Crusades. From 2002 to July 2008, under Turkmen calendar reform, the month of August was named after Alp Arslan. The 2nd Training Motorized Rifle Division of the Turkmen Ground Forces is named in his honour. References Sources Çoban, R. V. (2020). The Manzikert Battle and Sultan Alp Arslan with European Perspective in the 15th Century in the Miniatures of Giovanni Boccaccio's "De Casibus Virorum Illustrium"s 226 and 232. French Manuscripts in Bibliothèque Nationale de France. S. Karakaya ve V. Baydar (Ed.), in 2nd International Muş Symposium Articles Book (pp. 48-64). 
Muş: Muş Alparslan University. Source 11th-century births 1072 deaths Seljuk rulers Byzantine–Seljuk wars 11th-century murdered monarchs 11th-century Turkic people Deaths by stabbing Shahanshahs
869
https://en.wikipedia.org/wiki/American%20Film%20Institute
American Film Institute
The American Film Institute (AFI) is an American film organization that educates filmmakers and honors the heritage of the motion picture arts in the United States. AFI is supported by private funding and public membership fees. Leadership The institute is composed of leaders from the film, entertainment, business, and academic communities. The board of trustees, chaired by Kathleen Kennedy, and the board of directors, chaired by Robert A. Daly, guide the organization, which is led by President and CEO, film historian Bob Gazzale. Prior leaders were founding director George Stevens, Jr. (from the organization's inception in 1967 until 1980) and Jean Picker Firstenberg (from 1980 to 2007). History The American Film Institute was founded through a 1965 presidential mandate, announced in the Rose Garden of the White House by Lyndon B. Johnson, to establish a national arts organization to preserve the legacy of American film heritage, educate the next generation of filmmakers, and honor the artists and their work. Two years later, in 1967, AFI was established, supported by the National Endowment for the Arts, the Motion Picture Association of America and the Ford Foundation. The original 22-member Board of Trustees included actor Gregory Peck as chairman and actor Sidney Poitier as vice-chairman, as well as director Francis Ford Coppola, film historian Arthur Schlesinger, Jr., lobbyist Jack Valenti, and other representatives from the arts and academia. The institute established a training program for filmmakers known then as the Center for Advanced Film Studies. Also created in the early years were a repertory film exhibition program at the Kennedy Center for the Performing Arts and the AFI Catalog of Feature Films — a scholarly source for American film history. The institute moved to its current eight-acre Hollywood campus in 1981. The film training program grew into the AFI Conservatory, an accredited graduate school. 
AFI moved its presentation of first-run and auteur films from the Kennedy Center to the historic AFI Silver Theatre and Cultural Center, which hosts the AFI DOCS film festival, making AFI the largest nonprofit film exhibitor in the world. AFI educates audiences and recognizes artistic excellence through its awards programs and 10 Top 10 Lists. List of programs in brief AFI educational and cultural programs include: AFI Awards – an honor celebrating the creative ensembles of the most outstanding motion picture and television programs of the year AFI Catalog of Feature Films and AFI Archive – the written history of all feature films during the first 100 years of the art form – accessible free online AFI Conservatory – a film school led by master filmmakers in a graduate-level program AFI Directing Workshop for Women – a production-based training program committed to increasing the number of women working professionally in screen directing AFI Life Achievement Award – a tradition since 1973, a high honor for a career in film AFI 100 Years... series – television events and movie reference lists AFI's two film festivals – AFI Fest in Los Angeles and AFI Docs in Washington, D.C. and Silver Spring, Maryland AFI Silver Theatre and Cultural Center – a historic theater with year-round art house, first-run and classic film programming in Silver Spring, Maryland American Film – a magazine that explores the art of new and historic film classics, now a blog on AFI.com AFI Conservatory In 1969, the institute established the AFI Conservatory for Advanced Film Studies at Greystone, the Doheny Mansion in Beverly Hills, California. The first class included filmmakers Terrence Malick, Caleb Deschanel, and Paul Schrader. That program grew into the AFI Conservatory, an accredited graduate film school located in the hills above Hollywood, California, providing training in six filmmaking disciplines: cinematography, directing, editing, producing, production design, and screenwriting. 
Mirroring a professional production environment, Fellows collaborate to make more films than students in any other graduate-level program. Admission to AFI Conservatory is highly selective, with a maximum of 140 graduates per year. In 2013, Emmy- and Oscar-winning director, producer, and screenwriter James L. Brooks (As Good as It Gets, Broadcast News, Terms of Endearment) joined as the artistic director of the AFI Conservatory, where he provides leadership for the film program. Brooks' artistic role at the AFI Conservatory follows a rich legacy that includes Daniel Petrie, Jr., Robert Wise, and Frank Pierson. Award-winning director Bob Mandel served as dean of the AFI Conservatory for nine years. Jan Schuette took over as dean in 2014 and served until 2017. Film producer Richard Gladstein was dean from 2017 until 2019, when Susan Ruskin was appointed. Notable alumni AFI Conservatory's alumni have careers in film, television and on the web. They have been recognized with all of the major industry awards—Academy Award, Emmy Award, guild awards, and the Tony Award. Among the alumni of AFI are Andrea Arnold (Red Road, Fish Tank), Darren Aronofsky (Requiem for a Dream, Black Swan), Carl Colpaert (Gas Food Lodging, Hurlyburly, Swimming with Sharks), Doug Ellin (Entourage), Todd Field (In the Bedroom, Little Children), Jack Fisk (Badlands, Days of Heaven, There Will Be Blood), Carl Franklin (One False Move, Devil in a Blue Dress, House of Cards), Patty Jenkins (Monster, Wonder Woman), Janusz Kamiński (Lincoln, Schindler's List, Saving Private Ryan), Matthew Libatique (Noah, Black Swan), David Lynch (Mulholland Drive, Blue Velvet), Terrence Malick (Days of Heaven, The Thin Red Line, The Tree of Life), Victor Nuñez (Ruby in Paradise, Ulee's Gold), Wally Pfister (Memento, The Dark Knight, Inception), Robert Richardson (Platoon, JFK, Django Unchained), Ari Aster (Hereditary, Midsommar), and many others. 
AFI programs AFI Catalog of Feature Films The AFI Catalog, started in 1968, is a web-based filmographic database. A research tool for film historians, the catalog consists of entries on more than 60,000 feature films and 17,000 short films produced from 1893 to 2011, as well as AFI Awards Outstanding Movies of the Year from 2000 through 2010. Early print copies of this catalog may also be found at local libraries. AFI Life Achievement Award AFI Awards Created in 2000, the AFI Awards honor the ten outstanding films ("Movies of the Year") and ten outstanding television programs ("TV Programs of the Year"). The awards are a non-competitive acknowledgment of excellence. The awards are announced in December, and a private luncheon for award honorees takes place the following January. AFI Maya Deren Award AFI 100 Years... series The AFI 100 Years... series, which ran from 1998 to 2008 and created jury-selected lists of America's best movies in categories such as Musicals, Laughs and Thrills, prompted new generations to experience classic American films. The juries consisted of over 1,500 artists, scholars, critics, and historians. Citizen Kane was voted the greatest American film twice. AFI film festivals AFI operates two film festivals: AFI Fest in Los Angeles, and AFI Docs (formerly known as Silverdocs) in Silver Spring, Maryland, and Washington, D.C. AFI Fest AFI Fest is the American Film Institute's annual celebration of artistic excellence. It is a showcase for the best festival films of the year and an opportunity for master filmmakers and emerging artists to come together with audiences in the movie capital of the world. It is the only festival of its stature that is free to the public. The Academy of Motion Picture Arts and Sciences recognizes AFI Fest as a qualifying festival for the Short Films category for the annual Academy Awards. 
The festival has paid tribute to numerous influential filmmakers and artists over the years, including Agnès Varda, Pedro Almodóvar and David Lynch as guest artistic directors, and has screened scores of films that have produced Oscar nominations and wins. AFI Docs Held annually in June, AFI Docs (formerly Silverdocs) is a documentary festival in Washington, D.C. The festival attracts over 27,000 documentary enthusiasts. AFI Silver Theatre and Cultural Center The AFI Silver Theatre and Cultural Center is a moving image exhibition, education and cultural center located in Silver Spring, Maryland. Anchored by the restoration of noted architect John Eberson's historic 1938 Silver Theatre, it features 32,000 square feet of new construction housing two stadium theatres, office and meeting space, and reception and exhibit areas. The AFI Silver Theatre and Cultural Center presents film and video programming, augmented by filmmaker interviews, panels, discussions, and musical performances. The AFI Directing Workshop for Women The Directing Workshop for Women is a training program committed to educating and mentoring participants in an effort to increase the number of women working professionally in screen directing. In this tuition-free program, each participant is required to complete a short film by the end of the year-long program. Alumnae of the program include Maya Angelou, Anne Bancroft, Dyan Cannon, Ellen Burstyn, Jennifer Getzinger, Lesli Linka Glatter, Lily Tomlin, Susan Oliver and Nancy Malone. AFI Directors Series AFI released a set of hour-long programs reviewing the career of acclaimed directors. The Directors Series content was copyrighted in 1997 by Media Entertainment Inc and The American Film Institute, and the VHS and DVDs were released between 1999 and 2001 on Winstar TV and Video. 
Directors featured included: John McTiernan (WHE73067) Ron Howard (WHE73068) Sydney Pollack (WHE73071) Norman Jewison (WHE73076) Lawrence Kasdan (WHE73088) Terry Gilliam (WHE73089) Spike Lee (WHE73090) Barry Levinson (WHE73093) Miloš Forman (WHE73094) Martin Scorsese (WHE73098) Barbra Streisand (WHE73099) David Cronenberg (WHE73101) Robert Zemeckis (WHE73131) Robert Altman John Frankenheimer Adrian Lyne Garry Marshall William Friedkin Clint Eastwood David Zucker, Jim Abrahams and Jerry Zucker Roger Corman Michael Mann James Cameron Rob Reiner Joel Schumacher Steven Spielberg Wes Craven See also British Film Institute, the British equivalent to AFI References External links AFI Los Angeles Film Festival - history and information Arts organizations based in California Cinema of Southern California Hollywood history and culture Los Feliz, Los Angeles Organizations based in Los Angeles 1967 establishments in California Organizations established in 1967
872
https://en.wikipedia.org/wiki/Akira%20Kurosawa
Akira Kurosawa
was a Japanese filmmaker and painter who directed thirty films in a career spanning over five decades. He is regarded as one of the most important and influential filmmakers in film history. Kurosawa entered the Japanese film industry in 1936, following a brief stint as a painter. After years of working on numerous films as an assistant director and scriptwriter, he made his debut as a director during World War II with the popular action film Sanshiro Sugata. After the war, the critically acclaimed Drunken Angel (1948), in which Kurosawa cast the then little-known actor Toshiro Mifune in a starring role, cemented the director's reputation as one of the most important young filmmakers in Japan. The two men would go on to collaborate on another fifteen films. Rashomon, which premiered in Tokyo, became the surprise winner of the Golden Lion at the 1951 Venice Film Festival. The commercial and critical success of that film opened up Western film markets for the first time to the products of the Japanese film industry, which in turn led to international recognition for other Japanese filmmakers. Kurosawa directed approximately one film per year throughout the 1950s and early 1960s, including a number of highly regarded (and often adapted) films, such as Ikiru (1952), Seven Samurai (1954) and Yojimbo (1961). After the 1960s he became much less prolific; even so, his later work—including two of his final films, Kagemusha (1980) and Ran (1985)—continued to receive great acclaim. In 1990, he accepted the Academy Award for Lifetime Achievement. Posthumously, he was named "Asian of the Century" in the "Arts, Literature, and Culture" category by AsianWeek magazine and CNN, cited there as being among the five people who most prominently contributed to the improvement of Asia in the 20th century. His career has been honored by many retrospectives, critical studies and biographies in both print and video, and by releases in many consumer media. 
Biography Childhood to war years (1910–1945) Childhood and youth (1910–1935) Kurosawa was born on March 23, 1910, in Ōimachi in the Ōmori district of Tokyo. His father Isamu (1864–1948), a member of a samurai family from Akita Prefecture, worked as the director of the Army's Physical Education Institute's lower secondary school, while his mother Shima (1870–1952) came from a merchant's family living in Osaka. Akira was the eighth and youngest child of the moderately wealthy family, with two of his siblings already grown up at the time of his birth and one deceased, leaving Kurosawa to grow up with three sisters and a brother. In addition to promoting physical exercise, Isamu Kurosawa was open to Western traditions and considered theatre and motion pictures to have educational merit. He encouraged his children to watch films; young Akira viewed his first movies at the age of six. An important formative influence was his elementary school teacher Mr. Tachikawa, whose progressive educational practices ignited in his young pupil first a love of drawing and then an interest in education in general. During this time, the boy also studied calligraphy and Kendo swordsmanship. Another major childhood influence was Heigo Kurosawa (1906–1933), Akira's older brother by four years. In the aftermath of the Great Kantō earthquake of 1923, Heigo took the thirteen-year-old Akira to view the devastation. When the younger brother wanted to look away from the corpses of humans and beasts scattered everywhere, Heigo forbade him to do so, encouraging Akira instead to face his fears by confronting them directly. Some commentators have suggested that this incident would influence Kurosawa's later artistic career, as the director was seldom hesitant to confront unpleasant truths in his work. 
Heigo was academically gifted, but soon after failing to secure a place in Tokyo's foremost high school, he began to detach himself from the rest of the family, preferring to concentrate on his interest in foreign literature. In the late 1920s, Heigo became a benshi (silent film narrator) for Tokyo theaters showing foreign films and quickly made a name for himself. Akira, who at this point planned to become a painter, moved in with him, and the two brothers became inseparable. With Heigo's guidance, Akira devoured not only films but also theater and circus performances, while exhibiting his paintings and working for the left-wing Proletarian Artists' League. However, he was never able to make a living with his art, and, as he began to perceive most of the proletarian movement as "putting unfulfilled political ideals directly onto the canvas", he lost his enthusiasm for painting. With the increasing production of talking pictures in the early 1930s, film narrators like Heigo began to lose work, and Akira moved back in with his parents. In July 1933, Heigo committed suicide. Kurosawa has commented on the lasting sense of loss he felt at his brother's death, and the chapter of his autobiography (Something Like an Autobiography) that describes it—written nearly half a century after the event—is titled "A Story I Don't Want to Tell". Only four months later, Kurosawa's eldest brother also died, leaving Akira, at age 23, the only one of the Kurosawa brothers still living, together with his three surviving sisters. Director in training (1935–1941) In 1935, the new film studio Photo Chemical Laboratories, known as P.C.L. (which later became the major studio Toho), advertised for assistant directors. Although he had demonstrated no previous interest in film as a profession, Kurosawa submitted the required essay, which asked applicants to discuss the fundamental deficiencies of Japanese films and find ways to overcome them. 
His half-mocking view was that if the deficiencies were fundamental, there was no way to correct them. Kurosawa's essay earned him a call to take the follow-up exams, and director Kajirō Yamamoto, who was among the examiners, took a liking to Kurosawa and insisted that the studio hire him. The 25-year-old Kurosawa joined P.C.L. in February 1936. During his five years as an assistant director, Kurosawa worked under numerous directors, but by far the most important figure in his development was Yamamoto. Of his 24 films as A.D., he worked on 17 under Yamamoto, many of them comedies featuring the popular actor Ken'ichi Enomoto, known as "Enoken". Yamamoto nurtured Kurosawa's talent, promoting him directly from third assistant director to chief assistant director after a year. Kurosawa's responsibilities increased, and he worked at tasks ranging from stage construction and film development to location scouting, script polishing, rehearsals, lighting, dubbing, editing, and second-unit directing. In the last of Kurosawa's films as an assistant director for Yamamoto, Horse (Uma, 1941), Kurosawa took over most of the production, as his mentor was occupied with the shooting of another film. Yamamoto advised Kurosawa that a good director needed to master screenwriting. Kurosawa soon realized that the potential earnings from his scripts were much higher than what he was paid as an assistant director. He later wrote or co-wrote all his films, and frequently wrote screenplays for other directors' films, such as Satsuo Yamamoto's A Triumph of Wings (Tsubasa no gaika, 1942). This outside scriptwriting would serve Kurosawa as a lucrative sideline lasting well into the 1960s, long after he became famous. Wartime films and marriage (1942–1945) In the two years following the release of Horse in 1941, Kurosawa searched for a story he could use to launch his directing career. 
Towards the end of 1942, about a year after the Japanese attack on Pearl Harbor, novelist Tsuneo Tomita published his Musashi Miyamoto-inspired judo novel, Sanshiro Sugata, the advertisements for which intrigued Kurosawa. He bought the book on its publication day, devoured it in one sitting, and immediately asked Toho to secure the film rights. Kurosawa's initial instinct proved correct as, within a few days, three other major Japanese studios also offered to buy the rights. Toho prevailed, and Kurosawa began pre-production on his debut work as director. Shooting of Sanshiro Sugata began on location in Yokohama in December 1942. Production proceeded smoothly, but getting the completed film past the censors was an entirely different matter. The censorship office considered the work to be objectionably "British-American" by the standards of wartime Japan, and it was only through the intervention of director Yasujirō Ozu, who championed the film, that Sanshiro Sugata was finally accepted for release on March 25, 1943. (Kurosawa had just turned 33.) The movie became both a critical and commercial success. Nevertheless, the censorship office would later decide to cut out some 18 minutes of footage, much of which is now considered lost. He next turned to the subject of wartime female factory workers in The Most Beautiful, a propaganda film which he shot in a semi-documentary style in early 1944. To elicit realistic performances from his actresses, the director had them live in a real factory during the shoot, eat the factory food and call each other by their character names. He would use similar methods with his performers throughout his career. During production, the actress playing the leader of the factory workers, Yōko Yaguchi, was chosen by her colleagues to present their demands to the director. She and Kurosawa were constantly at odds, and it was through these arguments that the two paradoxically became close. 
They married on May 21, 1945, with Yaguchi two months pregnant (she never resumed her acting career), and the couple would remain together until her death in 1985. They had two children, both surviving Kurosawa: a son, Hisao, born December 20, 1945, who served as producer on some of his father's last projects, and Kazuko, a daughter, born April 29, 1954, who became a costume designer. Shortly before his marriage, Kurosawa was pressured by the studio, against his wishes, to direct a sequel to his debut film. The often blatantly propagandistic Sanshiro Sugata Part II, which premiered in May 1945, is generally considered one of his weakest pictures. Kurosawa decided to write the script for a film that would be both censor-friendly and less expensive to produce. The Men Who Tread on the Tiger's Tail, based on the Kabuki play Kanjinchō and starring the comedian Enoken, with whom Kurosawa had often worked during his assistant director days, was completed in September 1945. By this time, Japan had surrendered and the occupation of the country had begun. The new American censors interpreted the values allegedly promoted in the picture as overly "feudal" and banned the work. It was not released until 1952, the year in which another Kurosawa film, Ikiru, also appeared. Ironically, while in production, the film had already been savaged by Japanese wartime censors as too Western and "democratic" (they particularly disliked the comic porter played by Enoken), so the movie most probably would not have seen the light of day even if the war had continued beyond its completion. Early postwar years to Red Beard (1946–65) First postwar works (1946–50) After the war, Kurosawa, influenced by the democratic ideals of the Occupation, sought to make films that would establish a new respect towards the individual and the self. 
The first such film, No Regrets for Our Youth (1946), inspired by both the 1933 Takigawa incident and the Hotsumi Ozaki wartime spy case, criticized Japan's prewar regime for its political oppression. Atypically for the director, the heroic central character is a woman, Yukie (Setsuko Hara), who, born into upper-middle-class privilege, comes to question her values in a time of political crisis. The original script had to be extensively rewritten and, because of its controversial theme and the gender of its protagonist, the completed work divided critics. Nevertheless, it managed to win the approval of audiences, who turned variations on the film's title into a postwar catchphrase. His next film, One Wonderful Sunday, premiered in July 1947 to mixed reviews. It is a relatively uncomplicated and sentimental love story about an impoverished couple trying to enjoy their one weekly day off amid the devastation of postwar Tokyo. The movie bears the influence of Frank Capra, D. W. Griffith and F. W. Murnau, each of whom was among Kurosawa's favorite directors. Another film released in 1947 with Kurosawa's involvement was the action-adventure thriller Snow Trail, directed by Senkichi Taniguchi from Kurosawa's screenplay. It marked the debut of the intense young actor Toshiro Mifune. It was Kurosawa who, with his mentor Yamamoto, had intervened to persuade Toho to sign Mifune, during an audition in which the young man greatly impressed Kurosawa, but managed to alienate most of the other judges. Drunken Angel is often considered the director's first major work. Although the script, like all of Kurosawa's occupation-era works, had to go through rewrites due to American censorship, Kurosawa felt that this was the first film in which he was able to express himself freely. 
A gritty story of a doctor who tries to save a gangster (yakuza) with tuberculosis, it was also the first time that Kurosawa directed Mifune, who went on to play major roles in all but one of the director's next 16 films (the exception being Ikiru). While Mifune was not cast as the protagonist in Drunken Angel, his explosive performance as the gangster so dominates the drama that it shifts the focus from the title character, the alcoholic doctor played by Takashi Shimura, who had already appeared in several Kurosawa movies. Kurosawa, however, did not want to smother the young actor's immense vitality, and Mifune's rebellious character electrified audiences in much the way that Marlon Brando's defiant stance would startle American film audiences a few years later. The film premiered in Tokyo in April 1948 to rave reviews and was chosen by the prestigious Kinema Junpo critics poll as the best film of its year, the first of three Kurosawa movies to be so honored. Kurosawa, with producer Sōjirō Motoki and fellow directors and friends Kajiro Yamamoto, Mikio Naruse and Senkichi Taniguchi, formed a new independent production unit called Film Art Association (Eiga Geijutsu Kyōkai). For this organization's debut work, and first film for Daiei studios, Kurosawa turned to a contemporary play by Kazuo Kikuta and, together with Taniguchi, adapted it for the screen. The Quiet Duel starred Toshiro Mifune as an idealistic young doctor struggling with syphilis, a deliberate attempt by Kurosawa to break the actor away from being typecast as gangsters. Released in March 1949, it was a box office success, but is generally considered one of the director's lesser achievements. His second film of 1949, also produced by Film Art Association and released by Shintoho, was Stray Dog. 
It is a detective movie (perhaps the first important Japanese film in that genre) that explores the mood of Japan during its painful postwar recovery through the story of a young detective, played by Mifune, and his fixation on the recovery of his handgun, which was stolen by a penniless war veteran who proceeds to use it to rob and murder. Adapted from an unpublished novel by Kurosawa in the style of a favorite writer of his, Georges Simenon, it was the director's first collaboration with screenwriter Ryuzo Kikushima, who would later help to script eight other Kurosawa films. A famous, virtually wordless sequence, lasting over eight minutes, shows the detective, disguised as an impoverished veteran, wandering the streets in search of the gun thief; it employed actual documentary footage of war-ravaged Tokyo neighborhoods shot by Kurosawa's friend, Ishirō Honda, the future director of Godzilla. The film is considered a precursor to the contemporary police procedural and buddy cop film genres. Scandal, released by Shochiku in April 1950, was inspired by the director's personal experiences with, and anger towards, Japanese yellow journalism. The work is an ambitious mixture of courtroom drama and social problem film about free speech and personal responsibility, but even Kurosawa regarded the finished product as dramatically unfocused and unsatisfactory, a judgment with which almost all critics concur. However, it would be Kurosawa's second film of 1950, Rashomon, that would ultimately win him, and Japanese cinema, a whole new international audience. International recognition (1950–58) After finishing Scandal, Kurosawa was approached by Daiei studios to make another film for them. Kurosawa picked a script by an aspiring young screenwriter, Shinobu Hashimoto, who would eventually work on nine of his films. 
Their first joint effort was based on Ryūnosuke Akutagawa's experimental short story "In a Grove", which recounts the murder of a samurai and the rape of his wife from several conflicting points of view. Kurosawa saw potential in the script, and with Hashimoto's help, polished and expanded it and then pitched it to Daiei, who were happy to accept the project due to its low budget. The shooting of Rashomon began on July 7, 1950, and, after extensive location work in the primeval forest of Nara, wrapped on August 17. Just one week was spent in hurried post-production, hampered by a studio fire, and the finished film premiered at Tokyo's Imperial Theatre on August 25, expanding nationwide the following day. The movie was met by lukewarm reviews, with many critics puzzled by its unique theme and treatment, but it was nevertheless a moderate financial success for Daiei. Kurosawa's next film, for Shochiku, was The Idiot, an adaptation of the novel by the director's favorite writer, Fyodor Dostoyevsky. The story is relocated from Russia to Hokkaido, but otherwise adheres closely to the original, a fact seen by many critics as detrimental to the work. A studio-mandated edit shortened it from Kurosawa's original cut of 265 minutes to just 166 minutes, making the resulting narrative exceedingly difficult to follow. The severely edited film version is widely considered to be one of the director's least successful works, and the original full-length version no longer exists. Contemporary reviews of the much shortened edited version were very negative, but the film was a moderate success at the box office, largely because of the popularity of one of its stars, Setsuko Hara. Meanwhile, unbeknownst to Kurosawa, Rashomon had been entered in the Venice Film Festival, due to the efforts of Giuliana Stramigioli, a Japan-based representative of an Italian film company, who had seen and admired the movie and convinced Daiei to submit it. 
On September 10, 1951, Rashomon was awarded the festival's highest prize, the Golden Lion, shocking not only Daiei but the international film world, which at the time was largely unaware of Japan's decades-old cinematic tradition. After Daiei briefly exhibited a subtitled print of the film in Los Angeles, RKO purchased distribution rights to Rashomon in the United States. The company was taking a considerable gamble. It had put out only one prior subtitled film in the American market, and the only previous Japanese talkie commercially released in New York had been Mikio Naruse's comedy, Wife! Be Like a Rose, in 1937: a critical and box-office flop. However, Rashomon's commercial run, greatly helped by strong reviews from critics and even the columnist Ed Sullivan, earned $35,000 in its first three weeks at a single New York theatre, an almost unheard-of sum at the time. This success in turn led to a vogue in America and the West for Japanese movies throughout the 1950s, replacing the enthusiasm for Italian neorealist cinema. By the end of 1952, Rashomon had been released in Japan, the United States, and most of Europe. Among the Japanese film-makers whose work, as a result, began to win festival prizes and commercial release in the West were Kenji Mizoguchi (The Life of Oharu, Ugetsu, Sansho the Bailiff) and, somewhat later, Yasujirō Ozu (Tokyo Story, An Autumn Afternoon)—artists highly respected in Japan but, before this period, almost totally unknown in the West. Kurosawa's growing reputation among Western audiences in the 1950s would make those audiences more receptive to later generations of Japanese film-makers, ranging from Kon Ichikawa, Masaki Kobayashi, Nagisa Oshima and Shohei Imamura to Juzo Itami, Takeshi Kitano and Takashi Miike. His career boosted by his sudden international fame, Kurosawa, now reunited with his original film studio, Toho (which would go on to produce his next 11 films), set to work on his next project, Ikiru. 
The movie stars Takashi Shimura as a cancer-ridden Tokyo bureaucrat, Watanabe, on a final quest for meaning before his death. For the screenplay, Kurosawa brought in Hashimoto as well as writer Hideo Oguni, who would go on to co-write twelve Kurosawa films. Despite the work's grim subject matter, the screenwriters took a satirical approach, which some have compared to the work of Brecht, to both the bureaucratic world of its hero and the U.S. cultural colonization of Japan. (American pop songs figure prominently in the film.) Because of this strategy, the film-makers are usually credited with saving the picture from the kind of sentimentality common to dramas about characters with terminal illnesses. Ikiru opened in October 1952 to rave reviews—it won Kurosawa his second Kinema Junpo "Best Film" award—and enormous box office success. It remains the most acclaimed of all the artist's films set in the modern era. In December 1952, Kurosawa took his Ikiru screenwriters, Shinobu Hashimoto and Hideo Oguni, for a forty-five-day secluded residence at an inn to create the screenplay for his next movie, Seven Samurai. The ensemble work was Kurosawa's first proper samurai film, the genre for which he would become most famous. The simple story, about a poor farming village in Sengoku period Japan that hires a group of samurai to defend it against an impending attack by bandits, was given a full epic treatment, with a huge cast (largely consisting of veterans of previous Kurosawa productions) and meticulously detailed action, stretching out to almost three-and-a-half hours of screen time. Three months were spent in pre-production and a month in rehearsals. Shooting took up 148 days spread over almost a year, interrupted by production and financing troubles and Kurosawa's health problems. The film finally opened in April 1954, half a year behind its original release date and about three times over budget, making it at the time the most expensive Japanese film ever made. 
(However, by Hollywood standards, it was a quite modestly budgeted production, even for that time.) The film received positive critical reaction and became a big hit, quickly making back the money invested in it and providing the studio with a product that they could, and did, market internationally—though with extensive edits. Over time—and with the theatrical and home video releases of the uncut version—its reputation has steadily grown. It is now regarded by some commentators as the greatest Japanese film ever made, and in 1979, a poll of Japanese film critics also voted it the best Japanese film ever made. In the most recent (2012) version of the widely respected British Film Institute (BFI) Sight & Sound "Greatest Films of All Time" poll, Seven Samurai placed 17th among all films from all countries in both the critics' and the directors' polls, receiving a place in the Top Ten lists of 48 critics and 22 directors. In 1954, nuclear tests in the Pacific were causing radioactive rainstorms in Japan and one particular incident in March had exposed a Japanese fishing boat to nuclear fallout, with disastrous results. It is in this anxious atmosphere that Kurosawa's next film, Record of a Living Being, was conceived. The story concerned an elderly factory owner (Toshiro Mifune) so terrified of the prospect of a nuclear attack that he becomes determined to move his entire extended family (both legal and extra-marital) to what he imagines is the safety of a farm in Brazil. Production went much more smoothly than the director's previous film, but a few days before shooting ended, Kurosawa's composer, collaborator and close friend Fumio Hayasaka died (of tuberculosis) at the age of 41. The film's score was finished by Hayasaka's student, Masaru Sato, who would go on to score all of Kurosawa's next eight films. 
Record of a Living Being opened in November 1955 to mixed reviews and muted audience reaction, becoming the first Kurosawa film to lose money during its original theatrical run. Today, it is considered by many to be among the finest films dealing with the psychological effects of the global nuclear stalemate. Kurosawa's next project, Throne of Blood, an adaptation of William Shakespeare's Macbeth—set, like Seven Samurai, in the Sengoku Era—represented an ambitious transposition of the English work into a Japanese context. Kurosawa instructed his leading actress, Isuzu Yamada, to regard the work as if it were a cinematic version of a Japanese rather than a European literary classic. Given Kurosawa's appreciation of traditional Japanese stage acting, the acting of the players, particularly Yamada, draws heavily on the stylized techniques of the Noh theater. It was filmed in 1956 and released in January 1957 to a slightly less negative domestic response than had been the case with the director's previous film. Abroad, Throne of Blood, regardless of the liberties it takes with its source material, quickly earned a place among the most celebrated Shakespeare adaptations. Another adaptation of a classic European theatrical work followed almost immediately, with production of The Lower Depths, based on a play by Maxim Gorky, taking place in May and June 1957. In contrast to the Shakespearean sweep of Throne of Blood, The Lower Depths was shot on only two confined sets, in order to emphasize the restricted nature of the characters' lives. Though faithful to the play, this adaptation of Russian material to a completely Japanese setting—in this case, the late Edo period—unlike his earlier The Idiot, was regarded as artistically successful. The film premiered in September 1957, receiving a mixed response similar to that of Throne of Blood. However, some critics rank it among the director's most underrated works. 
Kurosawa's next three movies after Seven Samurai had not managed to capture Japanese audiences as that film had. The mood of the director's work had been growing increasingly pessimistic and dark, with the possibility of redemption through personal responsibility now very much questioned, particularly in Throne of Blood and The Lower Depths. He recognized this, and deliberately aimed for a more light-hearted and entertaining film for his next production, while switching to the new widescreen format that had been gaining popularity in Japan. The resulting film, The Hidden Fortress, is an action-adventure comedy-drama about a medieval princess, her loyal general and two peasants who all need to travel through enemy lines in order to reach their home region. Released in December 1958, The Hidden Fortress became an enormous box office success in Japan and was warmly received by critics both in Japan and abroad. Today, the film is considered one of Kurosawa's most lightweight efforts, though it remains popular, not least because it is one of several major influences on George Lucas's 1977 space opera, Star Wars. Birth of a company and Red Beard (1959–65) Starting with Rashomon, Kurosawa's productions had become increasingly large in scope and so had the director's budgets. Toho, concerned about this development, suggested that he might help finance his own works, thereby making the studio's potential losses smaller while in turn allowing him more artistic freedom as co-producer. Kurosawa agreed, and the Kurosawa Production Company was established in April 1959, with Toho as the majority shareholder. Despite risking his own money, Kurosawa chose a story that was more directly critical of the Japanese business and political elites than any previous work. 
The Bad Sleep Well, based on a script by Kurosawa's nephew Mike Inoue, is a revenge drama about a young man who is able to infiltrate the hierarchy of a corrupt Japanese company with the intention of exposing the men responsible for his father's death. Its theme proved topical: while the film was in production, the massive Anpo protests were held against the new U.S.–Japan Security treaty, which was seen by many Japanese, particularly the young, as threatening the country's democracy by giving too much power to corporations and politicians. The film opened in September 1960 to positive critical reaction and modest box office success. The 25-minute opening sequence depicting a corporate wedding reception is widely regarded as one of Kurosawa's most skillfully executed set pieces, but the remainder of the film is often perceived as disappointing by comparison. The movie has also been criticized for employing the conventional Kurosawan hero to combat a social evil that cannot be resolved through the actions of individuals, however courageous or cunning. Yojimbo (The Bodyguard), Kurosawa Production's second film, centers on a masterless samurai, Sanjuro, who strolls into a 19th-century town ruled by two opposing violent factions and provokes them into destroying each other. The director used this work to play with many genre conventions, particularly the Western, while at the same time offering an unprecedentedly (for the Japanese screen) graphic portrayal of violence. Some commentators have seen the Sanjuro character in this film as a fantasy figure who magically reverses the historical triumph of the corrupt merchant class over the samurai class. Featuring Tatsuya Nakadai in his first major role in a Kurosawa movie, and with innovative photography by Kazuo Miyagawa (who shot Rashomon) and Takao Saito, the film premiered in April 1961 and was a critically and commercially successful venture, earning more than any previous Kurosawa film. 
The movie and its blackly comic tone were also widely imitated abroad. Sergio Leone's A Fistful of Dollars was a virtual (and unauthorized) scene-by-scene remake; Toho filed a lawsuit on Kurosawa's behalf and prevailed. Following the success of Yojimbo, Kurosawa found himself under pressure from Toho to create a sequel. Kurosawa turned to a script he had written before Yojimbo, reworking it to include the hero of his previous film. Sanjuro was the first of three Kurosawa films to be adapted from the work of the writer Shūgorō Yamamoto (the others would be Red Beard and Dodeskaden). It is lighter in tone and closer to a conventional period film than Yojimbo, though its story of a power struggle within a samurai clan is portrayed with strongly comic undertones. The film opened on January 1, 1962, quickly surpassing Yojimbo's box office success and garnering positive reviews. Kurosawa had meanwhile instructed Toho to purchase the film rights to King's Ransom, a novel about a kidnapping written by American author and screenwriter Evan Hunter, under his pseudonym of Ed McBain, as one of his 87th Precinct series of crime books. The director intended to create a work condemning kidnapping, which he considered one of the very worst crimes. The suspense film, titled High and Low, was shot during the latter half of 1962 and released in March 1963. It broke Kurosawa's box office record (the third film in a row to do so), became the highest grossing Japanese film of the year, and won glowing reviews. However, his triumph was somewhat tarnished when, ironically, the film was blamed for a wave of kidnappings which occurred in Japan about this time (he himself received kidnapping threats directed at his young daughter, Kazuko). High and Low is considered by many commentators to be among the director's strongest works. Kurosawa quickly moved on to his next project, Red Beard. 
Based on a short story collection by Shūgorō Yamamoto and incorporating elements from Dostoyevsky's novel The Insulted and Injured, it is a period film, set in a mid-nineteenth-century clinic for the poor, in which Kurosawa's humanist themes receive perhaps their fullest statement. A conceited and materialistic, foreign-trained young doctor, Yasumoto, is forced to become an intern at the clinic under the stern tutelage of Doctor Niide, known as "Akahige" ("Red Beard"), played by Mifune. Although he resists Red Beard initially, Yasumoto comes to admire his wisdom and courage, and to perceive the patients at the clinic, whom he at first despised, as worthy of compassion and dignity. Yūzō Kayama, who plays Yasumoto, was an extremely popular film and music star at the time, particularly for his "Young Guy" (Wakadaishō) series of musical comedies, so signing him to appear in the film virtually guaranteed Kurosawa strong box-office returns. The shoot, the film-maker's longest ever, lasted well over a year (after five months of pre-production), and wrapped in spring 1965, leaving the director, his crew and his actors exhausted. Red Beard premiered in April 1965, becoming the year's highest-grossing Japanese production and the third (and last) Kurosawa film to top the prestigious Kinema Jumpo yearly critics poll. It remains one of Kurosawa's best-known and most-loved works in his native country. Outside Japan, critics have been much more divided. Most commentators concede its technical merits and some praise it as among Kurosawa's best, while others insist that it lacks complexity and genuine narrative power, with still others claiming that it represents a retreat from the artist's previous commitment to social and political change. The film marked something of an end of an era for its creator. 
The director himself recognized this at the time of its release, telling critic Donald Richie that a cycle of some kind had just come to an end and that his future films and production methods would be different. His prediction proved quite accurate. Beginning in the late 1950s, television began increasingly to dominate the leisure time of the formerly large and loyal Japanese cinema audience. And as film company revenues dropped, so did their appetite for risk—particularly the risk represented by Kurosawa's costly production methods. Red Beard also marked the midway point, chronologically, in the artist's career. During his previous twenty-nine years in the film industry (including his five years as an assistant director), he had directed twenty-three films, while during the remaining twenty-eight years, for many and complex reasons, he would complete only seven more. Also, for reasons never adequately explained, Red Beard would be his final film starring Toshiro Mifune. Yu Fujiki, an actor who worked on The Lower Depths, observed, regarding the closeness of the two men on the set, "Mr. Kurosawa's heart was in Mr. Mifune's body." Donald Richie has described the rapport between them as a unique "symbiosis". Hollywood ambitions to last films (1966–98) Hollywood detour (1966–68) When Kurosawa's exclusive contract with Toho came to an end in 1966, the 56-year-old director was seriously contemplating change. Observing the troubled state of the domestic film industry, and having already received dozens of offers from abroad, he found the idea of working outside Japan appealing as never before. For his first foreign project, Kurosawa chose a story based on a Life magazine article. The Embassy Pictures action thriller, to be filmed in English and called simply Runaway Train, would have been his first in color. But the language barrier proved a major problem, and the English version of the screenplay was not even finished by the time filming was to begin in autumn 1966. 
The shoot, which required snow, was moved to autumn 1967, then canceled in 1968. Almost two decades later, another foreign director working in Hollywood, Andrei Konchalovsky, finally made Runaway Train (1985), though from a new script loosely based on Kurosawa's. The director meanwhile had become involved in a much more ambitious Hollywood project. Tora! Tora! Tora!, produced by 20th Century Fox and Kurosawa Production, would be a portrayal of the Japanese attack on Pearl Harbor from both the American and the Japanese points of view, with Kurosawa helming the Japanese half and an English-speaking film-maker directing the American half. He spent several months working on the script with Ryuzo Kikushima and Hideo Oguni, but very soon the project began to unravel. The director of the American sequences turned out not to be David Lean, as originally planned, but the American Richard Fleischer. The budget was also cut, and the screen time allocated for the Japanese segment would now be no longer than 90 minutes—a major problem, considering that Kurosawa's script ran over four hours. After numerous revisions with the direct involvement of Darryl Zanuck, a more or less finalized cut of the screenplay was agreed upon in May 1968. Shooting began in early December, but Kurosawa would last only a little over three weeks as director. He struggled to work with an unfamiliar crew and the requirements of a Hollywood production, while his working methods puzzled his American producers, who ultimately concluded that the director must be mentally ill. Kurosawa was examined at Kyoto University Hospital by a neuropsychologist, Dr. Murakami, whose diagnosis of neurasthenia was forwarded to Darryl Zanuck and Richard Zanuck at Fox studios, stating: "He is suffering from disturbance of sleep, agitated with feelings of anxiety and in manic excitement caused by the above mentioned illness. It is necessary for him to have rest and medical treatment for more than two months." 
On Christmas Eve 1968, the Americans announced that Kurosawa had left the production due to "fatigue", effectively firing him. He was ultimately replaced, for the film's Japanese sequences, with two directors, Kinji Fukasaku and Toshio Masuda. Tora! Tora! Tora!, finally released to unenthusiastic reviews in September 1970, was, as Donald Richie put it, an "almost unmitigated tragedy" in Kurosawa's career. He had spent years of his life on a logistically nightmarish project to which he ultimately did not contribute a foot of film shot by himself. (He had his name removed from the credits, though the script used for the Japanese half was still his and his co-writers'.) He became estranged from his longtime collaborator, writer Ryuzo Kikushima, and never worked with him again. The project had inadvertently exposed corruption in his own production company (a situation reminiscent of his own movie, The Bad Sleep Well). His very sanity had been called into question. Worst of all, the Japanese film industry—and perhaps the man himself—began to suspect that he would never make another film. A difficult decade (1969–77) Knowing that his reputation was at stake following the much-publicized Tora! Tora! Tora! debacle, Kurosawa moved quickly to a new project to prove he was still viable. To his aid came friends and famed directors Keisuke Kinoshita, Masaki Kobayashi and Kon Ichikawa, who together with Kurosawa established in July 1969 a production company called the Club of the Four Knights (Yonki no kai). Although the plan was for the four directors to create a film each, it has been suggested that the real motivation for the other three directors was to make it easier for Kurosawa to successfully complete a film, and therefore find his way back into the business. 
The first project proposed and worked on was a period film to be called Dora-heita, but when this was deemed too expensive, attention shifted to Dodeskaden, an adaptation of yet another Shūgorō Yamamoto work, again about the poor and destitute. The film was shot quickly (by Kurosawa's standards) in about nine weeks, with Kurosawa determined to show he was still capable of working quickly and efficiently within a limited budget. For his first work in color, the dynamic editing and complex compositions of his earlier pictures were set aside, with the artist focusing on the creation of a bold, almost surreal palette of primary colors, in order to reveal the toxic environment in which the characters live. It was released in Japan in October 1970, but though a minor critical success, it was greeted with audience indifference. The picture lost money and caused the Club of the Four Knights to dissolve. Initial reception abroad was somewhat more favorable, but Dodeskaden has since been typically considered an interesting experiment not comparable to the director's best work. After struggling through the production of Dodeskaden, Kurosawa turned to television work the following year for the only time in his career with Song of the Horse, a documentary about thoroughbred race horses. It featured a voice-over narrated by a fictional man and a child (voiced by the same actors as the beggar and his son in Dodeskaden). It is the only documentary in Kurosawa's filmography; the small crew included his frequent collaborator Masaru Sato, who composed the music. Song of the Horse is also unique in Kurosawa's oeuvre in that it includes an editor's credit, suggesting that it is the only Kurosawa film that he did not cut himself. Unable to secure funding for further work and allegedly suffering from health problems, Kurosawa apparently reached the breaking point: on December 22, 1971, he slit his wrists and throat multiple times. 
The suicide attempt proved unsuccessful and the director's health recovered fairly quickly, with Kurosawa now taking refuge in domestic life, uncertain if he would ever direct another film. In early 1973, the Soviet studio Mosfilm approached the film-maker to ask if he would be interested in working with them. Kurosawa proposed an adaptation of Russian explorer Vladimir Arsenyev's autobiographical work Dersu Uzala. The book, about a Goldi hunter who lives in harmony with nature until destroyed by encroaching civilization, was one that he had wanted to make since the 1930s. In December 1973, the 63-year-old Kurosawa set off for the Soviet Union with four of his closest aides, beginning a year-and-a-half stay in the country. Shooting began in May 1974 in Siberia, with filming in exceedingly harsh natural conditions proving very difficult and demanding. The picture wrapped in April 1975, with a thoroughly exhausted and homesick Kurosawa returning to Japan and his family in June. Dersu Uzala had its world premiere in Japan on August 2, 1975, and did well at the box office. While critical reception in Japan was muted, the film was better reviewed abroad, winning the Golden Prize at the 9th Moscow International Film Festival, as well as an Academy Award for Best Foreign Language Film. Today, critics remain divided over the film: some see it as an example of Kurosawa's alleged artistic decline, while others count it among his finest works. Although proposals for television projects were submitted to him, he had no interest in working outside the film world. Nevertheless, the hard-drinking director did agree to appear in a series of television ads for Suntory whiskey, which aired in 1976. While fearing that he might never be able to make another film, the director nevertheless continued working on various projects, writing scripts and creating detailed illustrations, intending to leave behind a visual record of his plans in case he would never be able to film his stories. 
Two epics (1978–86) In 1977, American director George Lucas released Star Wars, a wildly successful science fiction film influenced by Kurosawa's The Hidden Fortress, among other works. Lucas, like many other New Hollywood directors, revered Kurosawa and considered him a role model, and was shocked to discover that the Japanese film-maker was unable to secure financing for any new work. The two met in San Francisco in July 1978 to discuss the project Kurosawa considered most financially viable: Kagemusha, the epic story of a thief hired as the double of a medieval Japanese lord of a great clan. Lucas, enthralled by the screenplay and Kurosawa's illustrations, leveraged his influence over 20th Century Fox to coerce the studio that had fired Kurosawa just ten years earlier to produce Kagemusha, then recruited fellow fan Francis Ford Coppola as co-producer. Production began the following April, with Kurosawa in high spirits. Shooting lasted from June 1979 through March 1980 and was plagued with problems, not the least of which was the firing of the original lead actor, Shintaro Katsu—creator of the very popular Zatoichi character—due to an incident in which the actor insisted, against the director's wishes, on videotaping his own performance. (He was replaced by Tatsuya Nakadai, in his first of two consecutive leading roles in a Kurosawa movie.) The film was completed only a few weeks behind schedule and opened in Tokyo in April 1980. It quickly became a massive hit in Japan. The film was also a critical and box office success abroad, winning the coveted Palme d'Or at the 1980 Cannes Film Festival in May, though some critics, then and now, have faulted the film for its alleged coldness. Kurosawa spent much of the rest of the year in Europe and America promoting Kagemusha, collecting awards and accolades, and exhibiting as art the drawings he had made to serve as storyboards for the film. 
The international success of Kagemusha allowed Kurosawa to proceed with his next project, Ran, another epic in a similar vein. The script, partly based on William Shakespeare's King Lear, depicted a ruthless, bloodthirsty daimyō (warlord), played by Tatsuya Nakadai, who, after foolishly banishing his one loyal son, surrenders his kingdom to his other two sons, who then betray him, thus plunging the entire kingdom into war. As Japanese studios still felt wary about producing another film that would rank among the most expensive ever made in the country, international help was again needed. This time it came from French producer Serge Silberman, who had produced Luis Buñuel's final movies. Filming did not begin until December 1983 and lasted more than a year. In January 1985, production of Ran was halted as Kurosawa's 64-year-old wife Yōko fell ill. She died on February 1. Kurosawa returned to finish his film and Ran premiered at the Tokyo Film Festival on May 31, with a wide release the next day. The film was a moderate financial success in Japan, but a larger one abroad and, as he had done with Kagemusha, Kurosawa embarked on a trip to Europe and America, where he attended the film's premieres in September and October. Ran won several awards in Japan, but was not quite as honored there as many of the director's best films of the 1950s and 1960s had been. The film world was surprised, however, when Japan passed over Ran in favor of another film as its official entry to compete for an Oscar nomination in the Best Foreign Film category, a choice that was ultimately rejected for competition at the 58th Academy Awards. Both the producer and Kurosawa himself attributed the failure to submit Ran for competition to a misunderstanding: because of the Academy's arcane rules, no one was sure whether Ran qualified as a Japanese film, a French film (due to its financing), or both, so it was not submitted at all. 
In response to what at least appeared to be a blatant snub by his own countrymen, the director Sidney Lumet led a successful campaign to have Kurosawa receive an Oscar nomination for Best Director that year (Sydney Pollack ultimately won the award for directing Out of Africa). Ran's costume designer, Emi Wada, won the movie's only Oscar. Kagemusha and Ran, particularly the latter, are often considered to be among Kurosawa's finest works. After Ran's release, Kurosawa would point to it as his best film, a major change of attitude for the director who, when asked which of his works was his best, had always previously answered "my next one". Final works and last years (1987–98) For his next movie, Kurosawa chose a subject very different from any that he had ever filmed before. While some of his previous pictures (for example, Drunken Angel and Kagemusha) had included brief dream sequences, Dreams was to be entirely based upon the director's own dreams. Significantly, for the first time in over forty years, Kurosawa wrote the screenplay for this deeply personal project alone. Although its estimated budget was lower than those of the films immediately preceding it, Japanese studios were still unwilling to back one of his productions, so Kurosawa turned to another famous American fan, Steven Spielberg, who convinced Warner Bros. to buy the international rights to the completed film. This made it easier for Kurosawa's son, Hisao, as co-producer and soon-to-be head of Kurosawa Production, to negotiate a loan in Japan that would cover the film's production costs. Shooting took more than eight months to complete, and Dreams premiered at Cannes in May 1990 to a polite but muted reception, similar to the reaction the picture would generate elsewhere in the world. In 1990, he accepted the Academy Award for Lifetime Achievement. In his acceptance speech, he famously said "I'm a little worried because I don't feel that I understand cinema yet." 
Kurosawa now turned to a more conventional story with Rhapsody in August—the director's first film fully produced in Japan since Dodeskaden over twenty years before—which explored the scars of the nuclear bombing which destroyed Nagasaki at the very end of World War II. It was adapted from a Kiyoko Murata novel, but the film's references to the Nagasaki bombing came from the director rather than from the book. This was his only movie to feature an American movie star: Richard Gere, who plays a small role as the nephew of the elderly heroine. Shooting took place in early 1991, with the film opening on May 25 that year to a largely negative critical reaction, especially in the United States, where the director was accused of promulgating naïvely anti-American sentiments, though Kurosawa rejected these accusations. Kurosawa wasted no time moving on to his next project: Madadayo, or Not Yet. Based on autobiographical essays by Hyakken Uchida, the film follows the life of a Japanese professor of German through the Second World War and beyond. The narrative centers on yearly birthday celebrations with his former students, during which the protagonist declares his unwillingness to die just yet—a theme that was becoming increasingly relevant for the film's 81-year-old creator. Filming began in February 1992 and wrapped by the end of September. Its release on April 17, 1993, was greeted with an even more disappointed reaction than his two preceding works had received. Kurosawa nevertheless continued to work. He wrote the original screenplays The Sea is Watching in 1993 and After the Rain in 1995. While putting finishing touches on the latter work in 1995, Kurosawa slipped and broke the base of his spine. Following the accident, he would use a wheelchair for the rest of his life, putting an end to any hopes of him directing another film. His longtime wish—to die on the set while shooting a movie—was never to be fulfilled. 
After his accident, Kurosawa's health began to deteriorate. While his mind remained sharp and lively, his body was giving up, and for the last half-year of his life, the director was largely confined to bed, listening to music and watching television at home. On September 6, 1998, Kurosawa died of a stroke in Setagaya, Tokyo, at the age of 88. At the time of his death, Kurosawa had two children: his son Hisao Kurosawa, who married Hiroko Hayashi, and his daughter Kazuko Kurosawa, who married Harayuki Kato, as well as several grandchildren. One of his grandchildren, Kazuko's son, the actor Takayuki Kato, became a supporting actor in two films posthumously developed from screenplays written by Kurosawa which remained unproduced during his own lifetime: Takashi Koizumi's After the Rain (1999) and Kei Kumai's The Sea is Watching (2002). Creative works/filmography Although Kurosawa is primarily known as a film-maker, he also worked in theater and television, and wrote books. A detailed list, including his complete filmography, can be found in the list of creative works by Akira Kurosawa. Style and main themes Kurosawa displayed a bold, dynamic style, strongly influenced by Western cinema yet distinct from it; he was involved with all aspects of film production. He was a gifted screenwriter and worked closely with his co-writers from the film's development onward to ensure a high-quality script, which he considered the firm foundation of a good film. He frequently served as editor of his own films. His team, known as the "Kurosawa-gumi" (Kurosawa group), which included the cinematographer Asakazu Nakai, the production assistant Teruyo Nogami and the actor Takashi Shimura, was notable for its loyalty and dependability. Kurosawa's style is marked by a number of devices and techniques. 
In his films of the 1940s and 1950s, he frequently employs the "axial cut", in which the camera moves toward or away from the subject through a series of matched jump cuts rather than tracking shots or dissolves. Another stylistic trait is the "cut on motion", which displays the motion on the screen in two or more shots instead of one uninterrupted one. A form of cinematic punctuation strongly identified with Kurosawa is the wipe, an effect created through an optical printer: a line or bar appears to move across the screen, wiping away the end of a scene and revealing the first image of the next. As a transitional device, it is used as a substitute for the straight cut or the dissolve; in his mature work, the wipe became Kurosawa's signature. In his soundtracks, Kurosawa favored sound-image counterpoint, in which the music or sound effects appeared to comment ironically on the image rather than emphasizing it. Teruyo Nogami's memoir gives several such examples from Drunken Angel and Stray Dog. Kurosawa also worked with several of Japan's outstanding contemporary composers, including Fumio Hayasaka and Tōru Takemitsu. Kurosawa employed a number of recurring themes in his films: the master-disciple relationship between a usually older mentor and one or more novices, which often involves spiritual as well as technical mastery and self-mastery; the heroic champion, the exceptional individual who emerges from the mass of people to produce something or right some wrong; the depiction of extremes of weather as both dramatic devices and symbols of human passion; and the recurrence of cycles of savage violence within history. According to Stephen Prince, the last theme, which he calls "the countertradition to the committed, heroic mode of Kurosawa's cinema", began with Throne of Blood (1957), and recurred in the films of the 1980s. 
Legacy Legacy of general criticism Kenji Mizoguchi, the acclaimed director of Ugetsu (1953) and Sansho the Bailiff (1954), was eleven years Kurosawa's senior. After the mid-1950s, some critics of the French New Wave began to favor Mizoguchi over Kurosawa. New Wave critic and film-maker Jacques Rivette, in particular, thought Mizoguchi to be the only Japanese director whose work was at once entirely Japanese and truly universal; Kurosawa, by contrast, was thought to be more influenced by Western cinema and culture, a view that has been disputed. In Japan, some critics and film-makers considered Kurosawa to be elitist, viewing him as centering his effort and attention on exceptional or heroic characters. In her DVD commentary on Seven Samurai, Joan Mellen argued that certain shots of the samurai characters Kambei and Kyuzo, which show Kurosawa to have accorded higher status or validity to them, constitute evidence for this point of view. These Japanese critics argued that Kurosawa was not sufficiently progressive because the peasants were unable to find leaders from within their ranks. In an interview with Mellen, Kurosawa defended himself, saying, I wanted to say that after everything the peasants were the stronger, closely clinging to the earth ... It was the samurai who were weak because they were being blown by the winds of time. From the early 1950s, Kurosawa was also charged with catering to Western tastes due to his popularity in Europe and America. In the 1970s, the left-wing director Nagisa Oshima, who was noted for his critical reaction to Kurosawa's work, accused Kurosawa of pandering to Western beliefs and ideologies. Author Audie Block, however, assessed that Kurosawa never played up to a non-Japanese viewing public and that he denounced those directors who did. Reputation among film-makers Many film-makers have been influenced by Kurosawa's work. Ingmar Bergman called his own film The Virgin Spring a "touristic... 
lousy imitation of Kurosawa", and added, "At that time my admiration for the Japanese cinema was at its height. I was almost a samurai myself!" Federico Fellini considered Kurosawa to be "the greatest living example of all that an author of the cinema should be". Satyajit Ray, who was posthumously awarded the Akira Kurosawa Award for Lifetime Achievement in Directing at the San Francisco International Film Festival in 1992, had said earlier of Rashomon: "The effect of the film on me [upon first seeing it in Calcutta in 1952] was electric. I saw it three times on consecutive days, and wondered each time if there was another film anywhere which gave such sustained and dazzling proof of a director's command over every aspect of film making." Roman Polanski considered Kurosawa to be among the three film-makers he favored most, along with Fellini and Orson Welles, and picked Seven Samurai, Throne of Blood and The Hidden Fortress for praise. Bernardo Bertolucci considered Kurosawa's influence to be seminal: "Kurosawa's movies and La Dolce Vita of Fellini are the things that pushed me, sucked me into being a film director." Andrei Tarkovsky cited Kurosawa as one of his favorites and named Seven Samurai as one of his ten favorite films. Sidney Lumet called Kurosawa the "Beethoven of movie directors". Werner Herzog reflected on film-makers with whom he feels kinship and the movies that he admires: Griffith - especially his Birth of a Nation and Broken Blossoms - Murnau, Buñuel, Kurosawa and Eisenstein’s Ivan the Terrible, ... all come to mind. ... I like Dreyer’s The Passion of Joan of Arc, Pudovkin’s Storm Over Asia and Dovzhenko’s Earth, ... Mizoguchi's Ugetsu Monogatari, Satyajit Ray's The Music Room ... I have always wondered how Kurosawa made something as good as Rashomon; the equilibrium and flow are perfect, and he uses space in such a well-balanced way. It is one of the best films ever made. 
According to an assistant, Stanley Kubrick considered Kurosawa to be "one of the great film directors" and spoke of him "consistently and admiringly", to the point that a letter from him "meant more than any Oscar" and caused him to agonize for months over drafting a reply. Robert Altman claimed that, upon first seeing Rashomon, he was so impressed by the shots of the sun that he began shooting the same kind of sequences in his own work the very next day. Kurosawa was ranked 3rd in the directors' poll and 5th in the critics' poll in Sight & Sound's 2002 list of the greatest directors of all time. Posthumous screenplays Following Kurosawa's death, several posthumous works based on his unfilmed screenplays have been produced. After the Rain, directed by Takashi Koizumi, was released in 1999, and The Sea Is Watching, directed by Kei Kumai, premiered in 2002. A script created by the Yonki no Kai ("Club of the Four Knights") (Kurosawa, Keisuke Kinoshita, Masaki Kobayashi, and Kon Ichikawa), around the time that Dodeskaden was made, was finally filmed and released in 2000 as Dora-heita, by the only surviving founding member of the club, Kon Ichikawa. In 2017, Huayi Brothers Media and CKF Pictures in China announced plans to produce a film of Kurosawa's posthumous screenplay of The Masque of the Red Death by Edgar Allan Poe for 2020, to be entitled The Mask of the Black Death. Patrick Frater, writing for Variety magazine in May 2017, stated that another two unfinished films by Kurosawa were planned, with Silvering Spear to start filming in 2018. Kurosawa Production Company In September 2011, it was reported that remake rights to most of Kurosawa's movies and unproduced screenplays were assigned by the Akira Kurosawa 100 Project to the L.A.-based company Splendent. Splendent's chief, Sakiko Yamada, stated that he aimed to "help contemporary film-makers introduce a new generation of moviegoers to these unforgettable stories". 
Kurosawa Production Co., established in 1959, continues to oversee many aspects of Kurosawa's legacy. The director's son, Hisao Kurosawa, is the current head of the company. Its American subsidiary, Kurosawa Enterprises, is located in Los Angeles. Rights to Kurosawa's works had been held by Kurosawa Production and the film studios under which he worked, most notably Toho. These rights were assigned to the Akira Kurosawa 100 Project before being reassigned in 2011 to the L.A.-based company Splendent. Kurosawa Production works closely with the Akira Kurosawa Foundation, established in December 2003 and also run by Hisao Kurosawa. The foundation organizes an annual short film competition and spearheads Kurosawa-related projects, including a recently shelved one to build a memorial museum for the director. Film studios and awards In 1981, the Kurosawa Film Studio was opened in Yokohama; two additional locations have since been launched in Japan. A large collection of archive material, including scanned screenplays, photos and news articles, has been made available through the Akira Kurosawa Digital Archive, a Japanese proprietary website maintained by Ryukoku University Digital Archives Research Center in collaboration with Kurosawa Production. Anaheim University's Akira Kurosawa School of Film was launched in spring 2009 with the backing of Kurosawa Production. It offers online programs in digital film making, with headquarters in Anaheim and a learning center in Tokyo. Two film awards have also been named in Kurosawa's honor. The Akira Kurosawa Award for Lifetime Achievement in Film Directing is awarded during the San Francisco International Film Festival, while the Akira Kurosawa Award is given during the Tokyo International Film Festival. Kurosawa is often cited as one of the greatest film-makers of all time. 
In 1999, he was named "Asian of the Century" in the "Arts, Literature, and Culture" category by AsianWeek magazine and CNN, cited as "one of the [five] people who contributed most to the betterment of Asia in the past 100 years". In commemoration of the 100th anniversary of Kurosawa's birth in 2010, a project called AK100 was launched in 2008. The AK100 Project aims to "expose young people who are the representatives of the next generation, and all people everywhere, to the light and spirit of Akira Kurosawa and the wonderful world he created". Anaheim University, in cooperation with the Kurosawa family, established the Anaheim University Akira Kurosawa School of Film to offer online and blended learning programs on Akira Kurosawa and film-making. The animated Wes Anderson film Isle of Dogs is partially inspired by Kurosawa's filming techniques. At the 64th Sydney Film Festival, a retrospective of Kurosawa's films was screened in remembrance of the great legacy he created through his work. Documentaries A significant number of short and full-length documentaries concerning the life and work of Kurosawa were made both during his artistic heyday and after his death. AK, by French video essay director Chris Marker, was filmed while Kurosawa was working on Ran; however, the documentary is more concerned with Kurosawa's distant yet polite personality than with the making of the film. Other documentaries concerning Kurosawa's life and works produced posthumously include: Kurosawa: The Last Emperor (Alex Cox, 1999) A Message from Akira Kurosawa: For Beautiful Movies (Hisao Kurosawa, 2000) Kurosawa (Adam Low, 2001) Akira Kurosawa: It Is Wonderful to Create (Toho Masterworks, 2002) Akira Kurosawa: The Epic and the Intimate (2010) Kurosawa's Way (Catherine Cadou, 2011) Notes References Sources Further reading Buchanan, Judith (2005). Shakespeare on Film. Pearson Longman. Burch, Noël (1979). 
To the Distant Observer: Form and Meaning in the Japanese Cinema. University of California Press. Cowie, Peter (2010). Akira Kurosawa: Master of Cinema. Rizzoli Publications. Davies, Anthony (1990). Filming Shakespeare's Plays: The Adaptations of Laurence Olivier, Orson Welles, Peter Brook and Akira Kurosawa. Cambridge University Press. Desser, David (1983). The Samurai Films of Akira Kurosawa (Studies in Cinema No. 23). UMI Research Press. Leonard, Kendra Preston (2009). Shakespeare, Madness, and Music: Scoring Insanity in Cinematic Adaptations. Plymouth: The Scarecrow Press. Sorensen, Lars-Martin (2009). Censorship of Japanese Films During the U.S. Occupation of Japan: The Cases of Yasujiro Ozu and Akira Kurosawa. Edwin Mellen Press. Wild, Peter (2014). Akira Kurosawa. Reaktion Books. External links Akira Kurosawa at the Criterion Collection Akira Kurosawa: News, Information and Discussion Senses of Cinema: Great Directors Critical Database Akira Kurosawa at Japanese celebrity's grave guide Several trailers Anaheim University Akira Kurosawa School of Film 
Recipients of the Order of Friendship of Peoples Recipients of the Praemium Imperiale Samurai film directors Silver Bear for Best Director recipients Writers from Tokyo Yakuza film directors Google Doodles
874
https://en.wikipedia.org/wiki/Ancient%20Egypt
Ancient Egypt
Ancient Egypt was a civilization of ancient Africa, concentrated along the lower reaches of the Nile River, situated in the place that is now the country Egypt. Ancient Egyptian civilization followed prehistoric Egypt and coalesced around 3100 BC (according to conventional Egyptian chronology) with the political unification of Upper and Lower Egypt under Menes (often identified with Narmer). The history of ancient Egypt occurred as a series of stable kingdoms, separated by periods of relative instability known as Intermediate Periods: the Old Kingdom of the Early Bronze Age, the Middle Kingdom of the Middle Bronze Age and the New Kingdom of the Late Bronze Age. Egypt reached the pinnacle of its power in the New Kingdom, ruling much of Nubia and a sizable portion of the Near East, after which it entered a period of slow decline. During the course of its history Egypt was invaded or conquered by a number of foreign powers, including the Hyksos, the Libyans, the Nubians, the Assyrians, the Achaemenid Persians, and the Macedonians under the command of Alexander the Great. The Greek Ptolemaic Kingdom, formed in the aftermath of Alexander's death, ruled Egypt until 30 BC, when, under Cleopatra, it fell to the Roman Empire and became a Roman province. The success of ancient Egyptian civilization came partly from its ability to adapt to the conditions of the Nile River valley for agriculture. The predictable flooding and controlled irrigation of the fertile valley produced surplus crops, which supported a denser population, and social development and culture. With resources to spare, the administration sponsored mineral exploitation of the valley and surrounding desert regions, the early development of an independent writing system, the organization of collective construction and agricultural projects, trade with surrounding regions, and a military intended to assert Egyptian dominance. 
Motivating and organizing these activities was a bureaucracy of elite scribes, religious leaders, and administrators under the control of a pharaoh, who ensured the cooperation and unity of the Egyptian people in the context of an elaborate system of religious beliefs. The many achievements of the ancient Egyptians include the quarrying, surveying and construction techniques that supported the building of monumental pyramids, temples, and obelisks; a system of mathematics, a practical and effective system of medicine, irrigation systems and agricultural production techniques, the first known planked boats, Egyptian faience and glass technology, new forms of literature, and the earliest known peace treaty, made with the Hittites. Ancient Egypt has left a lasting legacy. Its art and architecture were widely copied, and its antiquities carried off to far corners of the world. Its monumental ruins have inspired the imaginations of travelers and writers for millennia. A newfound respect for antiquities and excavations in the early modern period by Europeans and Egyptians led to the scientific investigation of Egyptian civilization and a greater appreciation of its cultural legacy. History The Nile has been the lifeline of its region for much of human history. The fertile floodplain of the Nile gave humans the opportunity to develop a settled agricultural economy and a more sophisticated, centralized society that became a cornerstone in the history of human civilization. Nomadic modern human hunter-gatherers began living in the Nile valley through the end of the Middle Pleistocene some 120,000 years ago. By the late Paleolithic period, the arid climate of Northern Africa became increasingly hot and dry, forcing the populations of the area to concentrate along the river region. Predynastic period In Predynastic and Early Dynastic times, the Egyptian climate was much less arid than it is today. 
Large regions of Egypt were covered in treed savanna and traversed by herds of grazing ungulates. Foliage and fauna were far more prolific in all environs and the Nile region supported large populations of waterfowl. Hunting would have been common for Egyptians, and this is also the period when many animals were first domesticated. By about 5500 BC, small tribes living in the Nile valley had developed into a series of cultures demonstrating firm control of agriculture and animal husbandry, and identifiable by their pottery and personal items, such as combs, bracelets, and beads. The largest of these early cultures in upper (Southern) Egypt was the Badarian culture, which probably originated in the Western Desert; it was known for its high-quality ceramics, stone tools, and its use of copper. The Badari was followed by the Naqada culture: the Amratian (Naqada I), the Gerzeh (Naqada II), and Semainean (Naqada III). These brought a number of technological improvements. As early as the Naqada I Period, predynastic Egyptians imported obsidian from Ethiopia, used to shape blades and other objects from flakes. In Naqada II times, early evidence exists of contact with the Near East, particularly Canaan and the Byblos coast. Over a period of about 1,000 years, the Naqada culture developed from a few small farming communities into a powerful civilization whose leaders were in complete control of the people and resources of the Nile valley. Establishing a power center at Nekhen (in Greek, Hierakonpolis), and later at Abydos, Naqada III leaders expanded their control of Egypt northwards along the Nile. They also traded with Nubia to the south, the oases of the western desert to the west, and the cultures of the eastern Mediterranean and Near East to the east, initiating a period of Egypt-Mesopotamia relations. 
The Naqada culture manufactured a diverse selection of material goods, reflective of the increasing power and wealth of the elite, as well as societal personal-use items, which included combs, small statuary, painted pottery, high quality decorative stone vases, cosmetic palettes, and jewelry made of gold, lapis, and ivory. They also developed a ceramic glaze known as faience, which was used well into the Roman Period to decorate cups, amulets, and figurines. During the last predynastic phase, the Naqada culture began using written symbols that eventually were developed into a full system of hieroglyphs for writing the ancient Egyptian language. Early Dynastic Period (c. 3150–2686 BC) The Early Dynastic Period was approximately contemporary to the early Sumerian-Akkadian civilisation of Mesopotamia and of ancient Elam. The third-century BC Egyptian priest Manetho grouped the long line of kings from Menes to his own time into 30 dynasties, a system still used today. He began his official history with the king named "Meni" (or Menes in Greek), who was believed to have united the two kingdoms of Upper and Lower Egypt. The transition to a unified state happened more gradually than ancient Egyptian writers represented, and there is no contemporary record of Menes. Some scholars now believe, however, that the mythical Menes may have been the king Narmer, who is depicted wearing royal regalia on the ceremonial Narmer Palette, in a symbolic act of unification. In the Early Dynastic Period, which began about 3000 BC, the first of the Dynastic kings solidified control over lower Egypt by establishing a capital at Memphis, from which he could control the labour force and agriculture of the fertile delta region, as well as the lucrative and critical trade routes to the Levant. 
The increasing power and wealth of the kings during the early dynastic period was reflected in their elaborate mastaba tombs and mortuary cult structures at Abydos, which were used to celebrate the deified king after his death. The strong institution of kingship developed by the kings served to legitimize state control over the land, labour, and resources that were essential to the survival and growth of ancient Egyptian civilization. Old Kingdom (2686–2181 BC) Major advances in architecture, art, and technology were made during the Old Kingdom, fueled by the increased agricultural productivity and resulting population, made possible by a well-developed central administration. Some of ancient Egypt's crowning achievements, the Giza pyramids and Great Sphinx, were constructed during the Old Kingdom. Under the direction of the vizier, state officials collected taxes, coordinated irrigation projects to improve crop yield, drafted peasants to work on construction projects, and established a justice system to maintain peace and order. With the rising importance of central administration in Egypt, a new class of educated scribes and officials arose who were granted estates by the king in payment for their services. Kings also made land grants to their mortuary cults and local temples, to ensure that these institutions had the resources to worship the king after his death. Scholars believe that five centuries of these practices slowly eroded the economic vitality of Egypt, and that the economy could no longer afford to support a large centralized administration. As the power of the kings diminished, regional governors called nomarchs began to challenge the supremacy of the office of king. This, coupled with severe droughts between 2200 and 2150 BC, is believed to have caused the country to enter the 140-year period of famine and strife known as the First Intermediate Period. 
First Intermediate Period (2181–2055 BC) After Egypt's central government collapsed at the end of the Old Kingdom, the administration could no longer support or stabilize the country's economy. Regional governors could not rely on the king for help in times of crisis, and the ensuing food shortages and political disputes escalated into famines and small-scale civil wars. Yet despite difficult problems, local leaders, owing no tribute to the king, used their new-found independence to establish a thriving culture in the provinces. Once in control of their own resources, the provinces became economically richer, which was demonstrated by larger and better burials among all social classes. In bursts of creativity, provincial artisans adopted and adapted cultural motifs formerly restricted to the royalty of the Old Kingdom, and scribes developed literary styles that expressed the optimism and originality of the period. Free from their loyalties to the king, local rulers began competing with each other for territorial control and political power. By 2160 BC, rulers in Herakleopolis controlled Lower Egypt in the north, while a rival clan based in Thebes, the Intef family, took control of Upper Egypt in the south. As the Intefs grew in power and expanded their control northward, a clash between the two rival dynasties became inevitable. Around 2055 BC the Theban forces under Nebhepetre Mentuhotep II finally defeated the Herakleopolitan rulers, reuniting the Two Lands. They inaugurated a period of economic and cultural renaissance known as the Middle Kingdom. Middle Kingdom (2134–1690 BC) The kings of the Middle Kingdom restored the country's stability and prosperity, thereby stimulating a resurgence of art, literature, and monumental building projects. 
Mentuhotep II and his Eleventh Dynasty successors ruled from Thebes, but the vizier Amenemhat I, upon assuming the kingship at the beginning of the Twelfth Dynasty around 1985 BC, shifted the kingdom's capital to the city of Itjtawy, located in Faiyum. From Itjtawy, the kings of the Twelfth Dynasty undertook a far-sighted land reclamation and irrigation scheme to increase agricultural output in the region. Moreover, the military reconquered territory in Nubia that was rich in quarries and gold mines, while laborers built a defensive structure in the Eastern Delta, called the "Walls of the Ruler", to defend against foreign attack. With the kings having secured the country militarily and politically and with vast agricultural and mineral wealth at their disposal, the nation's population, arts, and religion flourished. In contrast to elitist Old Kingdom attitudes towards the gods, the Middle Kingdom displayed an increase in expressions of personal piety. Middle Kingdom literature featured sophisticated themes and characters written in a confident, eloquent style. The relief and portrait sculpture of the period captured subtle, individual details that reached new heights of technical sophistication. The last great ruler of the Middle Kingdom, Amenemhat III, allowed Semitic-speaking Canaanite settlers from the Near East into the Delta region to provide a sufficient labour force for his especially active mining and building campaigns. These ambitious building and mining activities, however, combined with severe Nile floods later in his reign, strained the economy and precipitated the slow decline into the Second Intermediate Period during the later Thirteenth and Fourteenth dynasties. During this decline, the Canaanite settlers began to assume greater control of the Delta region, eventually coming to power in Egypt as the Hyksos. 
Second Intermediate Period (1674–1549 BC) and the Hyksos Around 1785 BC, as the power of the Middle Kingdom kings weakened, a Western Asian people called the Hyksos, who had already settled in the Delta, seized control of Egypt and established their capital at Avaris, forcing the former central government to retreat to Thebes. The king was treated as a vassal and expected to pay tribute. The Hyksos ("foreign rulers") retained Egyptian models of government and identified as kings, thereby integrating Egyptian elements into their culture. They and other invaders introduced new tools of warfare into Egypt, most notably the composite bow and the horse-drawn chariot. After retreating south, the native Theban kings found themselves trapped between the Canaanite Hyksos ruling the north and the Hyksos' Nubian allies, the Kushites, to the south. After years of vassalage, Thebes gathered enough strength to challenge the Hyksos in a conflict that lasted more than 30 years, until 1555 BC. The kings Seqenenre Tao II and Kamose were ultimately able to defeat the Nubians to the south of Egypt, but failed to defeat the Hyksos. That task fell to Kamose's successor, Ahmose I, who successfully waged a series of campaigns that permanently eradicated the Hyksos' presence in Egypt. He established a new dynasty and, in the New Kingdom that followed, the military became a central priority for the kings, who sought to expand Egypt's borders and attempted to gain mastery of the Near East. New Kingdom (1549–1069 BC) The New Kingdom pharaohs established a period of unprecedented prosperity by securing their borders and strengthening diplomatic ties with their neighbours, including the Mitanni Empire, Assyria, and Canaan. Military campaigns waged under Tuthmosis I and his grandson Tuthmosis III extended the influence of the pharaohs to the largest empire Egypt had ever seen. Beginning with Merneptah the rulers of Egypt adopted the title of pharaoh. 
Between their reigns, Hatshepsut, a queen who established herself as pharaoh, launched many building projects, including the restoration of temples damaged by the Hyksos, and sent trading expeditions to Punt and the Sinai. When Tuthmosis III died in 1425 BC, Egypt had an empire extending from Niya in northwest Syria to the Fourth Cataract of the Nile in Nubia, cementing loyalties and opening access to critical imports such as bronze and wood. The New Kingdom pharaohs began a large-scale building campaign to promote the god Amun, whose growing cult was based in Karnak. They also constructed monuments to glorify their own achievements, both real and imagined. The Karnak temple is the largest Egyptian temple ever built. Around 1350 BC, the stability of the New Kingdom was threatened when Amenhotep IV ascended the throne and instituted a series of radical and chaotic reforms. Changing his name to Akhenaten, he touted the previously obscure sun deity Aten as the supreme deity, suppressed the worship of most other deities, and moved the capital to the new city of Akhetaten (modern-day Amarna). He was devoted to his new religion and artistic style. After his death, the cult of the Aten was quickly abandoned and the traditional religious order restored. The subsequent pharaohs, Tutankhamun, Ay, and Horemheb, worked to erase all mention of Akhenaten's heresy, now known as the Amarna Period. Around 1279 BC, Ramesses II, also known as Ramesses the Great, ascended the throne, and went on to build more temples, erect more statues and obelisks, and sire more children than any other pharaoh in history. A bold military leader, Ramesses II led his army against the Hittites in the Battle of Kadesh (in modern Syria) and, after fighting to a stalemate, finally agreed to the first recorded peace treaty, around 1258 BC. 
Egypt's wealth, however, made it a tempting target for invasion, particularly by the Libyan Berbers to the west, and the Sea Peoples, a conjectured confederation of seafarers from the Aegean Sea. Initially, the military was able to repel these invasions, but Egypt eventually lost control of its remaining territories in southern Canaan, much of it falling to the Assyrians. The effects of external threats were exacerbated by internal problems such as corruption, tomb robbery, and civil unrest. After regaining their power, the high priests at the temple of Amun in Thebes accumulated vast tracts of land and wealth, and their expanded power splintered the country during the Third Intermediate Period. Third Intermediate Period (1069–653 BC) Following the death of Ramesses XI in 1078 BC, Smendes assumed authority over the northern part of Egypt, ruling from the city of Tanis. The south was effectively controlled by the High Priests of Amun at Thebes, who recognized Smendes in name only. During this time, Libyans had been settling in the western delta, and chieftains of these settlers began increasing their autonomy. Libyan princes took control of the delta under Shoshenq I in 945 BC, founding the so-called Libyan or Bubastite dynasty that would rule for some 200 years. Shoshenq also gained control of southern Egypt by placing his family members in important priestly positions. Libyan control began to erode as a rival dynasty in the delta arose in Leontopolis, and Kushites threatened from the south. Around 727 BC the Kushite king Piye invaded northward, seizing control of Thebes and eventually the Delta, which established the 25th Dynasty. During the 25th Dynasty, Pharaoh Taharqa created an empire nearly as large as the New Kingdom's. Twenty-fifth Dynasty pharaohs built, or restored, temples and monuments throughout the Nile valley, including at Memphis, Karnak, Kawa, and Jebel Barkal. 
During this period, the Nile valley saw the first widespread construction of pyramids (many in modern Sudan) since the Middle Kingdom. Egypt's far-reaching prestige declined considerably toward the end of the Third Intermediate Period. Its foreign allies had fallen under the Assyrian sphere of influence, and by 700 BC war between the two states became inevitable. Between 671 and 667 BC the Assyrians began the Assyrian conquest of Egypt. The reigns of both Taharqa and his successor, Tanutamun, were filled with constant conflict with the Assyrians, against whom Egypt enjoyed several victories. Ultimately, the Assyrians pushed the Kushites back into Nubia, occupied Memphis, and sacked the temples of Thebes. Late Period (653–332 BC) The Assyrians left control of Egypt to a series of vassals who became known as the Saite kings of the Twenty-Sixth Dynasty. By 653 BC, the Saite king Psamtik I was able to oust the Assyrians with the help of Greek mercenaries, who were recruited to form Egypt's first navy. Greek influence expanded greatly as the city-state of Naukratis became the home of Greeks in the Nile Delta. The Saite kings based in the new capital of Sais witnessed a brief but spirited resurgence in the economy and culture, but in 525 BC, the powerful Persians, led by Cambyses II, began their conquest of Egypt, eventually capturing the pharaoh Psamtik III at the Battle of Pelusium. Cambyses II then assumed the formal title of pharaoh, but ruled Egypt from Iran, leaving Egypt under the control of a satrap. A few successful revolts against the Persians marked the 5th century BC, but Egypt was never able to permanently overthrow the Persians. Following its annexation by Persia, Egypt was joined with Cyprus and Phoenicia in the sixth satrapy of the Achaemenid Persian Empire. This first period of Persian rule over Egypt, also known as the Twenty-Seventh Dynasty, ended in 402 BC, when Egypt regained independence under a series of native dynasties. 
The last of these dynasties, the Thirtieth, proved to be the last native royal house of ancient Egypt, ending with the kingship of Nectanebo II. A brief restoration of Persian rule, sometimes known as the Thirty-First Dynasty, began in 343 BC, but shortly after, in 332 BC, the Persian ruler Mazaces handed Egypt over to Alexander the Great without a fight. Ptolemaic period (332–30 BC) In 332 BC, Alexander the Great conquered Egypt with little resistance from the Persians and was welcomed by the Egyptians as a deliverer. The administration established by Alexander's successors, the Macedonian Ptolemaic Kingdom, was modeled on the Egyptian state and based in the new capital city of Alexandria. The city showcased the power and prestige of Hellenistic rule, and became a seat of learning and culture, centered at the famous Library of Alexandria. The Lighthouse of Alexandria lit the way for the many ships that kept trade flowing through the city, as the Ptolemies made commerce and revenue-generating enterprises, such as papyrus manufacturing, their top priority. Hellenistic culture did not supplant native Egyptian culture, as the Ptolemies supported time-honored traditions in an effort to secure the loyalty of the populace. They built new temples in Egyptian style, supported traditional cults, and portrayed themselves as pharaohs. Some traditions merged, as Greek and Egyptian gods were syncretized into composite deities, such as Serapis, and classical Greek forms of sculpture influenced traditional Egyptian motifs. Despite their efforts to appease the Egyptians, the Ptolemies were challenged by native rebellion, bitter family rivalries, and the powerful mob of Alexandria that formed after the death of Ptolemy IV. In addition, as Rome relied more heavily on imports of grain from Egypt, the Romans took great interest in the political situation in the country. 
Continued Egyptian revolts, ambitious politicians, and powerful opponents from the Near East made this situation unstable, leading Rome to send forces to secure the country as a province of its empire. Roman period (30 BC – AD 641) Egypt became a province of the Roman Empire in 30 BC, following the defeat of Mark Antony and Ptolemaic Queen Cleopatra VII by Octavian (later Emperor Augustus) in the Battle of Actium. The Romans relied heavily on grain shipments from Egypt, and the Roman army, under the control of a prefect appointed by the emperor, quelled rebellions, strictly enforced the collection of heavy taxes, and prevented attacks by bandits, which had become a notorious problem during the period. Alexandria became an increasingly important center on the trade route with the Orient, as exotic luxuries were in high demand in Rome. Although the Romans had a more hostile attitude than the Greeks towards the Egyptians, some traditions such as mummification and worship of the traditional gods continued. The art of mummy portraiture flourished, and some Roman emperors had themselves depicted as pharaohs, though not to the extent that the Ptolemies had. These emperors lived outside Egypt and did not perform the ceremonial functions of Egyptian kingship. Local administration became Roman in style and closed to native Egyptians. From the mid-first century AD, Christianity took root in Egypt and it was originally seen as another cult that could be accepted. However, it was an uncompromising religion that sought to win converts from the pagan Egyptian and Greco-Roman religions and threatened popular religious traditions. This led to the persecution of converts to Christianity, culminating in the great purges of Diocletian starting in 303, but eventually Christianity won out. In 391 the Christian emperor Theodosius introduced legislation that banned pagan rites and closed temples. 
Alexandria became the scene of great anti-pagan riots with public and private religious imagery destroyed. As a consequence, Egypt's native religious culture was continually in decline. While the native population continued to speak their language, the ability to read hieroglyphic writing slowly disappeared as the role of the Egyptian temple priests and priestesses diminished. The temples themselves were sometimes converted to churches or abandoned to the desert. In the fourth century, as the Roman Empire divided, Egypt found itself in the Eastern Empire with its capital at Constantinople. In the waning years of the Empire, Egypt fell to the Sasanian Persian army in the Sasanian conquest of Egypt (618–628). It was then recaptured by the Byzantine emperor Heraclius (629–639), and was finally captured by the Muslim Rashidun army in 639–641, ending Byzantine rule. Government and economy Administration and commerce The pharaoh was the absolute monarch of the country and, at least in theory, wielded complete control of the land and its resources. The king was the supreme military commander and head of the government, who relied on a bureaucracy of officials to manage his affairs. In charge of the administration was his second in command, the vizier, who acted as the king's representative and coordinated land surveys, the treasury, building projects, the legal system, and the archives. At a regional level, the country was divided into as many as 42 administrative regions called nomes, each governed by a nomarch, who was accountable to the vizier for his jurisdiction. The temples formed the backbone of the economy. Not only were they places of worship, but they were also responsible for collecting and storing the kingdom's wealth in a system of granaries and treasuries administered by overseers, who redistributed grain and goods. Much of the economy was centrally organized and strictly controlled. 
Although the ancient Egyptians did not use coinage until the Late period, they did use a type of money-barter system, with standard sacks of grain and the deben, a weight of roughly 91 grams (3 oz) of copper or silver, forming a common denominator. Workers were paid in grain; a simple laborer might earn 5 sacks (200 kg or 400 lb) of grain per month, while a foreman might earn 7 sacks (250 kg or 550 lb). Prices were fixed across the country and recorded in lists to facilitate trading; for example, a shirt cost five copper deben, while a cow cost 140 deben. Grain could be traded for other goods, according to the fixed price list. During the fifth century BC coined money was introduced into Egypt from abroad. At first the coins were used as standardized pieces of precious metal rather than true money, but in the following centuries international traders came to rely on coinage. Social status Egyptian society was highly stratified, and social status was expressly displayed. Farmers made up the bulk of the population, but agricultural produce was owned directly by the state, temple, or noble family that owned the land. Farmers were also subject to a labor tax and were required to work on irrigation or construction projects in a corvée system. Artists and craftsmen were of higher status than farmers, but they were also under state control, working in the shops attached to the temples and paid directly from the state treasury. Scribes and officials formed the upper class in ancient Egypt, known as the "white kilt class" in reference to the bleached linen garments that served as a mark of their rank. The upper class prominently displayed their social status in art and literature. Below the nobility were the priests, physicians, and engineers with specialized training in their field. It is unclear whether slavery as understood today existed in ancient Egypt; there is a difference of opinion among authors. 
The ancient Egyptians viewed men and women, including people from all social classes, as essentially equal under the law, and even the lowliest peasant was entitled to petition the vizier and his court for redress. Although slaves were mostly used as indentured servants, they were able to buy and sell their servitude, work their way to freedom or nobility, and were usually treated by doctors in the workplace. Both men and women had the right to own and sell property, make contracts, marry and divorce, receive inheritance, and pursue legal disputes in court. Married couples could own property jointly and protect themselves from divorce by agreeing to marriage contracts, which stipulated the financial obligations of the husband to his wife and children should the marriage end. Compared with their counterparts in ancient Greece, Rome, and even more modern places around the world, ancient Egyptian women had a greater range of personal choices, legal rights, and opportunities for achievement. Women such as Hatshepsut and Cleopatra VII even became pharaohs, while others wielded power as Divine Wives of Amun. Despite these freedoms, ancient Egyptian women rarely took part in official roles in the administration aside from the royal high priestesses, apparently served only secondary roles in the temples (though data are sparse for many dynasties), and were less likely than men to be educated. Legal system The head of the legal system was officially the pharaoh, who was responsible for enacting laws, delivering justice, and maintaining law and order, a concept the ancient Egyptians referred to as Ma'at. Although no legal codes from ancient Egypt survive, court documents show that Egyptian law was based on a common-sense view of right and wrong that emphasized reaching agreements and resolving conflicts rather than strictly adhering to a complicated set of statutes. 
Local councils of elders, known as Kenbet in the New Kingdom, were responsible for ruling in court cases involving small claims and minor disputes. More serious cases involving murder, major land transactions, and tomb robbery were referred to the Great Kenbet, over which the vizier or pharaoh presided. Plaintiffs and defendants were expected to represent themselves and were required to swear an oath that they had told the truth. In some cases, the state took on both the role of prosecutor and judge, and it could torture the accused with beatings to obtain a confession and the names of any co-conspirators. Whether the charges were trivial or serious, court scribes documented the complaint, testimony, and verdict of the case for future reference. Punishment for minor crimes involved either imposition of fines, beatings, facial mutilation, or exile, depending on the severity of the offense. Serious crimes such as murder and tomb robbery were punished by execution, carried out by decapitation, drowning, or impaling the criminal on a stake. Punishment could also be extended to the criminal's family. Beginning in the New Kingdom, oracles played a major role in the legal system, dispensing justice in both civil and criminal cases. The procedure was to ask the god a "yes" or "no" question concerning the right or wrong of an issue. The god, carried by a number of priests, rendered judgement by choosing one or the other, moving forward or backward, or pointing to one of the answers written on a piece of papyrus or an ostracon. Agriculture A combination of favorable geographical features contributed to the success of ancient Egyptian culture, the most important of which was the rich fertile soil resulting from annual inundations of the Nile River. The ancient Egyptians were thus able to produce an abundance of food, allowing the population to devote more time and resources to cultural, technological, and artistic pursuits. 
Land management was crucial in ancient Egypt because taxes were assessed based on the amount of land a person owned. Farming in Egypt was dependent on the cycle of the Nile River. The Egyptians recognized three seasons: Akhet (flooding), Peret (planting), and Shemu (harvesting). The flooding season lasted from June to September, depositing on the river's banks a layer of mineral-rich silt ideal for growing crops. After the floodwaters had receded, the growing season lasted from October to February. Farmers plowed and planted seeds in the fields, which were irrigated with ditches and canals. Egypt received little rainfall, so farmers relied on the Nile to water their crops. From March to May, farmers used sickles to harvest their crops, which were then threshed with a flail to separate the straw from the grain. Winnowing removed the chaff from the grain, and the grain was then ground into flour, brewed to make beer, or stored for later use. The ancient Egyptians cultivated emmer and barley, and several other cereal grains, all of which were used to make the two main food staples of bread and beer. Flax plants, uprooted before they started flowering, were grown for the fibers of their stems. These fibers were split along their length and spun into thread, which was used to weave sheets of linen and to make clothing. Papyrus growing on the banks of the Nile River was used to make paper. Vegetables and fruits were grown in garden plots, close to habitations and on higher ground, and had to be watered by hand. Vegetables included leeks, garlic, melons, squashes, pulses, lettuce, and other crops, in addition to grapes that were made into wine. Animals The Egyptians believed that a balanced relationship between people and animals was an essential element of the cosmic order; thus humans, animals and plants were believed to be members of a single whole. 
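The three-season cycle described above can be expressed as a simple lookup. The sketch below is purely illustrative, using the approximate modern-month ranges given in the text; the actual Egyptian civil calendar drifted against the solar year, so real dates varied:

```python
# Approximate mapping of modern month numbers to the three ancient
# Egyptian agricultural seasons, using the ranges given above:
# flooding June-September, planting October-February, harvest March-May.
# Illustrative only: the Egyptian civil calendar drifted against the
# solar year, so these equivalences are rough.
SEASONS = {
    "Akhet (flooding)": {6, 7, 8, 9},
    "Peret (planting)": {10, 11, 12, 1, 2},
    "Shemu (harvesting)": {3, 4, 5},
}

def season_of(month):
    """Return the season name for a month number 1-12."""
    for name, months in SEASONS.items():
        if month in months:
            return name
    raise ValueError("month must be in 1..12")

print(season_of(7))   # Akhet (flooding)
print(season_of(4))   # Shemu (harvesting)
```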
Animals, both domesticated and wild, were therefore a critical source of spirituality, companionship, and sustenance to the ancient Egyptians. Cattle were the most important livestock; the administration collected taxes on livestock in regular censuses, and the size of a herd reflected the prestige and importance of the estate or temple that owned them. In addition to cattle, the ancient Egyptians kept sheep, goats, and pigs. Poultry, such as ducks, geese, and pigeons, were captured in nets and bred on farms, where they were force-fed with dough to fatten them. The Nile provided a plentiful source of fish. Bees were also domesticated from at least the Old Kingdom, and provided both honey and wax. The ancient Egyptians used donkeys and oxen as beasts of burden, and they were responsible for plowing the fields and trampling seed into the soil. The slaughter of a fattened ox was also a central part of an offering ritual. Horses were introduced by the Hyksos in the Second Intermediate Period. Camels, although known from the New Kingdom, were not used as beasts of burden until the Late Period. There is also evidence to suggest that elephants were briefly utilized in the Late Period but largely abandoned due to lack of grazing land. Cats, dogs, and monkeys were common family pets, while more exotic pets imported from the heart of Africa, such as Sub-Saharan African lions, were reserved for royalty. Herodotus observed that the Egyptians were the only people to keep their animals with them in their houses. During the Late Period, the worship of the gods in their animal form was extremely popular, such as the cat goddess Bastet and the ibis god Thoth, and these animals were kept in large numbers for the purpose of ritual sacrifice. Natural resources Egypt is rich in building and decorative stone, copper and lead ores, gold, and semiprecious stones. These natural resources allowed the ancient Egyptians to build monuments, sculpt statues, make tools, and fashion jewelry. 
Embalmers used salts from the Wadi Natrun for mummification, which also provided the gypsum needed to make plaster. Ore-bearing rock formations were found in distant, inhospitable wadis in the Eastern Desert and the Sinai, requiring large, state-controlled expeditions to obtain natural resources found there. There were extensive gold mines in Nubia, and one of the first maps known is of a gold mine in this region. The Wadi Hammamat was a notable source of granite, greywacke, and gold. Flint was the first mineral collected and used to make tools, and flint handaxes are the earliest pieces of evidence of habitation in the Nile valley. Nodules of the mineral were carefully flaked to make blades and arrowheads of moderate hardness and durability even after copper was adopted for this purpose. Ancient Egyptians were among the first to use minerals such as sulfur as cosmetic substances. The Egyptians worked deposits of the lead ore galena at Gebel Rosas to make net sinkers, plumb bobs, and small figurines. Copper was the most important metal for toolmaking in ancient Egypt and was smelted in furnaces from malachite ore mined in the Sinai. Workers collected gold by washing the nuggets out of sediment in alluvial deposits, or by the more labor-intensive process of grinding and washing gold-bearing quartzite. Iron deposits found in upper Egypt were utilized in the Late Period. High-quality building stones were abundant in Egypt; the ancient Egyptians quarried limestone all along the Nile valley, granite from Aswan, and basalt and sandstone from the wadis of the Eastern Desert. Deposits of decorative stones such as porphyry, greywacke, alabaster, and carnelian dotted the Eastern Desert and were collected even before the First Dynasty. In the Ptolemaic and Roman Periods, miners worked deposits of emeralds in Wadi Sikait and amethyst in Wadi el-Hudi. Trade The ancient Egyptians engaged in trade with their foreign neighbors to obtain rare, exotic goods not found in Egypt. 
In the Predynastic Period, they established trade with Nubia to obtain gold and incense. They also established trade with Palestine, as evidenced by Palestinian-style oil jugs found in the burials of the First Dynasty pharaohs. An Egyptian colony stationed in southern Canaan dates to slightly before the First Dynasty. Narmer had Egyptian pottery produced in Canaan and exported back to Egypt. By the Second Dynasty at latest, ancient Egyptian trade with Byblos yielded a critical source of quality timber not found in Egypt. By the Fifth Dynasty, trade with Punt provided gold, aromatic resins, ebony, ivory, and wild animals such as monkeys and baboons. Egypt relied on trade with Anatolia for essential quantities of tin as well as supplementary supplies of copper, both metals being necessary for the manufacture of bronze. The ancient Egyptians prized the blue stone lapis lazuli, which had to be imported from far-away Afghanistan. Egypt's Mediterranean trade partners also included Greece and Crete, which provided, among other goods, supplies of olive oil. Language Historical development The Egyptian language is a northern Afro-Asiatic language closely related to the Berber and Semitic languages. It has the second longest known history of any language (after Sumerian), having been written from c. 3200BC to the Middle Ages and remaining as a spoken language for longer. The phases of ancient Egyptian are Old Egyptian, Middle Egyptian (Classical Egyptian), Late Egyptian, Demotic and Coptic. Egyptian writings do not show dialect differences before Coptic, but it was probably spoken in regional dialects around Memphis and later Thebes. Ancient Egyptian was a synthetic language, but it became more analytic later on. Late Egyptian developed prefixal definite and indefinite articles, which replaced the older inflectional suffixes. There was a change from the older verb–subject–object word order to subject–verb–object. 
The Egyptian hieroglyphic, hieratic, and demotic scripts were eventually replaced by the more phonetic Coptic alphabet. Coptic is still used in the liturgy of the Egyptian Orthodox Church, and traces of it are found in modern Egyptian Arabic. Sounds and grammar Ancient Egyptian has 25 consonants similar to those of other Afro-Asiatic languages. These include pharyngeal and emphatic consonants, voiced and voiceless stops, voiceless fricatives and voiced and voiceless affricates. It has three long and three short vowels, which expanded in Late Egyptian to about nine. The basic word in Egyptian, similar to Semitic and Berber, is a triliteral or biliteral root of consonants and semiconsonants. Suffixes are added to form words. The verb conjugation corresponds to the person. For example, the triconsonantal skeleton sḏm is the semantic core of the word 'hear'; its basic conjugation is sḏm.f, 'he hears'. If the subject is a noun, suffixes are not added to the verb: sḏm ḥmt, 'the woman hears'. Adjectives are derived from nouns through a process that Egyptologists call nisbation because of its similarity with Arabic. The word order is predicate–subject in verbal and adjectival sentences, and subject–predicate in nominal and adverbial sentences. The subject can be moved to the beginning of sentences if it is long and is followed by a resumptive pronoun. Verbs and nouns are negated by the particle n, but nn is used for adverbial and adjectival sentences. Stress falls on the ultimate or penultimate syllable, which can be open (CV) or closed (CVC). Writing Hieroglyphic writing dates from c. 3000BC, and is composed of hundreds of symbols. A hieroglyph can represent a word, a sound, or a silent determinative; and the same symbol can serve different purposes in different contexts. Hieroglyphs were a formal script, used on stone monuments and in tombs, that could be as detailed as individual works of art. In day-to-day writing, scribes used a cursive form of writing, called hieratic, which was quicker and easier.
While formal hieroglyphs may be read in rows or columns in either direction (though typically written from right to left), hieratic was always written from right to left, usually in horizontal rows. A new form of writing, Demotic, became the prevalent writing style, and it is this form of writing—along with formal hieroglyphs—that accompanies the Greek text on the Rosetta Stone. Around the first century AD, the Coptic alphabet started to be used alongside the Demotic script. Coptic is a modified Greek alphabet with the addition of some Demotic signs. Although formal hieroglyphs were used in a ceremonial role until the fourth century, towards the end only a small handful of priests could still read them. As the traditional religious establishments were disbanded, knowledge of hieroglyphic writing was mostly lost. Attempts to decipher them date to the Byzantine and Islamic periods in Egypt, but only in the 1820s, after the discovery of the Rosetta Stone and years of research by Thomas Young and Jean-François Champollion, were hieroglyphs substantially deciphered. Literature Writing first appeared in association with kingship on labels and tags for items found in royal tombs. It was primarily an occupation of the scribes, who worked out of the Per Ankh institution or the House of Life. The latter comprised offices, libraries (called House of Books), laboratories and observatories. Some of the best-known pieces of ancient Egyptian literature, such as the Pyramid and Coffin Texts, were written in Classical Egyptian, which continued to be the language of writing until about 1300BC. Late Egyptian was spoken from the New Kingdom onward and is represented in Ramesside administrative documents, love poetry and tales, as well as in Demotic and Coptic texts. During this period, the tradition of writing had evolved into the tomb autobiography, such as those of Harkhuf and Weni.
The genre known as Sebayt ("instructions") was developed to communicate teachings and guidance from famous nobles; the Ipuwer papyrus, a poem of lamentations describing natural disasters and social upheaval, is a famous example. The Story of Sinuhe, written in Middle Egyptian, might be the classic of Egyptian literature. Also written at this time was the Westcar Papyrus, a set of stories told to Khufu by his sons relating the marvels performed by priests. The Instruction of Amenemope is considered a masterpiece of Near Eastern literature. Towards the end of the New Kingdom, the vernacular language was more often employed to write popular pieces like the Story of Wenamun and the Instruction of Any. The former tells the story of a noble who is robbed on his way to buy cedar from Lebanon and of his struggle to return to Egypt. From about 700BC, narrative stories and instructions, such as the popular Instructions of Onchsheshonqy, as well as personal and business documents were written in the demotic script and phase of Egyptian. Many stories written in demotic during the Greco-Roman period were set in previous historical eras, when Egypt was an independent nation ruled by great pharaohs such as Ramesses II. Culture Daily life Most ancient Egyptians were farmers tied to the land. Their dwellings were restricted to immediate family members, and were constructed of mudbrick designed to remain cool in the heat of the day. Each home had a kitchen with an open roof, which contained a grindstone for milling grain and a small oven for baking the bread. Ceramics served as household wares for the storage, preparation, transport, and consumption of food, drink, and raw materials. Walls were painted white and could be covered with dyed linen wall hangings. Floors were covered with reed mats, while wooden stools, beds raised from the floor and individual tables comprised the furniture. The ancient Egyptians placed a great value on hygiene and appearance. 
Most bathed in the Nile and used a pasty soap made from animal fat and chalk. Men shaved their entire bodies for cleanliness; perfumes and aromatic ointments covered bad odors and soothed skin. Clothing was made from simple linen sheets that were bleached white, and both men and women of the upper classes wore wigs, jewelry, and cosmetics. Children went without clothing until maturity, at about age 12, and at this age males were circumcised and had their heads shaved. Mothers were responsible for taking care of the children, while the father provided the family's income. Music and dance were popular entertainments for those who could afford them. Early instruments included flutes and harps, while instruments similar to trumpets, oboes, and pipes developed later and became popular. In the New Kingdom, the Egyptians played on bells, cymbals, tambourines, drums, and imported lutes and lyres from Asia. The sistrum was a rattle-like musical instrument that was especially important in religious ceremonies. The ancient Egyptians enjoyed a variety of leisure activities, including games and music. Senet, a board game where pieces moved according to random chance, was particularly popular from the earliest times; another similar game was mehen, which had a circular gaming board. "Hounds and Jackals", also known as 58 holes, is another example of a board game played in ancient Egypt. The first complete set of this game was discovered in a Theban tomb of the Egyptian pharaoh Amenemhat IV that dates to the 13th Dynasty. Juggling and ball games were popular with children, and wrestling is also documented in a tomb at Beni Hasan. The wealthy members of ancient Egyptian society enjoyed hunting, fishing, and boating as well. The excavation of the workers' village of Deir el-Medina has resulted in one of the most thoroughly documented accounts of community life in the ancient world, which spans almost four hundred years.
There is no comparable site in which the organization, social interactions, and working and living conditions of a community have been studied in such detail. Cuisine Egyptian cuisine remained remarkably stable over time; indeed, the cuisine of modern Egypt retains some striking similarities to the cuisine of the ancients. The staple diet consisted of bread and beer, supplemented with vegetables such as onions and garlic, and fruit such as dates and figs. Wine and meat were enjoyed by all on feast days while the upper classes indulged on a more regular basis. Fish, meat, and fowl could be salted or dried, and could be cooked in stews or roasted on a grill. Architecture The architecture of ancient Egypt includes some of the most famous structures in the world: the Great Pyramids of Giza and the temples at Thebes. Building projects were organized and funded by the state for religious and commemorative purposes, but also to reinforce the wide-ranging power of the pharaoh. The ancient Egyptians were skilled builders; using only simple but effective tools and sighting instruments, architects could build large stone structures with great accuracy and precision that is still envied today. The domestic dwellings of elite and ordinary Egyptians alike were constructed from perishable materials such as mudbricks and wood, and have not survived. Peasants lived in simple homes, while the palaces of the elite and the pharaoh were more elaborate structures. A few surviving New Kingdom palaces, such as those in Malkata and Amarna, show richly decorated walls and floors with scenes of people, birds, water pools, deities and geometric designs. Important structures such as temples and tombs that were intended to last forever were constructed of stone instead of mudbricks. The architectural elements used in the world's first large-scale stone building, Djoser's mortuary complex, include post and lintel supports in the papyrus and lotus motif. 
The earliest preserved ancient Egyptian temples, such as those at Giza, consist of single, enclosed halls with roof slabs supported by columns. In the New Kingdom, architects added the pylon, the open courtyard, and the enclosed hypostyle hall to the front of the temple's sanctuary, a style that was standard until the Greco-Roman period. The earliest and most popular tomb architecture in the Old Kingdom was the mastaba, a flat-roofed rectangular structure of mudbrick or stone built over an underground burial chamber. The step pyramid of Djoser is a series of stone mastabas stacked on top of each other. Pyramids were built during the Old and Middle Kingdoms, but most later rulers abandoned them in favor of less conspicuous rock-cut tombs. The use of the pyramid form continued in private tomb chapels of the New Kingdom and in the royal pyramids of Nubia. Art The ancient Egyptians produced art to serve functional purposes. For over 3500 years, artists adhered to artistic forms and iconography that were developed during the Old Kingdom, following a strict set of principles that resisted foreign influence and internal change. These artistic standards—simple lines, shapes, and flat areas of color combined with the characteristic flat projection of figures with no indication of spatial depth—created a sense of order and balance within a composition. Images and text were intimately interwoven on tomb and temple walls, coffins, stelae, and even statues. The Narmer Palette, for example, displays figures that can also be read as hieroglyphs. Because of the rigid rules that governed its highly stylized and symbolic appearance, ancient Egyptian art served its political and religious purposes with precision and clarity. Ancient Egyptian artisans used stone as a medium for carving statues and fine reliefs, but used wood as a cheap and easily carved substitute. 
Paints were obtained from minerals such as iron ores (red and yellow ochres), copper ores (blue and green), soot or charcoal (black), and limestone (white). Paints could be mixed with gum arabic as a binder and pressed into cakes, which could be moistened with water when needed. Pharaohs used reliefs to record victories in battle, royal decrees, and religious scenes. Common citizens had access to pieces of funerary art, such as shabti statues and books of the dead, which they believed would protect them in the afterlife. During the Middle Kingdom, wooden or clay models depicting scenes from everyday life became popular additions to the tomb. In an attempt to duplicate the activities of the living in the afterlife, these models show laborers, houses, boats, and even military formations that are scale representations of the ideal ancient Egyptian afterlife. Despite the homogeneity of ancient Egyptian art, the styles of particular times and places sometimes reflected changing cultural or political attitudes. After the invasion of the Hyksos in the Second Intermediate Period, Minoan-style frescoes were found in Avaris. The most striking example of a politically driven change in artistic forms comes from the Amarna Period, where figures were radically altered to conform to Akhenaten's revolutionary religious ideas. This style, known as Amarna art, was quickly abandoned after Akhenaten's death and replaced by the traditional forms. Religious beliefs Beliefs in the divine and in the afterlife were ingrained in ancient Egyptian civilization from its inception; pharaonic rule was based on the divine right of kings. The Egyptian pantheon was populated by gods who had supernatural powers and were called on for help or protection. However, the gods were not always viewed as benevolent, and Egyptians believed they had to be appeased with offerings and prayers. 
The structure of this pantheon changed continually as new deities were promoted in the hierarchy, but priests made no effort to organize the diverse and sometimes conflicting myths and stories into a coherent system. These various conceptions of divinity were not considered contradictory but rather layers in the multiple facets of reality. Gods were worshiped in cult temples administered by priests acting on the king's behalf. At the center of the temple was the cult statue in a shrine. Temples were not places of public worship or congregation, and only on select feast days and celebrations was a shrine carrying the statue of the god brought out for public worship. Normally, the god's domain was sealed off from the outside world and was only accessible to temple officials. Common citizens could worship private statues in their homes, and amulets offered protection against the forces of chaos. After the New Kingdom, the pharaoh's role as a spiritual intermediary was de-emphasized as religious customs shifted to direct worship of the gods. As a result, priests developed a system of oracles to communicate the will of the gods directly to the people. The Egyptians believed that every human being was composed of physical and spiritual parts or aspects. In addition to the body, each person had a šwt (shadow), a ba (personality or soul), a ka (life-force), and a name. The heart, rather than the brain, was considered the seat of thoughts and emotions. After death, the spiritual aspects were released from the body and could move at will, but they required the physical remains (or a substitute, such as a statue) as a permanent home. The ultimate goal of the deceased was to rejoin his ka and ba and become one of the "blessed dead", living on as an akh, or "effective one". For this to happen, the deceased had to be judged worthy in a trial, in which the heart was weighed against a "feather of truth." 
If deemed worthy, the deceased could continue their existence on earth in spiritual form. If they were not deemed worthy, their heart was eaten by Ammit the Devourer and they were erased from the Universe. Burial customs The ancient Egyptians maintained an elaborate set of burial customs that they believed were necessary to ensure immortality after death. These customs involved preserving the body by mummification, performing burial ceremonies, and interring with the body goods the deceased would use in the afterlife. Before the Old Kingdom, bodies buried in desert pits were naturally preserved by desiccation. The arid, desert conditions were a boon throughout the history of ancient Egypt for burials of the poor, who could not afford the elaborate burial preparations available to the elite. Wealthier Egyptians began to bury their dead in stone tombs and use artificial mummification, which involved removing the internal organs, wrapping the body in linen, and burying it in a rectangular stone sarcophagus or wooden coffin. Beginning in the Fourth Dynasty, some parts were preserved separately in canopic jars. By the New Kingdom, the ancient Egyptians had perfected the art of mummification; the best technique took 70 days and involved removing the internal organs, removing the brain through the nose, and desiccating the body in a mixture of salts called natron. The body was then wrapped in linen with protective amulets inserted between layers and placed in a decorated anthropoid coffin. Mummies of the Late Period were also placed in painted cartonnage mummy cases. Actual preservation practices declined during the Ptolemaic and Roman eras, while greater emphasis was placed on the outer appearance of the mummy, which was decorated. Wealthy Egyptians were buried with larger quantities of luxury items, but all burials, regardless of social status, included goods for the deceased. 
Funerary texts were often included in the grave, and, beginning in the New Kingdom, so were shabti statues that were believed to perform manual labor for them in the afterlife. Rituals in which the deceased was magically re-animated accompanied burials. After burial, living relatives were expected to occasionally bring food to the tomb and recite prayers on behalf of the deceased. Military The ancient Egyptian military was responsible for defending Egypt against foreign invasion, and for maintaining Egypt's domination in the ancient Near East. The military protected mining expeditions to the Sinai during the Old Kingdom and fought civil wars during the First and Second Intermediate Periods. The military was responsible for maintaining fortifications along important trade routes, such as those found at the city of Buhen on the way to Nubia. Forts also were constructed to serve as military bases, such as the fortress at Sile, which was a base of operations for expeditions to the Levant. In the New Kingdom, a series of pharaohs used the standing Egyptian army to attack and conquer Kush and parts of the Levant. Typical military equipment included bows and arrows, spears, and round-topped shields made by stretching animal skin over a wooden frame. In the New Kingdom, the military began using chariots that had earlier been introduced by the Hyksos invaders. Weapons and armor continued to improve after the adoption of bronze: shields were now made from solid wood with a bronze buckle, spears were tipped with a bronze point, and the khopesh was adopted from Asiatic soldiers. The pharaoh was usually depicted in art and literature riding at the head of the army; it has been suggested that at least a few pharaohs, such as Seqenenre Tao II and his sons, did do so. However, it has also been argued that "kings of this period did not personally act as frontline war leaders, fighting alongside their troops." 
Soldiers were recruited from the general population, but during, and especially after, the New Kingdom, mercenaries from Nubia, Kush, and Libya were hired to fight for Egypt. Technology, medicine, and mathematics Technology In technology, medicine, and mathematics, ancient Egypt achieved a relatively high standard of productivity and sophistication. Traditional empiricism, as evidenced by the Edwin Smith and Ebers papyri (c. 1600BC), is first credited to Egypt. The Egyptians created their own alphabet and decimal system. Faience and glass Even before the Old Kingdom, the ancient Egyptians had developed a glassy material known as faience, which they treated as a type of artificial semi-precious stone. Faience is a non-clay ceramic made of silica, small amounts of lime and soda, and a colorant, typically copper. The material was used to make beads, tiles, figurines, and small wares. Several methods can be used to create faience, but typically production involved application of the powdered materials in the form of a paste over a clay core, which was then fired. By a related technique, the ancient Egyptians produced a pigment known as Egyptian blue, also called blue frit, which is produced by fusing (or sintering) silica, copper, lime, and an alkali such as natron. The product can be ground up and used as a pigment. The ancient Egyptians could fabricate a wide variety of objects from glass with great skill, but it is not clear whether they developed the process independently. It is also unclear whether they made their own raw glass or merely imported pre-made ingots, which they melted and finished. However, they did have technical expertise in making objects, as well as adding trace elements to control the color of the finished glass. A range of colors could be produced, including yellow, red, green, blue, purple, and white, and the glass could be made either transparent or opaque. 
Medicine The medical problems of the ancient Egyptians stemmed directly from their environment. Living and working close to the Nile brought hazards from malaria and debilitating schistosomiasis parasites, which caused liver and intestinal damage. Dangerous wildlife such as crocodiles and hippos were also a common threat. The lifelong labors of farming and building put stress on the spine and joints, and traumatic injuries from construction and warfare all took a significant toll on the body. The grit and sand from stone-ground flour abraded teeth, leaving them susceptible to abscesses (though caries were rare). The diets of the wealthy were rich in sugars, which promoted periodontal disease. Despite the flattering physiques portrayed on tomb walls, the overweight mummies of many of the upper class show the effects of a life of overindulgence. Adult life expectancy was about 35 for men and 30 for women, but reaching adulthood was difficult as about one-third of the population died in infancy. Ancient Egyptian physicians were renowned in the ancient Near East for their healing skills, and some, such as Imhotep, remained famous long after their deaths. Herodotus remarked that there was a high degree of specialization among Egyptian physicians, with some treating only the head or the stomach, while others were eye-doctors and dentists. Training of physicians took place at the Per Ankh or "House of Life" institution, most notably those headquartered in Per-Bastet during the New Kingdom and at Abydos and Saïs in the Late Period. Medical papyri show empirical knowledge of anatomy, injuries, and practical treatments. Wounds were treated by bandaging with raw meat, white linen, sutures, nets, pads, and swabs soaked with honey to prevent infection, while opium, thyme, and belladonna were used to relieve pain. The earliest records of burn treatment describe burn dressings that use the milk from mothers of male babies. Prayers were made to the goddess Isis.
Moldy bread, honey, and copper salts were also used to prevent infection from dirt in burns. Garlic and onions were used regularly to promote good health and were thought to relieve asthma symptoms. Ancient Egyptian surgeons stitched wounds, set broken bones, and amputated diseased limbs, but they recognized that some injuries were so serious that they could only make the patient comfortable until death occurred. Maritime technology Early Egyptians knew how to assemble planks of wood into a ship hull and had mastered advanced forms of shipbuilding as early as 3000BC. The Archaeological Institute of America reports that the oldest planked ships known are the Abydos boats. A group of 14 discovered ships in Abydos were constructed of wooden planks "sewn" together. Discovered by Egyptologist David O'Connor of New York University, woven straps were found to have been used to lash the planks together, and reeds or grass stuffed between the planks helped to seal the seams. Because the ships are all buried together and near a mortuary belonging to Pharaoh Khasekhemwy, originally they were all thought to have belonged to him, but one of the 14 ships dates to 3000BC, and the associated pottery jars buried with the vessels also suggest earlier dating. The ship dating to 3000BC is now thought to perhaps have belonged to an earlier pharaoh, perhaps one as early as Hor-Aha. Early Egyptians also knew how to assemble planks of wood with treenails to fasten them together, using pitch for caulking the seams. The "Khufu ship", a vessel sealed into a pit in the Giza pyramid complex at the foot of the Great Pyramid of Giza in the Fourth Dynasty around 2500BC, is a full-size surviving example that may have filled the symbolic function of a solar barque. Early Egyptians also knew how to fasten the planks of this ship together with mortise and tenon joints.
Large seagoing ships are known to have been heavily used by the Egyptians in their trade with the city states of the eastern Mediterranean, especially Byblos (on the coast of modern-day Lebanon), and in several expeditions down the Red Sea to the Land of Punt. In fact one of the earliest Egyptian words for a seagoing ship is a "Byblos Ship", which originally defined a class of Egyptian seagoing ships used on the Byblos run; however, by the end of the Old Kingdom, the term had come to include large seagoing ships, whatever their destination. In 1977, an ancient north–south canal was discovered extending from Lake Timsah to the Ballah Lakes. It was dated to the Middle Kingdom of Egypt by extrapolating dates of ancient sites constructed along its course. In 2011, archaeologists from Italy, the United States, and Egypt excavating a dried-up lagoon known as Mersa Gawasis have unearthed traces of an ancient harbor that once launched early voyages like Hatshepsut's Punt expedition onto the open ocean. Some of the site's most evocative evidence for the ancient Egyptians' seafaring prowess include large ship timbers and hundreds of feet of ropes, made from papyrus, coiled in huge bundles. In 2013, a team of Franco-Egyptian archaeologists discovered what is believed to be the world's oldest port, dating back about 4500 years, from the time of King Cheops on the Red Sea coast near Wadi el-Jarf (about 110 miles south of Suez). Mathematics The earliest attested examples of mathematical calculations date to the predynastic Naqada period, and show a fully developed numeral system. The importance of mathematics to an educated Egyptian is suggested by a New Kingdom fictional letter in which the writer proposes a scholarly competition between himself and another scribe regarding everyday calculation tasks such as accounting of land, labor, and grain. 
Texts such as the Rhind Mathematical Papyrus and the Moscow Mathematical Papyrus show that the ancient Egyptians could perform the four basic mathematical operations—addition, subtraction, multiplication, and division—use fractions, calculate the areas of rectangles, triangles, and circles and compute the volumes of boxes, columns and pyramids. They understood basic concepts of algebra and geometry, and could solve simple sets of simultaneous equations. Mathematical notation was decimal, and based on hieroglyphic signs for each power of ten up to one million. Each of these could be written as many times as necessary to add up to the desired number; so to write the number eighty or eight hundred, the symbol for ten or one hundred was written eight times respectively. Because their methods of calculation could not handle most fractions with a numerator greater than one, they had to write fractions as the sum of several fractions. For example, they resolved the fraction two-fifths into the sum of one-third + one-fifteenth. Standard tables of values facilitated this. Some common fractions, however, were written with a special glyph, such as the equivalent of the modern two-thirds. Ancient Egyptian mathematicians knew the Pythagorean theorem as an empirical formula. They were aware, for example, that a triangle had a right angle opposite the hypotenuse when its sides were in a 3–4–5 ratio. They were able to estimate the area of a circle by subtracting one-ninth from its diameter and squaring the result: Area ≈ [(8⁄9)D]² = (256⁄81)r² ≈ 3.16r², a reasonable approximation of the formula πr². The golden ratio seems to be reflected in many Egyptian constructions, including the pyramids, but its use may have been an unintended consequence of the ancient Egyptian practice of combining the use of knotted ropes with an intuitive sense of proportion and harmony. 
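The unit-fraction arithmetic and the circle-area rule described above can be illustrated with a short sketch. The greedy decomposition below is a modern method used for illustration; the Egyptians themselves worked from precomputed tables, such as the 2/n table of the Rhind papyrus, though the results often coincide:

```python
from fractions import Fraction

def unit_fractions(frac):
    """Decompose a fraction into distinct unit fractions using the
    modern greedy method (the Egyptians used lookup tables instead)."""
    terms = []
    while frac > 0:
        n = -(-frac.denominator // frac.numerator)  # smallest n with 1/n <= frac
        terms.append(Fraction(1, n))
        frac -= Fraction(1, n)
    return terms

# Two-fifths resolves to one-third + one-fifteenth, as in the papyri.
print(unit_fractions(Fraction(2, 5)))   # [Fraction(1, 3), Fraction(1, 15)]

# Circle area: subtract one-ninth from the diameter and square the result.
def egyptian_circle_area(diameter):
    return (Fraction(8, 9) * diameter) ** 2

# With D = 2r this gives (256/81)r^2, an implied "pi" of about 3.16.
print(float(egyptian_circle_area(2)))   # area for r = 1: about 3.16
```

For a diameter of 2 (radius 1), the rule yields 256/81 ≈ 3.1605, compared with π ≈ 3.1416.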
Population Estimates of the size of the population range from 1–1.5 million in the 3rd millennium BCE to possibly 2–3 million by the 1st millennium BCE, before growing significantly towards the end of that millennium. DNA In 2012, the DNA of the 20th dynasty mummies of Ramesses III and another mummy believed to be Ramesses III's son Pentawer were analyzed by Albert Zink, Yehia Z Gad and a team of researchers under Zahi Hawass, then Secretary General of the Supreme Council of Antiquities, Egypt. Genetic kinship analyses revealed identical haplotypes in both mummies. Using Whit Athey's haplogroup predictor, they identified the Y chromosomal haplogroup E1b1a (E-M2). In 2017, a team led by researchers from the University of Tuebingen and the Max Planck Institute for the Science of Human History in Jena tested the maternal DNA (mitochondrial) of 90 mummies from Abusir el-Meleq in northern Egypt (near Cairo), which was the first reliable data set obtained using high-throughput DNA sequencing methods. Additionally, three of the mummies were also analyzed for Y-DNA. Two were assigned to West Asian J and one to haplogroup E1b1b1, both common in North Africa. The researchers cautioned that the affinities of the examined ancient Egyptian specimens may not be representative of those of all ancient Egyptians since they were from a single archaeological site. While not conclusive, since the oldest of the mummies date only to the 18th–19th dynasties and the rest range from then up to the late Roman period, the study's authors said the Abusir el-Meleq mummies "closely resembled ancient and modern Near Eastern populations, especially those in the Levant", and that "the genetics of the mummies remained remarkably consistent within this range even as different powers—including Nubians, Greeks, and Romans—conquered the empire." A wide range of mtDNA haplogroups was found, including clades of J, U, H, HV, M, R0, R2, K, T, L, I, N, X, W. 
The authors of the study noted that the mummies at Abusir el-Meleq have 6–15% maternal sub-Saharan DNA while modern Egyptians have a little more sub-Saharan ancestry, 15% to 20%, suggesting some degree of influx after the end of the empire. Other genetic studies show greater levels of sub-Saharan African ancestry in modern southern Egyptian populations and anticipate that mummies from southern Egypt would show greater levels of sub-Saharan African ancestry. In 2018, the 4000-year-old mummified head of Djehutynakht, a governor in the Middle Kingdom of the 11th or 12th dynasty, was analyzed for mitochondrial DNA. The sequence of the mummy most closely resembles a U5a lineage from sample JK2903, a much more recent 2000-year-old skeleton from the Abusir el-Meleq site in Egypt, although no direct matches to the Djehutynakht sequence have been reported. Haplogroup U5 is also found in modern-day Berbers from the Siwa Oasis in Egypt. A 2008 article by C. Coudray, "The complex and diversified mitochondrial gene pool of Berber populations", recorded haplogroup U5 at 16.7% for the Siwa whereas haplogroup U6 is more common in other Berber populations to the west of Egypt. In 2020, Yehia Z Gad and other researchers of the Hawass team published results of an analysis of the maternal and paternal haplogroups of several 18th dynasty mummies, including Tutankhamun. Genetic analysis indicated the following haplogroups:
Tutankhamun – YDNA R1b / mtDNA K
Akhenaten – YDNA R1b / mtDNA K
Amenhotep III – YDNA R1b / mtDNA K
Yuya – YDNA G2a / mtDNA K
Tiye – mtDNA K
Thuya – mtDNA K
The clade of R1b was not determined. A high frequency of R1b1a2 (R-V88) (26.9%) was observed among the Berbers from the Siwa Oasis. This haplogroup reaches its highest frequency in northern Cameroon, northern Nigeria, Chad, and Niger. Legacy The culture and monuments of ancient Egypt have left a lasting legacy on the world. 
Egyptian civilization significantly influenced the Kingdom of Kush and Meroë, with both adopting Egyptian religious and architectural norms (hundreds of pyramids, 6–30 meters high, were built in Egypt and Sudan), as well as using Egyptian writing as the basis of the Meroitic script. Meroitic is the oldest written language in Africa, other than Egyptian, and was used from the 2nd century BC until the early 5th century AD. The cult of the goddess Isis, for example, became popular in the Roman Empire, as obelisks and other relics were transported back to Rome. The Romans also imported building materials from Egypt to erect Egyptian-style structures. Early historians such as Herodotus, Strabo, and Diodorus Siculus studied and wrote about the land, which Romans came to view as a place of mystery. During the Middle Ages and the Renaissance, Egyptian pagan culture was in decline after the rise of Christianity and later Islam, but interest in Egyptian antiquity continued in the writings of medieval scholars such as Dhul-Nun al-Misri and al-Maqrizi. In the seventeenth and eighteenth centuries, European travelers and tourists brought back antiquities and wrote stories of their journeys, leading to a wave of Egyptomania across Europe. This renewed interest sent collectors to Egypt, who took, purchased, or were given many important antiquities. Napoleon arranged the first studies in Egyptology when he brought some 150 scientists and artists to study and document Egypt's natural history, which was published in the Description de l'Égypte. In the 20th century, the Egyptian Government and archaeologists alike recognized the importance of cultural respect and integrity in excavations. The Ministry of Tourism and Antiquities (formerly Supreme Council of Antiquities) now approves and oversees all excavations, which are aimed at finding information rather than treasure. 
The council also supervises museums and monument reconstruction programs designed to preserve the historical legacy of Egypt. See also Glossary of ancient Egypt artifacts Index of ancient Egypt–related articles Outline of ancient Egypt List of ancient Egyptians List of Ancient Egyptian inventions and discoveries Archaeology of Ancient Egypt British school of diffusionism Notes Citations References Further reading :de:Lexikon der Ägyptologie External links BBC History: Egyptians provides a reliable general overview and further links World History Encyclopedia on Egypt Ancient Egyptian Science: A Source Book by Marshall Clagett, 1989 Ancient Egyptian Metallurgy A site that shows the history of Egyptian metalworking Napoleon on the Nile: Soldiers, Artists, and the Rediscovery of Egypt, Art History. Digital Egypt for Universities. Scholarly treatment with broad coverage and cross references (internal and external). Artifacts used extensively to illustrate topics. Priests of Ancient Egypt In-depth information about Ancient Egypt's priests, religious services and temples. Much picture material and bibliography. In English and German. Ancient Egypt at History.com UCLA Encyclopedia of Egyptology Ancient Egypt and the Role of Women by Dr Joann Fletcher Full-length account of Ancient Egypt as part of history of the world Ancient Egypt Civilizations Egypt Former empires in Asia Ancient peoples History of Egypt History of the Mediterranean
https://en.wikipedia.org/wiki/Analog%20Brothers
Analog Brothers
Analog Brothers were an experimental hip hop band featuring Tracy "Ice Oscillator" Marrow (Ice-T) on keyboards, drums and vocals, Keith "Keith Korg" Thornton (Ultramagnetic MCs' Kool Keith) on bass, strings and vocals, Marc "Mark Moog" Giveand (Raw Breed's Marc Live) on drums, violins and vocals, Christopher "Silver Synth" Rodgers (Black Silver) on synthesizer, lazar bell and vocals, and Rex Colonel "Rex Roland JX3P" Doby Jr. (Pimpin' Rex) on keyboards, vocals and production. Its album Pimp to Eat featured guest appearances by various members of Rhyme Syndicate, Odd Oberheim, Jacky Jasper (who appears as Jacky Jasper on the song "We Sleep Days" and H-Bomb on "War"), D.J. Cisco from S.M., Synth-A-Size Sisters and Teflon. While the group only recorded one album together as the Analog Brothers, a few bootlegs of its live concert performances, including freestyles with original lyrics, have occasionally surfaced online. After Pimp to Eat, the Analog Brothers continued performing together in various line-ups. Kool Keith and Marc Live joined with Jacky Jasper to release two albums as KHM. Marc Live rapped with Ice-T's group SMG. Marc also formed a group with Black Silver called Live Black, but while five of their tracks were released on a demo CD sold at concerts, Live Black's first album has yet to be released. In 2008, Ice-T and Black Silver toured together as Black Ice, and released an album together called Urban Legends. In 2013, Black Silver and the newest member of the Analog Brothers, Kiew Kurzweil (Kiew Nikon of Kinetic), collaborated on the joint album Slang Banging (Return to Analog), with production by Junkadelic Music. In addition, the Analog Brothers continue to make frequent appearances on each other's solo albums. Discography
2000 – 2005 A.D. (single), Ground Control Records/Nu Gruv
2000 – Pimp to Eat (LP), Ground Control Records/Mello Music Group
2014 – Slang Banging (Return to Analog), Junkadelic Music
References External links Kool Keith's Site Ultrakeith Analog Brothers at Discogs Ice-T American hip hop groups
https://en.wikipedia.org/wiki/Motor%20neuron%20disease
Motor neuron disease
Motor neuron diseases or motor neurone diseases (MNDs) are a group of rare neurodegenerative disorders that selectively affect motor neurons, the cells that control voluntary muscles of the body. They include amyotrophic lateral sclerosis (ALS), progressive bulbar palsy (PBP), pseudobulbar palsy, progressive muscular atrophy (PMA), primary lateral sclerosis (PLS), spinal muscular atrophy (SMA) and monomelic amyotrophy (MMA), as well as some rarer variants resembling ALS. Motor neuron diseases affect both children and adults. While each motor neuron disease affects patients differently, they all cause movement-related symptoms, mainly muscle weakness. Most of these diseases seem to occur randomly without known causes, but some forms are inherited. Studies into these inherited forms have led to discoveries of various genes (e.g. SOD1) that are thought to be important in understanding how the disease occurs. Symptoms of motor neuron diseases can be first seen at birth or can come on slowly later in life. Most of these diseases worsen over time; while some, such as ALS, shorten one's life expectancy, others do not. Currently, there are no approved treatments for the majority of motor neuron disorders, and care is mostly symptomatic. Signs and symptoms Signs and symptoms depend on the specific disease, but motor neuron diseases typically manifest as a group of movement-related symptoms. They come on slowly, and worsen over the course of more than three months. Various patterns of muscle weakness are seen, and muscle cramps and spasms may occur. One can have difficulty breathing on exertion, such as when climbing stairs, difficulty breathing when lying down (orthopnea), or even respiratory failure if breathing muscles become involved. Bulbar symptoms, including difficulty speaking (dysarthria), difficulty swallowing (dysphagia), and excessive saliva production (sialorrhea), can also occur. Sensation, or the ability to feel, is typically not affected. Emotional disturbance (e.g. 
pseudobulbar affect) and cognitive and behavioural changes (e.g. problems in word fluency, decision-making, and memory) are also seen. There can be lower motor neuron findings (e.g. muscle wasting, muscle twitching), upper motor neuron findings (e.g. brisk reflexes, Babinski reflex, Hoffman's reflex, increased muscle tone), or both. Motor neuron diseases are seen both in children and in adults. Those that affect children tend to be inherited or familial, and their symptoms are either present at birth or appear before learning to walk. Those that affect adults tend to appear after age 40. The clinical course depends on the specific disease, but most progress or worsen over the course of months. Some are fatal (e.g. ALS), while others are not (e.g. PLS). Patterns of weakness Various patterns of muscle weakness occur in different motor neuron diseases. Weakness can be symmetric or asymmetric, and it can occur in body parts that are distal, proximal, or both. According to Statland et al., there are three main weakness patterns that are seen in motor neuron diseases, which are:
Asymmetric distal weakness without sensory loss (e.g. ALS, PLS, PMA, MMA)
Symmetric weakness without sensory loss (e.g. PMA, PLS)
Symmetric focal midline proximal weakness (neck, trunk, bulbar involvement; e.g. ALS, PBP, PLS)
Lower and upper motor neuron findings Motor neuron diseases are on a spectrum in terms of upper and lower motor neuron involvement. Some have just lower or upper motor neuron findings, while others have a mix of both. Lower motor neuron (LMN) findings include muscle atrophy and fasciculations, and upper motor neuron (UMN) findings include hyperreflexia, spasticity, muscle spasm, and abnormal reflexes. Pure upper motor neuron diseases, or those with just UMN findings, include PLS. Pure lower motor neuron diseases, or those with just LMN findings, include PMA. Motor neuron diseases with both UMN and LMN findings include both familial and sporadic ALS. 
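The UMN/LMN spectrum described above can be restated as a small sketch. The mapping below encodes only the examples given in the text (PLS pure UMN, PMA pure LMN, ALS mixed) and is an illustration of the classification, not a diagnostic tool:

```python
# Which motor neuron populations show findings in each example disease,
# per the text: PLS is pure UMN, PMA is pure LMN, ALS shows both.
MND_FINDINGS = {
    "PLS": {"UMN"},
    "PMA": {"LMN"},
    "ALS": {"UMN", "LMN"},
}

def findings_pattern(disease):
    """Summarize a disease's position on the UMN/LMN spectrum."""
    findings = MND_FINDINGS[disease]
    if findings == {"UMN", "LMN"}:
        return "mixed UMN and LMN findings"
    return "pure " + min(findings) + " findings"

print(findings_pattern("PLS"))  # pure UMN findings
print(findings_pattern("ALS"))  # mixed UMN and LMN findings
```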
Causes Most cases are sporadic and their causes are usually not known. It is thought that environmental, toxic, viral, or genetic factors may be involved. DNA damage TARDBP (TAR DNA-binding protein 43), also referred to as TDP-43, is a critical component of the non-homologous end joining (NHEJ) enzymatic pathway that repairs DNA double-strand breaks in pluripotent stem cell-derived motor neurons. TDP-43 is rapidly recruited to double-strand breaks where it acts as a scaffold for the recruitment of the XRCC4-DNA ligase protein complex that then acts to repair double-strand breaks. About 95% of ALS patients have abnormalities in the nucleo-cytoplasmic localization of TDP-43 in spinal motor neurons. In TDP-43 depleted human neural stem cell-derived motor neurons, as well as in sporadic ALS patients' spinal cord specimens, there is significant double-strand break accumulation and reduced levels of NHEJ. Associated risk factors In adults, men are more commonly affected than women. Diagnosis Differential diagnosis can be challenging due to the number of overlapping symptoms shared between several motor neuron diseases. Frequently, the diagnosis is based on clinical findings (i.e. LMN vs. UMN signs and symptoms, patterns of weakness), family history of MND, and a variety of tests, many of which are used to rule out disease mimics, which can manifest with identical symptoms. Classification Motor neuron disease describes a collection of clinical disorders, characterized by progressive muscle weakness and the degeneration of the motor neuron on electrophysiological testing. As discussed above, the term "motor neuron disease" has varying meanings in different countries. Similarly, the literature inconsistently classifies which degenerative motor neuron disorders can be included under the umbrella term "motor neuron disease". The four main types of MND are marked (*) in the table below. 
All types of MND can be differentiated by two defining characteristics: Is the disease sporadic or inherited? Is there involvement of the upper motor neurons (UMN), the lower motor neurons (LMN), or both? Sporadic or acquired MNDs occur in patients with no family history of degenerative motor neuron disease. Inherited or genetic MNDs adhere to one of the following inheritance patterns: autosomal dominant, autosomal recessive, or X-linked. Some disorders, like ALS, can occur sporadically (85%) or can have a genetic cause (15%) with the same clinical symptoms and progression of disease. UMNs are motor neurons that project from the cortex down to the brainstem or spinal cord. LMNs originate in the anterior horns of the spinal cord and synapse on peripheral muscles. Both motor neurons are necessary for the strong contraction of a muscle, but damage to an UMN can be distinguished from damage to a LMN by physical exam. Tests Cerebrospinal fluid (CSF) tests: Analysis of the fluid from around the brain and spinal cord could reveal signs of an infection or inflammation. Magnetic resonance imaging (MRI): An MRI of the brain and spinal cord is recommended in patients with UMN signs and symptoms to explore other causes, such as a tumor, inflammation, or lack of blood supply (stroke). Electromyogram (EMG) & nerve conduction study (NCS): The EMG, which evaluates muscle function, and NCS, which evaluates nerve function, are performed together in patients with LMN signs. For patients with MND affecting the LMNs, the EMG will show evidence of: (1) acute denervation, which is ongoing as motor neurons degenerate, and (2) chronic denervation and reinnervation of the muscle, as the remaining motor neurons attempt to fill in for lost motor neurons. By contrast, the NCS in these patients is usually normal. It can show a low compound muscle action potential (CMAP), which results from the loss of motor neurons, but the sensory neurons should remain unaffected. 
Tissue biopsy: Taking a small sample of a muscle or nerve may be necessary if the EMG/NCS is not specific enough to rule out other causes of progressive muscle weakness, but it is rarely used. Treatment There are no known curative treatments for the majority of motor neuron disorders. Please refer to the articles on individual disorders for more details. Prognosis The table below lists life expectancy for patients who are diagnosed with MND. Terminology In the United States and Canada, the term motor neuron disease usually refers to the group of disorders while amyotrophic lateral sclerosis is frequently called Lou Gehrig's disease. In the United Kingdom and Australia, the term motor neuron(e) disease is used for amyotrophic lateral sclerosis, although it is not uncommon for it to refer to the entire group. While MND refers to a specific subset of similar diseases, there are numerous other diseases of motor neurons that are referred to collectively as "motor neuron disorders", for instance the diseases belonging to the spinal muscular atrophies group. However, they are not classified as "motor neuron diseases" by the 11th edition of the International Statistical Classification of Diseases and Related Health Problems (ICD-11), which is the definition followed in this article. See also Spinal muscular atrophies Hereditary motor and sensory neuropathies References External links Motor neuron diseases Rare diseases Systemic atrophies primarily affecting the central nervous system
https://en.wikipedia.org/wiki/Abjad
Abjad
An abjad (, ; also abgad) is a writing system in which only consonants are represented, leaving vowel sounds to be inferred by the reader. This contrasts with true alphabets, which provide glyphs for both consonants and vowels. The term was introduced in 1990 by Peter T. Daniels. Other terms for the same concept include: partial phonemic script, segmentally linear defective phonographic script, consonantary, consonant writing, and consonantal alphabet. Impure abjads represent vowels with either optional diacritics, a limited number of distinct vowel glyphs, or both. The name abjad is based on the Arabic alphabet's first (in its original order) four letters — corresponding to a, b, j, d — to replace the more common terms "consonantary" and "consonantal alphabet", in describing the family of scripts classified as "West Semitic". Etymology The name "abjad" is derived from pronouncing the first letters of the Arabic alphabet order, in its original order. This ordering matches that of the older Phoenician, Hebrew and Semitic proto-alphabets: specifically, aleph, bet, gimel, dalet. Terminology According to the formulations of Peter T. Daniels, abjads differ from alphabets in that only consonants, not vowels, are represented among the basic graphemes. Abjads differ from abugidas, another category defined by Daniels, in that in abjads, the vowel sound is implied by phonology, and where vowel marks exist for the system, such as nikkud for Hebrew and ḥarakāt for Arabic, their use is optional and not the dominant (or literate) form. Abugidas mark all vowels (other than the "inherent" vowel) with a diacritic, a minor attachment to the letter, a standalone glyph, or (in Canadian Aboriginal syllabics) by rotation of the letter. Some abugidas use a special symbol to suppress the inherent vowel so that the consonant alone can be properly represented. 
In a syllabary, a grapheme denotes a complete syllable, that is, either a lone vowel sound or a combination of a vowel sound with one or more consonant sounds. The antagonism of abjad versus alphabet, as it was formulated by Daniels, has been rejected by some scholars because abjad is also used as a term not only for the Arabic numeral system but (most importantly in terms of historical grammatology) also as a term for the alphabetic device (i.e. letter order) of ancient Northwest Semitic scripts in opposition to the 'south Arabian' order. This has had detrimental effects on terminology in general, and especially in (ancient) Semitic philology. Also, it suggests that consonantal alphabets, in opposition to, for instance, the Greek alphabet, were not yet true alphabets and not yet entirely complete, lacking something important to be a fully working script system. It has also been objected that, as a set of letters, an alphabet is not the mirror of what should be there in a language from a phonological point of view; rather, it is the data stock of what provides maximum efficiency with least effort from a semantic point of view. Origins The first abjad to gain widespread usage was the Phoenician abjad. Unlike other contemporary scripts, such as cuneiform and Egyptian hieroglyphs, the Phoenician script consisted of only a few dozen symbols. This made the script easy to learn, and seafaring Phoenician merchants took the script throughout the then-known world. The Phoenician abjad was a radical simplification of phonetic writing, since hieroglyphics required the writer to pick a hieroglyph starting with the same sound that the writer wanted to write in order to write phonetically, much as man'yōgana (kanji used solely for phonetic use) was used to represent Japanese phonetically before the invention of kana. Phoenician gave rise to a number of new writing systems, including the widely used Aramaic abjad and the Greek alphabet. 
The Greek alphabet evolved into the modern western alphabets, such as Latin and Cyrillic, while Aramaic became the ancestor of many modern abjads and abugidas of Asia. Impure abjads Impure abjads have characters for some vowels, optional vowel diacritics, or both. The term pure abjad refers to scripts entirely lacking in vowel indicators. However, most modern abjads, such as Arabic, Hebrew, Aramaic, and Pahlavi, are "impure" abjads; that is, they also contain symbols for some of the vowel phonemes, although the said non-diacritic vowel letters are also used to write certain consonants, particularly approximants that sound similar to long vowels. A "pure" abjad is exemplified (perhaps) by very early forms of ancient Phoenician, though at some point (at least by the 9th century BC) it and most of the contemporary Semitic abjads had begun to overload a few of the consonant symbols with a secondary function as vowel markers, called matres lectionis. This practice was at first rare and limited in scope but became increasingly common and more developed in later times. Addition of vowels In the 9th century BC the Greeks adapted the Phoenician script for use in their own language. The phonetic structure of the Greek language created too many ambiguities when vowels went unrepresented, so the script was modified. They did not need letters for the guttural sounds represented by aleph, he, heth or ayin, so these symbols were assigned vocalic values. The letters waw and yod were also adapted into vowel signs; along with he, these were already used as matres lectionis in Phoenician. The major innovation of Greek was to dedicate these symbols exclusively and unambiguously to vowel sounds that could be combined arbitrarily with consonants (as opposed to syllabaries such as Linear B which usually have vowel symbols but cannot combine them with consonants to form arbitrary syllables). Abugidas developed along a slightly different route. 
The basic consonantal symbol was considered to have an inherent "a" vowel sound. Hooks or short lines attached to various parts of the basic letter modify the vowel. In this way, the South Arabian abjad evolved into the Ge'ez abugida of Ethiopia between the 5th century BC and the 5th century AD. Similarly, the Brāhmī abugida of the Indian subcontinent developed around the 3rd century BC (from the Aramaic abjad, it has been hypothesized). The other major family of abugidas, Canadian Aboriginal syllabics, was initially developed in the 1840s by missionary and linguist James Evans for the Cree and Ojibwe languages. Evans used features of Devanagari script and Pitman shorthand to create his initial abugida. Later in the 19th century, other missionaries adapted Evans's system to other Canadian aboriginal languages. Canadian syllabics differ from other abugidas in that the vowel is indicated by rotation of the consonantal symbol, with each vowel having a consistent orientation. Abjads and the structure of Semitic languages The abjad form of writing is well-adapted to the morphological structure of the Semitic languages it was developed to write. This is because words in Semitic languages are formed from a root consisting of (usually) three consonants, the vowels being used to indicate inflectional or derived forms. For instance, according to Classical Arabic and Modern Standard Arabic, from the Arabic root Dh-B-Ḥ (to slaughter) can be derived the forms (he slaughtered), (you (masculine singular) slaughtered), (he slaughters), and (slaughterhouse). In most cases, the absence of full glyphs for vowels makes the common root clearer, allowing readers to guess the meaning of unfamiliar words from familiar roots (especially in conjunction with context clues) and improving word recognition while reading for practiced readers. 
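The root-and-pattern morphology described above can be sketched mechanically. The example below uses the well-known Semitic root k-t-b ("write") in Latin transliteration rather than the Dh-B-Ḥ forms of the text; the "C" placeholder patterns are illustrative transliterations, not a standard notation:

```python
def apply_pattern(root, pattern):
    """Interleave root consonants into a vocalic pattern, where each
    'C' placeholder is filled by the next root consonant in order."""
    consonants = iter(root)
    return "".join(next(consonants) if ch == "C" else ch for ch in pattern)

root = ("k", "t", "b")                 # the root k-t-b, "write"
print(apply_pattern(root, "CaCaCa"))   # kataba: "he wrote"
print(apply_pattern(root, "CiCāC"))    # kitāb:  "book"
print(apply_pattern(root, "CuCuC"))    # kutub:  "books"
# In an abjad, all three words share the written skeleton k-t-b,
# which is why the common root stays visible to the reader.
```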
By contrast, the Arabic and Hebrew scripts sometimes perform the role of true alphabets rather than abjads when used to write certain Indo-European languages, including Kurdish, Bosnian, and Yiddish. Comparative chart of Abjads, extinct and extant See also Abjad numerals (Arabic alphanumeric code) Abugida Gematria (Hebrew & English system of alphanumeric code) Numerology Shorthand (constructed writing systems that are structurally abjads) References Sources External links The Science of Arabic Letters, Abjad and Geometry, by Jorge Lupin Arabic orthography
https://en.wikipedia.org/wiki/Abugida
Abugida
An abugida (, from Ge'ez: ), sometimes known as alphasyllabary, neosyllabary or pseudo-alphabet, is a segmental writing system in which consonant-vowel sequences are written as units; each unit is based on a consonant letter, and vowel notation is secondary. This contrasts with a full alphabet, in which vowels have status equal to consonants, and with an abjad, in which vowel marking is absent, partial, or optional (although in less formal contexts, all three types of script may be termed alphabets). The terms also contrast them with a syllabary, in which the symbols cannot be split into separate consonants and vowels. Related concepts were introduced independently in 1948 by James Germain Février (using the term ) and David Diringer (using the term semisyllabary), then in 1959 by Fred Householder (introducing the term pseudo-alphabet). The Ethiopic term "abugida" was chosen as a designation for the concept in 1990 by Peter T. Daniels. In 1992, Faber suggested "segmentally coded syllabically linear phonographic script", and in 1992 Bright used the term alphasyllabary, and Gnanadesikan and Rimzhim, Katz, & Fowler have suggested aksara or āksharik. Abugidas include the extensive Brahmic family of scripts of Tibet, South and Southeast Asia, Semitic Ethiopic scripts, and Canadian Aboriginal syllabics. As is the case for syllabaries, the units of the writing system may consist of the representations both of syllables and of consonants. For scripts of the Brahmic family, the term akshara is used for the units. Terminology In several languages of Ethiopia and Eritrea, abugida traditionally meant letters of the Ethiopic or Ge‘ez script in which many of these languages are written. Ge'ez is one of several segmental writing systems in the world; others include Indic/Brahmic scripts and Canadian Aboriginal Syllabics. 
The word abugida is derived from the four letters, ä, bu, gi, and da, in much the same way that abecedary is derived from Latin letters a be ce de, abjad is derived from the Arabic a b j d, and alphabet is derived from the names of the first two letters of the Greek alphabet, alpha and beta. Abugida as a term in linguistics was proposed by Peter T. Daniels in his 1990 typology of writing systems. As Daniels used the word, an abugida is in contrast with a syllabary, where letters with shared consonants or vowels show no particular resemblance to one another, and also with an alphabet proper, where independent letters are used to denote both consonants and vowels. The term alphasyllabary was suggested for the Indic scripts in 1997 by William Bright, following South Asian linguistic usage, to convey the idea that "they share features of both alphabet and syllabary." General description The formal definitions given by Daniels and Bright for abugida and alphasyllabary differ; some writing systems are abugidas but not alphasyllabaries, and some are alphasyllabaries but not abugidas. An abugida is defined as "a type of writing system whose basic characters denote consonants followed by a particular vowel, and in which diacritics denote other vowels". (This 'particular vowel' is referred to as the inherent or implicit vowel, as opposed to the explicit vowels marked by the 'diacritics'.) An alphasyllabary is defined as "a type of writing system in which the vowels are denoted by subsidiary symbols not all of which occur in a linear order (with relation to the consonant symbols) that is congruent with their temporal order in speech". Bright did not require that an alphabet explicitly represent all vowels. ʼPhags-pa is an example of an abugida that is not an alphasyllabary, and modern Lao is an example of an alphasyllabary that is not an abugida, for its vowels are always explicit. The following description is expressed in terms of an abugida. 
Formally, an alphasyllabary that is not an abugida can be converted to an abugida by adding a purely formal vowel sound that is never used and declaring it to be the inherent vowel of the letters representing consonants. This may formally make the system ambiguous, but in practice this is not a problem, since any interpretation using the never-used inherent vowel will always be wrong. Note that the actual pronunciation may be complicated by interactions between the sounds apparently written, just as the sounds of the letters in the English words wan, gem and war are affected by neighbouring letters. The fundamental principles of an abugida apply to words made up of consonant-vowel (CV) syllables. The syllables are written as linear sequences of the units of the script. Each syllable is either a letter that represents the sound of a consonant and its inherent vowel or a letter modified to indicate the vowel, either by means of diacritics or by changes in the form of the letter itself. If all modifications are by diacritics and all diacritics follow the direction of the writing of the letters, then the abugida is not an alphasyllabary. However, most languages have words that are more complicated than a sequence of CV syllables, even ignoring tone. The first complication is syllables that consist of just a vowel (V). This issue does not arise in languages in which every syllable starts with a consonant, as is common in Semitic languages and in the languages of mainland Southeast Asia. For some languages, a zero consonant letter is used as though every syllable began with a consonant. For other languages, each vowel has a separate letter that is used for each syllable consisting of just the vowel. These letters are known as independent vowels, and are found in most Indic scripts. These letters may be quite different from the corresponding diacritics, which by contrast are known as dependent vowels. 
As a result of the spread of writing systems, independent vowels may be used to represent syllables beginning with a glottal stop, even for non-initial syllables. The next two complications are sequences of consonants before a vowel (CCV) and syllables ending in a consonant (CVC). The simplest solution, which is not always available, is to break with the principle of writing words as a sequence of syllables and use a unit representing just a consonant (C). This unit may be represented with: a modification that explicitly indicates the lack of a vowel (virama), a lack of vowel marking (often with ambiguity between no vowel and a default inherent vowel), vowel marking for a short or neutral vowel such as schwa (with ambiguity between no vowel and that short or neutral vowel), or a visually unrelated letter. In a true abugida, the lack of distinctive marking may result from the diachronic loss of the inherent vowel, e.g. by syncope and apocope in Hindi. When not handled by decomposition into C + CV, CCV syllables are handled by combining the two consonants. In the Indic scripts, the earliest method was simply to arrange them vertically, but the two consonants may merge as a conjunct consonant letter, in which two or more letters are graphically joined in a ligature, or may otherwise change their shapes. Rarely, one of the consonants may be replaced by a gemination mark, e.g. the Gurmukhi addak. When they are arranged vertically, as in Burmese or Khmer, they are said to be 'stacked'. Often there has been a change to writing the two consonants side by side. In the latter case, the fact of combination may be indicated by a diacritic on one of the consonants or a change in the form of one of the consonants, e.g. the half forms of Devanagari. Generally, the reading order is top to bottom or the general reading order of the script, but sometimes the order is reversed. 
The division of a word into syllables for the purposes of writing does not always accord with the natural phonetics of the language. For example, Brahmic scripts commonly handle a phonetic sequence CVC-CV as CV-CCV or CV-C-CV. However, sometimes phonetic CVC syllables are handled as single units, and the final consonant may be represented:
- in much the same way as the second consonant in CCV, e.g. in the Tibetan, Khmer and Tai Tham scripts (the positioning of the components may be slightly different, as in Khmer and Tai Tham);
- by a special dependent consonant sign, which may be a smaller or differently placed version of the full consonant letter, or may be a distinct sign altogether;
- not at all: for example, repeated consonants need not be represented, homorganic nasals may be ignored, and in Philippine scripts the syllable-final consonant was traditionally never represented.
More complicated unit structures (e.g. CC or CCVC) are handled by combining the various techniques above. Family-specific features There are three principal families of abugidas, depending on whether vowels are indicated by modifying consonants by diacritics, distortion, or orientation. The oldest and largest is the Brahmic family of India and Southeast Asia, in which vowels are marked with diacritics and syllable-final consonants, when they occur, are indicated with ligatures, diacritics, or with a special vowel-canceling mark. In the Ethiopic family, vowels are marked by modifying the shapes of the consonants, and one of the vowel-forms serves additionally to indicate final consonants. In Canadian Aboriginal syllabics, vowels are marked by rotating or flipping the consonants, and final consonants are indicated with either special diacritics or superscript forms of the main initial consonants. Tāna of the Maldives has dependent vowels and a zero vowel sign, but no inherent vowel. 
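The Canadian Aboriginal syllabics principle just described (one character per consonant-vowel pair, with the vowel shown by the glyph's orientation) can be sketched as a small decoding table. The `romanize` helper and its tiny table are hypothetical illustrations, covering only a few Inuktitut p- and t-series characters:

```python
# In Canadian Aboriginal syllabics, each (consonant, vowel) pair is a single
# character; the vowel is indicated by the orientation of the glyph rather
# than by a separable mark, so decoding needs a lookup table.
SYLLABICS = {
    "ᐱ": ("p", "i"), "ᐳ": ("p", "u"), "ᐸ": ("p", "a"),
    "ᑎ": ("t", "i"), "ᑐ": ("t", "u"), "ᑕ": ("t", "a"),
}

def romanize(text: str) -> str:
    """Transliterate a string of the syllabics above into Latin letters."""
    return "".join(c + v for c, v in (SYLLABICS[ch] for ch in text))

print(romanize("ᐱᑐ"))  # pitu
```

Note that, unlike in an abugida proper, no piece of the character can be factored out as "the consonant"; the orientation carries the vowel.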
Indic (Brahmic) Indic scripts originated in India and spread to Southeast Asia, Bangladesh, Sri Lanka, Nepal, Bhutan, Tibet, Mongolia, and Russia. All surviving Indic scripts are descendants of the Brahmi alphabet. Today they are used in most languages of South Asia (although replaced by Perso-Arabic in Urdu, Kashmiri and some other languages of Pakistan and India), mainland Southeast Asia (Myanmar, Thailand, Laos, Cambodia, and Vietnam), Tibet (Tibetan), the Indonesian archipelago (Javanese, Balinese, Sundanese), the Philippines (Baybayin, Buhid, Hanunuo, Kulitan, and Aborlan Tagbanwa), and Malaysia (Rencong, etc.). The primary division is into North Indic scripts, used in Northern India, Nepal, Tibet, Bhutan, Mongolia, and Russia, and Southern Indic scripts, used in South India, Sri Lanka and Southeast Asia. South Indic letter forms are very rounded; North Indic less so, though Odia, Golmol and Litumol of Nepal script are rounded. Most North Indic scripts' full letters incorporate a horizontal line at the top, with Gujarati and Odia as exceptions; South Indic scripts do not. Indic scripts indicate vowels through dependent vowel signs (diacritics) around the consonants, often including a sign that explicitly indicates the lack of a vowel. If a consonant has no vowel sign, this indicates a default vowel. Vowel diacritics may appear above, below, to the left, to the right, or around the consonant. The most widely used Indic script is Devanagari, shared by Hindi, Bihari, Marathi, Konkani, Nepali, and often Sanskrit. A basic letter such as क in Hindi represents a syllable with the default vowel, in this case ka. In some languages, including Hindi, it becomes a final closing consonant at the end of a word, in this case k. The inherent vowel may be changed by adding vowel marks (diacritics), producing syllables such as कि ki, कु ku, के ke, को ko. 
In many of the Brahmic scripts, a syllable beginning with a cluster is treated as a single character for purposes of vowel marking, so a vowel marker like ि -i, falling before the character it modifies, may appear several positions before the place where it is pronounced. For example, the word cricket in Hindi is क्रिकेट; the diacritic for i appears before the consonant cluster kr, not before the r. A more unusual example is seen in the Batak alphabet: here the syllable bim is written ba-ma-i-(virama). That is, the vowel diacritic and virama are both written after the consonants for the whole syllable. In many abugidas, there is also a diacritic to suppress the inherent vowel, yielding the bare consonant. In Devanagari, क् is k, and ल् is l. This is called the virāma or halantam in Sanskrit. It may be used to form consonant clusters, or to indicate that a consonant occurs at the end of a word. Thus in Sanskrit, a default vowel consonant such as क does not take on a final consonant sound. Instead, it keeps its vowel. For writing two consonants without a vowel in between, instead of using diacritics on the first consonant to remove its vowel, another popular method is the use of special conjunct forms, in which two or more consonant characters are merged to express a cluster, such as Devanagari क्ल kla. (Note that some fonts display this as क् followed by ल, rather than forming a conjunct. This expedient is used by ISCII and South Asian scripts of Unicode.) Thus a closed syllable such as kal requires two aksharas to write. The Róng script used for the Lepcha language goes further than other Indic abugidas, in that a single akshara can represent a closed syllable: not only the vowel, but any final consonant is indicated by a diacritic. For example, the syllable [sok] would be written as something like s̥̽, here with an underring representing the vowel [o] and an overcross representing the diacritic for final [k]. 
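The Devanagari mechanics above map directly onto Unicode, where the dependent vowel signs and the virama are separate combining characters that follow the consonant letter in memory, whatever their visual position. A minimal sketch (the constant names are mine, not standard identifiers):

```python
# Devanagari in Unicode: each consonant letter carries the inherent vowel /a/;
# dependent vowel signs and the virama are separate combining code points.
KA = "\u0915"      # क  consonant letter KA (inherent vowel: ka)
LA = "\u0932"      # ल  consonant letter LA
SIGN_I = "\u093F"  # ि  dependent vowel sign I (rendered to the LEFT of the consonant)
SIGN_U = "\u0941"  # ु  dependent vowel sign U
VIRAMA = "\u094D"  # ्  suppresses the inherent vowel

# ka + vowel sign -> ki, ku; the renderer places the sign around the letter
assert KA + SIGN_I == "कि"
assert KA + SIGN_U == "कु"

# bare consonant k: ka + virama
assert KA + VIRAMA == "क्"

# conjunct kla: ka + virama + la; a font may draw this as a ligature
assert KA + VIRAMA + LA == "क्ल"
```

Note that SIGN_I is stored after the consonant even though it is drawn before it, mirroring the article's point that the diacritic's visual order need not match its phonetic order.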
Most other Indic abugidas can only indicate a very limited set of final consonants with diacritics, if they can indicate any at all. Ethiopic In Ethiopic or Ge'ez script, fidels (individual "letters" of the script) have "diacritics" that are fused with the consonants to the point that they must be considered modifications of the form of the letters. Children learn each modification separately, as in a syllabary; nonetheless, the graphic similarities between syllables with the same consonant are readily apparent, unlike the case in a true syllabary. Though now an abugida, the Ge'ez script, until the advent of Christianity (ca. AD 350), had originally been what would now be termed an abjad. In the Ge'ez abugida (or fidel), the base form of the letter (also known as fidel) may be altered. For example, ሀ hä (base form), ሁ hu (with a right-side diacritic that doesn't alter the letter), ሂ hi (with a subdiacritic that compresses the consonant, so it is the same height), ህ hə (where the letter is modified with a kink in the left arm). Canadian Aboriginal syllabics In the family known as Canadian Aboriginal syllabics, which was inspired by the Devanagari script of India, vowels are indicated by changing the orientation of the syllabogram. Each vowel has a consistent orientation; for example, Inuktitut ᐱ pi, ᐳ pu, ᐸ pa; ᑎ ti, ᑐ tu, ᑕ ta. Although there is a vowel inherent in each, all rotations have equal status and none can be identified as basic. Bare consonants are indicated either by separate diacritics, or by superscript versions of the aksharas; there is no vowel-killer mark. Borderline cases Vowelled abjads Consonantal scripts ("abjads") are normally written without indication of many vowels. However, in some contexts like teaching materials or scriptures, Arabic and Hebrew are written with full indication of vowels via diacritic marks (harakat, niqqud), making them effectively alphasyllabaries. 
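The pointed-Arabic case just described is visible directly in Unicode, where the harakat are combining marks added after consonant letters (a small sketch; the constant names are mine):

```python
# With harakat, Arabic writes short vowels as combining marks over or under
# consonant letters, so fully pointed text pairs every consonant with an
# explicit vowel mark, much like an alphasyllabary.
BEH = "\u0628"    # ب  Arabic letter beh
FATHA = "\u064E"  # combining mark for short /a/
DAMMA = "\u064F"  # combining mark for short /u/
KASRA = "\u0650"  # combining mark for short /i/

# consonant + vowel mark are separate code points that render as one unit
assert BEH + FATHA == "بَ"   # ba
assert BEH + DAMMA == "بُ"   # bu
assert len(BEH + KASRA) == 2
```

Unpointed text simply omits the marks, which is what makes the script an abjad in ordinary use.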
The Brahmic and Ethiopic families are thought to have originated from the Semitic abjads by the addition of vowel marks. The Arabic scripts used for Kurdish in Iraq and for Uyghur in Xinjiang, China, as well as the Hebrew script of Yiddish, are fully vowelled, but because the vowels are written with full letters rather than diacritics (with the exception of distinguishing between /a/ and /o/ in the latter) and there are no inherent vowels, these are considered alphabets, not abugidas. Phagspa The imperial Mongol script called Phagspa was derived from the Tibetan abugida, but all vowels are written in-line rather than as diacritics. However, it retains the features of having an inherent vowel /a/ and having distinct initial vowel letters. Pahawh Pahawh Hmong is a script that indicates syllable onsets and rimes, such as consonant clusters and vowels with final consonants. Thus it is not segmental and cannot be considered an abugida. However, it superficially resembles an abugida with the roles of consonant and vowel reversed. Most syllables are written with two letters in the order rime–onset (typically vowel-consonant), even though they are pronounced as onset-rime (consonant-vowel), rather like the position of the vowel in Devanagari, which is written before the consonant. Pahawh is also unusual in that, while an inherent rime (with mid tone) is unwritten, it also has an inherent onset. For a syllable that requires one or the other of the inherent sounds to be overt, it is the rime that is written. Thus it is the rime (vowel) that is basic to the system. Meroitic It is difficult to draw a dividing line between abugidas and other segmental scripts. For example, the Meroitic script of ancient Sudan did not indicate an inherent a (one symbol stood for both m and ma, for example), and is thus similar to the Brahmic family of abugidas. 
However, the other vowels were indicated with full letters, not diacritics or modification, so the system was essentially an alphabet that did not bother to write the most common vowel. Shorthand Several systems of shorthand use diacritics for vowels, but they do not have an inherent vowel, and are thus more similar to Thaana and Kurdish script than to the Brahmic scripts. The Gabelsberger shorthand system and its derivatives modify the following consonant to represent vowels. The Pollard script, which was based on shorthand, also uses diacritics for vowels; the placements of the vowel relative to the consonant indicates tone. Pitman shorthand uses straight strokes and quarter-circle marks in different orientations as the principal "alphabet" of consonants; vowels are shown as light and heavy dots, dashes and other marks in one of 3 possible positions to indicate the various vowel-sounds. However, to increase writing speed, Pitman has rules for "vowel indication" using the positioning or choice of consonant signs so that writing vowel-marks can be dispensed with. Development As the term alphasyllabary suggests, abugidas have been considered an intermediate step between alphabets and syllabaries. Historically, abugidas appear to have evolved from abjads (vowelless alphabets). They contrast with syllabaries, where there is a distinct symbol for each syllable or consonant-vowel combination, and where these have no systematic similarity to each other, and typically develop directly from logographic scripts. Compare the examples above to sets of syllables in the Japanese hiragana syllabary: か ka, き ki, く ku, け ke, こ ko have nothing in common to indicate k; while ら ra, り ri, る ru, れ re, ろ ro have neither anything in common for r, nor anything to indicate that they have the same vowels as the k set. 
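The contrast with a syllabary shows up even at the encoding level: each hiragana syllable is an atomic character with no internal "k" component to factor out, while an abugida syllable such as Devanagari कि is literally a consonant letter plus a vowel mark. An illustrative check using the standard library:

```python
import unicodedata

# Hiragana: one indivisible code point per syllable; nothing marks the
# shared consonant of ka, ki, ku, ke, ko.
for ch in "かきくけこ":
    assert unicodedata.decomposition(ch) == ""  # no consonant + vowel parts

# Devanagari: the corresponding syllable factors into consonant + vowel sign.
ki = "कि"  # consonant letter KA + dependent vowel sign I
assert [f"{ord(c):04X}" for c in ki] == ["0915", "093F"]
```

(The hiragana decompositions that do exist, such as が = か + voicing mark, encode voicing, not the consonant-vowel split.)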
Most Indian and Indochinese abugidas appear to have first been developed from abjads with the Kharoṣṭhī and Brāhmī scripts; the abjad in question is usually considered to be the Aramaic one, but while the link between Aramaic and Kharosthi is more or less undisputed, this is not the case with Brahmi. The Kharosthi family does not survive today, but Brahmi's descendants include most of the modern scripts of South and Southeast Asia. Ge'ez derived from a different abjad, the Sabean script of Yemen; the advent of vowels coincided with the introduction or adoption of Christianity about AD 350. The Ethiopic script is the elaboration of an abjad. The Cree syllabary was invented with full knowledge of the Devanagari system. The Meroitic script was developed from Egyptian hieroglyphs, within which various schemes of 'group writing' had been used for showing vowels. List of abugidas
Brahmic family, descended from Brāhmī (c. 6th century BC):
- Ahom
- Assamese
- Balinese
- Batak – Toba and other Batak languages
- Baybayin – Ilocano, Pangasinan, Tagalog, Bikol languages, Visayan languages, and possibly other Philippine languages
- Bengali – Bengali, Assamese, Meitei, Bishnupriya Manipuri, Kokborok, Khasi, Bodo
- Bhaiksuki
- Brahmi – Sanskrit, Prakrit
- Buhid
- Burmese – Burmese, Karen languages, Mon, and Shan
- Chakma
- Cham
- Devanagari – Hindi, Sanskrit, Marathi, Nepali, Konkani and other languages of northern India
- Dhives Akuru
- Grantha – Sanskrit
- Gujarati – Gujarāti, Kachchi
- Gurmukhi script – Punjabi
- Hanunó’o
- Javanese
- Kaganga – Lampung, Rencong, Rejang
- Kaithi – Bhojpuri and other languages of northern and eastern India
- Kannada – Kannada, Tulu, Konkani, Kodava
- Kawi
- Khmer
- Khojki
- Khotanese
- Khudawadi
- Kolezhuthu – Tamil, Malayalam
- Kulitan
- Lao
- Leke
- Lepcha
- Limbu
- Lontara' – Buginese, Makassar, and Mandar
- Mahajani
- Malayalam
- Malayanma – Malayalam
- Marchen – Zhang-Zhung
- Meetei Mayek
- Meroitic
- Modi – Marathi
- Multani – Saraiki
- Nandinagari – Sanskrit
- Newar – Nepal Bhasa, Sanskrit
- New Tai Lue
- Odia
- Pallava script – Tamil, Sanskrit, various Prakrits
- Phags-pa – Mongolian, Chinese, and other languages of the Yuan dynasty Mongol Empire
- Ranjana – Nepal Bhasa, Sanskrit
- Sharada – Sanskrit
- Siddham – Sanskrit
- Sinhala
- Sourashtra
- Soyombo
- Sundanese
- Sylheti Nagri – Sylheti
- Tagbanwa – Palawan languages
- Tai Dam
- Tai Le
- Tai Tham – Khün and Northern Thai
- Takri
- Tamil
- Telugu
- Thai
- Tibetan
- Tigalari – Sanskrit, Tulu
- Tirhuta – Maithili
- Tocharian
- Vatteluttu – Tamil, Malayalam
- Zanabazar Square
- Zhang zhung scripts
Kharoṣṭhī, from the 3rd century BC
Ge'ez, from the 4th century AD
Canadian Aboriginal syllabics:
- Cree–Ojibwe syllabics
- Blackfoot syllabics
- Carrier syllabics
- Inuktitut syllabics
Pollard script
Pitman shorthand
Fictional:
- Tengwar
- Ihathvé Sabethired
Abugida-like scripts:
- Meroitic (an alphabet with an inherent vowel) – Meroitic, Old Nubian (possibly)
- Thaana (abugida with no inherent vowel)
References External links Syllabic alphabets – Omniglot's list of abugidas, including examples of various writing systems Alphabets – list of abugidas and other scripts (in Spanish) Comparing Devanagari with Burmese, Khmer, Thai, and Tai Tham scripts
880
https://en.wikipedia.org/wiki/ABBA
ABBA
ABBA are a Swedish pop group formed in Stockholm in 1972 by Agnetha Fältskog, Björn Ulvaeus, Benny Andersson, and Anni-Frid Lyngstad. The group's name is an acronym of the first letters of their first names arranged as a palindrome. One of the most popular and successful musical groups of all time, they became one of the best-selling music acts in the history of popular music, topping the charts worldwide from 1974 to 1983, and in 2021. In 1974, ABBA were Sweden's first winner of the Eurovision Song Contest with the song "Waterloo", which in 2005 was chosen as the best song in the competition's history as part of the 50th anniversary celebration of the contest. During the band's main active years, it consisted of two married couples: Fältskog and Ulvaeus, and Lyngstad and Andersson. As their popularity grew, their personal lives suffered, which eventually resulted in the collapse of both marriages. The relationship changes were reflected in the group's music, with later compositions featuring darker and more introspective lyrics. After ABBA separated in December 1982, Andersson and Ulvaeus continued their success writing music for the stage, musicals and films, while Fältskog and Lyngstad pursued solo careers. Ten years after the group broke up, a compilation, ABBA Gold, was released, becoming a worldwide best-seller. In 1999, ABBA's music was adapted into Mamma Mia!, a successful musical that toured worldwide and, as of November 2021, is still among the top-ten longest-running productions on both Broadway (closed in 2015) and the West End (still running). A film of the same name, released in 2008, became the highest-grossing film in the United Kingdom that year. A sequel, Mamma Mia! Here We Go Again, was released in 2018. In 2016, the group reunited and started working on a digital avatar concert tour. Newly recorded songs were announced in 2018. Voyage, their first new album in 40 years, was released on 5 November 2021. 
ABBA Voyage, a concert residency featuring ABBA as virtual avatars – dubbed 'ABBAtars' – is due to take place in London from May to December 2022. ABBA are one of the best-selling music artists of all time, with record sales estimated at between 150 million and 385 million sold worldwide, and the group were ranked the third-best-selling singles artists in the United Kingdom, with a total of 11.3 million singles sold by 3 November 2012. ABBA were the first group from a non-English-speaking country to achieve consistent success in the charts of English-speaking countries, including the United States, United Kingdom, Republic of Ireland, Canada, Australia, New Zealand and South Africa. They are the best-selling Swedish band of all time and the best-selling band originating in continental Europe. ABBA had eight consecutive number-one albums in the UK. The group also enjoyed significant success in Latin America and recorded a collection of their hit songs in Spanish. ABBA were inducted into the Vocal Group Hall of Fame in 2002. The group were inducted into the Rock and Roll Hall of Fame in 2010, the first and only recording artists from outside an Anglophone country to receive this honour. In 2015, their song "Dancing Queen" was inducted into the Recording Academy's Grammy Hall of Fame. History 1958–1970: Before ABBA Member origins and collaboration Benny Andersson (born 16 December 1946 in Stockholm, Sweden) became, at age 18, a member of a popular Swedish pop-rock group, the Hep Stars, that performed, among other things, covers of international hits. The Hep Stars were known as "the Swedish Beatles". They also set up Hep House, their equivalent of Apple Corps. Andersson played the keyboard and eventually started writing original songs for his band, many of which became major hits, including "No Response", which hit number three in 1965, and "Sunny Girl", "Wedding", and "Consolation", all of which hit number one in 1966. 
Andersson also had a fruitful songwriting collaboration with Lasse Berghagen, with whom he wrote his first Svensktoppen entry, "Sagan om lilla Sofie" ("The Tale of Little Sophie") in 1968. Björn Ulvaeus (born 25 April 1945 in Gothenburg, Sweden) also began his musical career at the age of 18 (as a singer and guitarist), when he fronted the Hootenanny Singers, a popular Swedish folk–skiffle group. Ulvaeus started writing English-language songs for his group, and even had a brief solo career alongside. The Hootenanny Singers and the Hep Stars sometimes crossed paths while touring. In June 1966, Ulvaeus and Andersson decided to write a song together. Their first attempt was "Isn't It Easy to Say", a song later recorded by the Hep Stars. Stig Anderson, the manager of the Hootenanny Singers and founder of the Polar Music label, saw potential in the collaboration and encouraged them to write more. The two also began playing occasionally with each other's bands on stage and on record, although it was not until 1969 that the pair wrote and produced some of their first real hits together: "Ljuva sextital" ("Sweet Sixties"), recorded by Brita Borg, and the Hep Stars' 1969 hit "Speleman" ("Fiddler"). Andersson wrote and submitted the song "Hej, Clown" for Melodifestivalen 1969, the national festival to select the Swedish entry to the Eurovision Song Contest. The song tied for first place, but re-voting relegated Andersson's song to second place. On that occasion Andersson briefly met his future spouse, singer Anni-Frid Lyngstad, who also participated in the contest. A month later, the two had become a couple. As their respective bands began to break up during 1969, Andersson and Ulvaeus teamed up and recorded their first album together in 1970, called Lycka ("Happiness"), which included original songs sung by both men. Their partners were often present in the recording studio, and sometimes added backing vocals; Fältskog even co-wrote a song with the two. 
Ulvaeus still occasionally recorded and performed with the Hootenanny Singers until the middle of 1974, and Andersson took part in producing their records. Anni-Frid "Frida" Lyngstad (born 15 November 1945 in Bjørkåsen in Ballangen, Norway) sang from the age of 13 with various dance bands, and worked mainly in a jazz-oriented cabaret style. She also formed her own band, the Anni-Frid Four. In the middle of 1967, she won a national talent competition with "En ledig dag" ("A Day Off"), a Swedish version of the bossa nova song "A Day in Portofino", which is included in the EMI compilation Frida 1967–1972. The first prize was a recording contract with EMI Sweden and the chance to perform live on the most popular TV shows in the country. This TV performance, amongst many others, is included in the 3½-hour documentary Frida – The DVD. Lyngstad released several schlager-style singles on EMI without much success. When Benny Andersson started to produce her recordings in 1971, she had her first number-one single, "Min egen stad" ("My Own Town"), written by Benny and featuring all the future ABBA members on backing vocals. Lyngstad toured and performed regularly in the folkpark circuit and made appearances on radio and TV. She met Ulvaeus briefly in 1963 during a talent contest, and Fältskog during a TV show in early 1968. Lyngstad linked up with her future bandmates in 1969. On 1 March 1969, she participated in the Melodifestival, where she met Andersson for the first time. A few weeks later they met again during a concert tour in southern Sweden and they soon became a couple. Andersson produced her single "Peter Pan" in September 1969, her first collaboration with Benny & Björn, as they had written the song. Andersson would then produce Lyngstad's debut studio album, Frida, which was released in March 1971. Lyngstad also played in several revues and cabaret shows in Stockholm between 1969 and 1973. 
After ABBA formed, she recorded another successful album in 1975, Frida ensam, which included a Swedish rendition of "Fernando", a hit on the Swedish radio charts before the English version was released. Agnetha Fältskog (born 5 April 1950 in Jönköping, Sweden) sang with a local dance band headed by Bernt Enghardt, who sent a demo recording of the band to Karl Gerhard Lundkvist. The demo tape featured a song written and sung by Agnetha: "Jag var så kär" ("I Was So in Love"). Lundkvist was so impressed with her voice that he was convinced she would be a star. After going through considerable effort to locate the singer, he arranged for Agnetha to come to Stockholm and record two of her own songs. This led to Agnetha, at the age of 18, having a number-one record in Sweden with a self-composed song, which later went on to sell over 80,000 copies. She was soon noticed by critics and songwriters as a talented singer and songwriter of schlager-style songs. Fältskog's main inspirations in her early years were singers such as Connie Francis. Along with her own compositions, she recorded covers of foreign hits and performed them on tours in Swedish folkparks. Most of her biggest hits were self-composed, which was quite unusual for a female singer in the 1960s. Agnetha released four solo LPs between 1968 and 1971. She had many successful singles in the Swedish charts. During the filming of a Swedish TV special in May 1969, Fältskog met Ulvaeus and they married on 6 July 1971. Fältskog and Ulvaeus eventually became involved in each other's recording sessions, and soon Andersson and Lyngstad added backing vocals to Fältskog's third studio album, Som jag är ("As I Am") (1970). In 1972, Fältskog starred as Mary Magdalene in the original Swedish production of Jesus Christ Superstar and attracted favourable reviews. Between 1967 and 1975, Fältskog released five studio albums. 
First live performance and the start of "Festfolket" An attempt at combining their talents occurred in April 1970 when the two couples went on holiday together to the island of Cyprus. What started as singing for fun on the beach ended up as an improvised live performance in front of the United Nations soldiers stationed on the island. Andersson and Ulvaeus were at this time recording their first album together, Lycka, which was to be released in September 1970. Fältskog and Lyngstad added backing vocals on several tracks during June, and the idea of their working together saw them launch a stage act, "Festfolket" (which translates from Swedish to "Party People" and in pronunciation also "engaged couples"), on 1 November 1970 in Gothenburg. The cabaret show attracted generally negative reviews, except for the performance of the Andersson and Ulvaeus hit "Hej, gamle man" ("Hello, Old Man")–the first Björn and Benny recording to feature all four. They also performed solo numbers from respective albums, but the lukewarm reception convinced the foursome to shelve plans for working together for the time being, and each soon concentrated on individual projects again. First record together "Hej, gamle man" "Hej, gamle man", a song about an old Salvation Army soldier, became the quartet's first hit. The record was credited to Björn & Benny and reached number five on the sales charts and number one on Svensktoppen, staying on the latter chart (which was not a chart linked to sales or airplay) for 15 weeks. It was during 1971 that the four artists began working together more, adding vocals to the others' recordings. Fältskog, Andersson and Ulvaeus toured together in May, while Lyngstad toured on her own. Frequent recording sessions brought the foursome closer together during the summer. 
1970–1973: Forming the group After the 1970 release of Lycka, two more singles credited to "Björn & Benny" were released in Sweden, "Det kan ingen doktor hjälpa" ("No Doctor Can Help with That") and "Tänk om jorden vore ung" ("Imagine If Earth Was Young"), with more prominent vocals by Fältskog and Lyngstad–and moderate chart success. Fältskog and Ulvaeus, now married, started performing together with Andersson on a regular basis at the Swedish folkparks in the middle of 1971. Stig Anderson, founder and owner of Polar Music, was determined to break into the mainstream international market with music by Andersson and Ulvaeus. "One day the pair of you will write a song that becomes a worldwide hit," he predicted. Stig Anderson encouraged Ulvaeus and Andersson to write a song for Melodifestivalen, and after two rejected entries in 1971, Andersson and Ulvaeus submitted their new song "Säg det med en sång" ("Say It with a Song") for the 1972 contest, choosing newcomer Lena Anderson to perform. The song came in third place, encouraging Stig Anderson, and became a hit in Sweden. The first signs of foreign success came as a surprise, as the Andersson and Ulvaeus single "She's My Kind of Girl" was released through Epic Records in Japan in March 1972, giving the duo a Top 10 hit. Two more singles were released in Japan, "En Carousel" ("En Karusell" in Scandinavia, an earlier version of "Merry-Go-Round") and "Love Has Its Ways" (a song they wrote with Kōichi Morita). First hit as Björn, Benny, Agnetha & Anni-Frid Ulvaeus and Andersson persevered with their songwriting and experimented with new sounds and vocal arrangements. "People Need Love" was released in June 1972, featuring guest vocals by the women, who were now given much greater prominence. Stig Anderson released it as a single, credited to Björn & Benny, Agnetha & Anni-Frid. The song peaked at number 17 in the Swedish combined single and album charts, enough to convince them they were on to something. 
The single also became the first record to chart for the quartet in the United States, where it peaked at number 114 on the Cashbox singles chart and number 117 on the Record World singles chart. Credited there to Björn & Benny (with Svenska Flicka, meaning "Swedish Girl"), it was released through Playboy Records. According to Ulvaeus, this association with Playboy caused much confusion, with many, including the record companies in the US and the UK, mistaking it for soft-core porn, a connotation the name did not carry in Sweden at the time. According to Stig Anderson, "People Need Love" could have been a much bigger American hit, but a small label like Playboy Records did not have the distribution resources to meet the demand for the single from retailers and radio programmers.

"Ring Ring"
In 1973, the band and their manager Stig Anderson decided to have another try at Melodifestivalen, this time with the song "Ring Ring". The studio sessions were handled by Michael B. Tretow, who experimented with a "wall of sound" production technique that became a distinctive new sound thereafter associated with ABBA. Stig Anderson arranged an English translation of the lyrics by Neil Sedaka and Phil Cody, and they thought this would be a success. However, on 10 February 1973, the song came third in Melodifestivalen and thus never reached the Eurovision Song Contest itself. Nevertheless, the group released their debut studio album, also called Ring Ring. The album did well, and the "Ring Ring" single was a hit in many parts of Europe and also in South Africa. However, Stig Anderson felt that the true breakthrough could only come with a UK or US hit. When Agnetha Fältskog gave birth to her daughter Linda in 1973, she was replaced for a short period by Inger Brundin on a trip to West Germany.

Official naming
In 1973, Stig Anderson, tired of unwieldy names, started to refer to the group privately and publicly as ABBA (a palindrome).
At first, this was a play on words, as Abba is also the name of a well-known fish-canning company in Sweden, and itself an abbreviation. However, since the fish-canners were unknown outside Sweden, Anderson came to believe the name would work in international markets. A competition to find a suitable name for the group was held in a Gothenburg newspaper, and it was officially announced in the summer that the group were to be known as "ABBA". The group negotiated with the canners for the rights to the name. Fred Bronson reported for Billboard that Fältskog told him in a 1988 interview that "[ABBA] had to ask permission and the factory said, 'O.K., as long as you don't make us feel ashamed for what you're doing.'" "ABBA" is an acronym formed from the first letters of each group member's first name: Agnetha, Björn, Benny, Anni-Frid. The earliest known example of "ABBA" written on paper is on a recording-session sheet from the Metronome Studio in Stockholm dated 16 October 1973. It was first written as "Björn, Benny, Agnetha & Frida", but this was subsequently crossed out with "ABBA" written in large letters on top.

Official logo
Their official logo, distinctive for its backward "B", was designed by Rune Söderqvist, who designed most of ABBA's record sleeves. The ambigram first appeared on the French compilation album Golden Double Album, released in May 1976 by Disques Vogue, and would henceforth be used for all official releases. The idea for the official logo arose during a photo shoot by the German photographer Heilemann for the teenage magazine Bravo, in which the ABBA members wore velvet jumpsuits and held giant initial letters of their names. After the pictures were taken, Heilemann noticed that Benny Andersson had reversed his letter "B"; this prompted discussions about the mirrored "B", and the members of ABBA agreed on the mirrored letter.
From 1976 onward, the first "B" in the logo version of the name was mirror-image reversed on the band's promotional material, thus becoming the group's registered trademark. Following its acquisition of the group's catalogue, PolyGram began using variations of the ABBA logo, employing a different font. In 1992, PolyGram added a crown emblem to it for the first release of the ABBA Gold: Greatest Hits compilation. After Universal Music purchased PolyGram (and, thus, ABBA's label Polar Music International), control of the group's catalogue returned to Stockholm. Since then, the original logo has been reinstated on all official products.

1973–1976: Breakthrough

Eurovision Song Contest 1974
After the group entered Melodifestivalen with "Ring Ring" but failed to qualify as the 1973 Swedish entry, Stig Anderson immediately started planning for the 1974 contest. Ulvaeus, Andersson and Stig Anderson believed in the possibilities of using the Eurovision Song Contest as a way to make the music business aware of them as songwriters, as well as of the band itself. In late 1973, they were invited by Swedish television to contribute a song for Melodifestivalen 1974, and from a number of new songs the upbeat "Waterloo" was chosen; the group were now inspired by the growing glam-rock scene in England. ABBA won their nation's hearts on Swedish television on 9 February 1974, and with this third attempt were far more experienced and better prepared for the Eurovision Song Contest. Winning the 1974 Eurovision Song Contest on 6 April 1974 (and singing "Waterloo" in English instead of their native tongue) gave ABBA the chance to tour Europe and perform on major television shows, and the band saw the "Waterloo" single chart in many European countries. Following their success at the Eurovision Song Contest, ABBA spent an evening of glory partying in the appropriately named first-floor Napoleon suite of The Grand Brighton Hotel.
"Waterloo" was ABBA's first major hit in numerous countries, becoming their first number-one single in nine western and northern European countries, including the big markets of the UK and West Germany, and in South Africa. It also made the top ten in several other countries, rising to number three in Spain, number four in Australia and France, and number seven in Canada. In the United States, the song peaked at number six on the Billboard Hot 100 chart, paving the way for their first album and their first trip as a group there. Albeit a short promotional visit, it included their first performance on American television, on The Mike Douglas Show. The album Waterloo only peaked at number 145 on the Billboard 200 chart but received unanimous high praise from US critics: the Los Angeles Times called it "a compelling and fascinating debut album that captures the spirit of mainstream pop quite effectively ... an immensely enjoyable and pleasant project", while Creem characterised it as "a perfect blend of exceptional, lovable compositions". ABBA's follow-up single, "Honey, Honey", peaked at number 27 on the US Billboard Hot 100, reached the top twenty in several other countries, and was a number-two hit in West Germany, although it only reached the top 30 in Australia. In the United Kingdom, ABBA's British record label, Epic, decided to re-release a remixed version of "Ring Ring" instead of "Honey, Honey", and a cover version of the latter by Sweet Dreams peaked at number 10. Both records debuted on the UK chart within one week of each other. "Ring Ring" failed to reach the Top 30 in the UK, fuelling speculation that the group were simply a Eurovision one-hit wonder.

Post-Eurovision
In November 1974, ABBA embarked on their first European tour, playing dates in Denmark, West Germany and Austria. It was not as successful as the band had hoped, since most of the venues did not sell out.
Due to a lack of demand, they were even forced to cancel a few shows, including a sole concert scheduled in Switzerland. The second leg of the tour, which took them through Scandinavia in January 1975, was very different: they played to full houses everywhere and finally got the reception they had aimed for. Live performances continued in the middle of 1975, when ABBA embarked on a fourteen-date open-air tour of Sweden and Finland. Their Stockholm show at the Gröna Lund amusement park had an estimated audience of 19,200. Björn Ulvaeus later said, "If you look at the singles we released straight after Waterloo, we were trying to be more like The Sweet, a semi-glam rock group, which was stupid because we were always a pop group." In late 1974, "So Long" was released as a single in the United Kingdom, but it received no airplay from Radio 1 and failed to chart in the UK; the only countries in which it was successful were Austria, Sweden and Germany, reaching the top ten in the first two and number 21 in Germany. In the middle of 1975, ABBA released "I Do, I Do, I Do, I Do, I Do", which again received little airplay on Radio 1, but did manage to climb to number 38 on the UK chart, while making the top five in several northern and western European countries and number one in South Africa. Later that year, the release of their self-titled third studio album, ABBA, and the single "SOS" brought back their chart presence in the UK, where the single hit number six and the album peaked at number 13. "SOS" also became ABBA's second number-one single in Germany, their third in Australia and their first in France, and reached number two in several other European countries, including Italy. Success was further solidified with "Mamma Mia" reaching number one in the United Kingdom, Germany and Australia and the top two in a few other western and northern European countries.
In the United States, both "I Do, I Do, I Do, I Do, I Do" and "SOS" peaked at number 15 on the Billboard Hot 100 chart, with the latter picking up a BMI Award along the way as one of the most-played songs on American radio in 1975. "Mamma Mia", however, stalled at number 32. In Canada, the three songs rose to numbers 12, 9 and 18, respectively. The group's success in the United States had until that time been limited to single releases. By early 1976, the group already had four Top 30 singles on the US charts, but the album market proved tough to crack. The eponymous ABBA album generated three American hits, but it only peaked at number 165 on the Cashbox album chart and number 174 on the Billboard 200 chart. Opinions were voiced, by Creem in particular, that in the US ABBA had endured "a very sloppy promotional campaign". Nevertheless, the group enjoyed warm reviews from the American press. Cashbox went as far as saying that "there is a recurrent thread of taste and artistry inherent in Abba's marketing, creativity and presentation that makes it almost embarrassing to critique their efforts", while Creem wrote: "SOS is surrounded on this LP by so many good tunes that the mind boggles." In Australia, the airing of the music videos for "I Do, I Do, I Do, I Do, I Do" and "Mamma Mia" on the nationally broadcast TV pop show Countdown (which premiered in November 1974) saw the band rapidly gain enormous popularity and Countdown become a key promoter of the group via their distinctive music videos. This sparked immense interest in ABBA in Australia, resulting in "I Do, I Do, I Do, I Do, I Do" staying at number one for three weeks, then "SOS" spending a week there, followed by "Mamma Mia" staying there for ten weeks, and the album holding down the number-one position for months. The three songs were also successful in nearby New Zealand, with the first two topping that chart and the third reaching number two.
1976–1981: Superstardom

Greatest Hits and Arrival
In March 1976, the band released the compilation album Greatest Hits. It became their first UK number-one album and also took ABBA into the Top 50 on the US album charts for the first time, eventually selling more than a million copies there. Also included on Greatest Hits was a new single, "Fernando", which went to number one in at least thirteen countries around the world, including the UK, Germany, France, Australia, South Africa and Mexico, and made the top five in most other significant markets, its number-four peak in Canada making it their biggest hit there to date; the single went on to sell over 10 million copies worldwide. In Australia, "Fernando" occupied the top position for a then-record-breaking 14 weeks (and stayed in the chart for 40 weeks), and remained the longest-running chart-topper there for over 40 years until it was overtaken by Ed Sheeran's "Shape of You" in May 2017. It remains one of the best-selling singles of all time in Australia. Also in 1976, the group received its first international prize, with "Fernando" being chosen as the "Best Studio Recording of 1975". In the United States, "Fernando" reached the Top 10 of the Cashbox Top 100 singles chart and number 13 on the Billboard Hot 100. It topped the Billboard Adult Contemporary chart, ABBA's first American number-one single on any chart. At the same time, a compilation named The Very Best of ABBA was released in Germany, becoming a number-one album there, whereas the Greatest Hits compilation that followed a few months later ascended to number two in Germany despite its similarities to The Very Best album.
The group's fourth studio album, Arrival, a number-one best-seller in parts of Europe, the UK and Australia, and a number-three hit in Canada and Japan, represented a new level of accomplishment in both songwriting and studio work, prompting rave reviews from more rock-oriented UK music weeklies such as Melody Maker and New Musical Express, and mostly appreciative notices from US critics. Hit after hit flowed from Arrival: "Money, Money, Money", another number one in Germany, France, Australia and other countries of western and northern Europe, plus number two in the UK; and "Knowing Me, Knowing You", ABBA's sixth consecutive German number one, as well as another UK number one, plus a top five hit in many other countries, although it was only a number nine hit in Australia and France. The real sensation was the first single, "Dancing Queen", not only topping the charts in loyal markets such as the UK, Germany, Sweden, several other western and northern European countries, and Australia, but also reaching number one in the United States, Canada, the Soviet Union and Japan, and the top ten in France, Spain and Italy. All three songs were number-one hits in Mexico. In South Africa, ABBA had astounding success, with each of "Fernando", "Dancing Queen" and "Knowing Me, Knowing You" being among the top 20 best-selling singles of 1976–77. In 1977, Arrival was nominated for the inaugural BRIT Award in the category "Best International Album of the Year". By this time ABBA were popular in the UK, most of Europe, Australia, New Zealand and Canada. In Frida – The DVD, Lyngstad explains how she and Fältskog developed as singers as ABBA's recordings grew more complex over the years.
The band's popularity in the United States would remain on a comparatively smaller scale: "Dancing Queen" became ABBA's only Billboard Hot 100 number-one single, with "Knowing Me, Knowing You" later peaking at number seven; "Money, Money, Money", however, barely charted there or in Canada (where "Knowing Me, Knowing You" had reached number five). They did, however, take three more singles to the number-one position on other Billboard US charts, including the Adult Contemporary and Hot Dance Club Play charts. Nevertheless, Arrival finally became a true breakthrough release for ABBA on the US album market, where it peaked at number 20 on the Billboard 200 chart and was certified gold by the RIAA.

European and Australian tour
In January 1977, ABBA embarked on their first major tour. The group's status had changed dramatically, and they were clearly regarded as superstars. They opened their much-anticipated tour in Oslo, Norway, on 28 January, and mounted a lavishly produced spectacle that included a few scenes from their self-written mini-operetta The Girl with the Golden Hair. The concert attracted immense media attention from across Europe and Australia. They continued the tour through Western Europe, visiting Gothenburg, Copenhagen, Berlin, Cologne, Amsterdam, Antwerp, Essen, Hanover and Hamburg, and ending with shows in the United Kingdom in Manchester, Birmingham and Glasgow and two sold-out concerts at London's Royal Albert Hall. Tickets for these two shows were available only by mail application, and it was later revealed that the box office received 3.5 million requests for tickets, enough to fill the venue 580 times. Along with praise ("ABBA turn out to be amazingly successful at reproducing their records", wrote Creem), there were complaints that "ABBA performed slickly ... but with a zero personality coming across from a total of 16 people on stage" (Melody Maker).
One of the Royal Albert Hall concerts was filmed as a reference for the filming of the Australian tour for what became ABBA: The Movie, though it is not known exactly how much of the concert was filmed. After the European leg of the tour, in March 1977, ABBA played 11 dates in Australia before a total of 160,000 people. The opening concert in Sydney at the Sydney Showground on 3 March, before an audience of 20,000, was marred by torrential rain, with Lyngstad slipping on the wet stage during the concert. However, all four members would later recall this concert as the most memorable of their career. Upon their arrival in Melbourne, a civic reception was held at the Melbourne Town Hall, and ABBA appeared on the balcony to greet an enthusiastic crowd of 6,000. In Melbourne, the group gave three concerts at the Sidney Myer Music Bowl, with 14,500 attending each, including Australian Prime Minister Malcolm Fraser and his family. At the first Melbourne concert, an additional 16,000 people gathered outside the fenced-off area to listen. In Adelaide, the group performed one concert at Football Park in front of 20,000 people, with another 10,000 listening outside. During the first of five concerts in Perth, a bomb scare forced everyone to evacuate the Entertainment Centre. The trip was accompanied by mass hysteria and unprecedented media attention ("Swedish ABBA stirs box-office in Down Under tour ... and the media coverage of the quartet rivals that set to cover the upcoming Royal tour of Australia", wrote Variety), and is captured on film in ABBA: The Movie, directed by Lasse Hallström. The Australian tour and the subsequent ABBA: The Movie produced some ABBA lore as well. Fältskog's blonde good looks had long made her the band's "pin-up girl", a role she disdained. During the Australian tour, she performed in a skin-tight white jumpsuit, causing one Australian newspaper to run the headline "Agnetha's bottom tops dull show".
When asked about this at a news conference, she replied: "Don't they have bottoms in Australia?"

ABBA: The Album
In December 1977, ABBA followed up Arrival with the more ambitious fifth album, ABBA: The Album, released to coincide with the debut of ABBA: The Movie. Although the album was less well received by UK reviewers, it spawned more worldwide hits: "The Name of the Game" and "Take a Chance on Me", which both topped the UK charts and racked up impressive sales in most countries, although "The Name of the Game" was generally the more successful in the Nordic countries and Down Under, while "Take a Chance on Me" was more successful in North America and the German-speaking countries. "The Name of the Game" was a number-two hit in the Netherlands, Belgium and Sweden and also made the Top 5 in Finland, Norway, New Zealand and Australia, while only peaking at numbers 10, 12 and 15 in Mexico, the US and Canada, respectively. "Take a Chance on Me" was a number-one hit in Austria, Belgium and Mexico and made the Top 3 in the US, Canada, the Netherlands, Germany and Switzerland, while only reaching numbers 12 and 14 in Australia and New Zealand, respectively. Both songs were Top 10 hits in countries as far afield as Rhodesia and South Africa, as well as in France. Although "Take a Chance on Me" did not top the American charts, it proved to be ABBA's biggest hit single there, selling more copies than "Dancing Queen". The drop in sales in Australia was felt to be inevitable by industry observers, as the "Abba fever" that had existed there for almost three years could only last so long before adolescents would naturally begin to move away from a group so deified by both their parents and grandparents. A third single, "Eagle", was released in continental Europe and Down Under, becoming a number-one hit in Belgium and a Top 10 hit in the Netherlands, Germany, Switzerland and South Africa, but barely charting in Australia and New Zealand.
The B-side of "Eagle" was "Thank You for the Music", which was belatedly released as an A-side single in both the United Kingdom and Ireland in 1983. "Thank You for the Music" has become one of the best-loved and best-known ABBA songs despite never being released as a single during the group's lifetime. ABBA: The Album topped the album charts in the UK, the Netherlands, New Zealand, Sweden, Norway and Switzerland, while ascending to the Top 5 in Australia, Germany, Austria, Finland and Rhodesia, and making the Top 10 in Canada and Japan. Sources also indicate that sales in Poland exceeded 1 million copies and that demand in Russia could not be met by the supply available. The album peaked at number 14 in the US.

Polar Music Studio formation
By 1978, ABBA were one of the biggest bands in the world. They converted a vacant cinema into the Polar Music Studio, a state-of-the-art studio in Stockholm. The studio was used by several other bands; notably, Genesis' Duke and Led Zeppelin's In Through the Out Door were recorded there. During May 1978, the group went to the United States for a promotional campaign, performing alongside Andy Gibb on Olivia Newton-John's TV show. Recording sessions for the single "Summer Night City" were an uphill struggle, but upon release the song became another hit for the group. The track would set the stage for ABBA's foray into disco with their next album. On 9 January 1979, the group performed "Chiquitita" at the Music for UNICEF Concert held at the United Nations General Assembly to celebrate UNICEF's Year of the Child, and donated the copyright of this worldwide hit to UNICEF. The single was released the following week and reached number one in ten countries.

North American and European tours
In mid-January 1979, Ulvaeus and Fältskog announced they were getting divorced. The news attracted widespread media interest and led to speculation about the band's future.
ABBA assured the press and their fan base that they were continuing their work as a group and that the divorce would not affect them. Nonetheless, the media continued to confront them with it in interviews. To escape the media swirl and concentrate on their writing, Andersson and Ulvaeus secretly travelled to Compass Point Studios in Nassau, Bahamas, where for two weeks they prepared their next album's songs. The group's sixth studio album, Voulez-Vous, was released in April 1979, with its title track recorded at the famous Criteria Studios in Miami, Florida, with the assistance of recording engineer Tom Dowd, amongst others. The album topped the charts across Europe and in Japan and Mexico, hit the Top 10 in Canada and Australia, and made the Top 20 in the US. While none of the singles from the album reached number one on the UK chart, the lead single, "Chiquitita", and the fourth single, "I Have a Dream", both ascended to number two, and the other two, "Does Your Mother Know" and "Angeleyes" (released with "Voulez-Vous" as a double A-side), both made the top 5. All four singles reached number one in Belgium, although the last three did not chart in Sweden or Norway. "Chiquitita", which was featured in the Music for UNICEF Concert, after which ABBA decided to donate half of the royalties from the song to UNICEF, topped the singles charts in the Netherlands, Switzerland, Finland, Spain, Mexico, South Africa, Rhodesia and New Zealand, rose to number two in Sweden, and made the Top 5 in Germany, Austria, Norway and Australia, although it only reached number 29 in the US. "I Have a Dream" was a sizeable hit, reaching number one in the Netherlands, Switzerland and Austria, number three in South Africa, and number four in Germany, although it only reached number 64 in Australia. In Canada, "I Have a Dream" became ABBA's second number one on the RPM Adult Contemporary chart (after "Fernando" had hit the top previously), although it did not chart in the US.
"Does Your Mother Know", a rare ABBA song on which Ulvaeus sings lead vocals, was a Top 5 hit in the Netherlands and Finland and a Top 10 hit in Germany, Switzerland and Australia, although it only reached number 27 in New Zealand. It did better in North America than "Chiquitita", reaching number 12 in Canada and number 19 in the US, and made the Top 20 in Japan. "Voulez-Vous" was a Top 10 hit in the Netherlands and Switzerland and a Top 20 hit in Germany and Finland, but only peaked in the 80s in Australia, Canada and the US. Also in 1979, the group released their second compilation album, Greatest Hits Vol. 2, which featured a brand-new track, "Gimme! Gimme! Gimme! (A Man After Midnight)"; the song was a Top 3 hit in the UK, Belgium, the Netherlands, Germany, Austria, Switzerland, Finland and Norway, and returned ABBA to the Top 10 in Australia. Greatest Hits Vol. 2 went to number one in the UK, Belgium, Canada and Japan while making the Top 5 in several other countries, but only reached number 20 in Australia and number 46 in the US. In Russia during the late 1970s, the group were paid in oil commodities because of an embargo on the ruble. On 13 September 1979, ABBA began ABBA: The Tour at Northlands Coliseum in Edmonton, Canada, with a full house of 14,000. "The voices of the band, Agnetha's high sauciness combined with round, rich lower tones of Anni-Frid, were excellent ... Technically perfect, melodically correct and always in perfect pitch ... The soft lower voice of Anni-Frid and the high, edgy vocals of Agnetha were stunning", raved the Edmonton Journal. During the next four weeks they played a total of 17 sold-out dates, 13 in the United States and four in Canada. The last scheduled ABBA concert in the United States, in Washington, D.C., was cancelled due to the emotional distress Fältskog suffered during the flight from New York to Boston, when the group's private plane was subjected to extreme weather conditions and was unable to land for an extended period.
The group took the stage at the Boston Music Hall 90 minutes late for that performance. The tour ended with a show at Maple Leaf Gardens in Toronto, Canada, before a capacity crowd of 18,000. "ABBA plays with surprising power and volume; but although they are loud, they're also clear, which does justice to the signature vocal sound ... Anyone who's been waiting five years to see Abba will be well satisfied", wrote Record World. On 19 October 1979, the tour resumed in Western Europe, where the band played 23 sold-out gigs, including six sold-out nights at London's Wembley Arena.

Progression
In March 1980, ABBA travelled to Japan, where upon their arrival at Narita International Airport they were besieged by thousands of fans. The group performed eleven concerts to full houses, including six shows at Tokyo's Budokan. This tour was the last "on the road" adventure of their career. In July 1980, ABBA released the single "The Winner Takes It All", the group's eighth UK chart-topper (and their first since 1978). The song is widely misunderstood as being about Ulvaeus and Fältskog's marital tribulations; Ulvaeus, who wrote the lyrics, has stated they were not about his own divorce, and Fältskog has repeatedly stated she was not the loser in their divorce. In the United States, the single peaked at number eight on the Billboard Hot 100 chart and became ABBA's second Billboard Adult Contemporary number one. At the end of 1980 it was also re-recorded by French chanteuse Mireille Mathieu as "Bravo tu as gagné", with a slightly different backing track by Andersson and Ulvaeus and French lyrics by Alain Boublil. November of the same year saw the release of ABBA's seventh album, Super Trouper, which reflected a certain change in ABBA's style, with more prominent use of synthesizers and increasingly personal lyrics. It set a record for the most pre-orders ever received for a UK album after one million copies were ordered before release.
The second single from the album, "Super Trouper", also hit number one in the UK, becoming the group's ninth and final UK chart-topper. Another track from the album, "Lay All Your Love on Me", released in 1981 as a twelve-inch single only in selected territories, managed to top the Billboard Hot Dance Club Play chart and peaked at number seven on the UK Singles Chart, becoming, at the time, the highest-charting twelve-inch release in UK chart history. Also in 1980, ABBA recorded a compilation of Spanish-language versions of their hits called Gracias Por La Música. It was released in Spanish-speaking countries as well as in Japan and Australia. The album became a major success, and along with the Spanish version of "Chiquitita" signalled the group's breakthrough in Latin America. ABBA Oro: Grandes Éxitos, the Spanish equivalent of ABBA Gold: Greatest Hits, was released in 1999.

1981–1982: The Visitors and later performances
In January 1981, Ulvaeus married Lena Källersjö, and manager Stig Anderson celebrated his 50th birthday with a party. For this occasion, ABBA recorded the track "Hovas Vittne" (a pun on the Swedish name for Jehovah's Witness and on Anderson's birthplace, Hova) as a tribute to him, releasing it only on 200 red-vinyl copies, distributed to the guests attending the party. The single has become a sought-after collectable. In mid-February 1981, Andersson and Lyngstad announced they were filing for divorce. Information surfaced that their marriage had been an uphill struggle for years, and that Benny had already met another woman, Mona Nörklit, whom he married in November 1981. Andersson and Ulvaeus held songwriting sessions in early 1981, and recording sessions began in mid-March. At the end of April, the group recorded a TV special, Dick Cavett Meets ABBA, with the US talk show host Dick Cavett.
The Visitors, ABBA's eighth studio album, showed a songwriting maturity and depth of feeling distinctly lacking from their earlier recordings, while still placing the band squarely in the pop genre, with catchy tunes and harmonies. Although not revealed at the time of its release, the album's title track, according to Ulvaeus, refers to secret meetings held in defiance of totalitarian governments in Soviet-dominated states, while other tracks address topics such as failed relationships, the threat of war, ageing and loss of innocence. The album's only major single release, "One of Us", proved to be the last of ABBA's nine number-one singles in Germany, in December 1981, and the swansong of their sixteen Top 5 singles on the South African chart. "One of Us" was also ABBA's final Top 3 hit in the UK, reaching number three on the UK Singles Chart. Although it topped the album charts across most of Europe, including Ireland, the UK and Germany, The Visitors was not as commercially successful as its predecessors, showing a commercial decline in previously loyal markets such as France, Australia and Japan. A track from the album, "When All Is Said and Done", was released as a single in North America, Australia and New Zealand, and fittingly became ABBA's final Top 40 hit in the US (debuting on the US charts on 31 December 1981), while also reaching the US Adult Contemporary Top 10 and number four on the RPM Adult Contemporary chart in Canada. The song's lyrics, as with "The Winner Takes It All" and "One of Us", dealt with the painful experience of separating from a long-term partner, though it looked at the trauma more optimistically. With the now-publicised story of Andersson and Lyngstad's divorce, speculation increased about tension within the band. Also released in the United States was the title track of The Visitors, which hit the Top 10 on the Billboard Hot Dance Club Play chart.
Later recording sessions In the spring of 1982, songwriting sessions had started and the group came together for more recordings. Plans were not completely clear, but a new album was discussed and the prospect of a small tour suggested. The recording sessions in May and June 1982 were a struggle, and only three songs were eventually recorded: "You Owe Me One", "I Am the City" and "Just Like That". Andersson and Ulvaeus were not satisfied with the outcome, so the tapes were shelved and the group took a break for the summer. Back in the studio again in early August, the group had changed plans for the rest of the year: they settled for a Christmas release of a double album compilation of all their past single releases to be named The Singles: The First Ten Years. New songwriting and recording sessions took place, and during October and December, they released the singles "The Day Before You Came"/"Cassandra" and "Under Attack"/"You Owe Me One", the A-sides of which were included on the compilation album. Neither single made the Top 20 in the United Kingdom, though "The Day Before You Came" became a Top 5 hit in many European countries such as Germany, the Netherlands and Belgium. The album went to number one in the UK and Belgium, Top 5 in the Netherlands and Germany and Top 20 in many other countries. "Under Attack", the group's final release before disbanding, was a Top 5 hit in the Netherlands and Belgium. "I Am the City" and "Just Like That" were left unreleased on The Singles: The First Ten Years for possible inclusion on the next projected studio album, though this never came to fruition. "I Am the City" was eventually released on the compilation album More ABBA Gold in 1993, while "Just Like That" has been recycled in new songs with other artists produced by Andersson and Ulvaeus. A reworked version of the verses ended up in the musical Chess. 
The chorus section of "Just Like That" was eventually released on a retrospective box set in 1994, as well as in the ABBA Undeleted medley featured on disc 9 of The Complete Studio Recordings. Despite a number of requests from fans, Ulvaeus and Andersson are still refusing to release ABBA's version of "Just Like That" in its entirety, even though the complete version has surfaced on bootlegs. The group travelled to London to promote The Singles: The First Ten Years in the first week of November 1982, appearing on Saturday Superstore and The Late, Late Breakfast Show, and also to West Germany in the second week, to perform on Show Express. On 19 November 1982, ABBA appeared for the last time in Sweden on the TV programme Nöjesmaskinen, and on 11 December 1982, they made their last performance ever, transmitted to the UK on Noel Edmonds' The Late, Late Breakfast Show, through a live link from a TV studio in Stockholm. Later performances Andersson and Ulvaeus began collaborating with Tim Rice in early 1983 on writing songs for the musical project Chess, while Fältskog and Lyngstad both concentrated on international solo careers. While Andersson and Ulvaeus were working on the musical, a further co-operation among the three of them came with the musical Abbacadabra that was produced in France for television. It was a children's musical using 14 ABBA songs. Alain and Daniel Boublil, who wrote Les Misérables, had been in touch with Stig Anderson about the project, and the TV musical was aired over Christmas on French TV and later a Dutch version was also broadcast. Boublil previously also wrote the French lyric for Mireille Mathieu's version of "The Winner Takes It All". Lyngstad, who had recently moved to Paris, participated in the French version, and recorded a single, "Belle", a duet with French singer Daniel Balavoine. The song was a cover of ABBA's 1976 instrumental track "Arrival". 
As the single "Belle" sold well in France, Cameron Mackintosh wanted to stage an English-language version of the show in London, with the French lyrics translated by David Wood and Don Black; Andersson and Ulvaeus got involved in the project, and contributed with one new song, "I Am the Seeker". "Abbacadabra" premiered on 8 December 1983 at the Lyric Hammersmith Theatre in London, to mixed reviews and full houses for eight weeks, closing on 21 January 1984. Lyngstad was also involved in this production, recording "Belle" in English as "Time", a duet with actor and singer B. A. Robertson: the single sold well, and was produced and recorded by Mike Batt. In May 1984, Lyngstad performed "I Have a Dream" with a children's choir at the United Nations Organisation Gala, in Geneva, Switzerland. All four members made their (at the time, final) public appearance as four friends more than as ABBA in January 1986, when they recorded a video of themselves performing an acoustic version of "Tivedshambo" (which was the first song written by their manager Stig Anderson), for a Swedish TV show honouring Anderson on his 55th birthday. The four had not seen each other for more than two years. That same year they also performed privately at another friend's 40th birthday: their old tour manager, Claes af Geijerstam. They sang a self-written song titled "Der Kleine Franz" that was later to resurface in Chess. Also in 1986, ABBA Live was released, featuring selections of live performances from the group's 1977 and 1979 tours. The four members were guests at the 50th birthday of Görel Hanser in 1999. Hanser was a long-time friend of all four, and also former secretary of Stig Anderson. Honouring Görel, ABBA performed a Swedish birthday song "Med en enkel tulipan" a cappella. Andersson has on several occasions performed ABBA songs. 
In June 1992, he and Ulvaeus appeared with U2 at a Stockholm concert, singing the chorus of "Dancing Queen", and a few years later, during the final performance of the B & B in Concert in Stockholm, Andersson joined the cast for an encore at the piano. Andersson frequently adds an ABBA song to the playlist when he performs with his BAO band. He also played the piano during new recordings of the ABBA songs "Like an Angel Passing Through My Room" with opera singer Anne Sofie von Otter, and "When All Is Said and Done" with Swede Viktoria Tolstoy. In 2002, Andersson and Ulvaeus both performed an a cappella rendition of the first verse of "Fernando" as they accepted their Ivor Novello award in London. Lyngstad performed and recorded an a cappella version of "Dancing Queen" with the Swedish group the Real Group in 1993, and also re-recorded "I Have a Dream" with Swiss singer Dan Daniell in 2003. Break and reunion ABBA never officially announced the end of the group or an indefinite break, but it was long considered dissolved after their final public performance together in 1982. Their final public performance together as ABBA before their 2016 reunion was on the British TV programme The Late, Late Breakfast Show (live from Stockholm) on 11 December 1982. While reminiscing on "The Day Before You Came", Ulvaeus said: "we might have continued for a while longer if that had been a number one". In January 1983, Fältskog started recording sessions for a solo album, as Lyngstad had successfully released her album Something's Going On some months earlier. Ulvaeus and Andersson, meanwhile, started songwriting sessions for the musical Chess. In interviews at the time, Ulvaeus and Andersson denied the split of ABBA ("Who are we without our ladies? Initials of Brigitte Bardot?"), and throughout 1983 and 1984 Lyngstad and Fältskog repeatedly claimed in interviews that ABBA would come together for a new album. 
Internal strife between the group and their manager escalated, and the band members sold their shares in Polar Music during 1983. Except for a TV appearance in 1986, the foursome did not come together publicly again until they were reunited at the Swedish premiere of the Mamma Mia! movie on 4 July 2008. The individual members' endeavours shortly before and after their final public performance, coupled with the collapse of both marriages and the lack of significant group activity in the following few years, widely suggested that the group had broken up. In an interview with the Sunday Telegraph following the premiere, Ulvaeus and Andersson said that there was nothing that could entice them back on stage again. Ulvaeus said: "We will never appear on stage again. [...] There is simply no motivation to re-group. Money is not a factor and we would like people to remember us as we were. Young, exuberant, full of energy and ambition. I remember Robert Plant saying Led Zeppelin were a cover band now because they cover all their own stuff. I think that hit the nail on the head." However, on 3 January 2011, Fältskog, long considered to be the most reclusive member of the group and a major obstacle to any reunion, raised the possibility of reuniting for a one-off engagement. She admitted that she had not yet brought the idea up to the other three members. In April 2013, she reiterated her hopes for a reunion during an interview with Die Zeit, stating: "If they ask me, I'll say yes." In a May 2013 interview, Fältskog, aged 63 at the time, stated that an ABBA reunion would never occur: "I think we have to accept that it will not happen, because we are too old and each one of us has their own life. Too many years have gone by since we stopped, and there's really no meaning in putting us together again". Fältskog further explained that the band members remained on amicable terms: "It's always nice to see each other now and then and to talk a little and to be a little nostalgic." 
In an April 2014 interview, Fältskog, when asked whether the band might reunite for a new recording, said: "It's difficult to talk about this because then all the news stories will be: 'ABBA is going to record another song!' But as long as we can sing and play, then why not? I would love to, but it's up to Björn and Benny." Resurgence of public interest The same year the members of ABBA went their separate ways, the French production of a "tribute" show (a children's TV musical named Abbacadabra using 14 ABBA songs) spawned new interest in the group's music. After receiving little attention during the mid-to-late 1980s, ABBA's music experienced a resurgence in the early 1990s due to the UK synth-pop duo Erasure, who released Abba-esque, a four-track extended play release featuring cover versions of ABBA songs, which topped several European charts in 1992. As U2 arrived in Stockholm for a concert in June of that year, the band paid homage to ABBA by inviting Björn Ulvaeus and Benny Andersson to join them on stage for a rendition of "Dancing Queen", playing guitar and keyboards. September 1992 saw the release of ABBA Gold: Greatest Hits, a new compilation album. The single "Dancing Queen" received radio airplay in the UK in the middle of 1992 to promote the album. The song returned to the Top 20 of the UK singles chart in August that year, this time peaking at number 16. With sales of 30 million, Gold is the best-selling ABBA album, as well as one of the best-selling albums worldwide. With sales of 5.5 million copies, it is the second-highest-selling album of all time in the UK, after Queen's Greatest Hits. More ABBA Gold: More ABBA Hits, a follow-up to Gold, was released in 1993. In 1994, two Australian cult films caught the attention of the world's media, both focusing on admiration for ABBA: The Adventures of Priscilla, Queen of the Desert and Muriel's Wedding. 
The same year, Thank You for the Music, a four-disc box set comprising all the group's hits and stand-out album tracks, was released with the involvement of all four members. "By the end of the twentieth century," American critic Chuck Klosterman wrote a decade later, "it was far more contrarian to hate ABBA than to love them." ABBA were soon recognised and embraced by other acts: Evan Dando of the Lemonheads recorded a cover version of "Knowing Me, Knowing You"; Sinéad O'Connor and Boyzone's Stephen Gately have recorded "Chiquitita"; Tanita Tikaram, Blancmange and Steven Wilson paid tribute to "The Day Before You Came". Cliff Richard covered "Lay All Your Love on Me", while Dionne Warwick, Peter Cetera, Frank Sidebottom and Celebrity Skin recorded their versions of "SOS". US alternative-rock musician Marshall Crenshaw has also been known to play a version of "Knowing Me, Knowing You" in concert appearances, while English Latin pop songwriter Richard Daniel Roman has recognised ABBA as a major influence. Swedish metal guitarist Yngwie Malmsteen covered "Gimme! Gimme! Gimme! (A Man After Midnight)" with slightly altered lyrics. Two different tribute compilation albums of ABBA songs have been released. ABBA: A Tribute coincided with the 25th anniversary celebration and featured 17 songs, some of which were recorded especially for this release. Notable tracks include Go West's "One of Us", Army of Lovers' "Hasta Mañana", Information Society's "Lay All Your Love on Me", Erasure's "Take a Chance on Me" (with MC Kinky), and Lyngstad's a cappella duet with the Real Group of "Dancing Queen". A second 12-track album was released in 1999, titled ABBAmania, with proceeds going to the Youth Music charity in England. 
It featured all new cover versions: notable tracks were by Madness ("Money, Money, Money"), Culture Club ("Voulez-Vous"), the Corrs ("The Winner Takes It All"), Steps ("Lay All Your Love on Me", "I Know Him So Well"), and a medley titled "Thank ABBA for the Music" performed by several artists and as featured on the Brits Awards that same year. In 1998, an ABBA tribute group was formed, the ABBA Teens, which was subsequently renamed the A-Teens to allow the group some independence. The group's first album, The ABBA Generation, consisting solely of ABBA covers reimagined as 1990s pop songs, was a worldwide success and so were subsequent albums. The group disbanded in 2004 due to a gruelling schedule and intentions to go solo. In Sweden, the growing recognition of the legacy of Andersson and Ulvaeus resulted in the 1998 B & B Concerts, a tribute concert (with Swedish singers who had worked with the songwriters through the years) showcasing not only their ABBA years, but hits both before and after ABBA. The concert was a success, and was ultimately released on CD. It later toured Scandinavia and even went to Beijing in the People's Republic of China for two concerts. In 2000 ABBA were reported to have turned down an offer of approximately one billion US dollars to do a reunion tour consisting of 100 concerts. For the semi-final of the Eurovision Song Contest 2004, staged in Istanbul 30 years after ABBA had won the contest in Brighton, all four members made cameo appearances in a special comedy video made for the interval act, titled Our Last Video Ever. Other well-known stars such as Rik Mayall, Cher and Iron Maiden's Eddie also made appearances in the video. It was not included in the official DVD release of the 2004 Eurovision contest, but was issued as a separate DVD release, retitled The Last Video at the request of the former ABBA members. The video was made using puppet models of the members of the band. 
The video has surpassed 13 million views on YouTube as of November 2020. In 2005, all four members of ABBA appeared at the Stockholm premiere of the musical Mamma Mia!. On 22 October 2005, at the 50th anniversary celebration of the Eurovision Song Contest, "Waterloo" was chosen as the best song in the competition's history. In the same month, American singer Madonna released the single "Hung Up", which contains a sample of the keyboard melody from ABBA's 1979 song "Gimme! Gimme! Gimme! (A Man After Midnight)"; the song was a smash hit, peaking at number one in at least 50 countries. On 4 July 2008, all four ABBA members were reunited at the Swedish premiere of the film Mamma Mia!. It was only the second time all of them had appeared together in public since 1986. During the appearance, they re-emphasised that they intended never to officially reunite, citing the opinion of Robert Plant that the re-formed Led Zeppelin was more like a cover band of itself than the original band. Ulvaeus stated that he wanted the band to be remembered as they were during the peak years of their success. Gold returned to number-one in the UK album charts for the fifth time on 3 August 2008. On 14 August 2008, the Mamma Mia! The Movie film soundtrack went to number-one on the US Billboard charts, ABBA's first US chart-topping album. During the band's heyday the highest album chart position they had ever achieved in America was number 14. In November 2008, all eight studio albums, together with a ninth of rare tracks, were released as The Albums. It hit several charts, peaking at number-four in Sweden and reaching the Top 10 in several other European territories. In 2008, Sony Computer Entertainment Europe, in collaboration with Universal Music Group Sweden AB, released SingStar ABBA on both the PlayStation 2 and PlayStation 3 games consoles, as part of the SingStar music video games. The PS2 version features 20 ABBA songs, while 25 songs feature on the PS3 version. 
On 22 January 2009, Fältskog and Lyngstad appeared together on stage to receive the Swedish music award "Rockbjörnen" (for "lifetime achievement"). In an interview, the two women expressed their gratitude for the honorary award and thanked their fans. On 25 November 2009, PRS for Music announced that the British public voted ABBA as the band they would most like to see re-form. On 27 January 2010, ABBAWORLD, a 25-room touring exhibition featuring interactive and audiovisual activities, debuted at Earls Court Exhibition Centre in London. According to the exhibition's website, ABBAWORLD is "approved and fully supported" by the band members. "Mamma Mia" was released as one of the first few non-premium song selections for the online RPG game Bandmaster. On 17 May 2011, "Gimme! Gimme! Gimme!" was added as a non-premium song selection for the Bandmaster Philippines server. On 15 November 2011, Ubisoft released a dancing game called ABBA: You Can Dance for the Wii. In January 2012, Universal Music announced the re-release of ABBA's final album The Visitors, featuring a previously unheard track "From a Twinkling Star to a Passing Angel". A book titled ABBA: The Official Photo Book was published in early 2014 to mark the 40th anniversary of the band's Eurovision victory. The book reveals that part of the reason for the band's outrageous costumes was that Swedish tax laws at the time allowed the cost of garish outfits that were not suitable for daily wear to be tax deductible. A sequel to the 2008 movie Mamma Mia!, titled Mamma Mia! Here We Go Again, was announced in May 2017; the film was released on 20 July 2018. Cher, who appeared in the movie, also released Dancing Queen, an ABBA cover album, in September 2018. In June 2017, a blue plaque outside Brighton Dome was set to commemorate their 1974 Eurovision win. 
In May 2020, it was announced that ABBA's entire studio discography would be released on coloured vinyl for the first time, in a box set titled ABBA: The Studio Albums. The initial release sold out in just a few hours. 2016–present: Reunion, Voyage and ABBAtars On 20 January 2016, all four members of ABBA made a public appearance at Mamma Mia! The Party in Stockholm. On 6 June 2016, the quartet appeared together at a private party at Berns Salonger in Stockholm, which was held to celebrate the 50th anniversary of Andersson and Ulvaeus's first meeting. Fältskog and Lyngstad performed live, singing "The Way Old Friends Do", before they were joined on stage by Andersson and Ulvaeus. British manager Simon Fuller announced in a statement in October 2016 that the group would be reuniting to work on a new 'digital entertainment experience'. The project would feature the members in their "life-like" avatar form, called ABBAtars, based on their late 1970s tour, and was set to launch by the spring of 2019. On 27 April 2018, all four original members of ABBA made a joint announcement that they had recorded two new songs, titled "I Still Have Faith in You" and "Don't Shut Me Down", to feature in a TV special set to air later that year. In September 2018, Ulvaeus stated that the two new songs, as well as the aforementioned TV special, now called ABBA: Thank You for the Music, An All-Star Tribute, would not be released until 2019. The TV special was later scrapped, as Andersson and Ulvaeus rejected Fuller's project and instead partnered with the visual effects company Industrial Light & Magic to prepare the ABBAtars for a music video and a concert. In January 2019, it was revealed that neither song would be released before the summer. Andersson hinted at the possibility of a third song. In June 2019, Ulvaeus announced that the first new song and video containing the ABBAtars would be released in November 2019. 
In September, he stated in an interview that there were now five new ABBA songs to be released in 2020. In early 2020, Andersson confirmed that he was aiming for the songs to be released in September 2020. In April 2020, Ulvaeus gave an interview saying that, in the wake of the COVID-19 pandemic, the avatar project had been delayed by six months. As of 2020, five of the eight original songs written by Andersson for the new album had been recorded by Fältskog and Lyngstad, and the release date of a new music video, using previously unseen technology costing £15 million, was yet to be decided. In July 2020, Ulvaeus told podcaster Geoff Lloyd that the release of the new ABBA recordings had been delayed until 2021. On 22 September 2020, all four ABBA members reunited at Ealing Studios in London to continue working on the avatar project and filming for the tour. Ulvaeus said that the avatar tour would be scheduled for 2022, since the nature of the technology was complex. When questioned whether the new recordings were definitely coming out in 2021, Ulvaeus said: "There will be new music this year, that is definite, it's not a case anymore of it might happen, it will happen." On 26 August 2021, a new website was launched, with the title ABBA Voyage. On the page, visitors were prompted to subscribe "to be the first in line to hear more about ABBA Voyage". Simultaneously with the launch of the webpage, new ABBA Voyage social media accounts were launched, and billboards around London started to appear, all showing the date "02.09.21", raising expectations about what was to be revealed on that date. On 29 August, the band officially joined TikTok with a video of Benny Andersson playing "Dancing Queen" on the piano, and media reported on a new album to be announced on 2 September. 
On that date, Voyage, their first new album in 40 years, was announced for release on 5 November 2021, along with ABBA Voyage, a concert residency in London featuring motion-capture digital avatars of the four band members alongside a 10-piece live band, due to start in May 2022. Fältskog stated that the Voyage album and tour are likely to be their last. The announcement of the new album was accompanied by the release of the previously announced new singles "I Still Have Faith in You" and "Don't Shut Me Down". The music video for "I Still Have Faith in You", featuring footage of the band during their performing years and also a first look at the ABBAtars, earned over a million views in its first three hours. "Don't Shut Me Down" became the first ABBA release since October 1978 to top the singles chart in Sweden. In October 2021, the third single, "Just a Notion", was released, and it was announced that ABBA would split for good after the release of Voyage. However, in an interview with BBC Radio 2 on 11 November, Lyngstad stated "don't be too sure" that Voyage is the final ABBA album. Also, in an interview with BBC News on 5 November, Andersson stated "if they (the ladies) twist my arm I might change my mind." The fourth single from the album, "Little Things", was released on 3 December. Artistry Recording process ABBA were perfectionists in the studio, working on tracks until they got them right rather than leaving them to come back to later on. They spent the bulk of their time within the studio; in separate 2021 interviews, Ulvaeus stated they may have toured for only six months, while Andersson said they played fewer than 100 shows during the band's career. The band created a basic rhythm track with a drummer, guitarist and bass player, and overlaid other arrangements and instruments. Vocals were then added, and orchestra overdubs were usually left until last. Fältskog and Lyngstad contributed ideas at the studio stage. 
Andersson and Ulvaeus played them the backing tracks and they made comments and suggestions. According to Fältskog, she and Lyngstad had the final say in how the lyrics were shaped. After vocals and overdubs were done, the band took up to five days to mix a song. Their single "S.O.S." was "heavily influenced by Phil Spector's Wall of Sound and the melodies of the Beach Boys", according to Billboard writer Fred Bronson, who also reported that Ulvaeus had said, "Because there was the Latin-American influence, the German, the Italian, the English, the American, all of that. I suppose we were a bit exotic in every territory in an acceptable way." Fashion, style, videos, advertising campaigns ABBA was widely noted for the colourful and trend-setting costumes its members wore. The reason for the wild costumes was Swedish tax law: the cost of the clothes was deductible only if they could not be worn other than for performances. Choreography by Graham Tainton also contributed to their performance style. The videos that accompanied some of the band's biggest hits are often cited as being among the earliest examples of the genre. Most of ABBA's videos (and ABBA: The Movie) were directed by Lasse Hallström, who would later direct the films My Life as a Dog, The Cider House Rules and Chocolat. ABBA made videos because their songs were hits in many different countries and personal appearances were not always possible. This was also done in an effort to minimise travelling, particularly to countries that would have required extremely long flights. Fältskog and Ulvaeus had two young children and Fältskog, who was also afraid of flying, was very reluctant to leave her children for such a long time. ABBA's manager, Stig Anderson, realised the potential of showing a simple video clip on television to publicise a single or album, thereby allowing easier and quicker exposure than a concert tour. 
Some of these videos have become classics because of the 1970s-era costumes and early video effects, such as the grouping of the band members in different combinations of pairs, overlapping one singer's profile with the other's full face, and the contrasting of one member against another. In 1976, ABBA participated in an advertising campaign to promote the Matsushita Electric Industrial Co.'s brand, National, in Australia. The campaign was also broadcast in Japan. Five commercial spots, each of approximately one minute, were produced, each presenting the "National Song" performed by ABBA using the melody and instrumental arrangements of "Fernando" and revised lyrics. Political use of ABBA's music In September 2010, band members Andersson and Ulvaeus criticised the right-wing Danish People's Party (DF) for using the ABBA song "Mamma Mia" (with modified lyrics referencing Pia Kjærsgaard) at rallies. The band threatened to file a lawsuit against the DF, saying they never allowed their music to be used politically and that they had absolutely no interest in supporting the party. Their record label Universal Music later said that no legal action would be taken because an agreement had been reached. Success in the United States During their active career, from 1972 to 1982, 20 of ABBA's singles entered the Billboard Hot 100; 14 of these made the Top 40 (13 on the Cashbox Top 100), with 10 making the Top 20 on both charts. A total of four of those singles reached the Top 10, including "Dancing Queen", which reached number one in April 1977. While "Fernando" and "SOS" did not break the Top 10 on the Billboard Hot 100 (reaching number 13 and 15 respectively), they did reach the Top 10 on Cashbox ("Fernando") and Record World ("SOS") charts. Both "Dancing Queen" and "Take a Chance on Me" were certified gold by the Recording Industry Association of America for sales of over one million copies each. 
The group also had 12 Top 20 singles on the Billboard Adult Contemporary chart, with two of them, "Fernando" and "The Winner Takes It All", reaching number one. "Lay All Your Love on Me" was ABBA's fourth number-one single on a Billboard chart, topping the Hot Dance Club Play chart. Ten ABBA albums have made their way into the top half of the Billboard 200 album chart, with eight reaching the Top 50, five reaching the Top 20 and one reaching the Top 10. In November 2021, Voyage became ABBA's highest-charting album on the Billboard 200, peaking at No. 2. Five albums received RIAA gold certification (more than 500,000 copies sold), while three acquired platinum status (selling more than one million copies). The compilation album ABBA Gold: Greatest Hits topped the Billboard Top Pop Catalog Albums chart in August 2008 (15 years after it was first released in the US in 1993), becoming the group's first number-one album ever on any of the Billboard album charts. It has sold 6 million copies there. On 15 March 2010, ABBA were inducted into the Rock and Roll Hall of Fame by Bee Gees members Barry Gibb and Robin Gibb. The ceremony was held at the Waldorf Astoria Hotel in New York City. The group were represented by Anni-Frid Lyngstad and Benny Andersson. In November 2021, ABBA received a Grammy nomination for Record of the Year for the single "I Still Have Faith in You", from the album Voyage; it was their first-ever nomination. Band members Agnetha Fältskog – lead and backing vocals Anni-Frid "Frida" Lyngstad – lead and backing vocals Björn Ulvaeus – guitars, backing and lead vocals Benny Andersson – keyboards, synthesizers, piano, accordion, guitars, backing and lead vocals The members of ABBA were married as follows: Agnetha Fältskog and Björn Ulvaeus from 1971 to 1980; Benny Andersson and Anni-Frid Lyngstad from 1978 to 1981. In addition to the four members of ABBA, other musicians played on their studio recordings, live appearances and concert performances. 
These include Rutger Gunnarsson (1972–1982) bass guitar and string arrangements, Ola Brunkert (1972–1981) drums, Mike Watson (1972–1980) bass guitar, Janne Schaffer (1972–1982) lead electric guitar, Roger Palm (1972–1979) drums, Malando Gassama (1973–1979) percussion, Lasse Wellander (1974–2021) lead electric guitar, and Per Lindvall (1980–2021) drums. ABBA-related tributes Musical groups Abbaesque – An Irish ABBA tribute band A-Teens – A pop music group from Stockholm, Sweden Björn Again – An Australian tribute band; notable as the earliest-formed ABBA tribute band (1988) and, as of 2021, still touring. Gabba – An ABBA–Ramones tribute band that covers the former in the style of the latter, the name being a reference to the Ramones catchphrase "Gabba Gabba Hey". Media Saturday Night (1975) (TV) – Season 1, Episode 5 (hosted by Robert Klein, with musical numbers by ABBA and Loudon Wainwright III) Abbacadabra – A French children's musical based on songs from ABBA Abba-esque – A 1992 cover EP by Erasure Abbasalutely – A compilation album released in 1995 as a tribute album to ABBA Mamma Mia! – A musical stage show based on songs of ABBA ABBAmania – An ITV programme and tribute album to ABBA released in 1999 Mamma Mia! – A film adaptation of the musical stage show Mamma Mia! 
Here We Go Again – A prequel/sequel to the original film ABBA: You Can Dance – A dance video game released by Ubisoft in 2011 with songs from ABBA, and also a spin-off of the Just Dance video game series Dancing Queen – A 2018 cover album by Cher Discography Studio albums Ring Ring (1973) Waterloo (1974) ABBA (1975) Arrival (1976) The Album (1977) Voulez-Vous (1979) Super Trouper (1980) The Visitors (1981) Voyage (2021) Tours 1973: Swedish Folkpark Tour 1974–1975: European Tour 1977: European & Australian Tour 1979–1980: ABBA: The Tour 2022: ABBA Voyage Awards and nominations See also ABBA: The Museum ABBA City Walks – Stockholm City Museum ABBAMAIL List of best-selling music artists List of Swedes in music Music of Sweden Popular music in Sweden Citations References Bibliography Further reading Benny Andersson, Björn Ulvaeus, Judy Craymer: Mamma Mia! How Can I Resist You?: The Inside Story of Mamma Mia! and the Songs of ABBA. Weidenfeld & Nicolson, 2006. Carl Magnus Palm: ABBA – The Complete Recording Sessions (1994). Carl Magnus Palm: From "ABBA" to "Mamma Mia!" (2000). Elisabeth Vincentelli: ABBA Treasures: A Celebration of the Ultimate Pop Group. Omnibus Press, 2010. Oldham, Andrew, Calder, Tony & Irvin, Colin: ABBA: The Name of the Game (1995). Potiez, Jean-Marie: ABBA – The Book (2000). Simon Sheridan: The Complete ABBA. Titan Books, 2012. Anna Henker (ed.), Astrid Heyde (ed.): Abba – Das Lexikon. Northern Europe Institut, Humboldt-University Berlin, 2015 (German). Steve Harnell (ed.): Classic Pop Presents Abba: A Celebration. Classic Pop Magazine (special edition), November 2016. Documentaries A for ABBA. BBC, 20 July 1993. Thierry Lecuyer, Jean-Marie Potiez: Thank You ABBA. Willow Wil Studios/A2C Video, 1993. Barry Barnes: ABBA − The History. Polar Music International AB, 1999. Chris Hunt: The Winner Takes It All − The ABBA Story. Littlestar Services/Iambic Productions, 1999. Steve Cole, Chris Hunt: Super Troupers − Thirty Years of ABBA. BBC, 2004. The Joy of ABBA. 
BBC 4, 27 December 2013 (BBC page) Carl Magnus Palm, Roger Backlund: ABBA – When Four Became One. SVT, 2 January 2012 Carl Magnus Palm, Roger Backlund: ABBA – Absolute Image. SVT, 2 January 2012 ABBA – Bang a boomerang. ABC 1, 30 January 2013 (ABC page) ABBA: When All Is Said and Done, 2017 Thank You for the Music. Sunday Night (7 News), 1 October 2019 External links Official ABBA Voyage website Owen Gleiberman: The Secret Majesty of ABBA: They Were the Feminine Pop Opera of Their Time. Variety, 22 July 2018 Barry Walters: ABBA's Essential, Influential Melancholy. NPR, 23 May 2015 Jackie Mansky: What's Behind ABBA's Staying Power? Smithsonian, 20 July 2018 ABBAinter.net TV-performances archive ABBA Songs – ABBA Album and Song details. Abba – The Articles – extensive collection of contemporary international newspaper and magazine articles on Abba 1972 establishments in Sweden Atlantic Records artists English-language singers from Sweden Epic Records artists Eurodisco groups Eurovision Song Contest entrants for Sweden Eurovision Song Contest entrants of 1974 Eurovision Song Contest winners Melodifestivalen contestants Melodifestivalen winners Musical groups disestablished in 1982 Musical groups established in 1972 Musical groups from Stockholm Musical groups reestablished in 2018 Musical quartets Palindromes RCA Records artists Schlager groups Swedish dance music groups Swedish musical groups Swedish pop music groups Swedish pop rock music groups Swedish-language singers
881
https://en.wikipedia.org/wiki/Allegiance
Allegiance
An allegiance is a duty of fidelity said to be owed, or freely committed, by the people, subjects or citizens to their state or sovereign. Etymology From Middle English ligeaunce (see medieval Latin ligeantia, "a liegance"). The al- prefix was probably added through confusion with another legal term, allegeance, an "allegation" (the French allegeance comes from the English). Allegiance is formed from "liege," from Old French liege, "liege, free", of Germanic origin. The connection with Latin ligare, "to bind," is erroneous. Usage Traditionally, English legal commentators used the term allegiance in two ways. In one sense, it referred to the deference which anyone, even foreigners, was expected to pay to the institutions of the country where one lived. In the other sense, it meant national character and the subjection due to that character. Types Local allegiance Natural allegiance United Kingdom The English doctrine, which was at one time adopted in the United States, asserted that allegiance was indelible: "Nemo potest exuere patriam". As the law stood prior to 1870, every person who by birth or naturalisation satisfied the conditions set forth, even if removed in infancy to another country where their family resided, owed an allegiance to the British crown which they could never resign or lose, except by act of parliament or by the recognition of the independence or the cession of the portion of British territory in which they resided. This refusal to accept any renunciation of allegiance to the Crown led to conflict with the United States over impressment, which led to further conflicts during the War of 1812, when thirteen Irish American prisoners of war were executed as traitors after the Battle of Queenston Heights; Winfield Scott urged American reprisal, but none was carried out. Allegiance was the tie which bound the subject to the sovereign, in return for that protection which the sovereign afforded the subject. 
It was the mutual bond and obligation between monarch and subjects, whereby subjects were called their liege subjects, because they are bound to obey and serve them; and the monarch was called their liege lord, because they should maintain and defend them (Ex parte Anderson (1861) 3 El & El 487; 121 ER 525; China Navigation Co v Attorney-General (1932) 48 TLR 375; Attorney-General v Nissan [1969] 1 All ER 629; Oppenheimer v Cattermole [1972] 3 All ER 1106). The duty of the crown towards its subjects was to govern and protect them. The reciprocal duty of the subject towards the crown was that of allegiance. At common law, allegiance was a true and faithful obedience of the subject due to their sovereign. As the subject owed their sovereign true and faithful allegiance and obedience, so the sovereign owed the subject government and protection (Calvin's Case (1608) 7 Co Rep 1a; Jenk 306; 2 State Tr 559; 77 ER 377). Natural allegiance and obedience is an incident inseparable from every subject, for as soon as they are born they owe by birthright allegiance and obedience to the sovereign (Ex parte Anderson (1861) 3 El & El 487; 121 ER 525). Natural-born subjects owe allegiance wherever they may be. Where territory is occupied in the course of hostilities by an enemy's force, even if the annexation of the occupied country is proclaimed by the enemy, there can be no change of allegiance during the progress of hostilities on the part of a citizen of the occupied country (R v Vermaak (1900) 21 NLR 204 (South Africa)). Allegiance is owed both to the sovereign as a natural person and to the sovereign in the political capacity (Re Stepney Election Petition, Isaacson v Durant (1886) 17 QBD 54 (per Lord Coleridge CJ)). Attachment to the person of the reigning sovereign is not sufficient.
Loyalty requires affection also to the office of the sovereign, attachment to royalty, attachment to the law and to the constitution of the realm, and he who would, by force or by fraud, endeavour to prostrate that law and constitution, though he may retain his affection for its head, can boast but an imperfect and spurious species of loyalty (R v O'Connell (1844) 7 ILR 261). There were four kinds of allegiances (Rittson v Stordy (1855) 3 Sm & G 230; De Geer v Stone (1882) 22 Ch D 243; Isaacson v Durant (1886) 54 LT 684; Gibson, Gavin v Gibson [1913] 3 KB 379; Joyce v DPP [1946] AC 347; Collingwood v Pace (1661) O Bridg 410; Lane v Bennett (1836) 1 M & W 70; Lyons Corp v East India Co (1836) 1 Moo PCC 175; Birtwhistle v Vardill (1840) 7 Cl & Fin 895; R v Lopez, R v Sattler (1858) Dears & B 525; Ex p Brown (1864) 5 B & S 280); (a) Ligeantia naturalis, absoluta, pura et indefinita, and this originally is due by nature and birthright, and is called alta ligeantia, and those that owe this are called subditus natus; (b) Ligeantia acquisita, not by nature but by acquisition or denization, being called a denizen, or rather denizon, because they are subditus datus; (c) Ligeantia localis, by operation of law, when a friendly alien enters the country, because so long as they are in the country they are within the sovereign's protection, therefore they owe the sovereign a local obedience or allegiance (R v Cowle (1759) 2 Burr 834; Low v Routledge (1865) 1 Ch App 42; Re Johnson, Roberts v Attorney-General [1903] 1 Ch 821; Tingley v Muller [1917] 2 Ch 144; Rodriguez v Speyer [1919] AC 59; Johnstone v Pedlar [1921] 2 AC 262; R v Tucker (1694) Show Parl Cas 186; R v Keyn (1876) 2 Ex D 63; Re Stepney Election Petn, Isaacson v Durant (1886) 17 QBD 54); (d) A legal obedience, where a particular law requires the taking of an oath of allegiance by subject or alien alike. 
Natural allegiance was acquired by birth within the sovereign's dominions (except for the issue of diplomats or of invading forces or of an alien in an enemy occupied territory). The natural allegiance and obedience are an incident inseparable from every subject, for as soon as they are born they owe by birthright allegiance and obedience to the Sovereign (Ex p. Anderson (1861) 3 E & E 487). A natural-born subject owes allegiance wherever they may be, so that where territory is occupied in the course of hostilities by an enemy's force, even if the annexation of the occupied country is proclaimed by the enemy, there can be no change of allegiance during the progress of hostilities on the part of a citizen of the occupied country (R v Vermaak (1900) 21 NLR 204 (South Africa)). Acquired allegiance was acquired by naturalisation or denization. Denization, or ligeantia acquisita, appears to be threefold (Thomas v Sorrel (1673) 3 Keb 143); (a) absolute, as the common denization, without any limitation or restraint; (b) limited, as when the sovereign grants letters of denization to an alien, and the alien's male heirs, or to an alien for the term of their life; (c) It may be granted upon condition, cujus est dare, ejus est disponere, and this denization of an alien may come about three ways: by parliament; by letters patent, which was the usual manner; and by conquest. Local allegiance was due by an alien while in the protection of the crown. All friendly resident aliens incurred all the obligations of subjects (The Angelique (1801) 3 Ch Rob App 7). An alien, coming into a colony, also became, temporarily, a subject of the crown, and acquired rights both within and beyond the colony, and these latter rights could not be affected by the laws of that colony (Routledge v Low (1868) LR 3 HL 100; 37 LJ Ch 454; 18 LT 874; 16 WR 1081, HL; Reid v Maxwell (1886) 2 TLR 790; Falcon v Famous Players Film Co [1926] 2 KB 474). 
A resident alien owed allegiance even when the protection of the crown was withdrawn owing to the occupation of an enemy, because the absence of the crown's protection was temporary and involuntary (de Jager v Attorney-General of Natal [1907] AC 326). Legal allegiance was due when an alien took an oath of allegiance required for a particular office under the crown. By the Naturalisation Act 1870, it was made possible for British subjects to renounce their nationality and allegiance, and the ways in which that nationality is lost were defined. So British subjects voluntarily naturalized in a foreign state are deemed aliens from the time of such naturalization, unless, in the case of persons naturalized before the passing of the act, they had declared their desire to remain British subjects within two years from the passing of the act. Persons who, from having been born within British territory, are British subjects, but who, at birth, became, under the law of any foreign state, subjects of such state, and, also, persons who, though born abroad, are British subjects by reason of parentage, may, by declarations of alienage, get rid of British nationality. Emigration to an uncivilized country left British nationality unaffected: indeed the right claimed by all states to follow with their authority their subjects so emigrating was one of the usual and recognized means of colonial expansion. United States The doctrine that no man can cast off his native allegiance without the consent of his sovereign was early abandoned in the United States, and Chief Justice John Rutledge declared in Talbot v. Janson, "a man may, at the same time, enjoy the rights of citizenship under two governments." On July 27, 1868, the day before the Fourteenth Amendment was adopted, U.S.
Congress declared in the preamble of the Expatriation Act that "the right of expatriation is a natural and inherent right of all people, indispensable to the enjoyment of the rights of life, liberty and the pursuit of happiness," and (Section I) one of "the fundamental principles of this government" (United States Revised Statutes, sec. 1999). Every natural-born citizen of a foreign state who is also an American citizen, and every natural-born American citizen who is also a citizen of a foreign land, owes a double allegiance, one to the United States, and one to their homeland (in the event of an immigrant becoming a citizen of the US) or to their adopted land (in the event of an emigrant natural-born citizen of the US becoming a citizen of another nation). If these allegiances come into conflict, the person may be guilty of treason against one or both. If the demands of these two sovereigns upon their duty of allegiance come into conflict, those of the United States have the paramount authority in American law; likewise, those of the foreign land have paramount authority in their legal system. In such a situation, it may be incumbent on the individual to renounce one of their citizenships, to avoid possibly being forced into situations where countervailing duties are required of them, such as might occur in the event of war. Oath of allegiance The oath of allegiance is an oath of fidelity to the sovereign taken by all persons holding important public office and as a condition of naturalization. By ancient common law, it was required of all persons above the age of 12, and it was repeatedly used as a test for the disaffected. In England, it was first imposed by statute in the reign of Elizabeth I (1558), and its form has, more than once, been altered since. 
Up to the time of the revolution, the promise was "to be true and faithful to the king and his heirs, and truth and faith to bear of life and limb and terrene honour, and not to know or hear of any ill or damage intended him without defending him therefrom." This was thought to favour the doctrine of absolute non-resistance, and, accordingly, the Convention Parliament enacted the form that has been in use since that time – "I do sincerely promise and swear that I will be faithful and bear true allegiance to His Majesty ..." In the United States and some other republics, the oath is known as the Pledge of Allegiance. Instead of declaring fidelity to a monarch, the pledge is made to the flag, the republic, and to the core values of the country, specifically liberty and justice. The reciting of the pledge in the United States is voluntary because of the rights guaranteed to the people under the First Amendment to the United States Constitution - specifically, the guarantee of freedom of speech, which inherently includes the freedom not to speak. In Islam The word used in the Arabic language for allegiance is bay'at (Arabic: بيعة), which means "taking hand". The practice is sanctioned in the Quran by Surah 48:10: "Verily, those who give thee their allegiance, they give it but to Allah Himself". The word is used for the oath of allegiance to an emir. It is also used for the initiation ceremony specific to many Sufi orders. See also Impressment Legitimacy (political) Mandate of Heaven Renunciation of citizenship Treason Usurpation War of 1812 Winfield Scott References Further reading Salmond on "Citizenship and Allegiance," in the Law Quarterly Review (July 1901, January 1902). Nationalism de:Loyalität es:Lealtad ko:충 no:Lojalitet sv:Lojalitet
885
https://en.wikipedia.org/wiki/Altenberg
Altenberg
Altenberg (German for "old mountain" or "mountain of the old") may refer to: Places Austria Altenberg, a town in Sankt Andrä-Wördern, Tulln District Altenberg bei Linz, in Upper Austria Altenberg an der Rax, in Styria Germany Altenberg (Bergisches Land), an area in Odenthal, North Rhine-Westphalia, Germany Altenberg Abbey, Cistercian monastery in Altenberg (Bergisches Land) Altenberger Dom, sometimes called Altenberg Cathedral, the former church of this Cistercian monastery Altenberg, Saxony, a town in the Free State of Saxony Altenberga, a municipality in the Saale-Holzland district, Thuringia Altenberg Abbey, Solms, a former Premonstratensian nunnery near Wetzlar in Hesse Zinkfabrik Altenberg, a former zinc factory, now a branch of the LVR Industrial Museum, Oberhausen, North Rhine-Westphalia Grube Altenberg, a show mine near Kreuztal, North Rhine-Westphalia Other places Altenberg, the German name for Vieille Montagne ("old mountain" in French), a former zinc mine in Kelmis, Moresnet, Belgium Altenberg, a district in the city of Bern, Switzerland Other uses Altenberg Lieder (Five Orchestral Songs), composed by Alban Berg in 1911/12 Altenberg Publishing (1880–1934), a former Polish publishing house Altenberg Trio, a Viennese piano trio People with the surname Jakob Altenberg (1875–1944), Austrian businessman Lee Altenberg, theoretical biologist Peter Altenberg (1859–1919), nom de plume of Austrian writer and poet Richard Engländer See also Altenburg (disambiguation)
887
https://en.wikipedia.org/wiki/MessagePad
MessagePad
The MessagePad is a discontinued series of personal digital assistant devices developed by Apple for the Newton platform in 1993. Some of the electronic engineering and the manufacture of Apple's MessagePad devices were undertaken in Japan by Sharp. The devices were based on the ARM 610 RISC processor, all featured handwriting recognition software, and were developed and marketed by Apple. The devices ran the Newton OS. History Development of the Newton MessagePad began under Apple's former senior vice president of research and development, Jean-Louis Gassée, whose team included Steve Capps, co-writer of the original Macintosh Finder, and an engineer named Steve Sakoman. Development proceeded in secret until the project was eventually revealed to the Apple Board of Directors in late 1990. When Gassée resigned from his position over a significant disagreement with the board, Sakoman, seeing how his employer had been treated, also stopped work on the MessagePad, on March 2, 1990. Bill Atkinson, the Apple executive responsible for the Lisa's graphical interface, invited Steve Capps, John Sculley, Andy Hertzfeld, Susan Kare, and Marc Porat to a meeting on March 11, 1990. There, they brainstormed ways of saving the MessagePad. Sculley suggested adding new features, including libraries, museums, databases, or institutional archives, allowing customers to navigate through various window tabs or opened galleries/stacks. The Board later approved his suggestion, and Sculley then gave Newton his official and full backing. Sculley unveiled the first MessagePad on May 29, 1992, at the summer Consumer Electronics Show (CES) in Chicago. Even so, Sculley had caved in to pressure and announced too early: the Newton did not officially ship for another 14 months, on August 2, 1993. Over 50,000 units were sold by late November 1993, at prices ranging from $900 to $1,569.
Details Screen and input With the MessagePad 120 running Newton OS 2.0, Apple's Newton Keyboard became available; it can also be used via a dongle on Newton devices with a Newton InterConnect port, most notably the Apple MessagePad 2000/2100 series, as well as the Apple eMate 300. Newton devices featuring Newton OS 2.1 or higher can be used with the screen turned horizontally ("landscape") as well as vertically ("portrait"). A change of a setting rotates the contents of the display by 90, 180 or 270 degrees. Handwriting recognition still works properly with the display rotated, although display calibration is needed when rotation in any direction is used for the first time or when the Newton device is reset. Handwriting recognition In initial versions (Newton OS 1.x) the handwriting recognition gave extremely mixed results for users and was sometimes inaccurate. The original handwriting recognition engine was called Calligrapher, and was licensed from a Russian company called Paragraph International. Calligrapher's design was quite sophisticated; it attempted to learn the user's natural handwriting, using a database of known words to make guesses as to what the user was writing, and could interpret writing anywhere on the screen, whether hand-printed, in cursive, or a mix of the two. By contrast, Palm Pilot's Graffiti had a less sophisticated design than Calligrapher, but was sometimes found to be more accurate and precise due to its reliance on a fixed, predefined stroke alphabet. The stroke alphabet used letter shapes which resembled standard handwriting, but which were modified to be both simple and very easy to differentiate. Palm Computing also released two versions of Graffiti for Newton devices. The Newton version sometimes performed better and could also show strokes as they were being written, since input was done on the display itself rather than on a silkscreen area.
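Calligrapher's actual implementation was proprietary, but the dictionary-guided guessing described above can be illustrated with a toy sketch (all names and numbers here are hypothetical): each written character yields a few candidate letters with confidences, and the engine picks the dictionary word that best explains the whole sequence.

```python
# Toy illustration (not Apple's Calligrapher): score dictionary words
# against per-character candidate lists and pick the most plausible word.

def best_word(char_candidates, dictionary):
    """char_candidates: one dict per written character, mapping each
    plausible letter to the recognizer's confidence in it (0.0-1.0)."""
    def score(word):
        if len(word) != len(char_candidates):
            return 0.0
        s = 1.0
        for letter, cands in zip(word, char_candidates):
            s *= cands.get(letter, 0.0)  # letter not a candidate => impossible
        return s
    return max(dictionary, key=score)

# 'c' vs 'e' and 'l' vs 't' are ambiguous; the dictionary disambiguates.
strokes = [
    {"c": 0.6, "e": 0.4},
    {"a": 0.9, "o": 0.1},
    {"l": 0.5, "t": 0.5},
]
print(best_word(strokes, ["cat", "eat", "col", "cot"]))  # prints: cat
```

A real recognizer would also weigh stroke geometry, letter-transition models, and user-specific training data; this sketch shows only the dictionary-lookup step.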
For editing text, Newton had a very intuitive system for handwritten editing, such as scratching out words to be deleted, circling text to be selected, or using written carets to mark inserts. Later releases of the Newton operating system retained the original recognizer for compatibility, but added a hand-printed-text-only (not cursive) recognizer, called "Rosetta", which was developed by Apple, included in version 2.0 of the Newton operating system, and refined in Newton 2.1. Rosetta is generally considered a significant improvement and many reviewers, testers, and most users consider the Newton 2.1 handwriting recognition software better than any of the alternatives even 10 years after it was introduced. Recognition and computation of handwritten horizontal and vertical formulas such as "1 + 2 =" was also under development but never released. However, users wrote similar programs which could evaluate mathematical formulas using the Newton OS Intelligent Assistant, a unique part of every Newton device. The handwriting recognition and parts of the user interface for the Newton are best understood in the context of the broad history of pen computing, which is quite extensive. A vital feature of the Newton handwriting recognition system is the modeless error correction. That is, correction done in situ without using a separate window or widget, using a minimum of gestures. If a word is recognized improperly, the user could double-tap the word and a list of alternatives would pop up in a menu under the stylus. Most of the time, the correct word will be in the list. If not, a button at the bottom of the list allows the user to edit individual characters in that word. Other pen gestures could do such things as transpose letters (also in situ). The correction popup also allowed the user to revert to the original, un-recognized letter shapes - this would be useful in note-taking scenarios if there was insufficient time to make corrections immediately. 
To conserve memory and storage space, alternative recognition hypotheses would not be saved indefinitely. If the user returned to a note a week later, for example, they would only see the best match. Error correction in many current handwriting systems provides such functionality but adds more steps to the process, greatly increasing the interruption to a user's workflow that a given correction requires. User interface Text could also be entered by tapping with the stylus on a small on-screen pop-up QWERTY virtual keyboard, although more layouts were developed by users. Newton devices could also accept free-hand "Sketches", "Shapes", and "Ink Text", much like a desktop computer graphics tablet. With "Shapes", Newton could recognize that the user was attempting to draw a circle, a line, a polygon, etc., and it would clean them up into perfect vector representations (with modifiable control points and defined vertices) of what the user was attempting to draw. "Shapes" and "Sketches" could be scaled or deformed once drawn. "Ink text" captured the user's free-hand writing but allowed it to be treated somewhat like recognized text when manipulating for later editing purposes ("ink text" supported word wrap, could be formatted to be bold, italic, etc.). At any time a user could also direct their Newton device to recognize selected "ink text" and turn it into recognized text (deferred recognition). A Newton note (or the notes attached to each contact in Names and each Dates calendar or to-do event) could contain any mix of interleaved text, Ink Text, Shapes, and Sketches. While the Newton offered handwriting recognition training and would clean up sketches into vector shapes, both were unreliable and required much rewriting and redrawing. The most reliable application of the Newton was collecting and organizing address and phone numbers. While handwritten messages could be stored, they could not be easily filed, sorted or searched. 
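The "Shapes" cleanup described above, recognizing that a rough stroke is meant to be a circle and replacing it with a perfect one, can be illustrated with a standard least-squares circle fit. This is a hypothetical sketch, not Newton's actual code: it solves x² + y² + ax + by + c = 0 for (a, b, c) by normal equations and recovers the center and radius.

```python
import math

# Toy "shape cleanup": fit a perfect circle to sampled stroke points
# via a least-squares (Kasa) fit, solving the 3x3 normal equations.

def fit_circle(points):
    # Each row of the design matrix is (x, y, 1); the target is -(x^2 + y^2).
    A = [[0.0] * 3 for _ in range(3)]
    rhs = [0.0] * 3
    for x, y in points:
        row, d = (x, y, 1.0), -(x * x + y * y)
        for i in range(3):
            rhs[i] += row[i] * d
            for j in range(3):
                A[i][j] += row[i] * row[j]
    # Tiny Gaussian elimination with partial pivoting.
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for j in range(col, 3):
                A[r][j] -= f * A[col][j]
            rhs[r] -= f * rhs[col]
    u = [0.0] * 3
    for r in (2, 1, 0):
        u[r] = (rhs[r] - sum(A[r][j] * u[j] for j in range(r + 1, 3))) / A[r][r]
    a, b, c = u
    cx, cy = -a / 2, -b / 2
    return cx, cy, math.sqrt(cx * cx + cy * cy - c)

# Samples of a circle centered at (2, 1) with radius 3.
pts = [(2 + 3 * math.cos(t / 10), 1 + 3 * math.sin(t / 10)) for t in range(63)]
cx, cy, r = fit_circle(pts)
print(round(cx, 2), round(cy, 2), round(r, 2))  # prints: 2.0 1.0 3.0
```

The fitted center and radius then define the "perfect vector representation" that replaces the rough stroke; analogous fits (lines, polygons) cover the other recognized shapes.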
While the technology was a probable cause for the failure of the device (which otherwise met or exceeded expectations), the technology has been instrumental in producing the future generation of handwriting software that realizes the potential and promise that began in the development of Newton-Apple's Ink Handwriting Recognition. Connectivity The MessagePad 100 series of devices used Macintosh's proprietary serial ports—round Mini-DIN 8 connectors. The MessagePad 2000/2100 models (as well as the eMate 300) have a small, proprietary Newton InterConnect port. However, the development of the Newton hardware/software platform was canceled by Steve Jobs on February 27, 1998, so the InterConnect port, while itself very advanced, can only be used to connect a serial dongle. A prototype multi-purpose InterConnect device containing serial, audio in, audio out, and other ports was also discovered. In addition, all Newton devices have infrared connectivity, initially only the Sharp ASK protocol, but later also IrDA, though the Sharp ASK protocol was kept in for compatibility reasons. Unlike the Palm Pilot, all Newton devices are equipped with a standard PC Card expansion slot (two on the 2000/2100). This allows native modem and even Ethernet connectivity; Newton users have also written drivers for 802.11b wireless networking cards and ATA-type flash memory cards (including the popular CompactFlash format), as well as for Bluetooth cards. Newton can also dial a phone number through the built-in speaker of the Newton device by simply holding a telephone handset up to the speaker and transmitting the appropriate tones. Fax and printing support is also built in at the operating system level, although it requires peripherals such as parallel adapters, PCMCIA cards, or serial modems, the most notable of which is the lightweight Newton Fax Modem released by Apple in 1993. It is powered by 2 AA batteries, and can also be used with a power adapter. 
It provides data transfer at 2,400 bit/s, and can also send and receive fax messages at 9,600 and 4,800 bit/s respectively. Power options The original Apple MessagePad and MessagePad 100 used four AAA batteries. They were eventually replaced by AA batteries with the release of the Apple MessagePad 110. The use of four AA NiCd cells (MessagePad 110, 120 and 130) or four AA NiMH cells (MP2x00 series, eMate 300) gives a runtime of up to 30 hours (MP2100 with two 20 MB Linear Flash memory PC Cards, no backlight usage) and up to 24 hours with backlight on. While adding more weight to the handheld Newton devices than AAA batteries or custom battery packs, the choice of an easily replaceable/rechargeable cell format gives the user a still unsurpassed runtime and flexibility of power supply. This, together with the flash memory used as internal storage starting with the Apple MessagePad 120 (if all cells lost their power, no data was lost due to the non-volatility of this storage), gave birth to the slogan "Newton never dies, it only gets new batteries". Later efforts and improvements The Apple MessagePad 2000/2100, with a vastly improved handwriting recognition system, 162 MHz StrongARM SA-110 RISC processor, Newton OS 2.1, and a better, clearer, backlit screen, attracted critical plaudits. eMate 300 The eMate 300 was a Newton device in a laptop form factor offered to schools in 1997 as an inexpensive ($799 US, originally sold to education markets only) and durable computer for classroom use. However, in order to achieve its low price, the eMate 300 did not have all the speed and features of the contemporary MessagePad equivalent, the MessagePad 2000. The eMate was cancelled along with the rest of the Newton products in 1998.
It is the only Newton device to use the ARM710 microprocessor (running at 25 MHz), to have an integrated keyboard, and to use Newton OS 2.2 (officially numbered 2.1); its batteries are not officially replaceable, although several users replaced them with longer-lasting ones without any damage to the eMate hardware whatsoever. Prototypes Many prototypes of additional Newton devices were spotted. Most notable was a Newton tablet or "slate", a large, flat screen that could be written on. Others included a "Kids Newton" with side handgrips and buttons, "VideoPads" which would have incorporated a video camera and screen on their flip-top covers for two-way communications, the "Mini 2000" which would have been very similar to a Palm Pilot, and the NewtonPhone developed by Siemens, which incorporated a handset and a keyboard. Market reception Fourteen months after Sculley demoed it at the May 1992 Chicago CES, the MessagePad was first offered for sale on August 2, 1993, at the Boston Macworld Expo. The hottest item at the show, it cost $900. 50,000 MessagePads were sold in the device's first three months on the market. The original Apple MessagePad and MessagePad 100 were limited by the very short lifetime of their inadequate AAA batteries. Critics also panned the handwriting recognition that was available in the debut models, which had been trumpeted in the Newton's marketing campaign. It was this problem that was skewered in the Doonesbury comic strips in which a written text entry is (erroneously) translated as "Egg Freckles?", as well as in the animated series The Simpsons. However, the word 'freckles' was not included in the Newton dictionary, although a user could add it themselves.
Another factor which limited the early Newton devices' appeal was that desktop connectivity was not included in the basic retail package, a problem that was later solved with 2.x Newton devices - these were bundled with a serial cable and the appropriate Newton Connection Utilities software. Later versions of Newton OS offered improved handwriting recognition, quite possibly a leading reason for the continued popularity of the devices among Newton users. Even given the age of the hardware and software, Newtons still command a sale price on the used market far greater than that of comparatively aged PDAs produced by other companies. In 2006, CNET compared an Apple MessagePad 2000 to a Samsung Q1, and the Newton was declared better. In 2009, CNET compared an Apple MessagePad 2000 to an iPhone 3GS, and the Newton was declared more innovative at its time of release. A chain of dedicated Newton-only stores called Newton Source existed from 1994 until 1998. Locations included New York, Los Angeles, San Francisco, Chicago and Boston. The Westwood Village store in California, near U.C.L.A., featured the trademark red and yellow light bulb Newton logo in neon. The stores provided an informative educational venue to learn about the Newton platform in a hands-on, relaxed fashion. The stores had no traditional computer retail counters and featured oval desktops where interested users could become intimately involved with the Newton product range. The stores were a model for the later Apple Stores.
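The tone-dialing feature mentioned under Connectivity (holding a telephone handset up to the Newton's speaker) works because telephone exchanges accept standard DTMF signaling: each key is encoded as the sum of one low-group and one high-group sine wave. A minimal, hypothetical sketch of such a dialer follows; the key layout and frequency pairs come from the DTMF standard, while the sample rate and duration are arbitrary choices.

```python
import math

# Hypothetical sketch of DTMF tone dialing: each key on a 12-key pad
# maps to one low-group (row) and one high-group (column) frequency.

LOW = [697, 770, 852, 941]    # Hz, keypad rows
HIGH = [1209, 1336, 1477]     # Hz, keypad columns
KEYS = "123456789*0#"         # row-major layout of a 12-key pad

def dtmf_pair(key):
    """Return the (low, high) frequency pair for a keypad key."""
    i = KEYS.index(key)
    return LOW[i // 3], HIGH[i % 3]

def tone(key, seconds=0.2, rate=8000):
    """Synthesize the key's tone as a list of samples in [-1.0, 1.0]."""
    f_lo, f_hi = dtmf_pair(key)
    return [0.5 * (math.sin(2 * math.pi * f_lo * n / rate) +
                   math.sin(2 * math.pi * f_hi * n / rate))
            for n in range(int(seconds * rate))]

print(dtmf_pair("5"))  # prints: (770, 1336)
```

Playing the samples for each digit in sequence, with short silences between them, is all a speaker-based dialer needs; the phone network decodes the frequency pairs back into digits.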
Newton device models {| class="wikitable" |+ !Brand | colspan="2" |Apple |Sharp |Siemens | colspan="2" |Apple |Sharp |Apple |Digital Ocean |Motorola |Harris |Digital Ocean | colspan="4" |Apple | colspan="3" |Harris |Siemens |Schlumberger |- !Device |OMP (Original Newton MessagePad) |Newton "Dummy" |ExpertPad PI-7000 |Notephone.[better source needed] |MessagePad 100 |MessagePad 110 |Sharp ExpertPad PI-7100 |MessagePad 120 |Tarpon |Marco |SuperTech 2000 |Seahorse |MessagePad 130 |eMate 300 |MessagePad 2000 |MessagePad 2100 |Access Device 2000 |Access Device, GPS |Access Device, Wireline |Online Terminal, also known as Online Access Device(OAD) |Watson |- !Introduced |August 3, 1993 (US) December 1993 (Germany) |? |August 3, 1993(US), ? (Japan) |1993? | colspan="2" |March 1994 |April 1994 |October 1994 (Germany), January 1995 (US) | colspan="2" |January 1995 (US) |August 1995 in the US |January 1996 in the US |March 1996 | colspan="2" |March 1997 |November 1997 | colspan="3" |1998 |Announced 1997 |? |- !Discontinued | colspan="3" |March 1994 |? | colspan="2" |April 1995 |late 1994 |June 1996 |? |? |? |? |April 1997 | colspan="3" |February 1998 | | | | | |- !Code name |Junior | |? |? |Junior |Lindy |? |Gelato |? |? |? |? |Dante |? |Q |? | | | | | |- !Model No. |H1000 | |? |? |H1000 |H0059 |? |H0131 |? |? |? |? |H0196 |H0208 |H0136 |H0149 | | | | | |- !Processor | colspan="13" |ARM 610 (20 MHz) |ARM 710a (25 MHz) | colspan="7" |StrongARM SA-110 (162 MHz) |- !ROM | colspan="7" |4 MB | colspan="2" |4 MB (OS 1.3) or 8 MB (OS 2.0) |5 MB |4 MB | colspan="5" |8 MB | | | | | |- !System Memory (RAM) | colspan="5" |490 KB* SRAM |544 KB SRAM |490 KB* SRAM | colspan="2" |639/687 KB DRAM |544 KB SRAM |639 KB DRAM | colspan="2" |1199 KB DRAM |1 MB DRAM (Upgradable) |1 MB DRAM |4 MB DRAM | colspan="3" |1 MB DRAM |? 
|1 MB DRAM |- !User Storage | colspan="5" |150 KB* SRAM |480 KB SRAM |150 KB* SRAM | colspan="2" |385/1361 KB Flash RAM |480 KB SRAM |385 KB Flash RAM | colspan="2" |1361 KB Flash RAM |2 MB Flash RAM(Upgradable) | colspan="5" |4 MB Flash RAM |? |4 MB Flash RAM |- !Total RAM | colspan="5" |640 KB |1 MB |640 KB | colspan="2" |1.0/2.0 MB | colspan="2" |1 MB | colspan="2" |2.5 MB |3 MB (Upgradable via Internal Expansion) |5 MB |8 MB | colspan="3" |5 MB |? |5 MB |- !Display | colspan="5" |336 × 240 (B&W) |320 × 240 (B&W) |336 × 240 (B&W) |320 × 240 (B&W) |320 × 240 (B&W) w/ backlight |320 × 240 (B&W) | colspan="3" |320 × 240 (B&W) w/ backlight | colspan="6" |480 × 320 grayscale (16 shades) w/ backlight | |480 × 320 greyscale (16 shades) w/ backlight |- !Newton OS version | colspan="3" |1.0 to 1.05, or 1.10 to 1.11 |1.11 | colspan="2" |1.2 or 1.3 |1.3 | colspan="2" |1.3 or 2.0 | colspan="2" |1.3 | colspan="2" |2.0 |2.1 (2.2) | colspan="2" |2.1 | colspan="5" |2.1 |- !Newton OS languages |English or German | |English or Japanese |German |English, German or French |English or French |English or Japanese |English, German or French | colspan="4" |English |English or German | colspan="2" |English |English or German | colspan="3" |English |German |French |- !Connectivity | colspan="3" |RS422, LocalTalk & SHARP ASK Infrared |Modem and Telephone dock Attachment | colspan="4" |RS422, LocalTalk & SHARP ASK Infrared |RS422, LocalTalk & SHARP ASK Infrared |RS422, LocalTalk, Infrared, ARDIS Network |RS232, LocalTalk WLAN, V.22bis modem, Analog/Digital Cellular, CDPD, RAM, ARDIS , Trunk Radio |RS232, LocalTalk, CDPD, WLAN, Optional dGPS, GSM, or IR via modular attachments |RS422, LocalTalk & SHARP ASK Infrared |IrDA, headphone port, Interconnect port, LocalTalk, Audio I/O, Autodock |Dual-mode IR;IrDA & SHARP ASK, LocalTalk, Audio I/O, Autodock, Phone I/O |Dual-mode IR; IrDA & SHARP ASK, LocalTalk, Audio I/O, Autodock | colspan="3" |Dual-mode IR;IrDA & SHARP ASK, LocalTalk, Audio I/O, 
Autodock, Phone I/O |? |Dual-mode IR;IrDA & SHARP ASK, LocalTalk, Audio I/O, Autodock, Phone I/O |- !PCMCIA | colspan="13" |1 PCMCIA-slot II, 5v or 12v |1 PCMCIA-slot I/II/III, 5v | colspan="2" |2 PCMCIA-slot II, 5v or 12v | colspan="2" |1 PCMCIA-slot II, 5v or 12v |1 PCMCIA-slot II, 5v or 12v, 2nd slot Proprietary Radio Card | colspan="2" |1 PCMCIA-slot II, 5v or 12v, 1 Smart Card Reader |- !Power | colspan="5" |4 AAA or NiCd rechargeable or external power supply |4 AA or NiCd rechargeable or external power supply |4 AAA or NiCd rechargeable or external power supply |4 AA or NiCd rechargeable or external power supply | colspan="2" |NiCd battery pack or external power supply |4 AA or NiCd rechargeable or external power supply |NiCd battery pack or external power supply |4 AA or NiCd rechargeable or external power supply |NiMH battery pack (built-in) or external power supply | colspan="2" |4 AA or NiMH rechargeable or external power supply | colspan="3" |Custom NiMH rechargeable or external power supply |? Unknown, but likely external power supply |4 AA or NiMH rechargeable or external power supply |- !Dimensions (HxWxD) | | | (lid open) | colspan="2" | | | (lid open) | | | |? | | | | colspan="2" | |? |? |? |9 x 14.5 x 5.1 inches (23 x 37 x 13 cm) |? |- !Weight | | | with batteries installed | | | with batteries installed | with batteries installed |with batteries installed | | |? | | with batteries installed | | colspan="2" | |? |? |? |? |? |} * Varies with Installed OS Notes: The eMate 300 actually has ROM chips silk-screened with 2.2 on them. Stephanie Mak on her website discusses this: If one removes all patches to the eMate 300 (by replacing the ROM chip, and then putting in the original one again, as the eMate and the MessagePad 2000/2100 devices erase their memory completely after replacing the chip), the result will be the Newton OS saying that this is version 2.2.00.
Also, the Original MessagePad and the MessagePad 100 share the same model number, as they only differ in the ROM chip version. (The OMP has OS versions 1.0 to 1.05, or 1.10 to 1.11, while the MP100 has 1.3 that can be upgraded with various patches.) Other uses There were a number of projects that used the Newton as a portable information device in cultural settings such as museums. For example, Visible Interactive created a walking tour in San Francisco's Chinatown, but the most significant effort took place in Malaysia at the Petronas Discovery Center, known as Petrosains. In 1995, an exhibit design firm, DMCD Inc., was awarded the contract to design a new science museum in the Petronas Towers in Kuala Lumpur. A major factor in the award was the concept that visitors would use a Newton device to access additional information, find out where they were in the museum, listen to audio, see animations, control robots and other media, and bookmark information for printout at the end of the exhibit. The device became known as the ARIF, a Malay word for "wise man" or "seer"; it was also an acronym for A Resourceful Informative Friend. Some 400 ARIFs were installed and over 300 are still in use today. The development of the ARIF system was extremely complex and required a team of hardware and software engineers, designers, and writers. ARIF is an ancestor of the PDA systems used in museums today, and it boasted features that have not been attempted since. The firm Anyway & Company was involved with the Petronas Discovery Center project in 1998; NDAs were signed which prevent further information about the project from being disclosed. It was confirmed that the firm purchased MP2000u and MP2100 units on behalf of the project under the account name "Petrosains Project Account". By 1998 the firm had invested heavily in the R&D of this project, with the Newton at its center.
After Apple officially cancelled the Newton in 1998, the firm had to acquire as many Newtons as possible for the project. The initial estimate was 1,000 Newtons, later revised to roughly 750. The firm placed an "Internet call" for Newtons and purchased them in both large and small quantities. The Newton was also used in healthcare applications, for example in collecting data directly from patients. Newtons were used as electronic diaries, with patients entering their symptoms and other information concerning their health status on a daily basis. The compact size of the device and its ease of use made it possible for the electronic diaries to be carried around and used in the patients' everyday life setting. This was an early example of electronic patient-reported outcomes (ePRO). See also Newton (platform) Newton OS eMate 300 NewtonScript Orphaned technology Pen computing References Bibliography Apple's press release on the debut of the MessagePad 2100 Apple's overview of features & limitations of Newton Connection Utilities Newton overview at Newton Source archived from Apple Newton FAQ Pen Computing's First Look at Newton OS 2.0 Newton Gallery Birth of the Newton The Newton Hall of Fame: People behind the Newton Pen Computing's Why did Apple kill the Newton? Pen Computing's Newton Notes column archive A.I. Magazine article by Yaeger on Newton HWR design, algorithms, & quality and associated slides Info on Newton HWR from Apple's HWR Technical Lead External links Additional resources and information Defying Gravity: The Making of Newton, by Kounalakis & Menuez (Hardcover) Hardcover: 192 pages Publisher: Beyond Words Publishing (October 1993) Complete Developer's manual for the StrongARM SA-110 Beginner's overview of the StrongARM SA-110 Microprocessor Reviews MessagePad 2000 review at "The History and Macintosh Society" Prof. Wittmann's collection of Newton & MessagePad reviews Apple Newton Products introduced in 1993 Apple Inc.
personal digital assistants
888
https://en.wikipedia.org/wiki/A.%20E.%20van%20Vogt
A. E. van Vogt
Alfred Elton van Vogt (; April 26, 1912 – January 26, 2000) was a Canadian-born science fiction author. His fragmented, bizarre narrative style influenced later science fiction writers, notably Philip K. Dick. He was one of the most popular and influential practitioners of science fiction in the mid-twentieth century, the genre's so-called Golden Age, and one of the most complex. The Science Fiction Writers of America named him their 14th Grand Master in 1995 (presented 1996). Early life Alfred Vogt (both "Elton" and "van" were added much later) was born on April 26, 1912, on his grandparents' farm in Edenburg, Manitoba, a tiny (and now defunct) Russian Mennonite community east of Gretna, Manitoba, Canada, in the Mennonite West Reserve. He was the third of six children born to Heinrich "Henry" Vogt and Aganetha "Agnes" Vogt (née Buhr), both of whom were born in Manitoba and grew up in heavily immigrant communities. Until age four, van Vogt and his family spoke only Plautdietsch at home. For the first dozen or so years of his life, van Vogt's father, Henry Vogt, a lawyer, moved his family several times within western Canada, moving to Neville, Saskatchewan; Morden, Manitoba; and finally Winnipeg, Manitoba. Alfred Vogt found these moves difficult, later remarking: By the 1920s, living in Winnipeg, father Henry worked as an agent for a steamship company, but the stock market crash of 1929 proved financially disastrous, and the family could not afford to send Alfred to college. During his teen years, Alfred worked as a farmhand and a truck driver, and by the age of 19, he was working in Ottawa for the Canadian Census Bureau. He began his writing career with stories in the true confession style of pulp magazines such as True Story. Most of these stories were published anonymously, with the first-person narratives allegedly being written by people (often women) in extraordinary, emotional, and life-changing circumstances. 
After a year in Ottawa, he moved back to Winnipeg, where he sold newspaper advertising space and continued to write. While continuing to pen melodramatic "true confessions" stories through 1937, he also began writing short radio dramas for local radio station CKY, as well as conducting interviews published in trade magazines. He added the middle name "Elton" at some point in the mid-1930s, and at least one confessional story (1937's "To Be His Keeper") was sold to the Toronto Star, who misspelled his name "Alfred Alton Bogt" in the byline. Shortly thereafter, he added the "van" to his surname, and from that point forward he used the name "A. E. van Vogt" both personally and professionally. Career By 1938, van Vogt decided to switch to writing science fiction, a genre he enjoyed reading. He was inspired by the August 1938 issue of Astounding Science Fiction, which he picked up at a newsstand. John W. Campbell's novelette "Who Goes There?" (later adapted into The Thing from Another World and The Thing) inspired van Vogt to write "Vault of the Beast", which he submitted to that same magazine. Campbell, who edited Astounding (and had written the story under a pseudonym), sent van Vogt a rejection letter, but one which encouraged van Vogt to try again. Van Vogt sent another story, entitled "Black Destroyer", which was accepted. It featured a fierce, carnivorous alien stalking the crew of a spaceship, and served as the inspiration for multiple science fiction movies, including Alien (1979). A revised version of "Vault of the Beast" was published in 1940. While still living in Winnipeg, in 1939 van Vogt married Edna Mayne Hull, a fellow Manitoban. Hull, who had previously worked as a private secretary, went on to act as van Vogt's typist, and was credited with writing several SF stories of her own throughout the early 1940s. The outbreak of World War II in September 1939 caused a change in van Vogt's circumstances. 
Ineligible for military service due to his poor eyesight, he accepted a clerking job with the Canadian Department of National Defence. This necessitated a move back to Ottawa, where he and his wife stayed for the next year and a half. Meanwhile, his writing career continued. "Discord in Scarlet" was van Vogt's second story to be published, also appearing as the cover story. It was accompanied by interior illustrations created by Frank Kramer and Paul Orban. (Van Vogt and Kramer thus debuted in the issue of Astounding that is sometimes identified as the start of the Golden Age of Science Fiction.) Among his most famous works of this era, "Far Centaurus" appeared in the January 1944 edition of Astounding. Van Vogt's first completed novel, and one of his most famous, is Slan (Arkham House, 1946), which Campbell serialized in Astounding (September to December 1940). Using what became one of van Vogt's recurring themes, it told the story of a nine-year-old superman living in a world in which his kind are slain by Homo sapiens. Others saw van Vogt's talent from his first story, and in May 1941 van Vogt decided to become a full-time writer, quitting his job at the Canadian Department of National Defence. Freed from the necessity of living in Ottawa, he and his wife lived for a time in the Gatineau region of Quebec before moving to Toronto in the fall of 1941. Prolific throughout this period, van Vogt wrote many of his more famous short stories and novels in the years from 1941 through 1944. The novels The Book of Ptath and The Weapon Makers both appeared in magazines in serial form during this period; they were later published in book form after World War II. As well, several (though not all) of the stories that were compiled to make up the novels The Weapon Shops of Isher, The Mixed Men and The War Against the Rull were published during this time. 
California and post-war writing (1944–1950) In November 1944, van Vogt and Hull moved to Hollywood; van Vogt would spend the rest of his life in California. He had been using the name "A. E. van Vogt" in his public life for several years, and as part of the process of obtaining American citizenship in 1945 he finally and formally changed his legal name from Alfred Vogt to Alfred Elton van Vogt. To his friends in the California science fiction community, he was known as "Van". Method and themes Van Vogt systematized his writing method, using scenes of 800 words or so where a new complication was added or something resolved. Several of his stories hinge on temporal conundra, a favorite theme. He stated that he acquired many of his writing techniques from three books: Narrative Technique by Thomas Uzzell, The Only Two Ways to Write a Story by John Gallishaw, and Twenty Problems of the Fiction Writer by Gallishaw. He also claimed many of his ideas came from dreams; throughout his writing life he arranged to be awakened every 90 minutes during his sleep period so he could write down his dreams. Van Vogt was also always interested in the idea of all-encompassing systems of knowledge (akin to modern meta-systems). The characters in his very first story used a system called "Nexialism" to analyze the alien's behavior. Around this time, he became particularly interested in the general semantics of Alfred Korzybski. He subsequently wrote a novel merging these overarching themes, The World of Ā, originally serialized in Astounding in 1945. Ā (often rendered as Null-A), or non-Aristotelian logic, refers to the capacity for, and practice of, using intuitive, inductive reasoning (compare fuzzy logic), rather than reflexive, or conditioned, deductive reasoning. The novel recounts the adventures of an individual living in an apparent Utopia, where those with superior brainpower make up the ruling class... though all is not as it seems. 
A sequel, The Players of Ā (later re-titled The Pawns of Null-A), was serialized in 1948–49. At the same time, in his fiction, van Vogt was consistently sympathetic to absolute monarchy as a form of government. This was the case, for instance, in the Weapon Shop series, the Mixed Men series, and in single stories such as "Heir Apparent" (1945), whose protagonist was described as a "benevolent dictator". These sympathies were the subject of much critical discussion during van Vogt's career, and afterwards. Van Vogt published "Enchanted Village" in the July 1950 issue of Other Worlds Science Stories. It was reprinted in over 20 collections or anthologies, and appeared many times in translation. Dianetics and fix-ups (1950–1961) In 1950, van Vogt was briefly appointed as head of L. Ron Hubbard's Dianetics operation in California. Van Vogt had first met Hubbard in 1945, and became interested in his Dianetics theories, which were published shortly thereafter. Dianetics was the secular precursor to Hubbard's Church of Scientology; van Vogt would have no association with Scientology, as he did not approve of its mysticism. The California Dianetics operation ran out of money nine months later but avoided bankruptcy, thanks to van Vogt's arrangements with creditors. Very shortly after that, van Vogt and his wife opened their own Dianetics center, partly financed by his writings, until he "signed off" around 1961. From 1951 until 1961, van Vogt's focus was on Dianetics, and no new story ideas flowed from his typewriter. Fix-ups However, during the 1950s, van Vogt retrospectively patched together many of his previously published stories into novels, sometimes creating new interstitial material to help bridge gaps in the narrative. Van Vogt referred to the resulting books as "fix-ups", a term that entered the vocabulary of science-fiction criticism.
When the original stories were closely related this was often successful, although some van Vogt fix-ups featured disparate stories thrown together that bore little relation to each other, generally making for a less coherent plot. One of his best-known (and well-regarded) novels, The Voyage of the Space Beagle (1950) was a fix-up of four short stories including "Discord in Scarlet"; it was published in at least five European languages by 1955. Although Van Vogt averaged a new book title every ten months from 1951 to 1961, none of them were new stories; they were all fix-ups, collections of previously published stories, expansions of previously published short stories to novel length, or republications of previous books under new titles and all based on story material written and originally published between 1939 and 1950. Examples include The Weapon Shops of Isher (1951), The Mixed Men (1952), The War Against the Rull (1959), and the two "Clane" novels, Empire of the Atom (1957) and The Wizard of Linn (1962), which were inspired (like Asimov's Foundation series) by Roman imperial history; specifically, as Damon Knight wrote, the plot of Empire of the Atom was "lifted almost bodily" from that of Robert Graves' I, Claudius. (Also, one non-fiction work, The Hypnotism Handbook, appeared in 1956, though it had apparently been written much earlier.) After more than a decade of running their Dianetics center, Hull and van Vogt closed it in 1961. Nevertheless, van Vogt maintained his association with the organization and was still president of the Californian Association of Dianetic Auditors into the 1980s. Return to writing and later career (1962–1986) Though the constant re-packaging of his older work meant that he had never really been away from the book publishing world, van Vogt had not published any wholly new fiction for almost 12 years when he decided to return to writing in 1962. 
He did not return immediately to science fiction, but instead wrote the only mainstream, non-sf novel of his career. Van Vogt was profoundly affected by revelations of totalitarian police states that emerged after World War II. Accordingly, he wrote a mainstream novel that he set in Communist China, The Violent Man (1962). Van Vogt explained that to research this book he had read 100 books about China. Into this book he incorporated his view of "the violent male type", which he described as a "man who had to be right", a man who "instantly attracts women"; such men, he said, "run the world". Contemporary reviews were lukewarm at best, and van Vogt thereafter returned to science fiction. From 1963 through the mid-1980s, van Vogt once again published new material on a regular basis, though fix-ups and reworked material also appeared relatively often. His later novels included fix-ups such as The Beast (also known as Moonbeast) (1963), Rogue Ship (1965), Quest for the Future (1970) and Supermind (1977). He also wrote novels by expanding previously published short stories; works of this type include The Darkness on Diamondia (1972) and Future Glitter (also known as Tyranopolis; 1973). Novels that were written simply as novels, and not serialized magazine pieces or fix-ups, had been very rare in van Vogt's oeuvre, but began to appear regularly in the 1970s. Van Vogt's original novels included Children of Tomorrow (1970), The Battle of Forever (1971) and The Anarchistic Colossus (1977). Over the years, many sequels to his classic works were promised, but only one appeared: Null-A Three (1984; originally published in French). Several later books were initially published in Europe, and at least one novel only ever appeared in foreign language editions and was never published in its original English.
Final years When the 1979 film Alien appeared, it was noted that the plot closely matched the plots of both Black Destroyer and Discord in Scarlet, both published in Astounding magazine in 1939, and then later published in the 1950 book Voyage of the Space Beagle. Van Vogt sued the production company for plagiarism, and eventually collected an out-of-court settlement of $50,000 from 20th Century Fox. In increasingly frail health, van Vogt published his final short story in 1986. Personal life Van Vogt's first wife, Edna Mayne Hull, died in 1975. Van Vogt married Lydia Bereginsky in 1979; they remained together until his death. Death On January 26, 2000, A. E. van Vogt died in Los Angeles from Alzheimer's disease. He was survived by his second wife. Critical reception Critical opinion about the quality of van Vogt's work is sharply divided. An early and articulate critic was Damon Knight. In a 1945 chapter-long essay reprinted in In Search of Wonder, entitled "Cosmic Jerrybuilder: A. E. van Vogt", Knight described van Vogt as "no giant; he is a pygmy who has learned to operate an overgrown typewriter". Knight described The World of Null-A as "one of the worst allegedly adult science fiction stories ever published". Concerning van Vogt's writing, Knight said: About Empire of the Atom Knight wrote: Knight also expressed misgivings about van Vogt's politics. He noted that van Vogt's stories almost invariably present absolute monarchy in a favorable light. In 1974, Knight retracted some of his criticism after finding out about Vogt's writing down his dreams as a part of his working methods: Knight's criticism greatly damaged van Vogt's reputation. On the other hand, when science fiction author Philip K. Dick was asked which science fiction writers had influenced his work the most, he replied: Dick also defended van Vogt against Damon Knight's criticisms: In a review of Transfinite: The Essential A. E. van Vogt, science fiction writer Paul Di Filippo said: In The John W. 
Campbell Letters, Campbell says, "The son-of-a-gun gets hold of you in the first paragraph, ties a knot around you, and keeps it tied in every paragraph thereafter—including the ultimate last one". Harlan Ellison (who had begun reading van Vogt as a teenager) wrote, "Van was the first writer to shine light on the restricted ways in which I had been taught to view the universe and the human condition". Writing in 1984, David Hartwell said: The literary critic Leslie A. Fiedler said something similar: American literary critic Fredric Jameson says of van Vogt: Van Vogt still has his critics. For example, Darrell Schweitzer, writing to The New York Review of Science Fiction in 1999, quoted a passage from the original van Vogt novelette "The Mixed Men", which he was then reading, and remarked: Recognition In 1946, van Vogt and his first wife, Edna Mayne Hull, were Guests of Honor at the fourth World Science Fiction Convention. In 1980, van Vogt received a "Casper Award" (precursor to the Canadian Prix Aurora Awards) for Lifetime Achievement. In 1996, van Vogt received a Special Award from the World Science Fiction Convention "for six decades of golden age science fiction". That same year, he was inducted as an inaugural member of the Science Fiction and Fantasy Hall of Fame. The Science Fiction Writers of America (SFWA) named him its 14th Grand Master in 1995 (presented 1996). Great controversy within SFWA accompanied its long wait in bestowing its highest honor (limited to living writers, no more than one annually). Writing an obituary of van Vogt, Robert J. Sawyer, a fellow Canadian writer of science fiction, remarked: It is generally held that a key factor in the delay was "damnable SFWA politics" reflecting the concerns of Damon Knight, the founder of the SFWA, who abhorred van Vogt's style and politics and thoroughly demolished his literary reputation in the 1950s. Harlan Ellison was more explicit in his 1999 introduction to Futures Past: The Best Short Fiction of A.
E. van Vogt: Also in 1996, the Science Fiction and Fantasy Hall of Fame inducted him in its inaugural class of two deceased and two living persons, along with writer Jack Williamson (also living) and editors Hugo Gernsback and John W. Campbell. The works of van Vogt were translated into French by the surrealist Boris Vian (The World of Null-A as Le Monde des Å in 1958), and van Vogt's works were "viewed as great literature of the surrealist school". In addition, Slan was published in French, translated by Jean Rosenthal, under the title À la poursuite des Slans, as part of the paperback series 'Editions J'ai Lu: Romans-Texte Integral' in 1973. This edition also lists the following works by van Vogt as having been published in French as part of this series: Le Monde des Å, La faune de l'espace, Les joueurs du Å, L'empire de l'atome, Le sorcier de Linn, Les armureries d'Isher, Les fabricants d'armes, and Le livre de Ptath. Works Novels and novellas Special works published as books Planets For Sale by E. Mayne Hull (1954). A fix-up of five stories by Hull, originally published 1942 to 1946. Certain later editions (from 1965) credit both authors. The Enchanted Village (1979). A 25-page chapbook of a short story originally published in 1950. Slan Hunter by Kevin J. Anderson (2007). A sequel to Slan, based on an unfinished draft by van Vogt. Null-A Continuum by John C. Wright (2008). An authorized continuation of the Null-A series which ignored the events of Null-A Three. Collections Out of the Unknown (1948), with Edna Mayne Hull Masters of Time (1950) (a.k.a. Recruiting Station) [also includes The Changeling; both works were later published separately] Triad (1951), omnibus of The World of Null-A, The Voyage of the Space Beagle, and Slan.
Away and Beyond (1952) (abridged in paperback in 1959; abridged (differently) in paperback in 1963) Destination: Universe! (1952) The Twisted Men (1964) Monsters (1965) (later as SF Monsters (1967)) abridged as The Blal (1976) A Van Vogt Omnibus (1967), omnibus of Planets for Sale (with Edna Mayne Hull), The Beast, The Book of Ptath The Far Out Worlds of Van Vogt (1968) The Sea Thing and Other Stories (1970) (expanded from Out of the Unknown by adding an original story by Hull; later abridged in paperback as Out of the Unknown by removing 2 of the stories) M33 in Andromeda (1971) More Than Superhuman (1971) The Proxy Intelligence and Other Mind Benders (1971), with Edna Mayne Hull, revised as The Gryb (1976) Van Vogt Omnibus 2 (1971), omnibus of The Mind Cage, The Winged Man (with Edna Mayne Hull), Slan. The Book of Van Vogt (1972), also published as Lost: Fifty Suns (1979) The Three Eyes of Evil Including Earth's Last Fortress (1973) The Best of A. E. van Vogt (1974) later split into 2 volumes The Worlds of A. E. van Vogt (1974) (expanded from The Far Out Worlds of Van Vogt by adding 3 stories) The Best of A. E. van Vogt (1976) [differs from the 1974 edition] Away and Beyond (1977) Pendulum (1978) (almost all original stories and articles) Futures Past: The Best Short Fiction of A.E. Van Vogt (1999) Transfinite: The Essential A.E. van Vogt (2002) Transgalactic (2006) Nonfiction The Hypnotism Handbook (1956, Griffin Publishing Company, with Charles Edward Cooke) The Money Personality (1972, Parker Publishing Company Inc., West Nyack, NY) Reflections of A. E. Van Vogt: The Autobiography of a Science Fiction Giant (1979, Fictioneer Books Ltd., Lakemont, GA) A Report on the Violent Male (1992, Paupers' Press, UK) See also Notes References Bibliography External links Sevagram, the A.E. van Vogt information site Obituary at LocusOnline (Locus Publications) "Writers: A. E. van Vogt (1912–2000, Canada)" – bibliography at SciFan A. E.
van Vogt Papers (MS 322) at the Kenneth Spencer Research Library, University of Kansas A. E. van Vogt's fiction at Free Speculative Fiction Online 1912 births 2000 deaths 20th-century American novelists American male novelists American science fiction writers Canadian male novelists Canadian science fiction writers Canadian male short story writers Canadian emigrants to the United States Neurological disease deaths in California Deaths from Alzheimer's disease SFWA Grand Masters Science Fiction Hall of Fame inductees Writers from Manitoba Mennonite writers Canadian Mennonites American male short story writers 20th-century Canadian short story writers 20th-century American short story writers 20th-century Canadian male writers Weird fiction writers Pulp fiction writers Writers from Winnipeg 20th-century American male writers
890
https://en.wikipedia.org/wiki/Anna%20Kournikova
Anna Kournikova
Anna Sergeyevna Kournikova (; born 7 June 1981) is a Russian former professional tennis player and American television personality. Her appearance and celebrity status made her one of the best known tennis stars worldwide. At the peak of her fame, fans looking for images of Kournikova made her name one of the most common search strings on Google Search. Despite never winning a singles title, she reached No. 8 in the world in 2000. She achieved greater success playing doubles, where she was at times the world No. 1 player. With Martina Hingis as her partner, she won Grand Slam titles in Australia in 1999 and 2002, and the WTA Championships in 1999 and 2000. They referred to themselves as the "Spice Girls of Tennis". Kournikova retired from professional tennis in 2003 due to serious back and spinal problems, including a herniated disk. She lives in Miami Beach, Florida, and played in occasional exhibitions and in doubles for the St. Louis Aces of World Team Tennis before the team folded in 2011. She was a new trainer for season 12 of the television show The Biggest Loser, replacing Jillian Michaels, but did not return for season 13. In addition to her tennis and television work, Kournikova serves as a Global Ambassador for Population Services International's "Five & Alive" program, which addresses health crises facing children under the age of five and their families. Early life Kournikova was born in Moscow, Russia on 7 June 1981. Her father, Sergei Kournikov (born 1961), a former Greco-Roman wrestling champion, eventually earned a PhD and was a professor at the University of Physical Culture and Sport in Moscow. As of 2001, he was still a part-time martial arts instructor there. Her mother Alla (born 1963) had been a 400-metre runner. Her younger half-brother, Allan, is a youth golf world champion who was featured in the 2013 documentary film The Short Game. 
Sergei Kournikov has said, "We were young and we liked the clean, physical life, so Anna was in a good environment for sport from the beginning". Kournikova received her first tennis racquet as a New Year gift in 1986 at the age of five. Describing her early regimen, she said, "I played two times a week from age six. It was a children's program. And it was just for fun; my parents didn't know I was going to play professionally, they just wanted me to do something because I had lots of energy. It was only when I started playing well at seven that I went to a professional academy. I would go to school, and then my parents would take me to the club, and I'd spend the rest of the day there just having fun with the kids." In 1986, Kournikova became a member of the Spartak Tennis Club, coached by Larissa Preobrazhenskaya. In 1989, at the age of eight, Kournikova began appearing in junior tournaments, and by the following year, was attracting attention from tennis scouts across the world. She signed a management deal at age ten and went to Bradenton, Florida, to train at Nick Bollettieri's celebrated tennis academy. Tennis career 1989–1997: Early years and breakthrough Following her arrival in the United States, she became prominent on the tennis scene. At the age of 14, she won the European Championships and the Italian Open Junior tournament. In December 1995, she became the youngest player to win the 18-and-under division of the Junior Orange Bowl tennis tournament. By the end of the year, Kournikova was crowned the ITF Junior World Champion U-18 and Junior European Champion U-18. Earlier, in September 1995, Kournikova, still only 14 years of age, debuted in the WTA Tour, when she received a wildcard into the qualifications at the WTA tournament in Moscow, the Moscow Ladies Open, and qualified before losing in the second round of the main draw to third-seeded Sabine Appelmans. 
She also reached her first WTA Tour doubles final in that debut appearance. Partnering with Aleksandra Olsza, the 1995 Wimbledon girls' singles and doubles champion, she lost the title match to Meredith McGrath and Larisa Savchenko-Neiland. In February–March 1996, Kournikova won two ITF titles, in Midland, Michigan, and Rockford, Illinois. Still only 14 years of age, in April 1996 she debuted in the Fed Cup for Russia, the youngest player ever to participate in and win a match. In 1996, she started playing under a new coach, Ed Nagel. Her six-year association with Nagel was successful. At 15, she made her Grand Slam debut, reaching the fourth round of the 1996 US Open, losing to Steffi Graf, the eventual champion. After this tournament, Kournikova's ranking jumped from No. 144 to debut in the Top 100 at No. 69. Kournikova was a member of the Russian delegation to the 1996 Olympic Games in Atlanta, Georgia. In 1996, she was named WTA Newcomer of the Year, and she was ranked No. 57 at the end of the season. Kournikova entered the 1997 Australian Open as world No. 67, where she lost in the first round to world No. 12, Amanda Coetzer. At the Italian Open, Kournikova lost to Amanda Coetzer in the second round. She reached the semi-finals in the doubles partnering with Elena Likhovtseva, before losing to the sixth seeds Mary Joe Fernández and Patricia Tarabini. At the French Open, Kournikova made it to the third round before losing to world No. 1, Martina Hingis. She also reached the third round in doubles with Likhovtseva. At the Wimbledon Championships, Kournikova became only the second woman in the open era to reach the semi-finals in her Wimbledon debut, the first being Chris Evert in 1972. There she lost to eventual champion Martina Hingis. At the US Open, she lost in the second round to the eleventh seed Irina Spîrlea. Partnering with Likhovtseva, she reached the third round of the women's doubles event. 
Kournikova played her last WTA Tour event of 1997 at Porsche Tennis Grand Prix in Filderstadt, losing to Amanda Coetzer in the second round of singles, and in the first round of doubles to Lindsay Davenport and Jana Novotná partnering with Likhovtseva. She broke into the top 50 on 19 May, and was ranked No. 32 in singles and No. 41 in doubles at the end of the season. 1998–2000: Success and stardom In 1998, Kournikova broke into the WTA's top 20 rankings for the first time, when she was ranked No. 16. At the Australian Open, Kournikova lost in the third round to world No. 1 player, Martina Hingis. She also partnered with Larisa Savchenko-Neiland in women's doubles, and they lost to eventual champions Hingis and Mirjana Lučić in the second round. Although she lost in the second round of the Paris Open to Anke Huber in singles, Kournikova reached her second doubles WTA Tour final, partnering with Larisa Savchenko-Neiland. They lost to Sabine Appelmans and Miriam Oremans. Kournikova and Savchenko-Neiland reached their second consecutive final at the Linz Open, losing to Alexandra Fusai and Nathalie Tauziat. At the Miami Open, Kournikova reached her first WTA Tour singles final, before losing to Venus Williams in the final. Kournikova then reached two consecutive quarterfinals, at Amelia Island and the Italian Open, losing respectively to Lindsay Davenport and Martina Hingis. At the German Open, she reached the semi-finals in both singles and doubles, partnering with Larisa Savchenko-Neiland. At the French Open Kournikova had her best result at this tournament, making it to the fourth round before losing to Jana Novotná. She also reached her first Grand Slam doubles semi-finals, losing with Savchenko-Neiland to Lindsay Davenport and Natasha Zvereva. During her quarterfinals match at the grass-court Eastbourne Open versus Steffi Graf, Kournikova injured her thumb, which would eventually force her to withdraw from the 1998 Wimbledon Championships. 
She won that match but then withdrew from her semi-final against Arantxa Sánchez Vicario. Kournikova returned for the Du Maurier Open and made it to the third round, before losing to Conchita Martínez. At the US Open, Kournikova reached the fourth round before losing to Arantxa Sánchez Vicario. Her strong year qualified her for the year-end 1998 WTA Tour Championships, but she lost to Monica Seles in the first round. However, with Seles, she won her first WTA doubles title, in Tokyo, beating Mary Joe Fernández and Arantxa Sánchez Vicario in the final. At the end of the season, she was ranked No. 10 in doubles. At the start of the 1999 season, Kournikova advanced to the fourth round in singles at the Australian Open before losing to Mary Pierce. However, Kournikova won her first doubles Grand Slam title at the same tournament, partnering with Martina Hingis. The two defeated Lindsay Davenport and Natasha Zvereva in the final. At the Tier I Family Circle Cup, Kournikova reached her second WTA Tour final, but lost to Martina Hingis. She then defeated Jennifer Capriati, Lindsay Davenport and Patty Schnyder en route to the Bausch & Lomb Championships semi-finals, losing to Ruxandra Dragomir. At the French Open, Kournikova reached the fourth round before losing to eventual champion Steffi Graf. Once the grass-court season commenced in England, Kournikova lost to Nathalie Tauziat in the semi-finals in Eastbourne. At Wimbledon, Kournikova lost to Venus Williams in the fourth round. She also reached the final in mixed doubles, partnering with Jonas Björkman, but they lost to Leander Paes and Lisa Raymond. Kournikova again qualified for the year-end WTA Tour Championships, but lost to Mary Pierce in the first round, and ended the season as world No. 12. While Kournikova had a successful singles season, she was even more successful in doubles. 
After their victory at the Australian Open, she and Martina Hingis won tournaments in Indian Wells, Rome, Eastbourne and the WTA Tour Championships, and reached the final of the French Open, where they lost to Serena and Venus Williams. Partnering with Elena Likhovtseva, Kournikova also reached the final in Stanford. On 22 November 1999 she reached the world No. 1 ranking in doubles, and ended the season at this ranking. Anna Kournikova and Martina Hingis were presented with the WTA Award for Doubles Team of the Year. Kournikova opened her 2000 season by winning the Gold Coast Open doubles tournament, partnering with Julie Halard. She then reached the singles semi-finals at the Medibank International Sydney, losing to Lindsay Davenport. At the Australian Open, she reached the fourth round in singles and the semi-finals in doubles. That season, Kournikova reached eight semi-finals (Sydney, Scottsdale, Stanford, San Diego, Luxembourg, Leipzig and Tour Championships), seven quarterfinals (Gold Coast, Tokyo, Amelia Island, Hamburg, Eastbourne, Zürich and Philadelphia) and one final. On 20 November 2000 she broke into the top 10 for the first time, reaching No. 8. She was also ranked No. 4 in doubles at the end of the season. Kournikova was once again more successful in doubles. She reached the final of the US Open in mixed doubles, partnering with Max Mirnyi, but they lost to Jared Palmer and Arantxa Sánchez Vicario. She also won six doubles titles – Gold Coast (with Julie Halard), Hamburg (with Natasha Zvereva), Filderstadt, Zürich, Philadelphia and the Tour Championships (with Martina Hingis). 2001–2003: Injuries and final years Her 2001 season was plagued by injuries, including a left foot stress fracture which forced her to withdraw from 12 tournaments, including the French Open and Wimbledon. She underwent surgery in April. She reached her second career Grand Slam quarterfinal, at the Australian Open. 
Kournikova then withdrew from several events due to continuing problems with her left foot and did not return until Leipzig. With Barbara Schett, she won the doubles title in Sydney. She then lost in the finals in Tokyo, partnering with Iroda Tulyaganova, and at San Diego, partnering with Martina Hingis. Hingis and Kournikova also won the Kremlin Cup. At the end of the 2001 season, she was ranked No. 74 in singles and No. 26 in doubles. Kournikova regained some success in 2002. She reached the semi-finals of Auckland, Tokyo, Acapulco and San Diego, and the final of the China Open, losing to Anna Smashnova. This was Kournikova's last singles final. With Martina Hingis, she lost in the final at Sydney, but they won their second Grand Slam title together, the Australian Open. They also lost in the quarterfinals of the US Open. With Chanda Rubin, Kournikova played the semi-finals of Wimbledon, but they lost to Serena and Venus Williams. Partnering with Janet Lee, she won the Shanghai title. At the end of the 2002 season, she was ranked No. 35 in singles and No. 11 in doubles. In 2003, Anna Kournikova achieved her first Grand Slam match victory in two years at the Australian Open. She defeated Henrieta Nagyová in the first round, and then lost to Justine Henin-Hardenne in the second round. She withdrew from Tokyo due to a sprained back suffered at the Australian Open and did not return to the Tour until Miami. On 9 April, in what would be the final WTA match of her career, Kournikova withdrew in the first round of the Family Circle Cup in Charleston, due to a left adductor strain. Her singles world ranking was 67. She reached the semi-finals at the ITF tournament in Sea Island, before withdrawing from a match versus Maria Sharapova due to the adductor injury. She lost in the first round of the ITF tournament in Charlottesville. She did not compete for the rest of the season due to a continuing back injury. 
At the end of the 2003 season and her professional career, she was ranked No. 305 in singles and No. 176 in doubles. Kournikova's two Grand Slam doubles titles came in 1999 and 2002, both at the Australian Open in the Women's Doubles event with partner Martina Hingis. Kournikova proved a successful doubles player on the professional circuit, winning 16 tournament doubles titles, including two Australian Opens and being a finalist in mixed doubles at the US Open and at Wimbledon, and reaching the No. 1 ranking in doubles in the WTA Tour rankings. Her pro career doubles record was 200–71. However, her singles career plateaued after 1999. For the most part, she managed to retain her ranking between No. 10 and No. 15 (her career-high singles ranking was No. 8), but her expected finals breakthrough failed to occur; she reached only four finals in 130 singles tournaments, never in a Grand Slam event, and never won one. Her singles record is 209–129. Her final playing years were marred by a string of injuries, especially back injuries, which caused her ranking to erode gradually. As a celebrity, Kournikova was among the most common search strings for both articles and images in her prime. 2004–present: Exhibitions and World Team Tennis Kournikova has not played on the WTA Tour since 2003, but still plays exhibition matches for charitable causes. In late 2004, she participated in three events organized by Elton John and by fellow tennis players Serena Williams and Andy Roddick. In January 2005, she played in a doubles charity event for the Indian Ocean tsunami with John McEnroe, Andy Roddick, and Chris Evert. In November 2005, she teamed up with Martina Hingis, playing against Lisa Raymond and Samantha Stosur in the WTT finals for charity. Kournikova is also a member of the St. Louis Aces in the World Team Tennis (WTT), playing doubles only. In September 2008, Kournikova showed up for the 2008 Nautica Malibu Triathlon held at Zuma Beach in Malibu, California. 
The race raised funds for Children's Hospital Los Angeles. She won the race for the women's K-Swiss team. On 27 September 2008, Kournikova played exhibition mixed doubles matches in Charlotte, North Carolina, partnering with Tim Wilkison and Karel Nováček. Kournikova and Wilkison defeated Jimmy Arias and Chanda Rubin, and then Kournikova and Nováček defeated Rubin and Wilkison. On 12 October 2008, Anna Kournikova played one exhibition match for the annual charity event, hosted by Billie Jean King and Elton John, and raised more than $400,000 for the Elton John AIDS Foundation and Atlanta AIDS Partnership Fund. She played doubles with Andy Roddick (they were coached by David Chang) versus Martina Navratilova and Jesse Levine (coached by Billie Jean King); Kournikova and Roddick won. Kournikova competed alongside John McEnroe, Tracy Austin and Jim Courier at the "Legendary Night", which was held on 2 May 2009, at the Turning Stone Event Center in Verona, New York. The exhibition included a mixed doubles match of McEnroe and Austin against Courier and Kournikova. In 2008, she was named a spokesperson for K-Swiss. In 2005, Kournikova stated that if she were 100% fit, she would like to come back and compete again. In June 2010, Kournikova reunited with her doubles partner Martina Hingis to participate in competitive tennis for the first time in seven years in the Invitational Ladies Doubles event at Wimbledon. On 29 June 2010 they defeated the British pair Samantha Smith and Anne Hobbs. Playing style Kournikova plays right-handed with a two-handed backhand. She is strong at the net, and can hit forceful groundstrokes as well as drop shots. Her playing style fits the profile for a doubles player, and is complemented by her height. She has been compared to such doubles specialists as Pam Shriver and Peter Fleming. Personal life Kournikova was in a relationship with fellow Russian Pavel Bure, an NHL ice hockey player. 
The two met in 1999, when Kournikova was still linked to Bure's former Russian teammate Sergei Fedorov. Bure and Kournikova were reported to have been engaged in 2000 after a reporter took a photo of them together in a Florida restaurant where Bure supposedly asked Kournikova to marry him. As the story made headlines in Russia, where they were both heavily followed in the media as celebrities, Bure and Kournikova both denied any engagement. Kournikova, 10 years younger than Bure, was 18 years old at the time. Fedorov claimed that he and Kournikova were married in 2001, and divorced in 2003. Kournikova's representatives deny any marriage to Fedorov; however, Fedorov's agent Pat Brisson claims that although he does not know when they got married, he knew "Fedorov was married". Kournikova started dating singer Enrique Iglesias in late 2001 after she had appeared in his music video for "Escape". She has consistently refused to directly confirm or deny the status of her personal relationships. In June 2008, Iglesias was quoted by the Daily Star as having married Kournikova the previous year. They reportedly split in October 2013 but reconciled. The couple have a son and daughter, Nicholas and Lucy, who are fraternal twins born on 16 December 2017. On 30 January 2020, their third child, a daughter, Mary, was born. It was reported in 2010 that Kournikova had become an American citizen. Media publicity In 2000, Kournikova became the new face for Berlei's shock absorber sports bras, and appeared in the "only the ball should bounce" billboard campaign. Following that, she was cast by the Farrelly brothers for a minor role in the 2000 film Me, Myself & Irene starring Jim Carrey and Renée Zellweger. Photographs of her have appeared on covers of various publications, including men's magazines, such as one in the much-publicized 2004 Sports Illustrated Swimsuit Issue, where she posed in bikinis and swimsuits, as well as in FHM and Maxim. 
Kournikova was named one of People's 50 Most Beautiful People in 1998 and was voted "hottest female athlete" on ESPN.com. In 2002, she also placed first in FHM's 100 Sexiest Women in the World in both the US and UK editions. By contrast, ESPN – citing the degree of hype as compared to actual accomplishments as a singles player – ranked Kournikova 18th in its "25 Biggest Sports Flops of the Past 25 Years". Kournikova was also ranked No. 1 in the ESPN Classic series "Who's number 1?" when the series featured sport's most overrated athletes. She continued to be the most searched athlete on the Internet through 2008 even though she had retired from the professional tennis circuit years earlier. After slipping from first to sixth among athletes in 2009, she moved back up to third place among athletes in terms of search popularity in 2010. In October 2010, Kournikova headed to NBC's The Biggest Loser where she led the contestants in a tennis-workout challenge. In May 2011, it was announced that Kournikova would join The Biggest Loser as a regular celebrity trainer in season 12. She did not return for season 13. Legacy and influence on popular culture A variation of a White Russian made with skim milk is known as an Anna Kournikova. A video game featuring Kournikova's licensed appearance, titled Anna Kournikova's Smash Court Tennis, was developed by Namco and released for the PlayStation in Japan and Europe in November 1998. A computer virus named after her spread worldwide beginning on 12 February 2001, infecting computers through email in a matter of hours. The Texas hold 'em opening hand of Ace-King is sometimes referred to as an Anna Kournikova, both for the initials on the cards and because the hand looks better than it performs. 
Career statistics and awards Doubles performance timeline Grand Slam tournament finals Doubles: 3 (2–1) Mixed doubles: 2 (0–2) Awards 1996: WTA Newcomer of the Year 1999: WTA Doubles Team of the Year (with Martina Hingis) Books Anna Kournikova by Susan Holden (2001) Anna Kournikova by Connie Berman (2001) (Women Who Win)
892
https://en.wikipedia.org/wiki/Alfons%20Maria%20Jakob
Alfons Maria Jakob
Alfons Maria Jakob (2 July 1884 – 17 October 1931) was a German neurologist who worked in the field of neuropathology. He was born in Aschaffenburg, Bavaria, and educated in medicine at the universities of Munich, Berlin, and Strasbourg, where he received his doctorate in 1908. During the following year, he began clinical work under the psychiatrist Emil Kraepelin and did laboratory work with Franz Nissl and Alois Alzheimer in Munich. In 1911, by way of an invitation from Wilhelm Weygandt, he relocated to Hamburg, where he worked with Theodor Kaes and eventually became head of the laboratory of anatomical pathology at the psychiatric State Hospital Hamburg-Friedrichsberg. Following the death of Kaes in 1913, Jakob succeeded him as prosector. During World War I he served as an army physician in Belgium, and afterwards returned to Hamburg. In 1919, he obtained his habilitation for neurology and in 1924 became a professor of neurology. Under Jakob's guidance the department grew rapidly. He made significant contributions to knowledge on concussion and secondary nerve degeneration and became a doyen of neuropathology. Jakob was the author of five monographs and nearly 80 scientific papers. His neuropathological research contributed greatly to the delineation of several diseases, including multiple sclerosis and Friedreich's ataxia. He first recognised and described Alpers disease and Creutzfeldt–Jakob disease (named along with Munich neuropathologist Hans Gerhard Creutzfeldt). He gained experience in neurosyphilis, having a 200-bed ward devoted entirely to that disorder. Jakob made a lecture tour of the United States (1924) and South America (1928), after which he wrote a paper on the neuropathology of yellow fever. He suffered from chronic osteomyelitis for the last seven years of his life. This eventually caused a retroperitoneal abscess and paralytic ileus, from which he died following an operation. 
Associated eponym Creutzfeldt–Jakob disease: A very rare and incurable degenerative neurological disease. It is the most common of the transmissible spongiform encephalopathies, which are caused by prions. Eponym introduced by Walther Spielmeyer in 1922. Bibliography Die extrapyramidalen Erkrankungen. In: Monographien aus dem Gesamtgebiete der Neurologie und Psychiatrie, Berlin, 1923 Normale und pathologische Anatomie und Histologie des Grosshirns. Separate printing of Handbuch der Psychiatrie. Leipzig, 1927–1928 Das Kleinhirn. In: Handbuch der mikroskopischen Anatomie, Berlin, 1928 Die Syphilis des Gehirns und seiner Häute. In: Oswald Bumke (ed.): Handbuch der Geisteskrankheiten, Berlin, 1930.
894
https://en.wikipedia.org/wiki/Agnosticism
Agnosticism
Agnosticism is the view or belief that the existence of God, of the divine or the supernatural is unknown or unknowable. Another definition provided is the view that "human reason is incapable of providing sufficient rational grounds to justify either the belief that God exists or the belief that God does not exist." The English biologist Thomas Henry Huxley coined the word agnostic in 1869, and said "It simply means that a man shall not say he knows or believes that which he has no scientific grounds for professing to know or believe." Earlier thinkers, however, had written works that promoted agnostic points of view, such as Sanjaya Belatthaputta, a 5th-century BCE Indian philosopher who expressed agnosticism about any afterlife; and Protagoras, a 5th-century BCE Greek philosopher who expressed agnosticism about the existence of "the gods". Defining agnosticism Being a scientist, above all else, Huxley presented agnosticism as a form of demarcation. A hypothesis with no supporting, objective, testable evidence is not an objective, scientific claim. As such, there would be no way to test said hypotheses, leaving the results inconclusive. His agnosticism was not compatible with forming a belief as to the truth, or falsehood, of the claim at hand. Karl Popper would also describe himself as an agnostic. According to philosopher William L. Rowe, in this strict sense, agnosticism is the view that human reason is incapable of providing sufficient rational grounds to justify either the belief that God exists or the belief that God does not exist. George H. Smith, while admitting that the narrow definition of atheist was the common usage definition of that word, and admitting that the broad definition of agnostic was the common usage definition of that word, promoted broadening the definition of atheist and narrowing the definition of agnostic. 
Smith rejects agnosticism as a third alternative to theism and atheism and promotes terms such as agnostic atheism (the view of those who do not hold a belief in the existence of any deity, but claim that the existence of a deity is unknown or inherently unknowable) and agnostic theism (the view of those who believe in the existence of a deity(s), but claim that the existence of a deity is unknown or inherently unknowable). Etymology Agnostic () was used by Thomas Henry Huxley in a speech at a meeting of the Metaphysical Society in 1869 to describe his philosophy, which rejects all claims of spiritual or mystical knowledge. Early Christian church leaders used the Greek word gnosis (knowledge) to describe "spiritual knowledge". Agnosticism is not to be confused with religious views opposing the ancient religious movement of Gnosticism in particular; Huxley used the term in a broader, more abstract sense. Huxley identified agnosticism not as a creed but rather as a method of skeptical, evidence-based inquiry. The term Agnostic is also cognate with the Sanskrit word Ajñasi which translates literally to "not knowable", and relates to the ancient Indian philosophical school of Ajñana, which proposes that it is impossible to obtain knowledge of metaphysical nature or ascertain the truth value of philosophical propositions; and even if knowledge was possible, it is useless and disadvantageous for final salvation. In recent years, scientific literature dealing with neuroscience and psychology has used the word to mean "not knowable". In technical and marketing literature, "agnostic" can also mean independence from some parameters—for example, "platform agnostic" (referring to cross-platform software) or "hardware-agnostic". Qualifying agnosticism Scottish Enlightenment philosopher David Hume contended that meaningful statements about the universe are always qualified by some degree of doubt. 
He asserted that the fallibility of human beings means that they cannot obtain absolute certainty except in trivial cases where a statement is true by definition (e.g. tautologies such as "all bachelors are unmarried" or "all triangles have three corners"). Types Strong agnosticism (also called "hard", "closed", "strict", or "permanent agnosticism") The view that the question of the existence or nonexistence of a deity or deities, and the nature of ultimate reality is unknowable by reason of our natural inability to verify any experience with anything but another subjective experience. A strong agnostic would say, "I cannot know whether a deity exists or not, and neither can you." Weak agnosticism (also called "soft", "open", "empirical", or "temporal agnosticism") The view that the existence or nonexistence of any deities is currently unknown but is not necessarily unknowable; therefore, one will withhold judgment until evidence, if any, becomes available. A weak agnostic would say, "I don't know whether any deities exist or not, but maybe one day, if there is evidence, we can find something out." Apathetic agnosticism The view that no amount of debate can prove or disprove the existence of one or more deities, and if one or more deities exist, they do not appear to be concerned about the fate of humans. Therefore, their existence has little to no impact on personal human affairs and should be of little interest. An apathetic agnostic would say, "I don't know whether any deity exists or not, and I don't care if any deity exists or not." History Hindu philosophy Throughout the history of Hinduism there has been a strong tradition of philosophic speculation and skepticism. The Rig Veda takes an agnostic view on the fundamental question of how the universe and the gods were created. 
Nasadiya Sukta (Creation Hymn) in the tenth chapter of the Rig Veda says: Hume, Kant, and Kierkegaard Aristotle, Anselm, Aquinas, Descartes, and Gödel presented arguments attempting to rationally prove the existence of God. The skeptical empiricism of David Hume, the antinomies of Immanuel Kant, and the existential philosophy of Søren Kierkegaard convinced many later philosophers to abandon these attempts, regarding it as impossible to construct any unassailable proof for the existence or non-existence of God. In his 1844 book, Philosophical Fragments, Kierkegaard writes: Hume was Huxley's favourite philosopher; Huxley called him "the Prince of Agnostics". Diderot wrote to his mistress, telling of a visit by Hume to the Baron D'Holbach, and describing how a word for the position that Huxley would later describe as agnosticism didn't seem to exist, or at least wasn't common knowledge, at the time. United Kingdom Charles Darwin Raised in a religious environment, Charles Darwin (1809–1882) studied to be an Anglican clergyman. While eventually doubting parts of his faith, Darwin continued to help in church affairs, even while avoiding church attendance. Darwin stated that it would be "absurd to doubt that a man might be an ardent theist and an evolutionist". Although reticent about his religious views, in 1879 he wrote that "I have never been an atheist in the sense of denying the existence of a God. – I think that generally ... an agnostic would be the most correct description of my state of mind." Thomas Henry Huxley Agnostic views are as old as philosophical skepticism, but the terms agnostic and agnosticism were created by Huxley (1825–1895) to sum up his thoughts on contemporary developments of metaphysics about the "unconditioned" (William Hamilton) and the "unknowable" (Herbert Spencer). Though Huxley began to use the term "agnostic" in 1869, his opinions had taken shape some time before that date. 
In a letter of September 23, 1860, to Charles Kingsley, Huxley discussed his views extensively: And again, to the same correspondent, May 6, 1863: Of the origin of the name agnostic to describe this attitude, Huxley gave the following account: In 1889, Huxley wrote: Therefore, although it be, as I believe, demonstrable that we have no real knowledge of the authorship, or of the date of composition of the Gospels, as they have come down to us, and that nothing better than more or less probable guesses can be arrived at on that subject. William Stewart Ross William Stewart Ross (1844–1906) wrote under the name of Saladin. He was associated with Victorian Freethinkers and the organization the British Secular Union. He edited the Secular Review from 1882; it was renamed Agnostic Journal and Eclectic Review and closed in 1907. Ross championed agnosticism in opposition to the atheism of Charles Bradlaugh as an open-ended spiritual exploration. In Why I am an Agnostic (c. 1889) he claims that agnosticism is "the very reverse of atheism". Bertrand Russell Bertrand Russell (1872–1970) delivered Why I Am Not a Christian in 1927, a classic statement of agnosticism. He calls upon his readers to "stand on their own two feet and look fair and square at the world with a fearless attitude and a free intelligence". In 1939, Russell gave a lecture on The existence and nature of God, in which he characterized himself as an atheist. He said: However, later in the same lecture, discussing modern non-anthropomorphic concepts of God, Russell states: In Russell's 1947 pamphlet, Am I An Atheist or an Agnostic? (subtitled A Plea For Tolerance in the Face of New Dogmas), he ruminates on the problem of what to call himself: In his 1953 essay, What Is An Agnostic? 
Russell states: Later in the essay, Russell adds: Leslie Weatherhead In 1965, Christian theologian Leslie Weatherhead (1893–1976) published The Christian Agnostic, in which he argues: Although radical and unpalatable to conventional theologians, Weatherhead's agnosticism falls far short of Huxley's, and short even of weak agnosticism: United States Robert G. Ingersoll Robert G. Ingersoll (1833–1899), an Illinois lawyer and politician who evolved into a well-known and sought-after orator in 19th-century America, has been referred to as the "Great Agnostic". In an 1896 lecture titled Why I Am An Agnostic, Ingersoll related why he was an agnostic: In the conclusion of the speech he simply sums up the agnostic position as: In 1885, Ingersoll explained his comparative view of agnosticism and atheism as follows: Bernard Iddings Bell Canon Bernard Iddings Bell (1886–1958), a popular cultural commentator, Episcopal priest, and author, lauded the necessity of agnosticism in Beyond Agnosticism: A Book for Tired Mechanists, calling it the foundation of "all intelligent Christianity." Agnosticism was a temporary mindset in which one rigorously questioned the truths of the age, including the way in which one believed in God. His view of Robert Ingersoll and Thomas Paine was that they were not denouncing true Christianity but rather "a gross perversion of it." Part of the misunderstanding stemmed from ignorance of the concepts of God and religion. Historically, a god was any real, perceivable force that ruled the lives of humans and inspired admiration, love, fear, and homage; religion was the practice of it. Ancient peoples worshiped gods with real counterparts, such as Mammon (money and material things), Nabu (rationality), or Ba'al (violent weather); Bell argued that modern peoples were still paying homage—with their lives and their children's lives—to these old gods of wealth, physical appetites, and self-deification. 
Thus, if one attempted to be agnostic passively, he or she would incidentally join the worship of the world's gods. In Unfashionable Convictions (1931), he criticized the Enlightenment's complete faith in human sensory perception, augmented by scientific instruments, as a means of accurately grasping Reality. Firstly, it was fairly new, an innovation of the Western World, which Aristotle invented and Thomas Aquinas revived among the scientific community. Secondly, the divorce of "pure" science from human experience, as manifested in American Industrialization, had completely altered the environment, often disfiguring it, so as to suggest its insufficiency to human needs. Thirdly, because scientists were constantly producing more data—to the point where no single human could grasp it all at once—it followed that human intelligence was incapable of attaining a complete understanding of the universe; therefore, to admit the mysteries of the unobserved universe was to be actually scientific. Bell believed that there were two other ways that humans could perceive and interact with the world. Artistic experience was how one expressed meaning through speaking, writing, painting, gesturing—any sort of communication which shared insight into a human's inner reality. Mystical experience was how one could "read" people and harmonize with them, being what we commonly call love. In summary, man was a scientist, artist, and lover. Without exercising all three, a person became "lopsided." Bell considered a humanist to be a person who cannot rightly ignore the other ways of knowing. However, humanism, like agnosticism, was also temporal, and would eventually lead to either scientific materialism or theism. He lays out the following thesis: Truth cannot be discovered by reasoning on the evidence of scientific data alone. Modern peoples' dissatisfaction with life is the result of depending on such incomplete data. 
Our ability to reason is not a way to discover Truth but rather a way to organize our knowledge and experiences somewhat sensibly. Without a full, human perception of the world, one's reason tends to lead one in the wrong direction. Beyond what can be measured with scientific tools, there are other types of perception, such as one's ability to know another human through loving. One's loves cannot be dissected and logged in a scientific journal, but we know them far better than we know the surface of the sun. They show us an undefinable reality that is nevertheless intimate and personal, and they reveal qualities lovelier and truer than detached facts can provide. To be religious, in the Christian sense, is to live for the Whole of Reality (God) rather than for a small part (gods). Only by treating this Whole of Reality as a person—good and true and perfect—rather than an impersonal force, can we come closer to the Truth. An ultimate Person can be loved, but a cosmic force cannot. A scientist can only discover peripheral truths, but a lover is able to get at the Truth. There are many reasons to believe in God but they are not sufficient for an agnostic to become a theist. It is not enough to believe in an ancient holy book, even though when it is accurately analyzed without bias, it proves to be more trustworthy and admirable than what we are taught in school. Neither is it enough to realize how probable it is that a personal God would have to show human beings how to live, considering they have so much trouble on their own. Nor is it enough to believe for the reason that, throughout history, millions of people have arrived at this Wholeness of Reality only through religious experience. The aforementioned reasons may warm one toward religion, but they fall short of convincing. 
However, if one presupposes that God is in fact a knowable, loving person, as an experiment, and then lives according to that religion, he or she will suddenly come face to face with experiences previously unknown. One's life becomes full, meaningful, and fearless in the face of death. It does not defy reason but exceeds it. Because God has been experienced through love, the orders of prayer, fellowship, and devotion now matter. They create order within one's life, continually renewing the "missing piece" that had previously felt lost. They empower one to be compassionate and humble, not small-minded or arrogant. No truth should be denied outright, but all should be questioned. Science reveals an ever-growing vision of our universe that should not be discounted due to bias toward older understandings. Reason is to be trusted and cultivated. To believe in God is not to forego reason or to deny scientific facts, but to step into the unknown and discover the fullness of life. Demographics Demographic research services normally do not differentiate between various types of non-religious respondents, so agnostics are often classified in the same category as atheists or other non-religious people. A 2010 survey published in Encyclopædia Britannica found that non-religious people, including agnostics, made up about 9.6% of the world's population. A November–December 2006 poll published in the Financial Times gives rates for the United States and five European countries. The rates of agnosticism in the United States were at 14%, while the rates of agnosticism in the European countries surveyed were considerably higher: Italy (20%), Spain (30%), Great Britain (35%), Germany (25%), and France (32%). A study conducted by the Pew Research Center found that about 16% of the world's people have no religious affiliation, making them the third-largest group after Christians and Muslims. According to a 2012 report by the Pew Research Center, agnostics made up 3.3% of the US adult population. 
In the U.S. Religious Landscape Survey, conducted by the Pew Research Center, 55% of agnostic respondents expressed "a belief in God or a universal spirit", whereas 41% said that they felt a tension "being non-religious in a society where most people are religious". According to the Australian Bureau of Statistics' 2011 census, 22% of Australians have "no religion", a category that includes agnostics. Between 64% and 65% of Japanese and up to 81% of Vietnamese are atheists, agnostics, or do not believe in a god. An official European Union survey reported that 3% of the EU population is unsure about their belief in a god or spirit. Criticism Agnosticism is criticized from a variety of standpoints. Some atheists criticize the use of the term agnosticism as functionally indistinguishable from atheism; this results in frequent criticisms of those who adopt the term as avoiding the atheist label. Theistic Theistic critics claim that agnosticism is impossible in practice, since a person can live only either as if God did not exist (etsi deus non daretur), or as if God did exist (etsi deus daretur). Christian According to Pope Benedict XVI, strong agnosticism in particular contradicts itself in affirming the power of reason to know scientific truth. He blames the exclusion of reasoning from religion and ethics for dangerous pathologies such as crimes against humanity and ecological disasters. "Agnosticism", said Benedict, "is always the fruit of a refusal of that knowledge which is in fact offered to man ... The knowledge of God has always existed". He asserted that agnosticism is a choice of comfort, pride, dominion, and utility over truth, and is opposed by the following attitudes: the keenest self-criticism, humble listening to the whole of existence, the persistent patience and self-correction of the scientific method, a readiness to be purified by the truth. 
The Catholic Church sees merit in examining what it calls "partial agnosticism", specifically those systems that "do not aim at constructing a complete philosophy of the unknowable, but at excluding special kinds of truth, notably religious, from the domain of knowledge". However, the Church is historically opposed to a full denial of the capacity of human reason to know God. The Council of the Vatican declares, "God, the beginning and end of all, can, by the natural light of human reason, be known with certainty from the works of creation". Blaise Pascal argued that even if there were truly no evidence for God, agnostics should consider what is now known as Pascal's Wager: the infinite expected value of acknowledging God is always greater than the finite expected value of not acknowledging his existence, and thus it is a safer "bet" to choose God. Atheistic According to Richard Dawkins, a distinction between agnosticism and atheism is unwieldy and depends on how close to zero a person is willing to rate the probability of existence for any given god-like entity. About himself, Dawkins continues, "I am agnostic only to the extent that I am agnostic about fairies at the bottom of the garden." Dawkins also identifies two categories of agnostics; "Temporary Agnostics in Practice" (TAPs), and "Permanent Agnostics in Principle" (PAPs). He states that "agnosticism about the existence of God belongs firmly in the temporary or TAP category. Either he exists or he doesn't. It is a scientific question; one day we may know the answer, and meanwhile we can say something pretty strong about the probability" and considers PAP a "deeply inescapable kind of fence-sitting". Ignosticism A related concept is ignosticism, the view that a coherent definition of a deity must be put forward before the question of the existence of a deity can be meaningfully discussed. 
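Pascal's Wager, as summarized above, is at heart an expected-value comparison: any nonzero probability multiplied by an infinite payoff dominates every finite outcome. The following toy sketch illustrates only that arithmetic; the probability and the finite payoff numbers are illustrative assumptions, not anything from the source.

```python
import math

def expected_value(p_god, payoff_if_god, payoff_if_not):
    """Expected value of a choice, given a probability p_god that God exists."""
    return p_god * payoff_if_god + (1 - p_god) * payoff_if_not

# Assumed toy payoffs: an infinite reward for acknowledging God if he exists,
# small finite costs/gains otherwise. Any nonzero p makes the first term infinite.
p = 0.001  # an arbitrarily small but nonzero probability (assumption)
believe = expected_value(p, math.inf, -1)
abstain = expected_value(p, -1000, 1)

print(believe > abstain)  # the infinite expected value dominates any finite one
```

The structure of the argument, not the particular numbers, is what Pascal's reasoning turns on: as long as the probability is nonzero and the reward is treated as infinite, the comparison comes out the same way.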
If the chosen definition is not coherent, the ignostic holds the noncognitivist view that the existence of a deity is meaningless or empirically untestable. A. J. Ayer, Theodore Drange, and other philosophers see both atheism and agnosticism as incompatible with ignosticism on the grounds that atheism and agnosticism accept the statement "a deity exists" as a meaningful proposition that can be argued for or against. See also References Further reading Alexander, Nathan G. "An Atheist with a Tall Hat On: The Forgotten History of Agnosticism." The Humanist, February 19, 2019. Annan, Noel. Leslie Stephen: The Godless Victorian (U of Chicago Press, 1984) Cockshut, A.O.J. The Unbelievers, English Thought, 1840–1890 (1966). Dawkins, Richard. "The poverty of agnosticism", in The God Delusion, Black Swan, 2007. Lightman, Bernard. The Origins of Agnosticism (1987). Royle, Edward. Radicals, Secularists, and Republicans: Popular Freethought in Britain, 1866–1915 (Manchester UP, 1980). External links Albert Einstein on Religion Shapell Manuscript Foundation Why I Am An Agnostic by Robert G. Ingersoll, [1896]. Dictionary of the History of Ideas: Agnosticism Agnosticism from INTERS – Interdisciplinary Encyclopedia of Religion and Science Agnosticism – from ReligiousTolerance.org What do Agnostics Believe? – A Jewish perspective Fides et Ratio – the relationship between faith and reason Karol Wojtyla [1998] The Natural Religion by Dr Brendan Connolly, 2008
896
https://en.wikipedia.org/wiki/Argon
Argon
Argon is a chemical element with the symbol Ar and atomic number 18. It is in group 18 of the periodic table and is a noble gas. Argon is the third-most abundant gas in the Earth's atmosphere, at 0.934% (9340 ppmv). It is more than twice as abundant as water vapor (which averages about 4000 ppmv, but varies greatly), 23 times as abundant as carbon dioxide (400 ppmv), and more than 500 times as abundant as neon (18 ppmv). Argon is the most abundant noble gas in Earth's crust, comprising 0.00015% of the crust. Nearly all of the argon in the Earth's atmosphere is radiogenic argon-40, derived from the decay of potassium-40 in the Earth's crust. In the universe, argon-36 is by far the most common argon isotope, as it is the most easily produced by stellar nucleosynthesis in supernovas. The name "argon" is derived from the Greek word ἀργόν, neuter singular form of ἀργός, meaning 'lazy' or 'inactive', as a reference to the fact that the element undergoes almost no chemical reactions. The complete octet (eight electrons) in the outer atomic shell makes argon stable and resistant to bonding with other elements. Its triple point temperature of 83.8058 K is a defining fixed point in the International Temperature Scale of 1990. Argon is extracted industrially by the fractional distillation of liquid air. Argon is mostly used as an inert shielding gas in welding and other high-temperature industrial processes where ordinarily unreactive substances become reactive; for example, an argon atmosphere is used in graphite electric furnaces to prevent the graphite from burning. Argon is also used in incandescent and fluorescent lighting, and in other gas-discharge tubes. Argon makes a distinctive blue-green gas laser. Argon is also used in fluorescent glow starters. Characteristics Argon has approximately the same solubility in water as oxygen and is 2.5 times more soluble in water than nitrogen. Argon is colorless, odorless, nonflammable and nontoxic as a solid, liquid or gas. 
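The abundance comparisons above are simple ratios of the quoted ppmv figures; a quick check, using only the values given in the text:

```python
# Atmospheric abundances quoted in the text, in parts per million by volume.
abundances_ppmv = {
    "argon": 9340,
    "water vapor (average)": 4000,
    "carbon dioxide": 400,
    "neon": 18,
}

argon = abundances_ppmv["argon"]
for gas, ppmv in abundances_ppmv.items():
    if gas != "argon":
        # Ratio of argon's abundance to the other gas's abundance.
        print(f"argon / {gas}: {argon / ppmv:.1f}x")
```

The ratios come out to roughly 2.3x water vapor, 23x carbon dioxide, and over 500x neon, matching the prose.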
Argon is chemically inert under most conditions and forms no confirmed stable compounds at room temperature. Although argon is a noble gas, it can form some compounds under various extreme conditions. Argon fluorohydride (HArF), a compound of argon with fluorine and hydrogen that is stable below 17 K (−256 °C), has been demonstrated. Although the neutral ground-state chemical compounds of argon are presently limited to HArF, argon can form clathrates with water when atoms of argon are trapped in a lattice of water molecules. Ions, such as ArH+, and excited-state complexes, such as ArF, have been demonstrated. Theoretical calculation predicts several more argon compounds that should be stable but have not yet been synthesized. History Argon (Greek ἀργόν, neuter singular form of ἀργός, meaning "lazy" or "inactive") is named in reference to its chemical inactivity. The chemical inactivity of this first noble gas to be discovered impressed its namers. An unreactive gas was suspected to be a component of air by Henry Cavendish in 1785. Argon was first isolated from air in 1894 by Lord Rayleigh and Sir William Ramsay at University College London by removing oxygen, carbon dioxide, water, and nitrogen from a sample of clean air. They first accomplished this by replicating an experiment of Henry Cavendish's. They trapped a mixture of atmospheric air with additional oxygen in a test-tube (A) upside-down over a large quantity of dilute alkali solution (B), which in Cavendish's original experiment was potassium hydroxide, and conveyed a current through wires insulated by U-shaped glass tubes (CC) which sealed around the platinum wire electrodes, leaving the ends of the wires (DD) exposed to the gas and insulated from the alkali solution. The arc was powered by a battery of five Grove cells and a Ruhmkorff coil of medium size. The alkali absorbed the oxides of nitrogen produced by the arc and also carbon dioxide. 
They operated the arc until no more reduction of volume of the gas could be seen for at least an hour or two and the spectral lines of nitrogen disappeared when the gas was examined. The remaining oxygen was reacted with alkaline pyrogallate to leave behind an apparently non-reactive gas which they called argon. Before isolating the gas, they had determined that nitrogen produced from chemical compounds was 0.5% lighter than nitrogen from the atmosphere. The difference was slight, but it was important enough to attract their attention for many months. They concluded that there was another gas in the air mixed in with the nitrogen. Argon was also encountered in 1882 through independent research of H. F. Newall and W. N. Hartley. Each observed new lines in the emission spectrum of air that did not match known elements. Until 1957, the symbol for argon was "A", but now it is "Ar". Occurrence Argon constitutes 0.934% by volume and 1.288% by mass of the Earth's atmosphere. Air is the primary industrial source of purified argon products. Argon is isolated from air by fractionation, most commonly by cryogenic fractional distillation, a process that also produces purified nitrogen, oxygen, neon, krypton and xenon. The Earth's crust and seawater contain 1.2 ppm and 0.45 ppm of argon, respectively. Isotopes The main isotopes of argon found on Earth are 40Ar (99.6%), 36Ar (0.34%), and 38Ar (0.06%). Naturally occurring 40K, with a half-life of 1.25 billion years, decays to stable 40Ar (11.2%) by electron capture or positron emission, and also to stable 40Ca (88.8%) by beta decay. These properties and ratios are used to determine the age of rocks by K–Ar dating. In the Earth's atmosphere, 39Ar is made by cosmic ray activity, primarily by neutron capture of 40Ar followed by two-neutron emission. In the subsurface environment, it is also produced through neutron capture by 39K, followed by proton emission. 37Ar is created from the neutron capture by 40Ca followed by an alpha particle emission as a result of subsurface nuclear explosions. 
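The K–Ar figures above (a 1.25-billion-year half-life and an 11.2% branch to 40Ar) give the standard age equation used in K–Ar dating. A minimal sketch, under the simplifying assumption that all radiogenic 40Ar has been retained in the rock since it formed:

```python
import math

T_HALF_K40 = 1.25e9   # years, half-life of 40K (from the text)
BRANCH_TO_AR = 0.112  # fraction of 40K decays that yield 40Ar (from the text)

def k_ar_age(ar40_per_k40):
    """Age in years from the measured ratio of radiogenic 40Ar to remaining 40K."""
    decay_const = math.log(2) / T_HALF_K40
    return math.log(1 + ar40_per_k40 / BRANCH_TO_AR) / decay_const

# Sanity check: after exactly one half-life, half of the 40K has decayed and
# 11.2% of the decayed atoms became 40Ar, so 40Ar/40K = 0.112 and the age
# should come out to 1.25 billion years.
print(f"{k_ar_age(0.112):.3e} years")
```

The division by the 0.112 branching fraction corrects for the fact that most 40K decays produce 40Ca rather than 40Ar.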
It has a half-life of 35 days. Between locations in the Solar System, the isotopic composition of argon varies greatly. Where the major source of argon is the decay of 40K in rocks, 40Ar will be the dominant isotope, as it is on Earth. Argon produced directly by stellar nucleosynthesis is dominated by the alpha-process nuclide 36Ar. Correspondingly, solar argon contains 84.6% 36Ar (according to solar wind measurements), and the ratio of the three isotopes 36Ar : 38Ar : 40Ar in the atmospheres of the outer planets is 8400 : 1600 : 1. This contrasts with the low abundance of primordial 36Ar in Earth's atmosphere, which is only 31.5 ppmv (= 9340 ppmv × 0.337%), comparable with that of neon (18.18 ppmv) on Earth and with interplanetary gases, measured by probes. The atmospheres of Mars, Mercury and Titan (the largest moon of Saturn) contain argon, predominantly as 40Ar, and its content may be as high as 1.93% (Mars). The predominance of radiogenic 40Ar is the reason the standard atomic weight of terrestrial argon is greater than that of the next element, potassium, a fact that was puzzling when argon was discovered. Mendeleev positioned the elements on his periodic table in order of atomic weight, but the inertness of argon suggested a placement before the reactive alkali metal. Henry Moseley later solved this problem by showing that the periodic table is actually arranged in order of atomic number (see History of the periodic table). Compounds Argon's complete octet of electrons indicates full s and p subshells. This full valence shell makes argon very stable and extremely resistant to bonding with other elements. Before 1962, argon and the other noble gases were considered to be chemically inert and unable to form compounds; however, compounds of the heavier noble gases have since been synthesized. The first argon compound with tungsten pentacarbonyl, W(CO)5Ar, was isolated in 1975. However, it was not widely recognised at that time. 
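The parenthetical calculation above (31.5 ppmv = 9340 ppmv × 0.337%) can be verified directly from the two quoted figures:

```python
total_argon_ppmv = 9340      # argon's share of the atmosphere (from the text)
ar36_fraction = 0.337 / 100  # 36Ar's fraction of atmospheric argon (from the text)

# Primordial 36Ar expressed as ppmv of the whole atmosphere.
primordial_ar36_ppmv = total_argon_ppmv * ar36_fraction
print(f"{primordial_ar36_ppmv:.1f} ppmv")  # comparable to neon's 18.18 ppmv
```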
In August 2000, another argon compound, argon fluorohydride (HArF), was formed by researchers at the University of Helsinki, by shining ultraviolet light onto frozen argon containing a small amount of hydrogen fluoride with caesium iodide. This discovery led to the recognition that argon could form weakly bound compounds, even though it was not the first argon compound. It is stable up to 17 kelvins (−256 °C). The metastable dication ArCF22+, which is valence-isoelectronic with carbonyl fluoride and phosgene, was observed in 2010. Argon-36, in the form of argon hydride (argonium) ions, has been detected in interstellar medium associated with the Crab Nebula supernova; this was the first noble-gas molecule detected in outer space. Solid argon hydride (Ar(H2)2) has the same crystal structure as the MgZn2 Laves phase. It forms at pressures between 4.3 and 220 GPa, though Raman measurements suggest that the H2 molecules in Ar(H2)2 dissociate above 175 GPa. Production Industrial Argon is extracted industrially by the fractional distillation of liquid air in a cryogenic air separation unit, a process that separates liquid nitrogen, which boils at 77.3 K, from argon, which boils at 87.3 K, and liquid oxygen, which boils at 90.2 K. About 700,000 tonnes of argon are produced worldwide every year. In radioactive decays 40Ar, the most abundant isotope of argon, is produced by the decay of 40K with a half-life of 1.25 billion years by electron capture or positron emission. Because of this, it is used in potassium–argon dating to determine the age of rocks. Applications Argon has several desirable properties: Argon is a chemically inert gas. Argon is the cheapest alternative when nitrogen is not sufficiently inert. Argon has low thermal conductivity. Argon has electronic properties (ionization and/or the emission spectrum) desirable for some applications. Other noble gases would be equally suitable for most of these applications, but argon is by far the cheapest. 
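The cryogenic separation works because the three boiling points quoted above are well spaced; sorting them shows the order in which the fractions can be drawn off (a schematic illustration of the ordering only, not of real column design):

```python
# Boiling points at atmospheric pressure, in kelvin (from the text).
boiling_points_k = {"nitrogen": 77.3, "argon": 87.3, "oxygen": 90.2}

# The most volatile component (lowest boiling point) leaves the liquid first
# as the liquefied air warms, so nitrogen comes off before argon, then oxygen.
for gas, bp in sorted(boiling_points_k.items(), key=lambda kv: kv[1]):
    print(f"{gas}: {bp} K")
```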
Argon is inexpensive, since it occurs naturally in air and is readily obtained as a byproduct of cryogenic air separation in the production of liquid oxygen and liquid nitrogen: the primary constituents of air are used on a large industrial scale. The other noble gases (except helium) are produced this way as well, but argon is the most plentiful by far. The bulk of argon applications arise simply because it is inert and relatively cheap. Industrial processes Argon is used in some high-temperature industrial processes where ordinarily non-reactive substances become reactive. For example, an argon atmosphere is used in graphite electric furnaces to prevent the graphite from burning. For some of these processes, the presence of nitrogen or oxygen gases might cause defects within the material. Argon is used in some types of arc welding such as gas metal arc welding and gas tungsten arc welding, as well as in the processing of titanium and other reactive elements. An argon atmosphere is also used for growing crystals of silicon and germanium. Argon is used in the poultry industry to asphyxiate birds, either for mass culling following disease outbreaks, or as a means of slaughter more humane than electric stunning. Argon is denser than air and displaces oxygen close to the ground during inert gas asphyxiation. Its non-reactive nature makes it suitable in a food product, and since it replaces oxygen within the dead bird, argon also enhances shelf life. Argon is sometimes used for extinguishing fires where valuable equipment may be damaged by water or foam. Scientific research Liquid argon is used as the target for neutrino experiments and direct dark matter searches. The interaction between the hypothetical WIMPs and an argon nucleus produces scintillation light that is detected by photomultiplier tubes. Two-phase detectors containing argon gas are used to detect the ionized electrons produced during the WIMP–nucleus scattering. 
As with most other liquefied noble gases, argon has a high scintillation light yield (about 51 photons/keV), is transparent to its own scintillation light, and is relatively easy to purify. Compared to xenon, argon is cheaper and has a distinct scintillation time profile, which allows the separation of electronic recoils from nuclear recoils. On the other hand, its intrinsic beta-ray background is larger due to 39Ar contamination, unless one uses argon from underground sources, which has much less 39Ar contamination. Most of the argon in the Earth's atmosphere was produced by electron capture of long-lived 40K (40K + e− → 40Ar + ν) present in natural potassium within the Earth. The 39Ar activity in the atmosphere is maintained by cosmogenic production through the knockout reaction 40Ar(n,2n)39Ar and similar reactions. The half-life of 39Ar is only 269 years. As a result, the underground Ar, shielded by rock and water, has much less 39Ar contamination. Dark-matter detectors currently operating with liquid argon include DarkSide, WArP, ArDM, microCLEAN and DEAP. Neutrino experiments include ICARUS and MicroBooNE, both of which use high-purity liquid argon in a time projection chamber for fine-grained three-dimensional imaging of neutrino interactions. At Linköping University, Sweden, the inert gas is being utilized in a vacuum chamber in which plasma is introduced to ionize metallic films. This process results in a film usable for manufacturing computer processors. The new process would eliminate the need for chemical baths and use of expensive, dangerous and rare materials. 
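With a 269-year half-life, the 39Ar in argon that has been sequestered underground (and thus shielded from cosmogenic replenishment) decays away within a few millennia, which is why underground sources are so much cleaner. A minimal decay calculation; the 5,000-year storage time is an illustrative assumption:

```python
T_HALF_AR39 = 269  # years, half-life of 39Ar (from the text)

def fraction_remaining(years):
    """Fraction of 39Ar left after a given time, with no cosmogenic replenishment."""
    return 0.5 ** (years / T_HALF_AR39)

# After 5,000 years underground, only a few millionths of the original
# 39Ar activity remains.
print(f"{fraction_remaining(5000):.2e}")
```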
In winemaking, argon is used in a variety of activities to provide a barrier against oxygen at the liquid surface, which can spoil wine by fueling both microbial metabolism (as with acetic acid bacteria) and standard redox chemistry. Argon is sometimes used as the propellant in aerosol cans. Argon is also used as a preservative for such products as varnish, polyurethane, and paint, by displacing air to prepare a container for storage. Since 2002, the American National Archives stores important national documents such as the Declaration of Independence and the Constitution within argon-filled cases to inhibit their degradation. Argon is preferable to the helium that had been used in the preceding five decades, because helium gas escapes through the intermolecular pores in most containers and must be regularly replaced. Laboratory equipment Argon may be used as the inert gas within Schlenk lines and gloveboxes. Argon is preferred to less expensive nitrogen in cases where nitrogen may react with the reagents or apparatus. Argon may be used as the carrier gas in gas chromatography and in electrospray ionization mass spectrometry; it is the gas of choice for the plasma used in ICP spectroscopy. Argon is preferred for the sputter coating of specimens for scanning electron microscopy. Argon gas is also commonly used for sputter deposition of thin films as in microelectronics and for wafer cleaning in microfabrication. Medical use Cryosurgery procedures such as cryoablation use liquid argon to destroy tissue such as cancer cells. It is used in a procedure called "argon-enhanced coagulation", a form of argon plasma beam electrosurgery. The procedure carries a risk of producing gas embolism and has resulted in the death of at least one patient. Blue argon lasers are used in surgery to weld arteries, destroy tumors, and correct eye defects. 
Argon has also been used experimentally to replace nitrogen in the breathing or decompression mix known as Argox, to speed the elimination of dissolved nitrogen from the blood. Lighting Incandescent lights are filled with argon to protect the filaments from oxidation at high temperature. It is used for the specific way it ionizes and emits light, such as in plasma globes and calorimetry in experimental particle physics. Gas-discharge lamps filled with pure argon provide lilac/violet light; with argon and some mercury, blue light. Argon is also used for blue and green argon-ion lasers. Miscellaneous uses Argon is used for thermal insulation in energy-efficient windows. Argon is also used in technical scuba diving to inflate a dry suit because it is inert and has low thermal conductivity. Argon is used as a propellant in the development of the Variable Specific Impulse Magnetoplasma Rocket (VASIMR). Compressed argon gas is allowed to expand, to cool the seeker heads of some versions of the AIM-9 Sidewinder missile and other missiles that use cooled thermal seeker heads. The gas is stored at high pressure. Argon-39, with a half-life of 269 years, has been used for a number of applications, primarily ice core and ground water dating. Also, potassium–argon dating and related argon–argon dating are used to date sedimentary, metamorphic, and igneous rocks. Argon has been used by athletes as a doping agent to simulate hypoxic conditions. In 2014, the World Anti-Doping Agency (WADA) added argon and xenon to the list of prohibited substances and methods, although at this time there is no reliable test for abuse. Safety Although argon is non-toxic, it is 38% denser than air and therefore considered a dangerous asphyxiant in closed areas. It is difficult to detect because it is colorless, odorless, and tasteless. 
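The "38% denser than air" figure follows from the molar masses, since at the same temperature and pressure the density of a gas scales with its molar mass (ideal-gas behavior assumed; the mean molar mass of dry air used here is a standard reference value, not from the text):

```python
M_ARGON = 39.948  # g/mol, molar mass of argon
M_AIR = 28.96     # g/mol, approximate mean molar mass of dry air (assumed value)

# At equal T and P, density ratio equals molar-mass ratio.
excess = (M_ARGON / M_AIR - 1) * 100
print(f"argon is about {excess:.0f}% denser than air")
```

This is also why leaked argon pools near the floor in confined spaces, displacing breathable air from below.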
A 1994 incident, in which a man was asphyxiated after entering an argon-filled section of oil pipe under construction in Alaska, highlights the dangers of argon tank leakage in confined spaces and emphasizes the need for proper use, storage and handling. See also Industrial gas Oxygen–argon ratio, a ratio of two physically similar gases, which has importance in various sectors. References Further reading On triple point pressure at 69 kPa. On triple point pressure at 83.8058 K. External links Argon at The Periodic Table of Videos (University of Nottingham) USGS Periodic Table – Argon Diving applications: Why Argon?
897
https://en.wikipedia.org/wiki/Arsenic
Arsenic
Arsenic is a chemical element with the symbol As and atomic number 33. Arsenic occurs in many minerals, usually in combination with sulfur and metals, but also as a pure elemental crystal. Arsenic is a metalloid. It has various allotropes, but only the gray form, which has a metallic appearance, is important to industry. The primary use of arsenic is in alloys of lead (for example, in car batteries and ammunition). Arsenic is a common n-type dopant in semiconductor electronic devices. It is also a component of the III-V compound semiconductor gallium arsenide. Arsenic and its compounds, especially the trioxide, are used in the production of pesticides, treated wood products, herbicides, and insecticides. These applications are declining with the increasing recognition of the toxicity of arsenic and its compounds. A few species of bacteria are able to use arsenic compounds as respiratory metabolites. Trace quantities of arsenic are an essential dietary element in rats, hamsters, goats, chickens, and presumably other species. A role in human metabolism is not known. However, arsenic poisoning occurs in multicellular life if quantities are larger than needed. Arsenic contamination of groundwater is a problem that affects millions of people across the world. The United States Environmental Protection Agency states that all forms of arsenic are a serious risk to human health. The United States Agency for Toxic Substances and Disease Registry ranked arsenic as number 1 in its 2001 Priority List of Hazardous Substances at Superfund sites. Arsenic is classified as a Group-A carcinogen. Characteristics Physical characteristics The three most common arsenic allotropes are gray, yellow, and black arsenic, with gray being the most common. Gray arsenic (α-As, space group R3m No. 166) adopts a double-layered structure consisting of many interlocked, ruffled, six-membered rings. 
Because of weak bonding between the layers, gray arsenic is brittle and has a relatively low Mohs hardness of 3.5. Nearest and next-nearest neighbors form a distorted octahedral complex, with the three atoms in the same double-layer being slightly closer than the three atoms in the next. This relatively close packing leads to a high density of 5.73 g/cm3. Gray arsenic is a semimetal, but becomes a semiconductor with a bandgap of 1.2–1.4 eV if amorphized. Gray arsenic is also the most stable form. Yellow arsenic is soft and waxy, and somewhat similar to tetraphosphorus (P4). Both have four atoms arranged in a tetrahedral structure in which each atom is bound to each of the other three atoms by a single bond. This unstable allotrope, being molecular, is the most volatile, least dense, and most toxic. Solid yellow arsenic is produced by rapid cooling of arsenic vapor, As4. It is rapidly transformed into gray arsenic by light. The yellow form has a density of 1.97 g/cm3. Black arsenic is similar in structure to black phosphorus. Black arsenic can also be formed by cooling vapor at around 100–220 °C and by crystallization of amorphous arsenic in the presence of mercury vapors. It is glassy and brittle. It is also a poor electrical conductor. As arsenic's triple point is at 3.628 MPa (35.81 atm), it does not have a melting point at standard pressure but instead sublimes from solid to vapor at 887 K (615 °C or 1137 °F). Isotopes Arsenic occurs in nature as a monoisotopic element, composed of one stable isotope, 75As. As of 2003, at least 33 radioisotopes have also been synthesized, ranging in atomic mass from 60 to 92. The most stable of these is 73As with a half-life of 80.30 days. All other isotopes have half-lives of under one day, with the exception of 71As (t1/2=65.30 hours), 72As (t1/2=26.0 hours), 74As (t1/2=17.77 days), 76As (t1/2=1.0942 days), and 77As (t1/2=38.83 hours). 
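The triple-point pressure quoted earlier in this section can be checked by a simple unit conversion, using the definition of the standard atmosphere (1 atm = 101,325 Pa):

```python
TRIPLE_POINT_PA = 3.628e6  # arsenic's triple-point pressure in pascals (from the text)
PA_PER_ATM = 101_325       # pascals per standard atmosphere (definition)

# Converting megapascals to atmospheres reproduces the 35.81 atm figure.
print(f"{TRIPLE_POINT_PA / PA_PER_ATM:.2f} atm")
```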
Isotopes that are lighter than the stable 75As tend to decay by β+ decay, and those that are heavier tend to decay by β− decay, with some exceptions. At least 10 nuclear isomers have been described, ranging in atomic mass from 66 to 84. The most stable of arsenic's isomers is 68mAs with a half-life of 111 seconds. Chemistry Arsenic has electronegativity and ionization energies similar to those of its lighter congener phosphorus and accordingly readily forms covalent molecules with most of the nonmetals. Though stable in dry air, arsenic forms a golden-bronze tarnish upon exposure to humidity which eventually becomes a black surface layer. When heated in air, arsenic oxidizes to arsenic trioxide; the fumes from this reaction have an odor resembling garlic. This odor can be detected on striking arsenide minerals such as arsenopyrite with a hammer. It burns in oxygen to form arsenic trioxide and arsenic pentoxide, which have the same structure as the more well-known phosphorus compounds, and in fluorine to give arsenic pentafluoride. Arsenic (and some arsenic compounds) sublimes upon heating at atmospheric pressure, converting directly to a gaseous form without an intervening liquid state at 615 °C (887 K). The triple point is 3.63 MPa and 1,090 K (817 °C). Arsenic forms arsenic acid with concentrated nitric acid, arsenous acid with dilute nitric acid, and arsenic trioxide with concentrated sulfuric acid; however, it does not react with water, alkalis, or non-oxidising acids. Arsenic reacts with metals to form arsenides, though these are not ionic compounds containing the As3− ion as the formation of such an anion would be highly endothermic and even the group 1 arsenides have properties of intermetallic compounds. Like germanium, selenium, and bromine, which like arsenic succeed the 3d transition series, arsenic is much less stable in the group oxidation state of +5 than its vertical neighbors phosphorus and antimony, and hence arsenic pentoxide and arsenic acid are potent oxidizers.
Compounds Compounds of arsenic resemble in some respects those of phosphorus, which occupies the same group (column) of the periodic table. The most common oxidation states for arsenic are: −3 in the arsenides, which are alloy-like intermetallic compounds, +3 in the arsenites, and +5 in the arsenates and most organoarsenic compounds. Arsenic also bonds readily to itself, as seen in the square As44− ions in the mineral skutterudite. In the +3 oxidation state, arsenic is typically pyramidal owing to the influence of the lone pair of electrons. Inorganic compounds One of the simplest arsenic compounds is the trihydride, the highly toxic, flammable, pyrophoric arsine (AsH3). This compound is generally regarded as stable, since at room temperature it decomposes only slowly. At temperatures of 250–300 °C decomposition to arsenic and hydrogen is rapid. Several factors, such as humidity, presence of light, and certain catalysts (namely aluminium), increase the rate of decomposition. It oxidises readily in air to form arsenic trioxide and water, and analogous reactions take place with sulfur and selenium instead of oxygen. Arsenic forms colorless, odorless, crystalline oxides As2O3 ("white arsenic") and As2O5 which are hygroscopic and readily soluble in water to form acidic solutions. Arsenic(V) acid is a weak acid and the salts are called arsenates; these are the most common form of arsenic contamination of groundwater, a problem that affects many people. Synthetic arsenates include Scheele's Green (cupric hydrogen arsenate, acidic copper arsenate), calcium arsenate, and lead hydrogen arsenate. These three have been used as agricultural insecticides and poisons. The protonation steps between the arsenate and arsenic acid are similar to those between phosphate and phosphoric acid. Unlike phosphorous acid, arsenous acid is genuinely tribasic, with the formula As(OH)3. A broad variety of sulfur compounds of arsenic are known.
Orpiment (As2S3) and realgar (As4S4) are somewhat abundant and were formerly used as painting pigments. Arsenic has a formal oxidation state of +2 in As4S4, which features As–As bonds so that the total covalency of As is still 3. Both orpiment and realgar, as well as As4S3, have selenium analogs; the analogous As2Te3 is known as the mineral kalgoorlieite, and the anion As2Te− is known as a ligand in cobalt complexes. All arsenic(III) trihalides are well known except the astatide, which is unknown. Arsenic pentafluoride (AsF5) is the only important pentahalide, reflecting the lower stability of the +5 oxidation state; even so, it is a very strong fluorinating and oxidizing agent. (The pentachloride is stable only below −50 °C, at which temperature it decomposes to the trichloride, releasing chlorine gas.) Alloys Arsenic is used as the group 5 element in the III-V semiconductors gallium arsenide, indium arsenide, and aluminium arsenide. The valence electron count of GaAs is the same as that of a pair of Si atoms, but the band structure is completely different, which results in distinct bulk properties. Other arsenic alloys include the II-V semiconductor cadmium arsenide. Organoarsenic compounds A large variety of organoarsenic compounds are known. Several were developed as chemical warfare agents during World War I, including vesicants such as lewisite and vomiting agents such as adamsite. Cacodylic acid, which is of historic and practical interest, arises from the methylation of arsenic trioxide, a reaction that has no analogy in phosphorus chemistry. Cacodyl was the first organometallic compound known (even though arsenic is not a true metal) and was named from the Greek κακωδία "stink" for its offensive odor; it is very poisonous. Occurrence and production Arsenic comprises about 1.5 ppm (0.00015%) of the Earth's crust, and is the 53rd most abundant element.
Typical background concentrations of arsenic do not exceed 3 ng/m3 in the atmosphere; 100 mg/kg in soil; 400 μg/kg in vegetation; 10 μg/L in freshwater and 1.5 μg/L in seawater. Minerals with the formula MAsS and MAs2 (M = Fe, Ni, Co) are the dominant commercial sources of arsenic, together with realgar (an arsenic sulfide mineral) and native (elemental) arsenic. An illustrative mineral is arsenopyrite (FeAsS), which is structurally related to iron pyrite. Many minor As-containing minerals are known. Arsenic also occurs in various organic forms in the environment. In 2014, China was the top producer of white arsenic with almost 70% world share, followed by Morocco, Russia, and Belgium, according to the British Geological Survey and the United States Geological Survey. Most arsenic refinement operations in the US and Europe have closed over environmental concerns. Arsenic is found in the smelter dust from copper, gold, and lead smelters, and is recovered primarily from copper refinement dust. On roasting arsenopyrite in air, arsenic sublimes as arsenic(III) oxide leaving iron oxides, while roasting without air results in the production of gray arsenic. Further purification from sulfur and other chalcogens is achieved by sublimation in vacuum, in a hydrogen atmosphere, or by distillation from a molten lead-arsenic mixture. History The word arsenic has its origin in the Syriac word (al) zarniqa, from Arabic al-zarnīḵ 'the orpiment', based on Persian zarnikh, from zar 'gold', meaning "yellow" (literally "gold-colored") and hence "(yellow) orpiment". It was adopted into Greek as arsenikon (ἀρσενικόν), a form that is folk etymology, being the neuter form of the Greek word arsenikos (ἀρσενικός), meaning "male", "virile". The Greek word was adopted in Latin as arsenicum, which in French became arsenic, from which the English word arsenic is taken. Arsenic sulfides (orpiment, realgar) and oxides have been known and used since ancient times.
Zosimos (circa 300 AD) describes roasting sandarach (realgar) to obtain a cloud of arsenic (arsenic trioxide), which he then reduces to gray arsenic. As the symptoms of arsenic poisoning are not very specific, it was frequently used for murder until the advent of the Marsh test, a sensitive chemical test for its presence. (Another less sensitive but more general test is the Reinsch test.) Owing to its use by the ruling class to murder one another and its potency and discreetness, arsenic has been called the "poison of kings" and the "king of poisons". During the Bronze Age, arsenic was often included in bronze, which made the alloy harder (so-called "arsenical bronze"). The isolation of arsenic was described by Jabir ibn Hayyan before 815 AD. Albertus Magnus (Albert the Great, 1193–1280) later isolated the element from a compound in 1250, by heating soap together with arsenic trisulfide. In 1649, Johann Schröder published two ways of preparing arsenic. Crystals of elemental (native) arsenic are found in nature, although rare. Cadet's fuming liquid (impure cacodyl), often claimed as the first synthetic organometallic compound, was synthesized in 1760 by Louis Claude Cadet de Gassicourt by the reaction of potassium acetate with arsenic trioxide. In the Victorian era, "arsenic" ("white arsenic" or arsenic trioxide) was mixed with vinegar and chalk and eaten by women to improve the complexion of their faces, making their skin paler to show they did not work in the fields. The accidental use of arsenic in the adulteration of foodstuffs led to the Bradford sweet poisoning in 1858, which resulted in 21 deaths. Wallpaper production also began to use dyes made from arsenic, which was thought to increase the pigment's brightness. Two arsenic pigments have been widely used since their discovery – Paris Green and Scheele's Green. After the toxicity of arsenic became widely known, these chemicals were used less often as pigments and more often as insecticides.
In the 1860s, an arsenic byproduct of dye production, London Purple, was widely used. This was a solid mixture of arsenic trioxide, aniline, lime, and ferrous oxide, insoluble in water and very toxic by inhalation or ingestion. It was later replaced with Paris Green, another arsenic-based dye. With a better understanding of the mechanism of toxicity, two other compounds were used starting in the 1890s. Arsenite of lime and arsenate of lead were used widely as insecticides until the discovery of DDT in 1942. Applications Agricultural The toxicity of arsenic to insects, bacteria, and fungi led to its use as a wood preservative. In the 1930s, a process of treating wood with chromated copper arsenate (also known as CCA or Tanalith) was invented, and for decades, this treatment was the most extensive industrial use of arsenic. An increased appreciation of the toxicity of arsenic led to a ban of CCA in consumer products in 2004, initiated by the European Union and United States. However, CCA remains in heavy use in other countries (such as on Malaysian rubber plantations). Arsenic was also used in various agricultural insecticides and poisons. For example, lead hydrogen arsenate was a common insecticide on fruit trees, but contact with the compound sometimes resulted in brain damage among those working the sprayers. In the second half of the 20th century, monosodium methyl arsenate (MSMA) and disodium methyl arsenate (DSMA) – less toxic organic forms of arsenic – replaced lead arsenate in agriculture. These organic arsenicals were in turn phased out by 2013 in all agricultural activities except cotton farming. The biogeochemistry of arsenic is complex and includes various adsorption and desorption processes. The toxicity of arsenic is connected to its solubility and is affected by pH. Arsenite (AsO33−) is more soluble than arsenate (AsO43−) and is more toxic; however, at a lower pH, arsenate becomes more mobile and toxic.
It was found that addition of sulfur, phosphorus, and iron oxides to high-arsenite soils greatly reduces arsenic phytotoxicity. Arsenic is used as a feed additive in poultry and swine production, in particular in the U.S. to increase weight gain, improve feed efficiency, and prevent disease. An example is roxarsone, which had been used as a broiler starter by about 70% of U.S. broiler growers. Alpharma, a subsidiary of Pfizer Inc., which produces roxarsone, voluntarily suspended sales of the drug in response to studies showing elevated levels of inorganic arsenic, a carcinogen, in treated chickens. A successor to Alpharma, Zoetis, continues to sell nitarsone, primarily for use in turkeys. Arsenic is intentionally added to the feed of chickens raised for human consumption. Organic arsenic compounds are less toxic than pure arsenic, and promote the growth of chickens. Under some conditions, the arsenic in chicken feed is converted to the toxic inorganic form. A 2006 study of the remains of the Australian racehorse, Phar Lap, determined that the 1932 death of the famous champion was caused by a massive overdose of arsenic. Sydney veterinarian Percy Sykes stated, "In those days, arsenic was quite a common tonic, usually given in the form of a solution (Fowler's Solution) ... It was so common that I'd reckon 90 per cent of the horses had arsenic in their system." Medical use During the 17th, 18th, and 19th centuries, a number of arsenic compounds were used as medicines, including arsphenamine (by Paul Ehrlich) and arsenic trioxide (by Thomas Fowler). Arsphenamine, as well as neosalvarsan, was indicated for syphilis, but has been superseded by modern antibiotics. However, arsenicals such as melarsoprol are still used for the treatment of trypanosomiasis, since although these drugs have the disadvantage of severe toxicity, the disease is almost uniformly fatal if untreated. 
Arsenic trioxide has been used in a variety of ways over the past 500 years, most commonly in the treatment of cancer, but also in medications as diverse as Fowler's solution in psoriasis. In 2000, the US Food and Drug Administration approved this compound for the treatment of patients with acute promyelocytic leukemia that is resistant to all-trans retinoic acid. A 2008 paper reports success in locating tumors using arsenic-74 (a positron emitter). This isotope produces clearer PET scan images than the previous radioactive agent, iodine-124, because the body tends to transport iodine to the thyroid gland, producing signal noise. Nanoparticles of arsenic have shown the ability to kill cancer cells with less cytotoxicity than other arsenic formulations. In subtoxic doses, soluble arsenic compounds act as stimulants, and were popular as medicine in the mid-18th to 19th centuries; their use as stimulants was especially prevalent in sport animals such as racehorses and work dogs. Alloys The main use of arsenic is in alloying with lead. Lead components in car batteries are strengthened by the presence of a very small percentage of arsenic. Dezincification of brass (a copper-zinc alloy) is greatly reduced by the addition of arsenic. "Phosphorus Deoxidized Arsenical Copper" with an arsenic content of 0.3% has an increased corrosion stability in certain environments. Gallium arsenide is an important semiconductor material, used in integrated circuits. Circuits made from GaAs are much faster (but also much more expensive) than those made from silicon. Unlike silicon, GaAs has a direct bandgap, and can be used in laser diodes and LEDs to convert electrical energy directly into light. Military After World War I, the United States built a stockpile of 20,000 tons of weaponized lewisite (ClCH=CHAsCl2), an organoarsenic vesicant (blister agent) and lung irritant.
The stockpile was neutralized with bleach and dumped into the Gulf of Mexico in the 1950s. During the Vietnam War, the United States used Agent Blue, a mixture of sodium cacodylate and its acid form, as one of the rainbow herbicides to deprive North Vietnamese soldiers of foliage cover and rice. Other uses Copper acetoarsenite was used as a green pigment known under many names, including Paris Green and Emerald Green. It caused numerous arsenic poisonings. Scheele's Green, a copper arsenate, was used in the 19th century as a coloring agent in sweets. Arsenic is used in bronzing and pyrotechnics. As much as 2% of produced arsenic is used in lead alloys for lead shot and bullets. Arsenic is added in small quantities to alpha-brass to make it dezincification-resistant. This grade of brass is used in plumbing fittings and other wet environments. Arsenic is also used for taxonomic sample preservation. Arsenic was used as an opacifier in ceramics, creating white glazes. Until recently, arsenic was used in optical glass. Modern glass manufacturers, under pressure from environmentalists, have ceased using both arsenic and lead. Biological role Bacteria Some species of bacteria obtain their energy in the absence of oxygen by oxidizing various fuels while reducing arsenate to arsenite. Under oxidative environmental conditions some bacteria use arsenite as fuel, which they oxidize to arsenate. The enzymes involved are known as arsenate reductases (Arr). In 2008, bacteria were discovered that employ a version of photosynthesis in the absence of oxygen with arsenites as electron donors, producing arsenates (just as ordinary photosynthesis uses water as electron donor, producing molecular oxygen). Researchers conjecture that, over the course of history, these photosynthesizing organisms produced the arsenates that allowed the arsenate-reducing bacteria to thrive. One strain PHS-1 has been isolated and is related to the gammaproteobacterium Ectothiorhodospira shaposhnikovii. 
The mechanism is unknown, but an encoded Arr enzyme may function in reverse to its known homologues. In 2011, it was postulated that a strain of Halomonadaceae could be grown in the absence of phosphorus if that element were substituted with arsenic, exploiting the fact that the arsenate and phosphate anions are similar structurally. The study was widely criticised and subsequently refuted by independent research groups. Essential trace element in higher animals Some evidence indicates that arsenic is an essential trace mineral in birds (chickens), and in mammals (rats, hamsters, and goats). However, the biological function is not known. Heredity Arsenic has been linked to epigenetic changes, heritable changes in gene expression that occur without changes in DNA sequence. These include DNA methylation, histone modification, and RNA interference. Toxic levels of arsenic cause significant DNA hypermethylation of tumor suppressor genes p16 and p53, thus increasing risk of carcinogenesis. These epigenetic events have been studied in vitro using human kidney cells and in vivo using rat liver cells and peripheral blood leukocytes in humans. Inductively coupled plasma mass spectrometry (ICP-MS) is used to detect precise levels of intracellular arsenic and other arsenic bases involved in epigenetic modification of DNA. Studies investigating arsenic as an epigenetic factor can be used to develop precise biomarkers of exposure and susceptibility. The Chinese brake fern (Pteris vittata) hyperaccumulates arsenic from the soil into its leaves and has a proposed use in phytoremediation. Biomethylation Inorganic arsenic and its compounds, upon entering the food chain, are progressively metabolized through a process of methylation. For example, the mold Scopulariopsis brevicaulis produces trimethylarsine if inorganic arsenic is present. The organic compound arsenobetaine is found in some marine foods such as fish and algae, and also in mushrooms in larger concentrations.
The average person's intake is about 10–50 µg/day. Values of about 1,000 µg/day are not unusual following consumption of fish or mushrooms, but there is little danger in eating fish because this arsenic compound is nearly non-toxic. Environmental issues Exposure Naturally occurring sources of human exposure include volcanic ash, weathering of minerals and ores, and mineralized groundwater. Arsenic is also found in food, water, soil, and air. Arsenic is absorbed by all plants, but is more concentrated in leafy vegetables, rice, apple and grape juice, and seafood. An additional route of exposure is inhalation of atmospheric gases and dusts. During the Victorian era, arsenic was widely used in home decor, especially wallpapers. Occurrence in drinking water Extensive arsenic contamination of groundwater has led to widespread arsenic poisoning in Bangladesh and neighboring countries. It is estimated that approximately 57 million people in the Bengal basin are drinking groundwater with arsenic concentrations elevated above the World Health Organization's standard of 10 parts per billion (ppb). However, a study of cancer rates in Taiwan suggested that significant increases in cancer mortality appear only at levels above 150 ppb. The arsenic in the groundwater is of natural origin, and is released from the sediment into the groundwater owing to the anoxic conditions of the subsurface. The groundwater began to be used after local and western NGOs and the Bangladeshi government undertook a massive shallow tube well drinking-water program in the late twentieth century. This program was designed to prevent drinking of bacteria-contaminated surface waters, but failed to test for arsenic in the groundwater. Many other countries and districts in Southeast Asia, such as Vietnam and Cambodia, have geological environments that produce groundwater with a high arsenic content.
Arsenicosis was reported in Nakhon Si Thammarat, Thailand in 1987, and the Chao Phraya River probably contains high levels of naturally occurring dissolved arsenic without being a public health problem because much of the public uses bottled water. In Pakistan, more than 60 million people are exposed to arsenic-polluted drinking water, according to a recent report in Science. Podgorski's team investigated more than 1,200 samples, and more than 66% exceeded the WHO's contamination limit. Since the 1980s, residents of the Ba Men region of Inner Mongolia, China, have been chronically exposed to arsenic through drinking water from contaminated wells. A 2009 research study observed an elevated presence of skin lesions among residents with well water arsenic concentrations between 5 and 10 µg/L, suggesting that arsenic-induced toxicity may occur at relatively low concentrations with chronic exposure. Overall, 20 of China's 34 provinces have high arsenic concentrations in the groundwater supply, potentially exposing 19 million people to hazardous drinking water. In the United States, arsenic is most commonly found in the ground waters of the southwest. Parts of New England, Michigan, Wisconsin, Minnesota and the Dakotas are also known to have significant concentrations of arsenic in ground water. Increased levels of skin cancer have been associated with arsenic exposure in Wisconsin, even at levels below the 10 part per billion drinking water standard. According to a recent film funded by the US Superfund, millions of private wells have unknown arsenic levels, and in some areas of the US, more than 20% of the wells may contain levels that exceed established limits. Low-level exposure to arsenic at concentrations of 100 parts per billion (i.e., above the 10 parts per billion drinking water standard) compromises the initial immune response to H1N1 or swine flu infection according to NIEHS-supported scientists.
The study, conducted in laboratory mice, suggests that people exposed to arsenic in their drinking water may be at increased risk for more serious illness or death from the virus. Some Canadians are drinking water that contains inorganic arsenic. Water from privately dug wells is most at risk of containing inorganic arsenic. Preliminary well water analysis typically does not test for arsenic. Researchers at the Geological Survey of Canada have modeled relative variation in natural arsenic hazard potential for the province of New Brunswick. This study has important implications for potable water and health concerns relating to inorganic arsenic. Epidemiological evidence from Chile shows a dose-dependent connection between chronic arsenic exposure and various forms of cancer, in particular when other risk factors, such as cigarette smoking, are present. These effects have been demonstrated at contaminations of less than 50 ppb. Arsenic is itself a constituent of tobacco smoke. Analyzing multiple epidemiological studies on inorganic arsenic exposure suggests a small but measurable increase in risk for bladder cancer at 10 ppb. According to Peter Ravenscroft of the Department of Geography at the University of Cambridge, roughly 80 million people worldwide consume between 10 and 50 ppb arsenic in their drinking water. If they all consumed exactly 10 ppb arsenic in their drinking water, the pooled epidemiological analysis cited above would predict an additional 2,000 cases of bladder cancer alone. This represents a clear underestimate of the overall impact, since it does not include lung or skin cancer, and explicitly underestimates the exposure. Those exposed to levels of arsenic above the current WHO standard should weigh the costs and benefits of arsenic remediation. Early (1973) evaluations of the processes for removing dissolved arsenic from drinking water demonstrated the efficacy of co-precipitation with either iron or aluminum oxides.
In particular, iron as a coagulant was found to remove arsenic with an efficacy exceeding 90%. Several adsorptive media systems have been approved for use at point-of-service in a study funded by the United States Environmental Protection Agency (US EPA) and the National Science Foundation (NSF). A team of European and Indian scientists and engineers has set up six arsenic treatment plants in West Bengal based on an in-situ remediation method (SAR Technology). This technology does not use any chemicals; arsenic is left in an insoluble form (+5 state) in the subterranean zone by recharging aerated water into the aquifer and developing an oxidation zone that supports arsenic-oxidizing micro-organisms. This process does not produce any waste stream or sludge and is relatively cheap. Another effective and inexpensive method to avoid arsenic contamination is to sink wells 500 feet or deeper to reach purer waters. A 2011 study funded by the US National Institute of Environmental Health Sciences' Superfund Research Program shows that deep sediments can remove arsenic and take it out of circulation. In this process, called adsorption, arsenic sticks to the surfaces of deep sediment particles and is naturally removed from the ground water. Magnetic separations of arsenic at very low magnetic field gradients with high-surface-area and monodisperse magnetite (Fe3O4) nanocrystals have been demonstrated in point-of-use water purification. Using the high specific surface area of Fe3O4 nanocrystals, the mass of waste associated with arsenic removal from water has been dramatically reduced. Epidemiological studies have suggested a correlation between chronic consumption of drinking water contaminated with arsenic and the incidence of all leading causes of mortality. The literature indicates that arsenic exposure is causative in the pathogenesis of diabetes. Chaff-based filters have recently been shown to reduce the arsenic content of water to 3 µg/L.
This may find applications in areas where the potable water is extracted from underground aquifers. San Pedro de Atacama For several centuries, the people of San Pedro de Atacama in Chile have been drinking water that is contaminated with arsenic, and some evidence suggests they have developed some immunity. Hazard maps for contaminated groundwater Around one-third of the world's population drinks water from groundwater resources. Of this, about 10 percent, approximately 300 million people, obtain water from groundwater resources that are contaminated with unhealthy levels of arsenic or fluoride. These trace elements derive mainly from minerals and ions in the ground. Redox transformation of arsenic in natural waters Arsenic is unique among the trace metalloids and oxyanion-forming trace metals (e.g. As, Se, Sb, Mo, V, Cr, U, Re). It is sensitive to mobilization at pH values typical of natural waters (pH 6.5–8.5) under both oxidizing and reducing conditions. Arsenic can occur in the environment in several oxidation states (−3, 0, +3 and +5), but in natural waters it is mostly found in inorganic forms as oxyanions of trivalent arsenite [As(III)] or pentavalent arsenate [As(V)]. Organic forms of arsenic are produced by biological activity, mostly in surface waters, but are rarely quantitatively important. Organic arsenic compounds may, however, occur where waters are significantly impacted by industrial pollution. Arsenic may be solubilized by various processes. When pH is high, arsenic may be released from surface binding sites that lose their positive charge. When the water level drops and sulfide minerals are exposed to air, arsenic trapped in sulfide minerals can be released into water. When organic carbon is present in water, bacteria are fed by directly reducing As(V) to As(III) or by reducing the element at the binding site, releasing inorganic arsenic.
The aquatic transformations of arsenic are affected by pH, reduction-oxidation potential, organic matter concentration and the concentrations and forms of other elements, especially iron and manganese. The main factors are pH and the redox potential. Generally, the main forms of arsenic under oxic conditions are H3AsO4, H2AsO4−, HAsO42−, and AsO43− at pH below 2, 2–7, 7–11, and above 11, respectively. Under reducing conditions, H3AsO3 is predominant at pH 2–9. Oxidation and reduction affect the migration of arsenic in subsurface environments. Arsenite is the most stable soluble form of arsenic in reducing environments and arsenate, which is less mobile than arsenite, is dominant in oxidizing environments at neutral pH. Therefore, arsenic may be more mobile under reducing conditions. The reducing environment is also rich in organic matter, which may enhance the solubility of arsenic compounds. As a result, the adsorption of arsenic is reduced and dissolved arsenic accumulates in groundwater. That is why the arsenic content is higher in reducing environments than in oxidizing environments. The presence of sulfur is another factor that affects the transformation of arsenic in natural water. Arsenic can precipitate when metal sulfides form. In this way, arsenic is removed from the water and its mobility decreases. When oxygen is present, bacteria oxidize reduced sulfur to generate energy, potentially releasing bound arsenic. Redox reactions involving Fe also appear to be essential factors in the fate of arsenic in aquatic systems. The reduction of iron oxyhydroxides plays a key role in the release of arsenic to water. Thus arsenic can be enriched in water with elevated Fe concentrations. Under oxidizing conditions, arsenic can be mobilized from pyrite or iron oxides especially at elevated pH. Under reducing conditions, arsenic can be mobilized by reductive desorption or dissolution when associated with iron oxides. The reductive desorption occurs under two circumstances.
One is when arsenate is reduced to arsenite, which adsorbs to iron oxides less strongly. The other results from a change in the charge on the mineral surface, which leads to the desorption of bound arsenic. Some species of bacteria catalyze redox transformations of arsenic. Dissimilatory arsenate-respiring prokaryotes (DARP) speed up the reduction of As(V) to As(III). DARP use As(V) as the electron acceptor of anaerobic respiration and obtain energy to survive. Other organic and inorganic substances can be oxidized in this process. Chemoautotrophic arsenite oxidizers (CAO) and heterotrophic arsenite oxidizers (HAO) convert As(III) into As(V). CAO combine the oxidation of As(III) with the reduction of oxygen or nitrate. They use the energy obtained to fix CO2 into organic carbon. HAO cannot obtain energy from As(III) oxidation. This process may be an arsenic detoxification mechanism for the bacteria. Equilibrium thermodynamic calculations predict that As(V) concentrations should be greater than As(III) concentrations in all but strongly reducing conditions, i.e. where SO42− reduction is occurring. However, abiotic redox reactions of arsenic are slow. Oxidation of As(III) by dissolved O2 is a particularly slow reaction. For example, Johnson and Pilson (1975) gave half-lives for the oxygenation of As(III) in seawater ranging from several months to a year. In other studies, As(V)/As(III) ratios were stable over periods of days or weeks during water sampling when no particular care was taken to prevent oxidation, again suggesting relatively slow oxidation rates. Cherry found from experimental studies that the As(V)/As(III) ratios were stable in anoxic solutions for up to 3 weeks but that gradual changes occurred over longer timescales. Sterile water samples have been observed to be less susceptible to speciation changes than non-sterile samples.
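A first-order rate constant k and a half-life are two expressions of the same quantity, related by t½ = ln 2 / k. A minimal illustrative conversion (the example inputs of 0.3 per day and a six-month half-life are merely representative of the ranges discussed here, not values from the cited studies):

```python
import math

def half_life_days(k_per_day: float) -> float:
    """First-order half-life from a rate constant: t_half = ln(2) / k."""
    return math.log(2.0) / k_per_day

def rate_constant(t_half_days: float) -> float:
    """Inverse conversion: k = ln(2) / t_half."""
    return math.log(2.0) / t_half_days

# A bacterially catalyzed rate constant of 0.3 /day corresponds to a
# half-life of about 2.3 days, whereas an abiotic oxidation half-life
# of roughly six months (~183 days) implies k of only ~0.004 /day.
print(round(half_life_days(0.3), 2))    # 2.31
print(round(rate_constant(183.0), 4))   # 0.0038
```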
Oremland found that the reduction of As(V) to As(III) in Mono Lake was rapidly catalyzed by bacteria, with rate constants ranging from 0.02 to 0.3 day−1. Wood preservation in the US As of 2002, US-based industries consumed 19,600 metric tons of arsenic. Ninety percent of this was used for treatment of wood with chromated copper arsenate (CCA). In 2007, 50% of the 5,280 metric tons of consumption was still used for this purpose. In the United States, the voluntary phasing-out of arsenic in production of consumer products and residential and general consumer construction products began on 31 December 2003, and alternative chemicals are now used, such as Alkaline Copper Quaternary, borates, copper azole, cyproconazole, and propiconazole. Although discontinued, this application is also one of the most concerning to the general public. The vast majority of older pressure-treated wood was treated with CCA. CCA lumber is still in widespread use in many countries, and was heavily used during the latter half of the 20th century as a structural and outdoor building material. Although the use of CCA lumber was banned in many areas after studies showed that arsenic could leach out of the wood into the surrounding soil (from playground equipment, for instance), a risk is also presented by the burning of older CCA timber. The direct or indirect ingestion of wood ash from burnt CCA lumber has caused fatalities in animals and serious poisonings in humans; the lethal human dose is approximately 20 grams of ash. Scrap CCA lumber from construction and demolition sites may be inadvertently used in commercial and domestic fires. Protocols for safe disposal of CCA lumber are not consistent throughout the world. Widespread landfill disposal of such timber raises some concern, but other studies have shown no arsenic contamination in the groundwater. 
Mapping of industrial releases in the US One tool that maps the location (and other information) of arsenic releases in the United States is TOXMAP. TOXMAP is a Geographic Information System (GIS) from the Division of Specialized Information Services of the United States National Library of Medicine (NLM) funded by the US Federal Government. With marked-up maps of the United States, TOXMAP enables users to visually explore data from the United States Environmental Protection Agency's (EPA) Toxics Release Inventory and Superfund Basic Research Programs. TOXMAP's chemical and environmental health information is taken from NLM's Toxicology Data Network (TOXNET), PubMed, and from other authoritative sources. Bioremediation Physical, chemical, and biological methods have been used to remediate arsenic contaminated water. Bioremediation is said to be cost-effective and environmentally friendly. Bioremediation of ground water contaminated with arsenic aims to convert arsenite, the form of arsenic that is more toxic to humans, to arsenate. Arsenate (+5 oxidation state) is the dominant form of arsenic in surface water, while arsenite (+3 oxidation state) is the dominant form in hypoxic to anoxic environments. Arsenite is more soluble and mobile than arsenate. Many species of bacteria can transform arsenite to arsenate in anoxic conditions by using arsenite as an electron donor. This is a useful method in ground water remediation. Another bioremediation strategy is to use plants that accumulate arsenic in their tissues via phytoremediation, but the disposal of contaminated plant material needs to be considered. Bioremediation requires careful evaluation and design in accordance with existing conditions. Some sites may require the addition of an electron acceptor while others require microbe supplementation (bioaugmentation). Regardless of the method used, only constant monitoring can prevent future contamination. 
Toxicity and precautions Arsenic and many of its compounds are especially potent poisons. Classification Elemental arsenic and arsenic sulfate and trioxide compounds are classified as "toxic" and "dangerous for the environment" in the European Union under directive 67/548/EEC. The International Agency for Research on Cancer (IARC) recognizes arsenic and inorganic arsenic compounds as group 1 carcinogens, and the EU lists arsenic trioxide, arsenic pentoxide, and arsenate salts as category 1 carcinogens. Arsenic is known to cause arsenicosis when present in drinking water, "the most common species being arsenate [HAsO42−; As(V)] and arsenite [H3AsO3; As(III)]". Legal limits, food, and drink In the United States since 2006, the maximum concentration in drinking water allowed by the Environmental Protection Agency (EPA) is 10 ppb, and the FDA set the same standard in 2005 for bottled water. The Department of Environmental Protection for New Jersey set a drinking water limit of 5 ppb in 2006. The IDLH (immediately dangerous to life and health) value for arsenic metal and inorganic arsenic compounds is 5 mg/m3. The Occupational Safety and Health Administration has set the permissible exposure limit (PEL) to a time-weighted average (TWA) of 0.01 mg/m3, and the National Institute for Occupational Safety and Health (NIOSH) has set the recommended exposure limit (REL) to a 15-minute constant exposure of 0.002 mg/m3. The PEL for organic arsenic compounds is a TWA of 0.5 mg/m3. In 2008, based on its ongoing testing of a wide variety of American foods for toxic chemicals, the U.S. Food and Drug Administration set the "level of concern" for inorganic arsenic in apple and pear juices at 23 ppb, based on non-carcinogenic effects, and began blocking importation of products in excess of this level; it also required recalls for non-conforming domestic products. In 2011, the national Dr. 
Oz television show broadcast a program highlighting tests performed by an independent lab hired by the producers. Though the methodology was disputed (it did not distinguish between organic and inorganic arsenic), the tests showed levels of arsenic up to 36 ppb. In response, the FDA tested the worst brand from the Dr. Oz show and found much lower levels. Ongoing testing found 95% of the apple juice samples were below the level of concern. Later testing by Consumer Reports showed inorganic arsenic at levels slightly above 10 ppb, and the organization urged parents to reduce consumption. In July 2013, on consideration of consumption by children, chronic exposure, and carcinogenic effect, the FDA established an "action level" of 10 ppb for apple juice, the same as the drinking water standard. Concern about arsenic in rice in Bangladesh was raised in 2002, but at the time only Australia had a legal limit for food (one milligram per kilogram). Concern was raised about people who were eating U.S. rice exceeding WHO standards for personal arsenic intake in 2005. In 2011, the People's Republic of China set a food standard of 150 ppb for arsenic. In the United States in 2012, testing by separate groups of researchers at the Children's Environmental Health and Disease Prevention Research Center at Dartmouth College (early in the year, focusing on urinary levels in children) and Consumer Reports (in November) found levels of arsenic in rice that resulted in calls for the FDA to set limits. The FDA released some testing results in September 2012, and as of July 2013, is still collecting data in support of a new potential regulation. It has not recommended any changes in consumer behavior. 
Consumer Reports recommended: That the EPA and FDA eliminate arsenic-containing fertilizer, drugs, and pesticides in food production; That the FDA establish a legal limit for food; That industry change production practices to lower arsenic levels, especially in food for children; and That consumers test home water supplies, eat a varied diet, and cook rice with excess water, then drain it off (reducing inorganic arsenic by about one third, along with a slight reduction in vitamin content). Evidence-based public health advocates also recommend that, given the lack of regulation or labeling for arsenic in the U.S., children should eat no more than 1.5 servings per week of rice and should not drink rice milk as part of their daily diet before age 5. They also offer recommendations for adults and infants on how to limit arsenic exposure from rice, drinking water, and fruit juice. A 2014 World Health Organization advisory conference was scheduled to consider limits of 200–300 ppb for rice. Reducing arsenic content in rice In 2020, scientists assessed multiple preparation procedures of rice for their capacity to reduce arsenic content and preserve nutrients, recommending a procedure involving parboiling and water-absorption. Occupational exposure limits Ecotoxicity Arsenic is bioaccumulative in many organisms, marine species in particular, but it does not appear to biomagnify significantly in food webs. In polluted areas, plant growth may be affected by root uptake of arsenate, which is a phosphate analog and therefore readily transported in plant tissues and cells. In polluted areas, uptake of the more toxic arsenite ion (found more particularly in reducing conditions) is likely in poorly-drained soils. Toxicity in animals Biological mechanism Arsenic's toxicity comes from the affinity of arsenic(III) oxides for thiols. Thiols, in the form of cysteine residues and cofactors such as lipoic acid and coenzyme A, are situated at the active sites of many important enzymes. 
Arsenic disrupts ATP production through several mechanisms. At the level of the citric acid cycle, arsenic inhibits lipoic acid, which is a cofactor for pyruvate dehydrogenase. By competing with phosphate, arsenate uncouples oxidative phosphorylation, thus inhibiting energy-linked reduction of NAD+, mitochondrial respiration, and ATP synthesis. Hydrogen peroxide production is also increased, which, it is speculated, has potential to form reactive oxygen species and oxidative stress. These metabolic interferences lead to death from multi-system organ failure. The organ failure is presumed to be from necrotic cell death, not apoptosis, since energy reserves have been too depleted for apoptosis to occur. Exposure risks and remediation Occupational exposure and arsenic poisoning may occur in persons working in industries involving the use of inorganic arsenic and its compounds, such as wood preservation, glass production, nonferrous metal alloys, and electronic semiconductor manufacturing. Inorganic arsenic is also found in coke oven emissions associated with the smelter industry. The conversion between As(III) and As(V) is a large factor in arsenic environmental contamination. According to Croal, Gralnick, Malasarn and Newman, "[the] understanding [of] what stimulates As(III) oxidation and/or limits As(V) reduction is relevant for bioremediation of contaminated sites" (Croal). The study of chemolithoautotrophic As(III) oxidizers and heterotrophic As(V) reducers can help the understanding of the oxidation and/or reduction of arsenic. Treatment Treatment of chronic arsenic poisoning is possible. British anti-lewisite (dimercaprol) is prescribed in doses of 5 mg/kg up to 300 mg every 4 hours for the first day, then every 6 hours for the second day, and finally every 8 hours for 8 additional days. However, the US Agency for Toxic Substances and Disease Registry (ATSDR) states that the long-term effects of arsenic exposure cannot be predicted. 
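The dimercaprol schedule above implies a fixed number of administrations over the ten-day course. A small arithmetic sketch (assuming doses fall at exact intervals across each full 24-hour day; this is bookkeeping for the quoted schedule, not clinical guidance):

```python
# Count of doses for the quoted dimercaprol schedule: every 4 h on day 1,
# every 6 h on day 2, every 8 h for 8 further days.
# Assumes exact spacing over full 24-hour days (an idealization).

schedule = [
    (1, 4),  # (days, hours between doses): day 1
    (1, 6),  # day 2
    (8, 8),  # days 3-10
]

total_doses = sum(days * 24 // interval for days, interval in schedule)
print(total_doses)  # 34 doses over the full course

# With the 300 mg cap per dose, the cumulative ceiling is:
print(total_doses * 300, "mg")  # 10200 mg upper bound
```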
Blood, urine, hair, and nails may be tested for arsenic; however, these tests cannot foresee possible health outcomes from the exposure. Long-term exposure and consequent excretion through urine has been linked to bladder and kidney cancer in addition to cancer of the liver, prostate, skin, lungs, and nasal cavity. See also Aqua Tofana Arsenic and Old Lace Arsenic biochemistry Arsenic compounds Arsenic poisoning Arsenic toxicity Arsenic trioxide Fowler's solution GFAJ-1 Grainger challenge Hypothetical types of biochemistry Organoarsenic chemistry Toxic heavy metal White arsenic References Bibliography Further reading External links Arsenic Cancer Causing Substances, U.S. National Cancer Institute. CTD's Arsenic page and CTD's Arsenicals page from the Comparative Toxicogenomics Database Arsenic intoxication: general aspects and chelating agents, by Geir Bjørklund, Massimiliano Peana et al. Archives of Toxicology (2020) 94:1879–1897. A Small Dose of Toxicology Arsenic in groundwater Book on arsenic in groundwater by IAH's Netherlands Chapter and the Netherlands Hydrological Society Contaminant Focus: Arsenic by the EPA. Environmental Health Criteria for Arsenic and Arsenic Compounds, 2001 by the WHO. National Institute for Occupational Safety and Health – Arsenic Page Arsenic at The Periodic Table of Videos (University of Nottingham)
898
https://en.wikipedia.org/wiki/Antimony
Antimony
Antimony is a chemical element with the symbol Sb (from Latin stibium) and atomic number 51. A lustrous gray metalloid, it is found in nature mainly as the sulfide mineral stibnite (Sb2S3). Antimony compounds have been known since ancient times and were powdered for use as medicine and cosmetics, often known by the Arabic name kohl. The earliest known description of the metal in the West was written in 1540 by Vannoccio Biringuccio. China is the largest producer of antimony and its compounds, with most production coming from the Xikuangshan Mine in Hunan. The industrial methods for refining antimony from stibnite are roasting followed by reduction with carbon, or direct reduction of stibnite with iron. The largest applications for metallic antimony are in alloys with lead and tin, which have improved properties for solders, bullets, and plain bearings. It improves the rigidity of lead-alloy plates in lead–acid batteries. Antimony trioxide is a prominent additive for halogen-containing flame retardants. Antimony is used as a dopant in semiconductor devices. Characteristics Properties Antimony is a member of group 15 of the periodic table, one of the elements called pnictogens, and has an electronegativity of 2.05. In accordance with periodic trends, it is more electronegative than tin or bismuth, and less electronegative than tellurium or arsenic. Antimony is stable in air at room temperature, but reacts with oxygen if heated to produce antimony trioxide, Sb2O3. Antimony is a silvery, lustrous gray metalloid with a Mohs scale hardness of 3, which is too soft to make hard objects. Coins of antimony were issued in China's Guizhou province in 1931; durability was poor, and minting was soon discontinued. Antimony is resistant to attack by acids. Four allotropes of antimony are known: a stable metallic form, and three metastable forms (explosive, black, and yellow). Elemental antimony is a brittle, silver-white, shiny metalloid. 
When slowly cooled, molten antimony crystallizes into a trigonal cell, isomorphic with the gray allotrope of arsenic. A rare explosive form of antimony can be formed from the electrolysis of antimony trichloride. When scratched with a sharp implement, an exothermic reaction occurs and white fumes are given off as metallic antimony forms; when rubbed with a pestle in a mortar, a strong detonation occurs. Black antimony is formed upon rapid cooling of antimony vapor. It has the same crystal structure as red phosphorus and black arsenic; it oxidizes in air and may ignite spontaneously. At 100 °C, it gradually transforms into the stable form. The yellow allotrope of antimony is the most unstable; it has been generated only by oxidation of stibine (SbH3) at −90 °C. Above this temperature and in ambient light, this metastable allotrope transforms into the more stable black allotrope. Elemental antimony adopts a layered structure (space group R3m, No. 166) whose layers consist of fused, ruffled, six-membered rings. The nearest and next-nearest neighbors form an irregular octahedral complex, with the three atoms in each double layer slightly closer than the three atoms in the next. This relatively close packing leads to a high density of 6.697 g/cm3, but the weak bonding between the layers leads to the low hardness and brittleness of antimony. Isotopes Antimony has two stable isotopes: 121Sb with a natural abundance of 57.36% and 123Sb with a natural abundance of 42.64%. It also has 35 radioisotopes, of which the longest-lived is 125Sb with a half-life of 2.75 years. In addition, 29 metastable states have been characterized. The most stable of these is 120m1Sb with a half-life of 5.76 days. Isotopes that are lighter than the stable 123Sb tend to decay by β+ decay, and those that are heavier tend to decay by β− decay, with some exceptions. 
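As a rough consistency check, the standard atomic weight of antimony can be estimated from the two stable isotopes' abundances. Using mass numbers in place of exact isotopic masses (an approximation; the accepted atomic weight is about 121.76):

```python
# Abundance-weighted average over antimony's two stable isotopes.
# Mass numbers (121, 123) stand in for the true isotopic masses
# (~120.90 and ~122.90), so the estimate overshoots slightly.

abundances = {121: 0.5736, 123: 0.4264}  # mass number -> natural abundance

estimate = sum(mass * frac for mass, frac in abundances.items())
print(f"{estimate:.2f}")  # 121.85, vs. the accepted ~121.76
```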
Occurrence The abundance of antimony in the Earth's crust is estimated to be 0.2 to 0.5 parts per million, comparable to thallium at 0.5 parts per million and silver at 0.07 ppm. Even though this element is not abundant, it is found in more than 100 mineral species. Antimony is sometimes found natively (e.g. on Antimony Peak), but more frequently it is found in the sulfide stibnite (Sb2S3), which is the predominant ore mineral. Compounds Antimony compounds are often classified according to their oxidation state: Sb(III) and Sb(V). The +5 oxidation state is more stable. Oxides and hydroxides Antimony trioxide is formed when antimony is burnt in air. In the gas phase, the molecule of the compound is Sb4O6, but it polymerizes upon condensing. Antimony pentoxide (Sb2O5) can be formed only by oxidation with concentrated nitric acid. Antimony also forms a mixed-valence oxide, antimony tetroxide (Sb2O4), which features both Sb(III) and Sb(V). Unlike oxides of phosphorus and arsenic, these oxides are amphoteric, do not form well-defined oxoacids, and react with acids to form antimony salts. Antimonous acid is unknown, but the conjugate base sodium antimonite (NaSbO2) forms upon fusing sodium oxide and Sb2O3. Transition metal antimonites are also known. Antimonic acid exists only as the hydrate HSb(OH)6, forming salts as the antimonate anion Sb(OH)6−. When a solution containing this anion is dehydrated, the precipitate contains mixed oxides. Many antimony ores are sulfides, including stibnite (Sb2S3), pyrargyrite (Ag3SbS3), zinkenite, jamesonite, and boulangerite. Antimony pentasulfide is non-stoichiometric and features antimony in the +3 oxidation state and S–S bonds. Several thioantimonides are known, such as [Sb6S10]2− and [Sb8S13]2−. Halides Antimony forms two series of halides: SbX3 and SbX5. The trihalides SbF3, SbCl3, SbBr3, and SbI3 are all molecular compounds having trigonal pyramidal molecular geometry. The trifluoride is prepared by the reaction of Sb2O3 with HF: Sb2O3 + 6 HF → 2 SbF3 + 3 H2O It is Lewis acidic and readily accepts fluoride ions to form the complex anions SbF4− and SbF52−. 
Molten SbF3 is a weak electrical conductor. The trichloride is prepared by dissolving Sb2S3 in hydrochloric acid: Sb2S3 + 6 HCl → 2 SbCl3 + 3 H2S The pentahalides SbF5 and SbCl5 have trigonal bipyramidal molecular geometry in the gas phase, but in the liquid phase, SbF5 is polymeric, whereas SbCl5 is monomeric. SbF5 is a powerful Lewis acid used to make the superacid fluoroantimonic acid ("H2SbF7"). Oxyhalides are more common for antimony than for arsenic and phosphorus. Antimony trioxide dissolves in concentrated acid to form oxoantimonyl compounds such as SbOCl and (SbO)2SO4. Antimonides, hydrides, and organoantimony compounds Compounds in this class generally are described as derivatives of Sb3−. Antimony forms antimonides with metals, such as indium antimonide (InSb) and silver antimonide (Ag3Sb). The alkali metal and zinc antimonides, such as Na3Sb and Zn3Sb2, are more reactive. Treating these antimonides with acid produces the highly unstable gas stibine, SbH3: Sb3− + 3 H+ → SbH3 Stibine can also be produced by treating Sb3+ salts with hydride reagents such as sodium borohydride. Stibine decomposes spontaneously at room temperature. Because stibine has a positive heat of formation, it is thermodynamically unstable and thus antimony does not react with hydrogen directly. Organoantimony compounds are typically prepared by alkylation of antimony halides with Grignard reagents. A large variety of compounds are known with both Sb(III) and Sb(V) centers, including mixed chloro-organic derivatives, anions, and cations. Examples include Sb(C6H5)3 (triphenylstibine), Sb2(C6H5)4 (with an Sb-Sb bond), and cyclic [Sb(C6H5)]n. Pentacoordinated organoantimony compounds are common, examples being Sb(C6H5)5 and several related halides. History Antimony(III) sulfide, Sb2S3, was recognized in predynastic Egypt as an eye cosmetic (kohl) as early as about 3100 BC, when the cosmetic palette was invented. 
An artifact, said to be part of a vase, made of antimony dating to about 3000 BC was found at Telloh, Chaldea (part of present-day Iraq), and a copper object plated with antimony dating between 2500 BC and 2200 BC has been found in Egypt. Austen, at a lecture by Herbert Gladstone in 1892, commented that "we only know of antimony at the present day as a highly brittle and crystalline metal, which could hardly be fashioned into a useful vase, and therefore this remarkable 'find' (artifact mentioned above) must represent the lost art of rendering antimony malleable." The British archaeologist Roger Moorey was unconvinced the artifact was indeed a vase, mentioning that Selimkhanov, after his analysis of the Tello object (published in 1975), "attempted to relate the metal to Transcaucasian natural antimony" (i.e. native metal) and that "the antimony objects from Transcaucasia are all small personal ornaments." This weakens the evidence for a lost art "of rendering antimony malleable." The Roman scholar Pliny the Elder described several ways of preparing antimony sulfide for medical purposes in his treatise Natural History, around 77 AD. Pliny the Elder also made a distinction between "male" and "female" forms of antimony; the male form is probably the sulfide, while the female form, which is superior, heavier, and less friable, has been suspected to be native metallic antimony. The Greek naturalist Pedanius Dioscorides mentioned that antimony sulfide could be roasted by heating in a current of air. It is thought that this produced metallic antimony. The intentional isolation of antimony is described by Jabir ibn Hayyan before 815 AD. A description of a procedure for isolating antimony is later given in the 1540 book De la pirotechnia by Vannoccio Biringuccio, predating the more famous 1556 book by Agricola, De re metallica. In this context Agricola has been often incorrectly credited with the discovery of metallic antimony. 
The book Currus Triumphalis Antimonii (The Triumphal Chariot of Antimony), describing the preparation of metallic antimony, was published in Germany in 1604. It was purported to be written by a Benedictine monk, writing under the name Basilius Valentinus in the 15th century; if it were authentic, which it is not, it would predate Biringuccio. The metal antimony was known to German chemist Andreas Libavius in 1615, who obtained it by adding iron to a molten mixture of antimony sulfide, salt, and potassium tartrate. This procedure produced antimony with a crystalline or starred surface. With the advent of challenges to phlogiston theory, it was recognized that antimony is an element forming sulfides, oxides, and other compounds, as do other metals. The first discovery of naturally occurring pure antimony in the Earth's crust was described by the Swedish scientist and local mine district engineer Anton von Swab in 1783; the type-sample was collected from the Sala Silver Mine in the Bergslagen mining district of Sala, Västmanland, Sweden. Etymology The medieval Latin form, from which the modern languages and late Byzantine Greek take their names for antimony, is antimonium. The origin of this is uncertain; all suggestions have some difficulty either of form or interpretation. The popular etymology, from ἀντίμοναχός anti-monachos or French antimoine, still has adherents; this would mean "monk-killer", and is explained by many early alchemists being monks, and antimony being poisonous. However, the low toxicity of antimony (see below) makes this unlikely. Another popular etymology is the hypothetical Greek word ἀντίμόνος antimonos, "against aloneness", explained as "not found as metal", or "not found unalloyed". Lippmann conjectured a hypothetical Greek word ανθήμόνιον anthemonion, which would mean "floret", and cites several examples of related Greek words (but not that one) which describe chemical or biological efflorescence. 
The early uses of antimonium include the translations, in 1050–1100, by Constantine the African of Arabic medical treatises. Several authorities believe antimonium is a scribal corruption of some Arabic form; Meyerhof derives it from ithmid; other possibilities include athimar, the Arabic name of the metalloid, and a hypothetical as-stimmi, derived from or parallel to the Greek. The standard chemical symbol for antimony (Sb) is credited to Jöns Jakob Berzelius, who derived the abbreviation from stibium. The ancient words for antimony mostly have, as their chief meaning, kohl, the sulfide of antimony. The Egyptians called antimony mśdmt; in hieroglyphs, the vowels are uncertain, but the Coptic form of the word is ⲥⲧⲏⲙ (stēm). The Greek word, στίμμι (stimmi), is used by Attic tragic poets of the 5th century BC, and is possibly a loan word from Arabic or from Egyptian stm. Later Greeks also used στίβι stibi, as did Celsus and Pliny, writing in Latin, in the first century AD. Pliny also gives the names stimi, larbaris, alabaster, and the "very common" platyophthalmos, "wide-eye" (from the effect of the cosmetic). Later Latin authors adapted the word to Latin as stibium. The Arabic word for the substance, as opposed to the cosmetic, can appear as إثمد ithmid, athmoud, othmod, or uthmod. Littré suggests the first form, which is the earliest, derives from stimmida, an accusative for stimmi. Production Process The extraction of antimony from ores depends on the quality and composition of the ore. Most antimony is mined as the sulfide; lower-grade ores are concentrated by froth flotation, while higher-grade ores are heated to 500–600 °C, the temperature at which stibnite melts and separates from the gangue minerals. 
Antimony can be isolated from the crude antimony sulfide by reduction with scrap iron: Sb2S3 + 3 Fe → 2 Sb + 3 FeS The sulfide is converted to an oxide; the product is then roasted, sometimes for the purpose of vaporizing the volatile antimony(III) oxide, which is recovered. This material is often used directly for the main applications, impurities being arsenic and sulfide. Antimony is isolated from the oxide by a carbothermal reduction: 2 Sb2O3 + 3 C → 4 Sb + 3 CO2 The lower-grade ores are reduced in blast furnaces while the higher-grade ores are reduced in reverberatory furnaces. Top producers and production volumes The British Geological Survey (BGS) reported that in 2005 China was the top producer of antimony with approximately 84% of the world share, followed at a distance by South Africa, Bolivia and Tajikistan. Xikuangshan Mine in Hunan province has the largest deposits in China with an estimated deposit of 2.1 million metric tons. In 2016, according to the US Geological Survey, China accounted for 76.9% of total antimony production, followed in second place by Russia with 6.9% and Tajikistan with 6.2%. Chinese production of antimony is expected to decline in the future as mines and smelters are closed down by the government as part of pollution control. Especially due to an environmental protection law having gone into effect in January 2015 and revised "Emission Standards of Pollutants for Stanum, Antimony, and Mercury" having gone into effect, hurdles for economic production are higher. According to the National Bureau of Statistics in China, by September 2015 50% of antimony production capacity in the Hunan province (the province with the biggest antimony reserves in China) had not been used. Reported production of antimony in China has fallen and is unlikely to increase in the coming years, according to the Roskill report. No significant antimony deposits in China have been developed for about ten years, and the remaining economic reserves are being rapidly depleted. 
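Both reduction reactions in the process description above (iron reduction of stibnite, and carbothermal reduction of the trioxide) can be verified for atom balance. A short sketch with formulas hand-encoded as element-count maps (the encoding, not the chemistry, is the assumption here):

```python
from collections import Counter

# Verify atom balance for the two antimony-isolation reactions:
#   Sb2S3 + 3 Fe  -> 2 Sb + 3 FeS      (iron reduction of stibnite)
#   2 Sb2O3 + 3 C -> 4 Sb + 3 CO2      (carbothermal reduction)

def count_atoms(side):
    """Total atoms on one side: list of (coefficient, {element: count})."""
    total = Counter()
    for coeff, formula in side:
        for element, n in formula.items():
            total[element] += coeff * n
    return total

iron_lhs = [(1, {"Sb": 2, "S": 3}), (3, {"Fe": 1})]
iron_rhs = [(2, {"Sb": 1}), (3, {"Fe": 1, "S": 1})]
assert count_atoms(iron_lhs) == count_atoms(iron_rhs)

carbo_lhs = [(2, {"Sb": 2, "O": 3}), (3, {"C": 1})]
carbo_rhs = [(4, {"Sb": 1}), (3, {"C": 1, "O": 2})]
assert count_atoms(carbo_lhs) == count_atoms(carbo_rhs)

print("both reactions are balanced")
```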
The world's largest antimony producers, according to Roskill, are listed below: Reserves Supply risk For antimony-importing regions such as Europe and the U.S., antimony is considered to be a critical mineral for industrial manufacturing that is at risk of supply chain disruption. With global production coming mainly from China (74%), Tajikistan (8%), and Russia (4%), these sources are critical to supply. European Union: Antimony is considered a critical raw material for defense, automotive, construction and textiles. The E.U. sources are 100% imported, coming mainly from Turkey (62%), Bolivia (20%) and Guatemala (7%). United Kingdom: The British Geological Survey's 2015 risk list ranks antimony second highest (after rare earth elements) on the relative supply risk index. United States: Antimony is a mineral commodity considered critical to economic and national security. In 2021, no antimony was mined in the U.S. Applications About 60% of antimony is consumed in flame retardants, and 20% is used in alloys for batteries, plain bearings, and solders. Flame retardants Antimony is mainly used as the trioxide for flame-proofing compounds, always in combination with halogenated flame retardants except in halogen-containing polymers. The flame retarding effect of antimony trioxide is produced by the formation of halogenated antimony compounds, which react with hydrogen atoms, and probably also with oxygen atoms and OH radicals, thus inhibiting fire. Markets for these flame-retardants include children's clothing, toys, aircraft, and automobile seat covers. They are also added to polyester resins in fiberglass composites for such items as light aircraft engine covers. The resin will burn in the presence of an externally generated flame, but will extinguish when the external flame is removed. Alloys Antimony forms a highly useful alloy with lead, increasing its hardness and mechanical strength. 
For most applications involving lead, varying amounts of antimony are used as alloying metal. In lead–acid batteries, this addition improves plate strength and charging characteristics. For sailboats, lead keels are used to provide righting moment, ranging from 600 lbs to over 200 tons for the largest sailing superyachts; to improve hardness and tensile strength of the lead keel, antimony is mixed with lead between 2% and 5% by volume. Antimony is used in antifriction alloys (such as Babbitt metal), in bullets and lead shot, electrical cable sheathing, type metal (for example, for linotype printing machines), solder (some "lead-free" solders contain 5% Sb), in pewter, and in hardening alloys with low tin content in the manufacturing of organ pipes. Other applications Three other applications consume nearly all the rest of the world's supply. One application is as a stabilizer and catalyst for the production of polyethylene terephthalate. Another is as a fining agent to remove microscopic bubbles in glass, mostly for TV screens; antimony ions interact with oxygen, suppressing the tendency of the latter to form bubbles. The third application is pigments. In the 1990s, antimony was increasingly used in semiconductors as a dopant in n-type silicon wafers for diodes, infrared detectors, and Hall-effect devices. In the 1950s, the emitters and collectors of n-p-n alloy junction transistors were doped with tiny beads of a lead-antimony alloy. Indium antimonide is used as a material for mid-infrared detectors. Biology and medicine have few uses for antimony. Treatments containing antimony, known as antimonials, are used as emetics. Antimony compounds are used as antiprotozoan drugs. Potassium antimonyl tartrate, or tartar emetic, was once used as an anti-schistosomal drug from 1919 on. It was subsequently replaced by praziquantel. 
Antimony and its compounds are used in several veterinary preparations, such as anthiomaline and lithium antimony thiomalate, as a skin conditioner in ruminants. Antimony has a nourishing or conditioning effect on keratinized tissues in animals. Antimony-based drugs, such as meglumine antimoniate, are also considered the drugs of choice for treatment of leishmaniasis in domestic animals. Besides having low therapeutic indices, the drugs have minimal penetration of the bone marrow, where some of the Leishmania amastigotes reside, and curing the disease – especially the visceral form – is very difficult. Elemental antimony as an antimony pill was once used as a medicine. It could be reused by others after ingestion and elimination. Antimony(III) sulfide is used in the heads of some safety matches. Antimony sulfides help to stabilize the friction coefficient in automotive brake pad materials. Antimony is used in bullets, bullet tracers, paint, glass art, and as an opacifier in enamel. Antimony-124 is used together with beryllium in neutron sources; the gamma rays emitted by antimony-124 initiate the photodisintegration of beryllium. The emitted neutrons have an average energy of 24 keV. Natural antimony is used in startup neutron sources. Historically, the powder derived from crushed antimony (kohl) has been applied to the eyes with a metal rod and with one's spittle, thought by the ancients to aid in curing eye infections. The practice is still seen in Yemen and in other Muslim countries. Precautions The effects of antimony and its compounds on human and environmental health differ widely. Elemental antimony metal does not affect human and environmental health. Inhalation of antimony trioxide (and similar poorly soluble Sb(III) dust particles such as antimony dust) is considered harmful and suspected of causing cancer. However, these effects are only observed with female rats and after long-term exposure to high dust concentrations. 
The effects are hypothesized to be attributed to inhalation of poorly soluble Sb particles leading to impaired lung clearance, lung overload, inflammation and ultimately tumour formation, not to exposure to antimony ions (OECD, 2008). Antimony chlorides are corrosive to skin. The effects of antimony are not comparable to those of arsenic; this might be caused by the significant differences of uptake, metabolism, and excretion between arsenic and antimony. For oral absorption, ICRP (1994) has recommended values of 10% for tartar emetic and 1% for all other antimony compounds. Dermal absorption for metals is estimated to be at most 1% (HERAG, 2007). Inhalation absorption of antimony trioxide and other poorly soluble Sb(III) substances (such as antimony dust) is estimated at 6.8% (OECD, 2008), whereas a value <1% is derived for Sb(V) substances. Antimony(V) is not quantitatively reduced to antimony(III) in the cell, and both species exist simultaneously. Antimony is mainly excreted from the human body via urine. Antimony and its compounds do not cause acute human health effects, with the exception of antimony potassium tartrate ("tartar emetic"), a prodrug that is intentionally used to treat leishmaniasis patients. Prolonged skin contact with antimony dust may cause dermatitis. However, it was agreed at the European Union level that the skin rashes observed are not substance-specific, but most probably due to a physical blocking of sweat ducts (ECHA/PR/09/09, Helsinki, 6 July 2009). Antimony dust may also be explosive when dispersed in the air; when in a bulk solid it is not combustible. Antimony is incompatible with strong acids, halogenated acids, and oxidizers; when exposed to newly formed hydrogen it may form stibine (SbH3). 
The 8-hour time-weighted average (TWA) is set at 0.5 mg/m3 by the American Conference of Governmental Industrial Hygienists and by the Occupational Safety and Health Administration (OSHA) as a legal permissible exposure limit (PEL) in the workplace. The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit (REL) of 0.5 mg/m3 as an 8-hour TWA. Antimony compounds are used as catalysts for polyethylene terephthalate (PET) production. Some studies report minor antimony leaching from PET bottles into liquids, but levels are below drinking water guidelines. Antimony concentrations in fruit juice concentrates were somewhat higher (up to 44.7 µg/L of antimony), but juices do not fall under the drinking water regulations. The drinking water guidelines are:
World Health Organization: 20 µg/L
Japan: 15 µg/L
United States Environmental Protection Agency, Health Canada and the Ontario Ministry of Environment: 6 µg/L
EU and German Federal Ministry of Environment: 5 µg/L
The tolerable daily intake (TDI) proposed by WHO is 6 µg antimony per kilogram of body weight. The immediately dangerous to life or health (IDLH) value for antimony is 50 mg/m3. Toxicity Certain compounds of antimony appear to be toxic, particularly antimony trioxide and antimony potassium tartrate. Effects may be similar to arsenic poisoning. Occupational exposure may cause respiratory irritation, pneumoconiosis, antimony spots on the skin, gastrointestinal symptoms, and cardiac arrhythmias. In addition, antimony trioxide is potentially carcinogenic to humans. Adverse health effects have been observed in humans and animals following inhalation, oral, or dermal exposure to antimony and antimony compounds. Antimony toxicity typically occurs either due to occupational exposure, during therapy or from accidental ingestion. It is unclear if antimony can enter the body through the skin.
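The TDI figure above can be turned into a concrete daily budget. A minimal sketch in Python, assuming a hypothetical 70 kg adult (the body weight is an illustrative assumption, not from the source) and the strictest guideline value quoted above:

```python
# Convert the WHO tolerable daily intake (TDI) for antimony into a daily
# budget and compare it with the strictest drinking-water guideline above
# (EU/Germany: 5 µg/L). The 70 kg body weight is a hypothetical example.

TDI_UG_PER_KG = 6.0       # WHO: 6 µg antimony per kg body weight per day
BODY_WEIGHT_KG = 70.0     # assumed adult body weight
GUIDELINE_UG_PER_L = 5.0  # EU / German Federal Ministry of Environment

daily_budget_ug = TDI_UG_PER_KG * BODY_WEIGHT_KG            # 420 µg/day
litres_to_reach_tdi = daily_budget_ug / GUIDELINE_UG_PER_L  # 84 L/day

print(daily_budget_ug, litres_to_reach_tdi)
```

On these assumptions, even water at the guideline limit would have to be drunk in implausible volumes (tens of litres per day) before the TDI was reached.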
The presence of low levels of antimony in saliva may also be associated with dental decay. See also Phase change memory Notes References Bibliography Edmund Oscar von Lippmann (1919) Entstehung und Ausbreitung der Alchemie, teil 1. Berlin: Julius Springer (in German). Public Health Statement for Antimony External links International Antimony Association vzw (i2a) Chemistry in its element podcast (MP3) from the Royal Society of Chemistry's Chemistry World: Antimony Antimony at The Periodic Table of Videos (University of Nottingham) CDC – NIOSH Pocket Guide to Chemical Hazards – Antimony Antimony Mineral data and specimen images Chemical elements Metalloids Native element minerals Nuclear materials Pnictogens Trigonal minerals Minerals in space group 166 Materials that expand upon freezing Chemical elements with rhombohedral structure
899
https://en.wikipedia.org/wiki/Actinium
Actinium
Actinium is a chemical element with the symbol Ac and atomic number 89. It was first isolated by Friedrich Oskar Giesel in 1902, who gave it the name emanium; the element got its name by being wrongly identified with a substance André-Louis Debierne found in 1899 and called actinium. Actinium gave the name to the actinide series, a group of 15 similar elements between actinium and lawrencium in the periodic table. Together with polonium, radium, and radon, actinium was one of the first non-primordial radioactive elements to be isolated. A soft, silvery-white radioactive metal, actinium reacts rapidly with oxygen and moisture in air forming a white coating of actinium oxide that prevents further oxidation. As with most lanthanides and many actinides, actinium assumes oxidation state +3 in nearly all its chemical compounds. Actinium is found only in traces in uranium and thorium ores as the isotope 227Ac, which decays with a half-life of 21.772 years, predominantly emitting beta and sometimes alpha particles, and 228Ac, which is beta active with a half-life of 6.15 hours. One tonne of natural uranium in ore contains about 0.2 milligrams of actinium-227, and one tonne of thorium contains about 5 nanograms of actinium-228. The close similarity of physical and chemical properties of actinium and lanthanum makes separation of actinium from the ore impractical. Instead, the element is prepared, in milligram amounts, by the neutron irradiation of radium-226 in a nuclear reactor. Owing to its scarcity, high price and radioactivity, actinium has no significant industrial use. Its current applications include a neutron source and an agent for radiation therapy. History André-Louis Debierne, a French chemist, announced the discovery of a new element in 1899. He separated it from pitchblende residues left by Marie and Pierre Curie after they had extracted radium. In 1899, Debierne described the substance as similar to titanium and (in 1900) as similar to thorium.
Friedrich Oskar Giesel found in 1902 a substance similar to lanthanum and called it "emanium" in 1904. After a comparison of the substances' half-lives determined by Debierne, Harriet Brooks in 1904, and Otto Hahn and Otto Sackur in 1905, Debierne's chosen name for the new element was retained because it had seniority, despite the contradicting chemical properties he claimed for the element at different times. Articles published in the 1970s and later suggest that Debierne's results published in 1904 conflict with those reported in 1899 and 1900. Furthermore, the now-known chemistry of actinium precludes its presence as anything other than a minor constituent of Debierne's 1899 and 1900 results; in fact, the chemical properties he reported make it likely that he had, instead, accidentally identified protactinium, which would not be discovered for another fourteen years, only to have it disappear due to its hydrolysis and adsorption onto his laboratory equipment. This has led some authors to advocate that Giesel alone should be credited with the discovery. A less confrontational vision of scientific discovery is proposed by Adloff. He suggests that hindsight criticism of the early publications should be mitigated by the then nascent state of radiochemistry: highlighting the prudence of Debierne's claims in the original papers, he notes that nobody can contend that Debierne's substance did not contain actinium. Debierne, who is now considered by the vast majority of historians as the discoverer, lost interest in the element and left the topic. Giesel, on the other hand, can rightfully be credited with the first preparation of radiochemically pure actinium and with the identification of its atomic number 89. The name actinium originates from the Ancient Greek aktis, aktinos (ακτίς, ακτίνος), meaning beam or ray. Its symbol Ac is also used in abbreviations of other compounds that have nothing to do with actinium, such as acetyl, acetate and sometimes acetaldehyde. 
Properties Actinium is a soft, silvery-white, radioactive, metallic element. Its estimated shear modulus is similar to that of lead. Owing to its strong radioactivity, actinium glows in the dark with a pale blue light, which originates from the surrounding air ionized by the emitted energetic particles. Actinium has similar chemical properties to lanthanum and other lanthanides, and therefore these elements are difficult to separate when extracting from uranium ores. Solvent extraction and ion chromatography are commonly used for the separation. The first element of the actinides, actinium gave the group its name, much as lanthanum had done for the lanthanides. The group of elements is more diverse than the lanthanides and therefore it was not until 1945 that the most significant change to Dmitri Mendeleev's periodic table since the recognition of the lanthanides, the introduction of the actinides, was generally accepted after Glenn T. Seaborg's research on the transuranium elements (although it had been proposed as early as 1892 by British chemist Henry Bassett). Actinium reacts rapidly with oxygen and moisture in air forming a white coating of actinium oxide that impedes further oxidation. As with most lanthanides and actinides, actinium exists in the oxidation state +3, and the Ac3+ ions are colorless in solutions. The oxidation state +3 originates from the [Rn]6d17s2 electronic configuration of actinium, with three valence electrons that are easily donated to give the stable closed-shell structure of the noble gas radon. The rare oxidation state +2 is only known for actinium dihydride (AcH2); even this may in reality be an electride compound like its lighter congener LaH2 and thus have actinium(III). Ac3+ is the largest of all known tripositive ions and its first coordination sphere contains approximately 10.9 ± 0.5 water molecules. Chemical compounds Due to actinium's intense radioactivity, only a limited number of actinium compounds are known. 
These include: AcF3, AcCl3, AcBr3, AcOF, AcOCl, AcOBr, Ac2S3, Ac2O3, AcPO4 and Ac(NO3)3. Except for AcPO4, they are all similar to the corresponding lanthanum compounds. They all contain actinium in the oxidation state +3. In particular, the lattice constants of the analogous lanthanum and actinium compounds differ by only a few percent. Oxides Actinium oxide (Ac2O3) can be obtained by heating the hydroxide at 500 °C or the oxalate at 1100 °C, in vacuum. Its crystal lattice is isotypic with the oxides of most trivalent rare-earth metals. Halides Actinium trifluoride can be produced either in solution or in a solid-state reaction. The former reaction is carried out at room temperature, by adding hydrofluoric acid to a solution containing actinium ions. In the latter method, actinium metal is treated with hydrogen fluoride vapors at 700 °C in an all-platinum setup. Treating actinium trifluoride with ammonium hydroxide at 900–1000 °C yields oxyfluoride AcOF. Whereas lanthanum oxyfluoride can be easily obtained by burning lanthanum trifluoride in air at 800 °C for an hour, similar treatment of actinium trifluoride yields no AcOF and only results in melting of the initial product. AcF3 + 2 NH3 + H2O → AcOF + 2 NH4F Actinium trichloride is obtained by reacting actinium hydroxide or oxalate with carbon tetrachloride vapors at temperatures above 960 °C. Similar to oxyfluoride, actinium oxychloride can be prepared by hydrolyzing actinium trichloride with ammonium hydroxide at 1000 °C. However, in contrast to the oxyfluoride, the oxychloride could well be synthesized by igniting a solution of actinium trichloride in hydrochloric acid with ammonia.
Reaction of aluminium bromide and actinium oxide yields actinium tribromide: Ac2O3 + 2 AlBr3 → 2 AcBr3 + Al2O3 and treating it with ammonium hydroxide at 500 °C results in the oxybromide AcOBr. Other compounds Actinium hydride was obtained by reduction of actinium trichloride with potassium at 300 °C, and its structure was deduced by analogy with the corresponding LaH2 hydride. The source of hydrogen in the reaction was uncertain. Mixing monosodium phosphate (NaH2PO4) with a solution of actinium in hydrochloric acid yields white-colored actinium phosphate hemihydrate (AcPO4·0.5H2O), and heating actinium oxalate with hydrogen sulfide vapors at 1400 °C for a few minutes results in a black actinium sulfide Ac2S3. It may possibly also be produced by treating actinium oxide with a mixture of hydrogen sulfide and carbon disulfide at 1000 °C. Isotopes Naturally occurring actinium is composed of two radioactive isotopes: 227Ac (from the radioactive family of 235U) and 228Ac (a granddaughter of 232Th). 227Ac decays mainly as a beta emitter with a very small energy, but in 1.38% of cases it emits an alpha particle, so it can readily be identified through alpha spectrometry. Thirty-six radioisotopes have been identified, the most stable being 227Ac with a half-life of 21.772 years, 225Ac with a half-life of 10.0 days and 226Ac with a half-life of 29.37 hours. All remaining radioactive isotopes have half-lives that are less than 10 hours and the majority of them have half-lives shorter than one minute. The shortest-lived known isotope of actinium is 217Ac (half-life of 69 nanoseconds), which decays through alpha decay. Actinium also has two known meta states. The most significant isotopes for chemistry are 225Ac, 227Ac, and 228Ac. Purified 227Ac comes into equilibrium with its decay products after about half a year. It decays according to its 21.772-year half-life emitting mostly beta (98.62%) and some alpha particles (1.38%); the successive decay products are part of the actinium series.
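The half-lives above translate into remaining fractions via the exponential decay law N(t)/N0 = 2^(−t/T½). A short sketch using the 227Ac half-life from this section (the 100-year horizon is just an illustrative choice):

```python
def remaining_fraction(t_years: float, half_life_years: float) -> float:
    """Fraction of an isotope remaining after t years: N(t)/N0 = 2**(-t/T)."""
    return 2.0 ** (-t_years / half_life_years)

HALF_LIFE_AC227 = 21.772  # years, from the text

# One half-life leaves exactly half; a century leaves only about 4%.
print(remaining_fraction(HALF_LIFE_AC227, HALF_LIFE_AC227))
print(remaining_fraction(100.0, HALF_LIFE_AC227))
```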
Owing to the low available amounts, low energy of its beta particles (maximum 44.8 keV) and low intensity of alpha radiation, 227Ac is difficult to detect directly by its emission and it is therefore traced via its decay products. The isotopes of actinium range in atomic weight from 205 u (205Ac) to 236 u (236Ac). Occurrence and synthesis Actinium is found only in traces in uranium ores – one tonne of uranium in ore contains about 0.2 milligrams of 227Ac – and in thorium ores, which contain about 5 nanograms of 228Ac per one tonne of thorium. The actinium isotope 227Ac is a transient member of the uranium-actinium series decay chain, which begins with the parent isotope 235U (or 239Pu) and ends with the stable lead isotope 207Pb. The isotope 228Ac is a transient member of the thorium series decay chain, which begins with the parent isotope 232Th and ends with the stable lead isotope 208Pb. Another actinium isotope (225Ac) is transiently present in the neptunium series decay chain, beginning with 237Np (or 233U) and ending with thallium (205Tl) and near-stable bismuth (209Bi); even though all primordial 237Np has decayed away, it is continuously produced by neutron knock-out reactions on natural 238U. The low natural concentration, and the close similarity of physical and chemical properties to those of lanthanum and other lanthanides, which are always abundant in actinium-bearing ores, render separation of actinium from the ore impractical, and complete separation was never achieved. Instead, actinium is prepared, in milligram amounts, by the neutron irradiation of 226Ra in a nuclear reactor: 226Ra + n → 227Ra, which then undergoes beta decay (half-life 42.2 minutes) to 227Ac. The reaction yield is about 2% of the radium weight. 227Ac can further capture neutrons resulting in small amounts of 228Ac. After the synthesis, actinium is separated from radium and from its decay products, such as thorium, polonium, lead and bismuth.
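The 0.2 mg-per-tonne figure can be cross-checked from secular equilibrium: in an old, undisturbed ore the 227Ac/235U atom ratio equals the ratio of their half-lives. A back-of-the-envelope sketch (the 235U isotopic abundance of 0.72% and its 7.04 × 10^8-year half-life are standard values, not from this section):

```python
# Secular-equilibrium estimate of 227Ac in one tonne of natural uranium.
# At equilibrium, N_daughter / N_parent = T_daughter / T_parent.

TONNE_G = 1.0e6
U235_ABUNDANCE = 0.0072      # mass fraction of 235U in natural U (standard value)
T_AC227_Y = 21.772           # years, from the text
T_U235_Y = 7.04e8            # years, standard value for 235U
M_AC, M_U235 = 227.0, 235.0  # approximate molar masses, g/mol

m_u235 = TONNE_G * U235_ABUNDANCE
m_ac227_mg = m_u235 * (T_AC227_Y / T_U235_Y) * (M_AC / M_U235) * 1e3

print(round(m_ac227_mg, 3))  # close to the ~0.2 mg quoted in the text
```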
The extraction can be performed with thenoyltrifluoroacetone-benzene solution from an aqueous solution of the radiation products, and the selectivity to a certain element is achieved by adjusting the pH (to about 6.0 for actinium). An alternative procedure is anion exchange with an appropriate resin in nitric acid, which can result in a separation factor of 1,000,000 for radium and actinium vs. thorium in a two-stage process. Actinium can then be separated from radium, with a ratio of about 100, using a low cross-linking cation exchange resin and nitric acid as eluant. 225Ac was first produced artificially at the Institute for Transuranium Elements (ITU) in Germany using a cyclotron and at St George Hospital in Sydney using a linac in 2000. This rare isotope has potential applications in radiation therapy and is most efficiently produced by bombarding a radium-226 target with 20–30 MeV deuterium ions. This reaction also yields 226Ac, which, however, decays with a half-life of 29 hours and thus does not contaminate 225Ac. Actinium metal has been prepared by the reduction of actinium fluoride with lithium vapor in vacuum at a temperature between 1100 and 1300 °C. Higher temperatures resulted in evaporation of the product and lower ones led to an incomplete transformation. Lithium was chosen among other alkali metals because its fluoride is most volatile.
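The two-stage separation factor quoted above illustrates how exchange stages compound. A minimal sketch, assuming (hypothetically) equal and independent per-stage factors; the per-stage value of 1000 is an assumption chosen to be consistent with the quoted overall factor of 1,000,000:

```python
# Illustrative only: if each ion-exchange stage achieved the same
# separation factor and the stages acted independently, the overall
# selectivity would be the product of the per-stage factors.

def overall_separation(per_stage: float, stages: int) -> float:
    """Overall separation factor for identical, independent stages."""
    return per_stage ** stages

print(overall_separation(1000, 2))  # 1000000
```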
In all those applications, 227Ac (a beta source) is merely a progenitor which generates alpha-emitting isotopes upon its decay. Beryllium captures alpha particles and emits neutrons owing to its large cross-section for the (α,n) nuclear reaction: 9Be + 4He → 12C + n + γ. The 227AcBe neutron sources can be applied in a neutron probe – a standard device for measuring the quantity of water present in soil, as well as moisture/density for quality control in highway construction. Such probes are also used in well logging applications, in neutron radiography, tomography and other radiochemical investigations. 225Ac is applied in medicine to produce 213Bi in a reusable generator or can be used alone as an agent for radiation therapy, in particular targeted alpha therapy (TAT). This isotope has a half-life of 10 days, making it much more suitable for radiation therapy than 213Bi (half-life 46 minutes). Additionally, 225Ac decays to nontoxic 209Bi rather than stable but toxic lead, which is the final product in the decay chains of several other candidate isotopes, namely 227Th, 228Th, and 230U. Not only 225Ac itself, but also its daughters, emit alpha particles which kill cancer cells in the body. The major difficulty with application of 225Ac was that intravenous injection of simple actinium complexes resulted in their accumulation in the bones and liver for a period of tens of years. As a result, after the cancer cells were quickly killed by alpha particles from 225Ac, the radiation from the actinium and its daughters might induce new mutations. To solve this problem, 225Ac was bound to a chelating agent, such as citrate, ethylenediaminetetraacetic acid (EDTA) or diethylene triamine pentaacetic acid (DTPA). This reduced actinium accumulation in the bones, but the excretion from the body remained slow.
Much better results were obtained with such chelating agents as HEHA or DOTA coupled to trastuzumab, a monoclonal antibody that interferes with the HER2/neu receptor. The latter delivery combination was tested on mice and proved to be effective against leukemia, lymphoma, breast, ovarian, neuroblastoma and prostate cancers. The medium half-life of 227Ac (21.77 years) makes it a very convenient radioactive isotope for modeling the slow vertical mixing of oceanic waters. The associated processes cannot be studied with the required accuracy by direct measurements of current velocities (on the order of 50 meters per year). However, evaluation of the concentration depth-profiles for different isotopes allows estimation of the mixing rates. The physics behind this method is as follows: oceanic waters contain homogeneously dispersed 235U. Its decay product, 231Pa, gradually precipitates to the bottom, so that its concentration first increases with depth and then stays nearly constant. 231Pa decays to 227Ac; however, the concentration of the latter isotope does not follow the 231Pa depth profile, but instead increases toward the sea bottom. This occurs because of the mixing processes which raise some additional 227Ac from the sea bottom. Thus analysis of both 231Pa and 227Ac depth profiles allows researchers to model the mixing behavior. There are theoretical predictions that AcHx hydrides (in this case under very high pressure) are a candidate for a near room-temperature superconductor, as they have Tc significantly higher than H3S, possibly near 250 K. Precautions 227Ac is highly radioactive and experiments with it are carried out in a specially designed laboratory equipped with a tight glove box. When actinium trichloride is administered intravenously to rats, about 33% of actinium is deposited into the bones and 50% into the liver. Its toxicity is comparable to, but slightly lower than that of americium and plutonium.
For trace quantities, fume hoods with good aeration suffice; for gram amounts, hot cells with shielding from the intense gamma radiation emitted by 227Ac are necessary. See also Actinium series Notes References Bibliography Meyer, Gerd and Morss, Lester R. (1991) Synthesis of lanthanide and actinide compounds, Springer. External links Actinium at The Periodic Table of Videos (University of Nottingham) NLM Hazardous Substances Databank – Actinium, Radioactive Actinium in Chemical elements Actinides
900
https://en.wikipedia.org/wiki/Americium
Americium
Americium is a synthetic radioactive chemical element with the symbol Am and atomic number 95. It is a transuranic member of the actinide series, located in the periodic table under the lanthanide element europium, and thus by analogy was named after the Americas. Americium was first produced in 1944 by the group of Glenn T. Seaborg from Berkeley, California, at the Metallurgical Laboratory of the University of Chicago, as part of the Manhattan Project. Although it is the third element in the transuranic series, it was discovered fourth, after the heavier curium. The discovery was kept secret and only released to the public in November 1945. Most americium is produced by uranium or plutonium being bombarded with neutrons in nuclear reactors – one tonne of spent nuclear fuel contains about 100 grams of americium. It is widely used in commercial ionization chamber smoke detectors, as well as in neutron sources and industrial gauges. Several unusual applications, such as nuclear batteries or fuel for space ships with nuclear propulsion, have been proposed for the isotope 242mAm, but they are as yet hindered by the scarcity and high price of this nuclear isomer. Americium is a relatively soft radioactive metal with silvery appearance. Its most common isotopes are 241Am and 243Am. In chemical compounds, americium usually assumes the oxidation state +3, especially in solutions. Several other oxidation states are known, ranging from +2 to +7, and can be identified by their characteristic optical absorption spectra. The crystal lattices of solid americium and its compounds contain small intrinsic radiogenic defects, due to metamictization induced by self-irradiation with alpha particles, which accumulate with time; this can cause a drift of some material properties over time, more noticeable in older samples.
History Although americium was likely produced in previous nuclear experiments, it was first intentionally synthesized, isolated and identified in late autumn 1944, at the University of California, Berkeley, by Glenn T. Seaborg, Leon O. Morgan, Ralph A. James, and Albert Ghiorso. They used a 60-inch cyclotron at the University of California, Berkeley. The element was chemically identified at the Metallurgical Laboratory (now Argonne National Laboratory) of the University of Chicago. Following the lighter neptunium, plutonium, and heavier curium, americium was the fourth transuranium element to be discovered. At the time, the periodic table had been restructured by Seaborg to its present layout, containing the actinide row below the lanthanide one. This led to americium being located right below its twin lanthanide element europium; it was thus by analogy named after the Americas: "The name americium (after the Americas) and the symbol Am are suggested for the element on the basis of its position as the sixth member of the actinide rare-earth series, analogous to europium, Eu, of the lanthanide series." The new element was isolated from its oxides in a complex, multi-step process. First, plutonium-239 nitrate (239Pu(NO3)4) solution was coated on a platinum foil of about 0.5 cm2 area, the solution was evaporated and the residue was converted into plutonium dioxide (PuO2) by calcining. After cyclotron irradiation, the coating was dissolved with nitric acid, and then precipitated as the hydroxide using concentrated aqueous ammonia solution. The residue was dissolved in perchloric acid. Further separation was carried out by ion exchange, yielding a certain isotope of curium. The separation of curium and americium was so painstaking that those elements were initially called pandemonium (from Greek for all demons or hell) and delirium (from Latin for madness) by the Berkeley group. Initial experiments yielded four americium isotopes: 241Am, 242Am, 239Am and 238Am.
Americium-241 was directly obtained from plutonium upon absorption of two neutrons. It decays by emission of an α-particle to 237Np; the half-life of this decay was at first determined inaccurately but was later corrected to 432.2 years. The second isotope 242Am was produced upon neutron bombardment of the already-created 241Am. Upon rapid β-decay, 242Am converts into the isotope of curium 242Cm (which had been discovered previously). The half-life of this decay was initially determined at 17 hours, which was close to the presently accepted value of 16.02 h. The discovery of americium and curium in 1944 was closely related to the Manhattan Project; the results were confidential and declassified only in 1945. Seaborg leaked the synthesis of the elements 95 and 96 on the U.S. radio show for children Quiz Kids five days before the official presentation at an American Chemical Society meeting on 11 November 1945, when one of the listeners asked whether any new transuranium element besides plutonium and neptunium had been discovered during the war. After the discovery of americium isotopes 241Am and 242Am, their production and compounds were patented, listing only Seaborg as the inventor. The initial americium samples weighed a few micrograms; they were barely visible and were identified by their radioactivity. The first substantial amounts of metallic americium weighing 40–200 micrograms were not prepared until 1951 by reduction of americium(III) fluoride with barium metal in high vacuum at 1100 °C. Occurrence The longest-lived and most common isotopes of americium, 241Am and 243Am, have half-lives of 432.2 and 7,370 years, respectively. Therefore, any primordial americium (americium that was present on Earth during its formation) should have decayed by now. Trace amounts of americium probably occur naturally in uranium minerals as a result of nuclear reactions, though this has not been confirmed.
Existing americium is concentrated in the areas used for the atmospheric nuclear weapons tests conducted between 1945 and 1980, as well as at the sites of nuclear incidents, such as the Chernobyl disaster. For example, the analysis of the debris at the testing site of the first U.S. hydrogen bomb, Ivy Mike, (1 November 1952, Enewetak Atoll), revealed high concentrations of various actinides including americium; but due to military secrecy, this result was not published until later, in 1956. Trinitite, the glassy residue left on the desert floor near Alamogordo, New Mexico, after the plutonium-based Trinity nuclear bomb test on 16 July 1945, contains traces of americium-241. Elevated levels of americium were also detected at the crash site of a US Boeing B-52 bomber aircraft, which carried four hydrogen bombs, in 1968 in Greenland. In other regions, the average radioactivity of surface soil due to residual americium is only about 0.01 picocuries/g (0.37 mBq/g). Atmospheric americium compounds are poorly soluble in common solvents and mostly adhere to soil particles. Soil analysis revealed about 1,900 times higher concentration of americium inside sandy soil particles than in the water present in the soil pores; an even higher ratio was measured in loam soils. Americium is produced mostly artificially in small quantities, for research purposes. A tonne of spent nuclear fuel contains about 100 grams of various americium isotopes, mostly 241Am and 243Am. Their prolonged radioactivity is undesirable for the disposal, and therefore americium, together with other long-lived actinides, must be neutralized. The associated procedure may involve several steps, where americium is first separated and then converted by neutron bombardment in special reactors to short-lived nuclides. This procedure is well known as nuclear transmutation, but it is still being developed for americium. 
The transuranic elements from americium to fermium occurred naturally in the natural nuclear fission reactor at Oklo, but no longer do so. Americium is also one of the elements that have been detected in Przybylski's Star. Synthesis and extraction Isotope nucleosynthesis Americium has been produced in small quantities in nuclear reactors for decades, and kilograms of its 241Am and 243Am isotopes have been accumulated by now. Nevertheless, since it was first offered for sale in 1962, its price, about US$1,500 per gram of 241Am, has remained almost unchanged owing to the very complex separation procedure. The heavier isotope 243Am is produced in much smaller amounts; it is thus more difficult to separate, resulting in a higher cost on the order of US$100,000–160,000 per gram. Americium is not synthesized directly from uranium – the most common reactor material – but from the plutonium isotope 239Pu. The latter needs to be produced first, according to the following nuclear process: 238U + n → 239U + γ; 239U → 239Np (β−, half-life 23.5 min); 239Np → 239Pu (β−, half-life 2.3565 d). The capture of two neutrons by 239Pu (a so-called (n,γ) reaction), followed by a β-decay, results in 241Am: 239Pu + 2n → 241Pu + 2γ; 241Pu → 241Am (β−, half-life 14.35 yr). The plutonium present in spent nuclear fuel contains about 12% 241Pu. Because it spontaneously converts to 241Am, 241Pu can be extracted and used to generate further 241Am. However, this process is rather slow: half of the original amount of 241Pu decays to 241Am after about 15 years, and the amount of 241Am reaches a maximum after 70 years. The obtained 241Am can be used for generating heavier americium isotopes by further neutron capture inside a nuclear reactor.
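The 15-year and 70-year figures quoted above follow from the two-member Bateman equation. A minimal sketch in Python, using only the half-lives given in the text and assuming a pure 241Pu sample at t = 0 (function and variable names are illustrative):

```python
import math

# Half-lives from the text: Pu-241 (14.35 yr) decays to Am-241 (432.2 yr).
T_PU241 = 14.35
T_AM241 = 432.2

lam_pu = math.log(2) / T_PU241  # decay constant of the parent, 1/yr
lam_am = math.log(2) / T_AM241  # decay constant of the daughter, 1/yr

def am241_fraction(t):
    """Bateman solution: fraction of the initial Pu-241 atoms present
    as Am-241 after t years."""
    return lam_pu / (lam_pu - lam_am) * (math.exp(-lam_am * t) - math.exp(-lam_pu * t))

# Time at which the Am-241 inventory peaks.
t_max = math.log(lam_pu / lam_am) / (lam_pu - lam_am)
print(f"Half the Pu-241 decays in one half-life, {T_PU241} yr (~15 yr)")
print(f"The Am-241 amount peaks after about {t_max:.0f} yr")
```

The computed maximum falls at roughly 73 years, consistent with the "maximum after 70 years" quoted in the text.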
In a light water reactor (LWR), 79% of 241Am converts to 242Am and 10% to its nuclear isomer 242mAm. Americium-242 has a half-life of only 16 hours, which makes its further conversion to 243Am extremely inefficient. The latter isotope is produced instead in a process where 239Pu captures four neutrons under high neutron flux: 239Pu + 4n → 243Pu + 4γ; 243Pu → 243Am (β−, half-life 4.956 h). Metal generation Most synthesis routes yield a mixture of different actinide isotopes in oxide form, from which the isotopes of americium can be separated. In a typical procedure, the spent reactor fuel (e.g. MOX fuel) is dissolved in nitric acid, and the bulk of the uranium and plutonium is removed using a PUREX-type extraction (Plutonium–URanium EXtraction) with tributyl phosphate in a hydrocarbon. The lanthanides and remaining actinides are then separated from the aqueous residue (raffinate) by a diamide-based extraction to give, after stripping, a mixture of trivalent actinides and lanthanides. Americium compounds are then selectively extracted using multi-step chromatographic and centrifugation techniques with an appropriate reagent. A large amount of work has been done on the solvent extraction of americium. For example, a 2003 EU-funded project codenamed "EUROPART" studied triazines and other compounds as potential extraction agents. A bis-triazinyl bipyridine complex was proposed in 2009 as such a reagent; it is highly selective toward americium (and curium). Separation of americium from the highly similar curium can be achieved by treating a slurry of their hydroxides in aqueous sodium bicarbonate with ozone at elevated temperatures. Both Am and Cm are mostly present in solutions in the +3 valence state; whereas curium remains unchanged, americium oxidizes to soluble Am(IV) complexes, which can be washed away. Metallic americium is obtained by reduction from its compounds. Americium(III) fluoride was first used for this purpose.
The reaction was conducted using elemental barium as the reducing agent in a water- and oxygen-free environment inside an apparatus made of tantalum and tungsten. An alternative is the reduction of americium dioxide by metallic lanthanum or thorium. Physical properties In the periodic table, americium is located to the right of plutonium, to the left of curium, and below the lanthanide europium, with which it shares many physical and chemical properties. Americium is a highly radioactive element. When freshly prepared, it has a silvery-white metallic lustre, but then slowly tarnishes in air. With a density of 12 g/cm3, americium is less dense than both curium (13.52 g/cm3) and plutonium (19.8 g/cm3), but denser than europium (5.264 g/cm3) – mostly because of its higher atomic mass. Americium is relatively soft and easily deformable, and has a significantly lower bulk modulus than the actinides before it: Th, Pa, U, Np and Pu. Its melting point of 1173 °C is significantly higher than that of plutonium (639 °C) and europium (826 °C), but lower than that of curium (1340 °C). At ambient conditions, americium is present in its most stable α form, which has hexagonal crystal symmetry, space group P63/mmc, cell parameters a = 346.8 pm and c = 1124 pm, and four atoms per unit cell. The crystal consists of a double-hexagonal close packing with the layer sequence ABAC and is thus isotypic with α-lanthanum and several actinides such as α-curium. The crystal structure of americium changes with pressure and temperature. When compressed at room temperature to 5 GPa, α-Am transforms to the β modification, which has face-centered cubic (fcc) symmetry, space group Fm3m and lattice constant a = 489 pm. This fcc structure is equivalent to the closest packing with the sequence ABC. Upon further compression to 23 GPa, americium transforms to an orthorhombic γ-Am structure similar to that of α-uranium.
No further transitions are observed up to 52 GPa, except for the appearance of a monoclinic phase at pressures between 10 and 15 GPa. There is no consistency on the status of this phase in the literature, which also sometimes lists the α, β and γ phases as I, II and III. The β-γ transition is accompanied by a 6% decrease in crystal volume; although theory also predicts a significant volume change for the α-β transition, none is observed experimentally. The pressure of the α-β transition decreases with increasing temperature, and when α-americium is heated at ambient pressure, at 770 °C it changes into an fcc phase that is different from β-Am, and at 1075 °C it converts to a body-centered cubic structure. The pressure-temperature phase diagram of americium is thus rather similar to those of lanthanum, praseodymium and neodymium. As with many other actinides, self-damage of the crystal structure due to alpha-particle irradiation is intrinsic to americium. It is especially noticeable at low temperatures, where the mobility of the produced structural defects is relatively low, and manifests as a broadening of X-ray diffraction peaks. This effect introduces some uncertainty into the temperature of americium samples and into some of its properties, such as electrical resistivity. For americium-241, the resistivity at 4.2 K increases with time from about 2 µOhm·cm to 10 µOhm·cm after 40 hours, and saturates at about 16 µOhm·cm after 140 hours. This effect is less pronounced at room temperature, owing to annihilation of radiation defects; likewise, heating a sample that has been kept at low temperatures for hours back to room temperature restores its resistivity. In fresh samples, the resistivity gradually increases with temperature, from about 2 µOhm·cm at liquid-helium temperature to 69 µOhm·cm at room temperature; this behavior is similar to that of neptunium, uranium, thorium and protactinium, but different from plutonium and curium, which show a rapid rise up to 60 K followed by saturation.
The room-temperature value for americium is lower than that of neptunium, plutonium and curium, but higher than that of uranium, thorium and protactinium. Americium is paramagnetic over a wide temperature range, from that of liquid helium to room temperature and above. This behavior is markedly different from that of its neighbor curium, which exhibits an antiferromagnetic transition at 52 K. The thermal expansion coefficient of americium is slightly anisotropic, differing between the shorter a axis and the longer c hexagonal axis. The enthalpy of dissolution of americium metal in hydrochloric acid at standard conditions has been measured, and from it the standard enthalpy of formation (ΔfH°) of the aqueous Am3+ ion, as well as the standard potential of the Am3+/Am0 couple, have been derived. Chemical properties Americium metal readily reacts with oxygen and dissolves in aqueous acids. The most stable oxidation state for americium is +3. The chemistry of americium(III) has many similarities to the chemistry of lanthanide(III) compounds. For example, trivalent americium forms insoluble fluoride, oxalate, iodate, hydroxide, phosphate and other salts. Compounds of americium in the oxidation states +2, +4, +5, +6 and +7 have also been studied; this is the widest range observed among the actinide elements. The colors of americium compounds in aqueous solution are as follows: Am3+ (yellow-reddish), Am4+ (yellow-reddish), Am(V) (yellow), Am(VI) (brown) and Am(VII) (dark green). The absorption spectra have sharp peaks, due to f-f transitions, in the visible and near-infrared regions. Typically, Am(III) has absorption maxima at ca. 504 and 811 nm, Am(V) at ca. 514 and 715 nm, and Am(VI) at ca. 666 and 992 nm. Americium compounds with oxidation state +4 and higher are strong oxidizing agents, comparable in strength to the permanganate ion (MnO4−) in acidic solutions.
Whereas Am4+ ions are unstable in solution and readily convert to Am3+, compounds such as americium dioxide (AmO2) and americium(IV) fluoride (AmF4) are stable in the solid state. The pentavalent oxidation state of americium was first observed in 1951. In acidic aqueous solution the [AmO2]+ ion is unstable with respect to disproportionation; the reaction 3[AmO2]+ + 4H+ → 2[AmO2]2+ + Am3+ + 2H2O is typical. The chemistry of Am(V) and Am(VI) is comparable to the chemistry of uranium in those oxidation states. In particular, compounds like Li3AmO4 and Li6AmO6 are comparable to uranates, and the ion AmO22+ is comparable to the uranyl ion, UO22+. Such compounds can be prepared by oxidation of Am(III) in dilute nitric acid with ammonium persulfate. Other oxidising agents that have been used include silver(I) oxide, ozone and sodium persulfate. Chemical compounds Oxygen compounds Three americium oxides are known, with the oxidation states +2 (AmO), +3 (Am2O3) and +4 (AmO2). Americium(II) oxide has been prepared only in minute amounts and has not been characterized in detail. Americium(III) oxide is a red-brown solid with a melting point of 2205 °C. Americium(IV) oxide is the main form of solid americium and is used in nearly all its applications. Like most other actinide dioxides, it is a black solid with a cubic (fluorite) crystal structure. The oxalate of americium(III), vacuum-dried at room temperature, has the chemical formula Am2(C2O4)3·7H2O. Upon heating in vacuum, it loses water at 240 °C and starts decomposing into AmO2 at 300 °C; the decomposition is complete at about 470 °C. The initial oxalate dissolves in nitric acid with a maximum solubility of 0.25 g/L. Halides Halides of americium are known for the oxidation states +2, +3 and +4, of which +3 is the most stable, especially in solutions. Reduction of Am(III) compounds with sodium amalgam yields Am(II) salts – the black halides AmCl2, AmBr2 and AmI2.
They are very sensitive to oxygen and oxidize in water, releasing hydrogen and converting back to the Am(III) state. AmCl2 crystallizes in an orthorhombic lattice and AmBr2 in a tetragonal one. They can also be prepared by reacting metallic americium with the appropriate mercury halide HgX2, where X = Cl, Br or I, at 400–500 °C: Am + HgX2 → AmX2 + Hg. Americium(III) fluoride (AmF3) is poorly soluble and precipitates upon reaction of Am3+ and fluoride ions in weakly acidic solutions: Am3+ + 3F− → AmF3↓. The tetravalent americium(IV) fluoride (AmF4) is obtained by reacting solid americium(III) fluoride with molecular fluorine: 2AmF3 + F2 → 2AmF4. Another known form of solid tetravalent americium fluoride is KAmF5. Tetravalent americium has also been observed in the aqueous phase. For this purpose, black Am(OH)4 was dissolved in 15 M NH4F at an americium concentration of 0.01 M. The resulting reddish solution had a characteristic optical absorption spectrum similar to that of AmF4 but different from those of other oxidation states of americium. Heating the Am(IV) solution to 90 °C did not result in its disproportionation or reduction; however, a slow reduction to Am(III) was observed and attributed to self-irradiation of americium by alpha particles. Most americium(III) halides form hexagonal crystals, with slight variation of the color and exact structure between the halogens. The chloride (AmCl3) is reddish, has a structure isotypic to uranium(III) chloride (space group P63/m) and melts at 715 °C. The fluoride is isotypic to LaF3 (space group P63/mmc) and the iodide to BiI3 (space group R3̄). The bromide is an exception, with the orthorhombic PuBr3-type structure and space group Cmcm. Crystals of americium trichloride hexahydrate (AmCl3·6H2O) can be prepared by dissolving americium dioxide in hydrochloric acid and evaporating the liquid.
Those crystals are hygroscopic, have a yellow-reddish color, and have a monoclinic crystal structure. Oxyhalides of americium in the forms AmVIO2X2, AmVO2X, AmIVOX2 and AmIIIOX can be obtained by reacting the corresponding americium halide with oxygen or Sb2O3, and AmOCl can also be produced by vapor-phase hydrolysis: AmCl3 + H2O → AmOCl + 2HCl. Chalcogenides and pnictides The known chalcogenides of americium include the sulfide AmS2, the selenides AmSe2 and Am3Se4, and the tellurides Am2Te3 and AmTe2. The pnictides of americium (243Am) of the AmX type are known for the elements phosphorus, arsenic, antimony and bismuth. They crystallize in the rock-salt lattice. Silicides and borides Americium monosilicide (AmSi) and "disilicide" (nominally AmSix with 1.87 < x < 2.0) were obtained by reduction of americium(III) fluoride with elemental silicon in vacuum at 1050 °C (AmSi) and 1150−1200 °C (AmSix). AmSi is a black solid isomorphic with LaSi and has orthorhombic crystal symmetry. AmSix has a bright silvery lustre and a tetragonal crystal lattice (space group I41/amd); it is isomorphic with PuSi2 and ThSi2. Borides of americium include AmB4 and AmB6. The tetraboride can be obtained by heating an oxide or halide of americium with magnesium diboride in vacuum or an inert atmosphere. Organoamericium compounds Analogous to uranocene, americium forms the organometallic compound amerocene with two cyclooctatetraene ligands, with the chemical formula (η8-C8H8)2Am. A cyclopentadienyl complex is also known, likely with the stoichiometry AmCp3. Formation of complexes of the type Am(n-C3H7-BTP)3, where BTP stands for 2,6-di(1,2,4-triazin-3-yl)pyridine, in solutions containing n-C3H7-BTP and Am3+ ions has been confirmed by EXAFS. Some of these BTP-type complexes selectively interact with americium and are therefore useful in its selective separation from lanthanides and other actinides.
Biological aspects Americium is an artificial element of recent origin and thus has no biological requirement. It is harmful to life. It has been proposed to use bacteria for the removal of americium and other heavy metals from rivers and streams. For example, Enterobacteriaceae of the genus Citrobacter precipitate americium ions from aqueous solutions, binding them into a metal-phosphate complex at their cell walls. Several studies have been reported on the biosorption and bioaccumulation of americium by bacteria and fungi. Fission The isotope 242mAm (half-life 141 years) has the largest cross section for absorption of thermal neutrons (5,700 barns), which results in a small critical mass for a sustained nuclear chain reaction. The critical mass for a bare 242mAm sphere is about 9–14 kg (the uncertainty results from insufficient knowledge of its material properties). It can be lowered to 3–5 kg with a metal reflector and should become even smaller with a water reflector. Such a small critical mass is favorable for portable nuclear weapons, but none based on 242mAm are known yet, probably because of its scarcity and high price. The critical masses of the two other readily available isotopes, 241Am and 243Am, are relatively high: 57.6 to 75.6 kg for 241Am and 209 kg for 243Am. Scarcity and high price also hinder the application of americium as a nuclear fuel in nuclear reactors. There are proposals for very compact 10-kW high-flux reactors using as little as 20 grams of 242mAm. Such low-power reactors would be relatively safe to use as neutron sources for radiation therapy in hospitals. Isotopes About 19 isotopes and 8 nuclear isomers are known for americium. There are two long-lived alpha-emitters: 243Am, with a half-life of 7,370 years, is the most stable isotope, and 241Am has a half-life of 432.2 years. The most stable nuclear isomer is 242m1Am, with a long half-life of 141 years.
The half-lives of other isotopes and isomers range from 0.64 microseconds for 245m1Am to 50.8 hours for 240Am. As with most other actinides, the isotopes of americium with an odd number of neutrons have a relatively high rate of nuclear fission and a low critical mass. Americium-241 decays to 237Np, emitting alpha particles of five different energies, mostly at 5.486 MeV (85.2%) and 5.443 MeV (12.8%). Because many of the resulting states are metastable, they also emit gamma rays with discrete energies between 26.3 and 158.5 keV. Americium-242 is a short-lived isotope with a half-life of 16.02 h. It mostly (82.7%) converts by β-decay to 242Cm, but also by electron capture to 242Pu (17.3%). Both 242Cm and 242Pu transform via nearly the same decay chain through 238Pu down to 234U. Nearly all (99.541%) of 242m1Am decays by internal conversion to 242Am, and the remaining 0.459% by α-decay to 238Np. The latter subsequently decays to 238Pu and then to 234U. Americium-243 transforms by α-emission into 239Np, which converts by β-decay to 239Pu, and the 239Pu changes into 235U by emitting an α-particle. Applications Ionization-type smoke detector Americium is used in the most common type of household smoke detector, which uses 241Am in the form of americium dioxide as its source of ionizing radiation. This isotope is preferred over 226Ra because it emits five times more alpha particles and relatively little harmful gamma radiation. The amount of americium in a typical new smoke detector is 1 microcurie (37 kBq), or 0.29 microgram. This amount declines slowly as the americium decays into neptunium-237, a different transuranic element with a much longer half-life (about 2.14 million years). With its half-life of 432.2 years, the americium in a smoke detector includes about 3% neptunium after 19 years, and about 5% after 32 years. The radiation passes through an ionization chamber, an air-filled space between two electrodes, and permits a small, constant current between the electrodes.
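The neptunium in-growth figures quoted above (about 3% after 19 years and 5% after 32 years) follow directly from the exponential decay law. A short Python check (names are illustrative; 237Np's own decay is neglected, since its half-life is about 2.14 million years):

```python
T_HALF_AM241 = 432.2  # years, from the text

def np237_fraction(t_years):
    """Fraction of the original Am-241 that has decayed to Np-237 after t years."""
    return 1.0 - 2.0 ** (-t_years / T_HALF_AM241)

print(f"After 19 years: {np237_fraction(19):.1%} neptunium")  # ~3%
print(f"After 32 years: {np237_fraction(32):.1%} neptunium")  # ~5%
```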
Any smoke that enters the chamber absorbs the alpha particles, which reduces the ionization and affects this current, triggering the alarm. Compared to the alternative optical smoke detector, the ionization smoke detector is cheaper and can detect particles that are too small to produce significant light scattering; however, it is more prone to false alarms. Radionuclide As 241Am has a roughly similar half-life to 238Pu (432.2 years vs. 87 years), it has been proposed as an active element of radioisotope thermoelectric generators, for example in spacecraft. Although americium produces less heat and electricity – the power yield is 114.7 mW/g for 241Am and 6.31 mW/g for 243Am (cf. 390 mW/g for 238Pu) – and its radiation poses more threat to humans owing to neutron emission, the European Space Agency is considering using americium for its space probes. Another proposed space-related application of americium is as a fuel for spaceships with nuclear propulsion. It relies on the very high rate of nuclear fission of 242mAm, which can be maintained even in a micrometer-thick foil. The small thickness avoids the problem of self-absorption of the emitted radiation. This problem is pertinent to uranium or plutonium rods, in which only the surface layers provide alpha particles. The fission products of 242mAm can either directly propel the spaceship or heat a thrusting gas. They can also transfer their energy to a fluid and generate electricity through a magnetohydrodynamic generator. One more proposal that utilizes the high nuclear fission rate of 242mAm is a nuclear battery. Its design relies not on the energy of the alpha particles emitted by americium, but on their charge; that is, the americium acts as a self-sustaining "cathode". A single 3.2 kg 242mAm charge of such a battery could provide about 140 kW of power over a period of 80 days.
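For long-duration power sources, the relevant question is how much of the 241Am heat output survives over time. Using the 432.2-year half-life and the 114.7 mW/g yield quoted above, the decay law gives a rough estimate (a sketch with illustrative names; in-growth of daughter activity is ignored):

```python
T_HALF_AM241 = 432.2   # years
P0_MW_PER_G = 114.7    # initial thermal power yield of Am-241, mW/g (from the text)

def specific_power(t_years):
    """Remaining specific thermal power of an Am-241 heat source after t years."""
    return P0_MW_PER_G * 2.0 ** (-t_years / T_HALF_AM241)

for t in (100, 200, 400):
    frac = specific_power(t) / P0_MW_PER_G
    print(f"After {t} years: {specific_power(t):.1f} mW/g ({frac:.0%} of initial)")
```

Roughly half of the initial heat output would remain even after four centuries, which is what makes the long missions discussed below plausible from a heat-supply standpoint.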
Even with all the potential benefits, the current applications of 242mAm are as yet hindered by the scarcity and high price of this particular nuclear isomer. In 2019, researchers at the UK National Nuclear Laboratory and the University of Leicester demonstrated the use of heat generated by americium to illuminate a small light bulb. This technology could lead to systems powering missions with durations of up to 400 years into interstellar space, where solar panels do not function. Neutron source The oxide of 241Am pressed with beryllium is an efficient neutron source. Here americium acts as the alpha source, and beryllium produces neutrons owing to its large cross section for the (α,n) nuclear reaction: 241Am → 237Np + 4He + γ, followed by 9Be + 4He → 12C + n + γ. The most widespread use of 241AmBe neutron sources is the neutron probe – a device used to measure the quantity of water present in soil, as well as moisture/density for quality control in highway construction. 241Am neutron sources are also used in well-logging applications, as well as in neutron radiography, tomography and other radiochemical investigations. Production of other elements Americium is a starting material for the production of other transuranic elements and transactinides – for example, 82.7% of 242Am decays to 242Cm and 17.3% to 242Pu. In a nuclear reactor, 242Am is also up-converted by neutron capture to 243Am and 244Am, which transforms by β-decay to 244Cm: 243Am + n → 244Am + γ; 244Am → 244Cm (β−, half-life 10.1 h). Irradiation of 241Am by 12C or 22Ne ions yields the isotopes 247Es (einsteinium) or 260Db (dubnium), respectively. Furthermore, the element berkelium (as the 243Bk isotope) was first intentionally produced and identified in 1949 by bombarding 241Am with alpha particles, by the same Berkeley group, using the same 60-inch cyclotron.
Similarly, nobelium was produced at the Joint Institute for Nuclear Research, Dubna, Russia, in 1965 in several reactions, one of which included irradiation of 243Am with 15N ions. In addition, one of the synthesis reactions for lawrencium, discovered by scientists at Berkeley and Dubna, included bombardment of 243Am with 18O. Spectrometer Americium-241 has been used as a portable source of both gamma rays and alpha particles for a number of medical and industrial applications. The 59.5409 keV gamma-ray emissions from 241Am in such sources can be used for indirect analysis of materials in radiography and X-ray fluorescence spectroscopy, as well as for quality control in fixed nuclear density gauges and nuclear densometers. For example, the element has been employed to gauge glass thickness to help create flat glass. Americium-241 is also suitable for the calibration of gamma-ray spectrometers in the low-energy range, since its spectrum consists of essentially a single peak with a negligible Compton continuum (at least three orders of magnitude lower in intensity). Americium-241 gamma rays were also used to provide passive diagnosis of thyroid function; this medical application is now obsolete. Health concerns Americium is a highly radioactive element, and it and its compounds must be handled only in an appropriate laboratory under special arrangements. Although most americium isotopes predominantly emit alpha particles, which can be blocked by thin layers of common materials, many of the daughter products emit gamma rays and neutrons, which have a long penetration depth. If ingested, most of the americium is excreted within a few days, with only 0.05% absorbed into the blood, of which roughly 45% goes to the liver and 45% to the bones, and the remaining 10% is excreted. The uptake to the liver depends on the individual and increases with age. In the bones, americium is first deposited over cortical and trabecular surfaces and slowly redistributes over the bone with time.
The biological half-life of 241Am is 50 years in the bones and 20 years in the liver, whereas in the gonads (testicles and ovaries) it remains permanently; in all these organs, americium promotes the formation of cancer cells as a result of its radioactivity. Americium often enters landfills from discarded smoke detectors. The rules associated with the disposal of smoke detectors are relaxed in most jurisdictions. In 1994, 17-year-old David Hahn extracted the americium from about 100 smoke detectors in an attempt to build a breeder nuclear reactor. There have been a few cases of exposure to americium, the worst being that of chemical operations technician Harold McCluskey, who at the age of 64 was exposed to 500 times the occupational standard for americium-241 as a result of an explosion in his lab. McCluskey died at the age of 75 of an unrelated pre-existing disease. See also Actinides in the environment Notes References External links Americium at The Periodic Table of Videos (University of Nottingham) ATSDR – Public Health Statement: Americium World Nuclear Association – Smoke Detectors and Americium
901
https://en.wikipedia.org/wiki/Astatine
Astatine
Astatine is a chemical element with the symbol At and atomic number 85. It is the rarest naturally occurring element in the Earth's crust, occurring only as the decay product of various heavier elements. All of astatine's isotopes are short-lived; the most stable is astatine-210, with a half-life of 8.1 hours. A sample of the pure element has never been assembled, because any macroscopic specimen would be immediately vaporized by the heat of its own radioactivity. The bulk properties of astatine are not known with certainty. Many of them have been estimated based on the element's position on the periodic table as a heavier analog of iodine, and a member of the halogens (the group of elements including fluorine, chlorine, bromine, and iodine). However, astatine also falls roughly along the dividing line between metals and nonmetals, and some metallic behavior has also been observed and predicted for it. Astatine is likely to have a dark or lustrous appearance and may be a semiconductor or possibly a metal. Chemically, several anionic species of astatine are known and most of its compounds resemble those of iodine, but it also sometimes displays metallic characteristics and shows some similarities to silver. The first synthesis of the element was in 1940 by Dale R. Corson, Kenneth Ross MacKenzie, and Emilio G. Segrè at the University of California, Berkeley, who named it after the Ancient Greek word for 'unstable'. Four isotopes of astatine were subsequently found to be naturally occurring, although much less than one gram is present at any given time in the Earth's crust. Neither the most stable isotope astatine-210, nor the medically useful astatine-211, occur naturally; they can only be produced synthetically, usually by bombarding bismuth-209 with alpha particles. Characteristics Astatine is an extremely radioactive element; all its isotopes have half-lives of 8.1 hours or less, decaying into other astatine isotopes, bismuth, polonium, or radon.
Most of its isotopes are very unstable, with half-lives of one second or less. Of the first 101 elements in the periodic table, only francium is less stable, and all the astatine isotopes more stable than francium are in any case synthetic and do not occur in nature. The bulk properties of astatine are not known with any certainty. Research is limited by its short half-life, which prevents the creation of weighable quantities. A visible piece of astatine would immediately vaporize itself because of the heat generated by its intense radioactivity. It remains to be seen if, with sufficient cooling, a macroscopic quantity of astatine could be deposited as a thin film. Astatine is usually classified as either a nonmetal or a metalloid; metal formation has also been predicted. Physical Most of the physical properties of astatine have been estimated (by interpolation or extrapolation), using theoretically or empirically derived methods. For example, halogens get darker with increasing atomic weight – fluorine is nearly colorless, chlorine is yellow green, bromine is red brown, and iodine is dark gray/violet. Astatine is sometimes described as probably being a black solid (assuming it follows this trend), or as having a metallic appearance (if it is a metalloid or a metal). Astatine sublimes less readily than does iodine, having a lower vapor pressure. Even so, half of a given quantity of astatine will vaporize in approximately an hour if put on a clean glass surface at room temperature. The absorption spectrum of astatine in the middle ultraviolet region has lines at 224.401 and 216.225 nm, suggestive of 6p to 7s transitions. The structure of solid astatine is unknown. As an analogue of iodine it may have an orthorhombic crystalline structure composed of diatomic astatine molecules, and be a semiconductor (with a band gap of 0.7 eV). 
Alternatively, if condensed astatine forms a metallic phase, as has been predicted, it may have a monatomic face-centered cubic structure; in this structure it may well be a superconductor, like the similar high-pressure phase of iodine. Evidence for (or against) the existence of diatomic astatine (At2) is sparse and inconclusive. Some sources state that it does not exist, or at least has never been observed, while other sources assert or imply its existence. Despite this controversy, many properties of diatomic astatine have been predicted; for example, estimates exist for its bond length and dissociation energy, and its heat of vaporization (ΔHvap) is predicted to be 54.39 kJ/mol. Many values have been predicted for the melting and boiling points of astatine, but only for At2. Chemical The chemistry of astatine is "clouded by the extremely low concentrations at which astatine experiments have been conducted, and the possibility of reactions with impurities, walls and filters, or radioactivity by-products, and other unwanted nano-scale interactions". Many of its apparent chemical properties have been observed using tracer studies on extremely dilute astatine solutions, typically less than 10−10 mol·L−1. Some properties, such as anion formation, align with other halogens. Astatine has some metallic characteristics as well, such as plating onto a cathode, and coprecipitating with metal sulfides in hydrochloric acid. It forms complexes with EDTA, a metal-chelating agent, and is capable of acting as a metal in antibody radiolabeling; in some respects astatine in the +1 state is akin to silver in the same state. Most of the organic chemistry of astatine is, however, analogous to that of iodine. It has been suggested that astatine can form a stable monatomic cation in aqueous solution, but electromigration evidence suggests that the cationic At(I) species is protonated hypoastatous acid (H2OAt+), showing analogy to iodine.
Astatine has an electronegativity of 2.2 on the revised Pauling scale – lower than that of iodine (2.66) and the same as hydrogen. In hydrogen astatide (HAt), the negative charge is predicted to be on the hydrogen atom, implying that this compound could be referred to as astatine hydride according to certain nomenclatures. That would be consistent with the electronegativity of astatine on the Allred–Rochow scale (1.9) being less than that of hydrogen (2.2). However, official IUPAC stoichiometric nomenclature is based on an idealized convention of determining the relative electronegativities of the elements by the mere virtue of their position within the periodic table. According to this convention, astatine is handled as though it is more electronegative than hydrogen, irrespective of its true electronegativity. The electron affinity of astatine, at 233 kJ mol−1, is 21% less than that of iodine. In comparison, the value of Cl (349) is 6.4% higher than F (328); Br (325) is 6.9% less than Cl; and I (295) is 9.2% less than Br. The marked reduction for At was predicted as being due to spin–orbit interactions. The first ionisation energy of astatine is about 899 kJ mol−1, which continues the trend of decreasing first ionisation energies down the halogen group (fluorine, 1681; chlorine, 1251; bromine, 1140; iodine, 1008). Compounds Less reactive than iodine, astatine is the least reactive of the halogens. Its compounds have been synthesized in microscopic amounts and studied as intensively as possible before their radioactive disintegration. The reactions involved have been typically tested with dilute solutions of astatine mixed with larger amounts of iodine. Acting as a carrier, the iodine ensures there is sufficient material for laboratory techniques (such as filtration and precipitation) to work. Like iodine, astatine has been shown to adopt odd-numbered oxidation states ranging from −1 to +7. 
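The percentage differences quoted above follow directly from the tabulated electron affinities; a quick arithmetic check (a sketch, using only the values given in the text, in kJ mol−1):

```python
# Electron affinities in kJ/mol, as quoted in the text.
ea = {"F": 328, "Cl": 349, "Br": 325, "I": 295, "At": 233}

def pct_higher(a, b):
    """Percent by which a exceeds b."""
    return 100 * (a - b) / b

def pct_lower(a, b):
    """Percent by which a falls short of b."""
    return 100 * (b - a) / b

print(f"Cl vs F:  {pct_higher(ea['Cl'], ea['F']):.1f}% higher")  # 6.4
print(f"Br vs Cl: {pct_lower(ea['Br'], ea['Cl']):.1f}% lower")   # 6.9
print(f"I vs Br:  {pct_lower(ea['I'], ea['Br']):.1f}% lower")    # 9.2
print(f"At vs I:  {pct_lower(ea['At'], ea['I']):.1f}% lower")    # 21.0
```

Note the asymmetry in the article's wording: the Cl–F comparison is expressed relative to the smaller value (F), while the remaining comparisons are expressed relative to the larger value in each pair.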
Only a few compounds with metals have been reported, in the form of astatides of sodium, palladium, silver, thallium, and lead. Some characteristic properties of silver and sodium astatide, and the other hypothetical alkali and alkaline earth astatides, have been estimated by extrapolation from other metal halides. The formation of an astatine compound with hydrogen – usually referred to as hydrogen astatide – was noted by the pioneers of astatine chemistry. As mentioned, there are grounds for instead referring to this compound as astatine hydride. It is easily oxidized; acidification by dilute nitric acid gives the At0 or At+ forms, and the subsequent addition of silver(I) may only partially, at best, precipitate astatine as silver(I) astatide (AgAt). Iodine, in contrast, is not oxidized, and precipitates readily as silver(I) iodide. Astatine is known to bind to boron, carbon, and nitrogen. Various boron cage compounds have been prepared with At–B bonds, these being more stable than At–C bonds. Astatine can replace a hydrogen atom in benzene to form astatobenzene C6H5At; this may be oxidized to C6H5AtCl2 by chlorine. By treating this compound with an alkaline solution of hypochlorite, C6H5AtO2 can be produced. The dipyridine-astatine(I) cation, [At(C5H5N)2]+, forms ionic compounds with perchlorate (a non-coordinating anion) and with nitrate, [At(C5H5N)2]NO3. This cation exists as a coordination complex in which two dative covalent bonds separately link the astatine(I) centre with each of the pyridine rings via their nitrogen atoms. With oxygen, there is evidence of the species AtO− and AtO+ in aqueous solution, formed by the reaction of astatine with an oxidant such as elemental bromine or (in the case of AtO+) by sodium persulfate in a solution of perchloric acid; the latter species might also be protonated astatous acid. 
The species previously thought to be AtO2− has since been determined to be AtO(OH)2−, a hydrolysis product of AtO+ (another such hydrolysis product being AtOOH). The well characterized astatate anion, AtO3−, can be obtained by, for example, the oxidation of astatine with potassium hypochlorite in a solution of potassium hydroxide. Preparation of lanthanum triastatate, La(AtO3)3, following the oxidation of astatine by a hot Na2S2O8 solution, has been reported. Further oxidation of AtO3−, such as by xenon difluoride (in a hot alkaline solution) or periodate (in a neutral or alkaline solution), yields the perastatate ion AtO4−; this is only stable in neutral or alkaline solutions. Astatine is also thought to be capable of forming cations in salts with oxyanions such as iodate or dichromate; this is based on the observation that, in acidic solutions, monovalent or intermediate positive states of astatine coprecipitate with the insoluble salts of metal cations such as silver(I) iodate or thallium(I) dichromate. Astatine may form bonds to the other chalcogens; examples include S7At+ with sulfur, a coordination selenourea compound with selenium, and an astatine–tellurium colloid with tellurium. Astatine is known to react with its lighter homologs iodine, bromine, and chlorine in the vapor state; these reactions produce diatomic interhalogen compounds with formulas AtI, AtBr, and AtCl. The first two compounds may also be produced in water – astatine reacts with iodine/iodide solution to form AtI, whereas AtBr requires (aside from astatine) an iodine/iodine monobromide/bromide solution. An excess of iodide or bromide may lead to the AtI2− and AtBr2− ions, while in a chloride solution these may produce species such as AtCl2− or AtBrCl− via equilibrium reactions with the chlorides. Oxidation of the element with dichromate (in nitric acid solution) showed that adding chloride turned the astatine into a molecule likely to be either AtCl or AtOCl. Similarly, AtCl2− or AtOCl2− may be produced. 
The polyhalides PdAtI2, CsAtI2, TlAtI2, and PbAtI are known or presumed to have been precipitated. In a plasma ion source mass spectrometer, the ions [AtI]+, [AtBr]+, and [AtCl]+ have been formed by introducing lighter halogen vapors into a helium-filled cell containing astatine, supporting the existence of stable neutral molecules in the plasma ion state. No astatine fluorides have been discovered yet. Their absence has been speculatively attributed to the extreme reactivity of such compounds, including the reaction of an initially formed fluoride with the walls of the glass container to form a non-volatile product. Thus, although the synthesis of an astatine fluoride is thought to be possible, it may require a liquid halogen fluoride solvent, as has already been used for the characterization of radon fluoride. History In 1869, when Dmitri Mendeleev published his periodic table, the space under iodine was empty; after Niels Bohr established the physical basis of the classification of chemical elements, it was suggested that the fifth halogen belonged there. Before its officially recognized discovery, it was called "eka-iodine" (from Sanskrit eka – "one") to imply it was one space under iodine (in the same manner as eka-silicon, eka-boron, and others). Scientists tried to find it in nature; given its extreme rarity, these attempts resulted in several false discoveries. The first claimed discovery of eka-iodine was made by Fred Allison and his associates at the Alabama Polytechnic Institute (now Auburn University) in 1931. The discoverers named element 85 "alabamine", and assigned it the symbol Ab, designations that were used for a few years. In 1934, H. G. MacPherson of University of California, Berkeley disproved Allison's method and the validity of his discovery. There was another claim in 1937, by the chemist Rajendralal De. 
Working in Dacca in British India (now Dhaka in Bangladesh), he chose the name "dakin" for element 85, which he claimed to have isolated as the thorium series equivalent of radium F (polonium-210) in the radium series. The properties he reported for dakin do not correspond to those of astatine; moreover, astatine is not found in the thorium series, and the true identity of dakin is not known. In 1936, the team of Romanian physicist Horia Hulubei and French physicist Yvette Cauchois claimed to have discovered element 85 via X-ray analysis. In 1939, they published another paper which supported and extended previous data. In 1944, Hulubei published a summary of data he had obtained up to that time, claiming it was supported by the work of other researchers. He chose the name "dor", presumably from the Romanian for "longing" [for peace], as World War II had started five years earlier. As Hulubei was writing in French, a language which does not accommodate the "ine" suffix, dor would likely have been rendered in English as "dorine", had it been adopted. In 1947, Hulubei's claim was effectively rejected by the Austrian chemist Friedrich Paneth, who would later chair the IUPAC committee responsible for recognition of new elements. Even though Hulubei's samples did contain astatine, his means to detect it were too weak, by current standards, to enable correct identification. He had also been involved in an earlier false claim as to the discovery of element 87 (francium) and this is thought to have caused other researchers to downplay his work. In 1940, the Swiss chemist Walter Minder announced the discovery of element 85 as the beta decay product of radium A (polonium-218), choosing the name "helvetium" (from , the Latin name of Switzerland). Berta Karlik and Traude Bernert were unsuccessful in reproducing his experiments, and subsequently attributed Minder's results to contamination of his radon stream (radon-222 is the parent isotope of polonium-218). 
In 1942, Minder, in collaboration with the English scientist Alice Leigh-Smith, announced the discovery of another isotope of element 85, presumed to be the product of thorium A (polonium-216) beta decay. They named this substance "anglo-helvetium", but Karlik and Bernert were again unable to reproduce these results. Later in 1940, Dale R. Corson, Kenneth Ross MacKenzie, and Emilio Segrè isolated the element at the University of California, Berkeley. Instead of searching for the element in nature, the scientists created it by bombarding bismuth-209 with alpha particles in a cyclotron (particle accelerator) to produce, after emission of two neutrons, astatine-211. The discoverers, however, did not immediately suggest a name for the element. The reason for this was that at the time, an element created synthetically in "invisible quantities" that had not yet been discovered in nature was not seen as a completely valid one; in addition, chemists were reluctant to regard radioactive isotopes as being as legitimate as stable ones. In 1943, astatine was found as a product of two naturally occurring decay chains by Berta Karlik and Traude Bernert, first in the so-called uranium series, and then in the actinium series. (Astatine has since also been found in a third decay chain, the neptunium series.) In 1946, Friedrich Paneth called for synthetic elements to be formally recognized, citing, among other reasons, the recent confirmation of their natural occurrence, and proposed that the discoverers of the still-unnamed new elements be given the right to name them. In early 1947, Nature published the discoverers' suggestions; a letter from Corson, MacKenzie, and Segrè suggested the name "astatine" coming from the Greek astatos (αστατος) meaning "unstable", because of its propensity for radioactive decay, with the ending "-ine", found in the names of the four previously discovered halogens. 
The name was also chosen to continue the tradition of the four stable halogens, where the name referred to a property of the element. Corson and his colleagues classified astatine as a metal on the basis of its analytical chemistry. Subsequent investigators reported iodine-like, cationic, or amphoteric behavior. In a 2003 retrospective, Corson wrote that "some of the properties [of astatine] are similar to iodine … it also exhibits metallic properties, more like its metallic neighbors Po and Bi." Isotopes There are 39 known isotopes of astatine, with atomic masses (mass numbers) of 191–229. Theoretical modeling suggests that 37 more isotopes could exist. No stable or long-lived astatine isotope has been observed, nor is one expected to exist. Astatine's alpha decay energies follow the same trend as for other heavy elements. Lighter astatine isotopes have quite high energies of alpha decay, which become lower as the nuclei become heavier. Astatine-211 has a significantly higher energy than the previous isotope, because it has a nucleus with 126 neutrons, and 126 is a magic number corresponding to a filled neutron shell. Despite having a similar half-life to the previous isotope (8.1 hours for astatine-210 and 7.2 hours for astatine-211), the alpha decay probability is much higher for the latter: 41.81% against only 0.18%. The two following isotopes release even more energy, with astatine-213 releasing the most energy. For this reason, it is the shortest-lived astatine isotope. Even though heavier astatine isotopes release less energy, no long-lived astatine isotope exists, because of the increasing role of beta decay (electron emission). This decay mode is especially important for astatine; as early as 1950 it was postulated that all isotopes of the element undergo beta decay, though nuclear mass measurements indicate that 215At is in fact beta-stable, as it has the lowest mass of all isobars with A = 215. 
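The contrast between the two alpha branching ratios quoted above can be made concrete by converting each total half-life into a partial alpha-decay half-life, t½(α) = t½ / b(α), a standard relation; this sketch uses only the figures given in the text:

```python
# Partial alpha half-life = total half-life / alpha branching fraction,
# using the half-lives (hours) and branching fractions quoted in the text.
def partial_alpha_half_life(t_half_hours, alpha_fraction):
    return t_half_hours / alpha_fraction

at210 = partial_alpha_half_life(8.1, 0.0018)   # At-210: alpha branch 0.18%
at211 = partial_alpha_half_life(7.2, 0.4181)   # At-211: alpha branch 41.81%

print(f"At-210 partial alpha half-life: {at210:.0f} h")  # ~4500 h
print(f"At-211 partial alpha half-life: {at211:.1f} h")  # ~17.2 h
```

So although the two isotopes have similar total half-lives, an At-211 nucleus is roughly 260 times more likely per unit time to decay by alpha emission than an At-210 nucleus, reflecting the filled 126-neutron shell discussed above.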
A beta decay mode has been found for all other astatine isotopes except for astatine-213, astatine-214, and astatine-216m. Astatine-210 and lighter isotopes exhibit beta plus decay (positron emission), astatine-216 and heavier isotopes exhibit beta minus decay, and astatine-212 decays via both modes, while astatine-211 undergoes electron capture. The most stable isotope is astatine-210, which has a half-life of 8.1 hours. The primary decay mode is beta plus, to the relatively long-lived (in comparison to astatine isotopes) alpha emitter polonium-210. In total, only five isotopes have half-lives exceeding one hour (astatine-207 to -211). The least stable ground state isotope is astatine-213, with a half-life of 125 nanoseconds. It undergoes alpha decay to the extremely long-lived bismuth-209. Astatine has 24 known nuclear isomers, which are nuclei with one or more nucleons (protons or neutrons) in an excited state. A nuclear isomer may also be called a "meta-state", meaning the system has more internal energy than the "ground state" (the state with the lowest possible internal energy), making the former likely to decay into the latter. There may be more than one isomer for each isotope. The most stable of these nuclear isomers is astatine-202m1, which has a half-life of about 3 minutes, longer than those of all the ground states bar those of isotopes 203–211 and 220. The least stable is astatine-214m1; its half-life of 265 nanoseconds is shorter than those of all ground states except that of astatine-213. Natural occurrence Astatine is the rarest naturally occurring element. The total amount of astatine in the Earth's crust (quoted mass 2.36 × 10^25 grams) is estimated by some to be less than one gram at any given time. Other sources estimate the amount of ephemeral astatine, present on Earth at any given moment, to be up to one ounce (about 28 grams). 
Any astatine present at the formation of the Earth has long since disappeared; the four naturally occurring isotopes (astatine-215, -217, -218 and -219) are instead continuously produced as a result of the decay of radioactive thorium and uranium ores, and trace quantities of neptunium-237. The landmass of North and South America combined, to a depth of 16 kilometers (10 miles), contains only about one trillion astatine-215 atoms at any given time (around 3.5 × 10^−10 grams). Astatine-217 is produced via the radioactive decay of neptunium-237. Primordial remnants of the latter isotope—due to its relatively short half-life of 2.14 million years—are no longer present on Earth. However, trace amounts occur naturally as a product of transmutation reactions in uranium ores. Astatine-218 was the first astatine isotope discovered in nature. Astatine-219, with a half-life of 56 seconds, is the longest lived of the naturally occurring isotopes. Isotopes of astatine are sometimes not listed as naturally occurring because of misconceptions that there are no such isotopes, or discrepancies in the literature. Astatine-216 has been counted as a naturally occurring isotope but reports of its observation (which were described as doubtful) have not been confirmed. Synthesis Formation Astatine was first produced by bombarding bismuth-209 with energetic alpha particles, and this is still the major route used to create the relatively long-lived isotopes astatine-209 through astatine-211. Astatine is only produced in minuscule quantities, with modern techniques allowing production runs of up to 6.6 gigabecquerels (about 86 nanograms or 2.47 × 10^14 atoms). Synthesis of greater quantities of astatine using this method is constrained by the limited availability of suitable cyclotrons and the prospect of melting the target. Solvent radiolysis due to the cumulative effect of astatine decay is a related problem. 
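The equivalence quoted above between a 6.6 GBq production run, about 2.47 × 10^14 atoms, and about 86 nanograms can be checked with the standard radioactivity relation N = A·t½/ln 2; this sketch takes the 7.2-hour half-life of astatine-211 and the trillion-atom astatine-215 figure from the article:

```python
import math

A = 6.6e9              # activity in becquerels (decays per second)
t_half = 7.2 * 3600    # At-211 half-life in seconds
N_A = 6.022e23         # Avogadro's number

# N = A * t_half / ln 2 converts an activity into a number of atoms.
atoms = A * t_half / math.log(2)
mass_ng = atoms * 211 / N_A * 1e9   # molar mass of At-211 is ~211 g/mol

print(f"{atoms:.2e} atoms")  # ~2.47e+14
print(f"{mass_ng:.0f} ng")   # ~86

# One trillion At-215 atoms, as quoted for the Americas' landmass:
at215_mass = 1e12 * 215 / N_A
print(f"At-215: {at215_mass:.1e} g")  # ~3.6e-10, consistent with ~3.5e-10 g
```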
With cryogenic technology, microgram quantities of astatine might be generated via proton irradiation of thorium or uranium to yield radon-211, in turn decaying to astatine-211. Contamination with astatine-210 is expected to be a drawback of this method. The most important isotope is astatine-211, the only one in commercial use. To produce the bismuth target, the metal is sputtered onto a gold, copper, or aluminium surface at 50 to 100 milligrams per square centimeter. Bismuth oxide can be used instead; this is forcibly fused with a copper plate. The target is kept under a chemically neutral nitrogen atmosphere, and is cooled with water to prevent premature astatine vaporization. In a particle accelerator, such as a cyclotron, alpha particles are collided with the bismuth. Even though only one bismuth isotope is used (bismuth-209), the reaction may occur in three possible ways, producing astatine-209, astatine-210, or astatine-211. In order to eliminate undesired nuclides, the maximum energy of the particle accelerator is set to a value (optimally 29.17 MeV) above that for the reaction producing astatine-211 (to produce the desired isotope) and below the one producing astatine-210 (to avoid producing other astatine isotopes). Separation methods Since astatine is the main product of the synthesis, after its formation it need only be separated from the target and any significant contaminants. Several methods are available, "but they generally follow one of two approaches—dry distillation or [wet] acid treatment of the target followed by solvent extraction." The methods summarized below are modern adaptations of older procedures, as reviewed by Kugler and Keller. Pre-1985 techniques more often addressed the elimination of co-produced toxic polonium; this requirement is now mitigated by capping the energy of the cyclotron irradiation beam. Dry The astatine-containing cyclotron target is heated to a temperature of around 650 °C. 
The astatine volatilizes and is condensed in (typically) a cold trap. Higher temperatures of up to around 850 °C may increase the yield, at the risk of bismuth contamination from concurrent volatilization. Redistilling the condensate may be required to minimize the presence of bismuth (as bismuth can interfere with astatine labeling reactions). The astatine is recovered from the trap using one or more low concentration solvents such as sodium hydroxide, methanol or chloroform. Astatine yields of up to around 80% may be achieved. Dry separation is the method most commonly used to produce a chemically useful form of astatine. Wet The irradiated bismuth (or sometimes bismuth trioxide) target is first dissolved in, for example, concentrated nitric or perchloric acid. Following this first step, the acid can be distilled away to leave behind a white residue that contains both bismuth and the desired astatine product. This residue is then dissolved in a concentrated acid, such as hydrochloric acid. Astatine is extracted from this acid using an organic solvent such as butyl or isopropyl ether, diisopropylether (DIPE), or thiosemicarbazide. Using liquid-liquid extraction, the astatine product can be repeatedly washed with an acid, such as HCl, and extracted into the organic solvent layer. A separation yield of 93% using nitric acid has been reported, falling to 72% by the time purification procedures were completed (distillation of nitric acid, purging residual nitrogen oxides, and redissolving bismuth nitrate to enable liquid–liquid extraction). Wet methods involve "multiple radioactivity handling steps" and have not been considered well suited for isolating larger quantities of astatine. However, wet extraction methods are being examined for use in production of larger quantities of astatine-211, as it is thought that wet extraction methods can provide more consistency. 
They can enable the production of astatine in a specific oxidation state and may have greater applicability in experimental radiochemistry. Uses and precautions
{| class="wikitable"
|+ Several 211At-containing molecules and their experimental uses
! Agent
! Applications
|-
| [211At]astatine-tellurium colloids
| Compartmental tumors
|-
| 6-[211At]astato-2-methyl-1,4-naphtaquinol diphosphate
| Adenocarcinomas
|-
| 211At-labeled methylene blue
| Melanomas
|-
| Meta-[211At]astatobenzyl guanidine
| Neuroendocrine tumors
|-
| 5-[211At]astato-2'-deoxyuridine
| Various
|-
| 211At-labeled biotin conjugates
| Various pretargeting
|-
| 211At-labeled octreotide
| Somatostatin receptor
|-
| 211At-labeled monoclonal antibodies and fragments
| Various
|-
| 211At-labeled bisphosphonates
| Bone metastases
|}
Newly formed astatine-211 is the subject of ongoing research in nuclear medicine. It must be used quickly as it decays with a half-life of 7.2 hours; this is long enough to permit multistep labeling strategies. Astatine-211 has potential for targeted alpha-particle therapy, since it decays either via emission of an alpha particle (to bismuth-207), or via electron capture (to an extremely short-lived nuclide, polonium-211, which undergoes further alpha decay), very quickly reaching its stable granddaughter lead-207. Polonium X-rays emitted as a result of the electron capture branch, in the range of 77–92 keV, enable the tracking of astatine in animals and patients. Although astatine-210 has a slightly longer half-life, it is wholly unsuitable because it usually undergoes beta plus decay to the extremely toxic polonium-210. The principal medicinal difference between astatine-211 and iodine-131 (a radioactive iodine isotope also used in medicine) is that iodine-131 emits high-energy beta particles, and astatine does not. Beta particles have much greater penetrating power through tissues than do the much heavier alpha particles. 
An average alpha particle released by astatine-211 can travel up to 70 µm through surrounding tissues; an average-energy beta particle emitted by iodine-131 can travel nearly 30 times as far, to about 2 mm. The short half-life and limited penetrating power of alpha radiation through tissues offer advantages in situations where the "tumor burden is low and/or malignant cell populations are located in close proximity to essential normal tissues." Significant morbidity in cell culture models of human cancers has been achieved with one to ten astatine-211 atoms bound per cell. Several obstacles have been encountered in the development of astatine-based radiopharmaceuticals for cancer treatment. World War II delayed research for close to a decade. Results of early experiments indicated that a cancer-selective carrier would need to be developed, and it was not until the 1970s that monoclonal antibodies became available for this purpose. Unlike iodine, astatine shows a tendency to dehalogenate from molecular carriers such as these, particularly at sp3 carbon sites (less so from sp2 sites). Given the toxicity of astatine accumulated and retained in the body, this emphasized the need to ensure it remained attached to its host molecule. While astatine carriers that are slowly metabolized can be assessed for their efficacy, more rapidly metabolized carriers remain a significant obstacle to the evaluation of astatine in nuclear medicine. Mitigating the effects of astatine-induced radiolysis of labeling chemistry and carrier molecules is another area requiring further development. A practical application for astatine as a cancer treatment would potentially be suitable for a "staggering" number of patients; production of astatine in the quantities that would be required remains an issue. 
Animal studies show that astatine, similarly to iodine – although to a lesser extent, perhaps because of its slightly more metallic nature – is preferentially (and dangerously) concentrated in the thyroid gland. Unlike iodine, astatine also shows a tendency to be taken up by the lungs and spleen, possibly because of in-body oxidation of At− to At+. If administered in the form of a radiocolloid it tends to concentrate in the liver. Experiments in rats and monkeys suggest that astatine-211 causes much greater damage to the thyroid gland than does iodine-131, with repetitive injection of the nuclide resulting in necrosis and cell dysplasia within the gland. Early research suggested that injection of astatine into female rodents caused morphological changes in breast tissue; this conclusion remained controversial for many years. General agreement was later reached that this was likely caused by the effect of breast tissue irradiation combined with hormonal changes due to irradiation of the ovaries. Trace amounts of astatine can be handled safely in fume hoods if they are well-aerated; biological uptake of the element must be avoided. See also Radiation protection Notes References Bibliography External links Astatine at The Periodic Table of Videos (University of Nottingham) Astatine: Halogen or Metal?
902
https://en.wikipedia.org/wiki/Atom
Atom
An atom is the smallest unit of ordinary matter that forms a chemical element. Every solid, liquid, gas, and plasma is composed of neutral or ionized atoms. Atoms are extremely small, typically around 100 picometers across. They are so small that accurately predicting their behavior using classical physics—as if they were tennis balls, for example—is not possible due to quantum effects. Every atom is composed of a nucleus and one or more electrons bound to the nucleus. The nucleus is made of one or more protons and a number of neutrons. Only the most common variety of hydrogen has no neutrons. More than 99.94% of an atom's mass is in the nucleus. The protons have a positive electric charge, the electrons have a negative electric charge, and the neutrons have no electric charge. If the numbers of protons and electrons are equal, then the atom is electrically neutral. If an atom has more or fewer electrons than protons, then it has an overall negative or positive charge, respectively – such atoms are called ions. The electrons of an atom are attracted to the protons in an atomic nucleus by the electromagnetic force. The protons and neutrons in the nucleus are attracted to each other by the nuclear force. This force is usually stronger than the electromagnetic force that repels the positively charged protons from one another. Under certain circumstances, the repelling electromagnetic force becomes stronger than the nuclear force. In this case, the nucleus splits and leaves behind different elements. This is a form of nuclear decay. The number of protons in the nucleus is the atomic number and it defines to which chemical element the atom belongs. For example, any atom that contains 29 protons is copper. The number of neutrons defines the isotope of the element. Atoms can attach to one or more other atoms by chemical bonds to form chemical compounds such as molecules or crystals. 
The ability of atoms to associate and dissociate is responsible for most of the physical changes observed in nature. Chemistry is the discipline that studies these changes. History of atomic theory In philosophy The basic idea that matter is made up of tiny, indivisible particles appears in many ancient cultures such as those of Greece and India. The word atom is derived from the ancient Greek word atomos (a combination of the negative term "a-" and "τομή," the term for "cut") that means "uncuttable". This ancient idea was based in philosophical reasoning rather than scientific reasoning; modern atomic theory is not based on these old concepts. Nonetheless, the term "atom" was used throughout the ages by thinkers who suspected that matter was ultimately granular in nature. It has since been discovered that "atoms" can be split, but the misnomer is still used. Dalton's law of multiple proportions In the early 1800s, the English chemist John Dalton compiled experimental data gathered by himself and other scientists and discovered a pattern now known as the "law of multiple proportions". He noticed that in chemical compounds which contain a particular chemical element, the content of that element in these compounds will differ by ratios of small whole numbers. This pattern suggested to Dalton that each chemical element combines with other elements by some basic and consistent unit of mass. For example, there are two types of tin oxide: one is a black powder that is 88.1% tin and 11.9% oxygen, and the other is a white powder that is 78.7% tin and 21.3% oxygen. Adjusting these figures, in the black oxide there is about 13.5 g of oxygen for every 100 g of tin, and in the white oxide there is about 27 g of oxygen for every 100 g of tin. 13.5 and 27 form a ratio of 1:2. In these oxides, for every tin atom there are one or two oxygen atoms respectively (SnO and SnO2). 
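Dalton's arithmetic for the two tin oxides can be reproduced directly from the quoted mass percentages (a sketch using only the figures in the text):

```python
# Grams of oxygen combined with 100 g of tin, from the quoted mass percentages.
def oxygen_per_100g_metal(pct_metal, pct_oxygen):
    return 100 * pct_oxygen / pct_metal

black = oxygen_per_100g_metal(88.1, 11.9)   # black tin oxide, ~13.5 g
white = oxygen_per_100g_metal(78.7, 21.3)   # white tin oxide, ~27.1 g

print(f"black oxide: {black:.1f} g O per 100 g Sn")
print(f"white oxide: {white:.1f} g O per 100 g Sn")
print(f"ratio: 1 : {white / black:.2f}")    # ~1 : 2, i.e. SnO vs SnO2
```

The same function applied to the iron and nitrogen oxide percentages in the following examples recovers their 2:3 and 1:2:4 ratios; it is exactly this appearance of small whole-number ratios that suggested a basic combining unit of mass to Dalton.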
As a second example, Dalton considered two iron oxides: a black powder which is 78.1% iron and 21.9% oxygen, and a red powder which is 70.4% iron and 29.6% oxygen. Adjusting these figures, in the black oxide there is about 28 g of oxygen for every 100 g of iron, and in the red oxide there is about 42 g of oxygen for every 100 g of iron. 28 and 42 form a ratio of 2:3. In these respective oxides, for every two atoms of iron, there are two or three atoms of oxygen (Fe2O2 and Fe2O3). As a final example: nitrous oxide is 63.3% nitrogen and 36.7% oxygen, nitric oxide is 44.05% nitrogen and 55.95% oxygen, and nitrogen dioxide is 29.5% nitrogen and 70.5% oxygen. Adjusting these figures, in nitrous oxide there is 80 g of oxygen for every 140 g of nitrogen, in nitric oxide there is about 160 g of oxygen for every 140 g of nitrogen, and in nitrogen dioxide there is 320 g of oxygen for every 140 g of nitrogen. 80, 160, and 320 form a ratio of 1:2:4. The respective formulas for these oxides are N2O, NO, and NO2. Kinetic theory of gases In the late 18th century, a number of scientists found that they could better explain the behavior of gases by describing them as collections of sub-microscopic particles and modelling their behavior using statistics and probability. Unlike Dalton's atomic theory, the kinetic theory of gases describes not how gases react chemically with each other to form compounds, but how they behave physically: diffusion, viscosity, conductivity, pressure, etc. Brownian motion In 1827, botanist Robert Brown used a microscope to look at dust grains floating in water and discovered that they moved about erratically, a phenomenon that became known as "Brownian motion". This was thought to be caused by water molecules knocking the grains about. In 1905, Albert Einstein proved the reality of these molecules and their motions by producing the first statistical physics analysis of Brownian motion. 
French physicist Jean Perrin used Einstein's work to experimentally determine the mass and dimensions of molecules, thereby providing physical evidence for the particle nature of matter. Discovery of the electron In 1897, J. J. Thomson discovered that cathode rays are not electromagnetic waves but made of particles that are 1,800 times lighter than hydrogen (the lightest atom). Thomson concluded that these particles came from the atoms within the cathode — they were subatomic particles. He called these new particles corpuscles but they were later renamed electrons. Thomson also showed that electrons were identical to particles given off by photoelectric and radioactive materials. It was quickly recognized that electrons are the particles that carry electric currents in metal wires. Thomson concluded that these electrons emerged from the very atoms of the cathode in his instruments, which meant that atoms are not indivisible as the name atomos suggests. Discovery of the nucleus J. J. Thomson thought that the negatively-charged electrons were distributed throughout the atom in a sea of positive charge that was distributed across the whole volume of the atom. This model is sometimes known as the plum pudding model. Ernest Rutherford and his colleagues Hans Geiger and Ernest Marsden came to have doubts about the Thomson model after they encountered difficulties when they tried to build an instrument to measure the charge-to-mass ratio of alpha particles (these are positively-charged particles emitted by certain radioactive substances such as radium). The alpha particles were being scattered by the air in the detection chamber, which made the measurements unreliable. Thomson had encountered a similar problem in his work on cathode rays, which he solved by creating a near-perfect vacuum in his instruments. Rutherford didn't think he'd run into this same problem because alpha particles are much heavier than electrons. 
According to Thomson's model of the atom, the positive charge in the atom is not concentrated enough to produce an electric field strong enough to deflect an alpha particle, and the electrons are so lightweight they should be pushed aside effortlessly by the much heavier alpha particles. Yet there was scattering, so Rutherford and his colleagues decided to investigate this scattering carefully. Between 1908 and 1913, Rutheford and his colleagues performed a series of experiments in which they bombarded thin foils of metal with alpha particles. They spotted alpha particles being deflected by angles greater than 90°. To explain this, Rutherford proposed that the positive charge of the atom is not distributed throughout the atom's volume as Thomson believed, but is concentrated in a tiny nucleus at the center. Only such an intense concentration of charge could produce an electric field strong enough to deflect the alpha particles as observed. Discovery of isotopes While experimenting with the products of radioactive decay, in 1913 radiochemist Frederick Soddy discovered that there appeared to be more than one type of atom at each position on the periodic table. The term isotope was coined by Margaret Todd as a suitable name for different atoms that belong to the same element. J. J. Thomson created a technique for isotope separation through his work on ionized gases, which subsequently led to the discovery of stable isotopes. Bohr model In 1913, the physicist Niels Bohr proposed a model in which the electrons of an atom were assumed to orbit the nucleus but could only do so in a finite set of orbits, and could jump between these orbits only in discrete changes of energy corresponding to absorption or radiation of a photon. 
This quantization was used to explain why the electrons' orbits are stable (given that normally, charges in acceleration, including circular motion, lose kinetic energy which is emitted as electromagnetic radiation, see synchrotron radiation) and why elements absorb and emit electromagnetic radiation in discrete spectra. Later in the same year Henry Moseley provided additional experimental evidence in favor of Niels Bohr's theory. These results refined Ernest Rutherford's and Antonius van den Broek's model, which proposed that the atom contains in its nucleus a number of positive nuclear charges that is equal to its (atomic) number in the periodic table. Until these experiments, atomic number was not known to be a physical and experimental quantity. That it is equal to the atomic nuclear charge remains the accepted atomic model today. Chemical bonds between atoms were explained by Gilbert Newton Lewis in 1916, as the interactions between their constituent electrons. As the chemical properties of the elements were known to largely repeat themselves according to the periodic law, in 1919 the American chemist Irving Langmuir suggested that this could be explained if the electrons in an atom were connected or clustered in some manner. Groups of electrons were thought to occupy a set of electron shells about the nucleus. The Bohr model of the atom was the first complete physical model of the atom. It described the overall structure of the atom, how atoms bond to each other, and predicted the spectral lines of hydrogen. Bohr's model was not perfect and was soon superseded by the more accurate Schrödinger model, but it was sufficient to evaporate any remaining doubts that matter is composed of atoms. For chemists, the idea of the atom had been a useful heuristic tool, but physicists had doubts as to whether matter really is made up of atoms as nobody had yet developed a complete physical model of the atom. 
The Schrödinger model The Stern–Gerlach experiment of 1922 provided further evidence of the quantum nature of atomic properties. When a beam of silver atoms was passed through a specially shaped magnetic field, the beam was split in a way correlated with the direction of an atom's angular momentum, or spin. As this spin direction is initially random, the beam would be expected to deflect in a random direction. Instead, the beam was split into two directional components, corresponding to the atomic spin being oriented up or down with respect to the magnetic field. In 1925, Werner Heisenberg published the first consistent mathematical formulation of quantum mechanics (matrix mechanics). One year earlier, Louis de Broglie had proposed the de Broglie hypothesis: that all particles behave like waves to some extent, and in 1926 Erwin Schrödinger used this idea to develop the Schrödinger equation, a mathematical model of the atom (wave mechanics) that described the electrons as three-dimensional waveforms rather than point particles. A consequence of using waveforms to describe particles is that it is mathematically impossible to obtain precise values for both the position and momentum of a particle at a given point in time; this became known as the uncertainty principle, formulated by Werner Heisenberg in 1927. In this concept, for a given accuracy in measuring a position one could only obtain a range of probable values for momentum, and vice versa. This model was able to explain observations of atomic behavior that previous models could not, such as certain structural and spectral patterns of atoms larger than hydrogen. Thus, the planetary model of the atom was discarded in favor of one that described atomic orbital zones around the nucleus where a given electron is most likely to be observed. Discovery of the neutron The development of the mass spectrometer allowed the mass of atoms to be measured with increased accuracy. 
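The de Broglie hypothesis mentioned above assigns every particle a wavelength λ = h/p. A quick sketch (the ground-state electron speed of about 2.19×10⁶ m/s is the standard Bohr-model value, not a figure from this text) shows why the idea made Bohr's quantized orbits plausible: the first orbit's circumference fits exactly one electron wavelength.

```python
import math

# de Broglie wavelength lambda = h / (m * v) for an electron moving at
# the speed it has in the ground state of the Bohr model.
h = 6.626e-34          # Planck constant, J s
m_e = 9.109e-31        # electron mass, kg
v = 2.19e6             # Bohr-model ground-state electron speed, m/s

wavelength = h / (m_e * v)
circumference = 2 * math.pi * 5.29e-11   # first Bohr orbit, radius a0

print(f"de Broglie wavelength:     {wavelength:.3e} m")
print(f"Bohr orbit circumference:  {circumference:.3e} m")
# The two agree to ~0.1%: the ground-state orbit holds one standing wave.
```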
The device uses a magnet to bend the trajectory of a beam of ions, and the amount of deflection is determined by the ratio of an atom's mass to its charge. The chemist Francis William Aston used this instrument to show that isotopes had different masses. The atomic mass of these isotopes varied by integer amounts, called the whole number rule. The explanation for these different isotopes awaited the discovery of the neutron, an uncharged particle with a mass similar to the proton, by the physicist James Chadwick in 1932. Isotopes were then explained as elements with the same number of protons, but different numbers of neutrons within the nucleus. Fission, high-energy physics and condensed matter In 1938, the German chemist Otto Hahn, a student of Rutherford, directed neutrons onto uranium atoms expecting to get transuranium elements. Instead, his chemical experiments showed barium as a product. A year later, Lise Meitner and her nephew Otto Frisch verified that Hahn's results demonstrated the first experimental nuclear fission. In 1944, Hahn received the Nobel Prize in Chemistry. Despite Hahn's efforts, the contributions of Meitner and Frisch were not recognized. In the 1950s, the development of improved particle accelerators and particle detectors allowed scientists to study the impacts of atoms moving at high energies. Neutrons and protons were found to be hadrons, or composites of smaller particles called quarks. The standard model of particle physics was developed that so far has successfully explained the properties of the nucleus in terms of these sub-atomic particles and the forces that govern their interactions. Structure Subatomic particles Though the word atom originally denoted a particle that cannot be cut into smaller particles, in modern scientific usage the atom is composed of various subatomic particles. The constituent particles of an atom are the electron, the proton and the neutron.

The electron is by far the least massive of these particles at 9.109×10⁻³¹ kg, with a negative electrical charge and a size that is too small to be measured using available techniques. It was the lightest particle with a positive rest mass measured, until the discovery of neutrino mass. Under ordinary conditions, electrons are bound to the positively charged nucleus by the attraction created from opposite electric charges. If an atom has more or fewer electrons than its atomic number, then it becomes respectively negatively or positively charged as a whole; a charged atom is called an ion. Electrons have been known since the late 19th century, mostly thanks to J.J. Thomson; see history of subatomic physics for details. Protons have a positive charge and a mass 1,836 times that of the electron, at 1.6726×10⁻²⁷ kg. The number of protons in an atom is called its atomic number. Ernest Rutherford (1919) observed that nitrogen under alpha-particle bombardment ejects what appeared to be hydrogen nuclei. By 1920 he had accepted that the hydrogen nucleus is a distinct particle within the atom and named it proton. Neutrons have no electrical charge and have a free mass of 1,839 times the mass of the electron, or 1.6749×10⁻²⁷ kg. Neutrons are the heaviest of the three constituent particles, but their mass can be reduced by the nuclear binding energy. Neutrons and protons (collectively known as nucleons) have comparable dimensions—on the order of 2.5×10⁻¹⁵ m—although the 'surface' of these particles is not sharply defined. The neutron was discovered in 1932 by the English physicist James Chadwick. In the Standard Model of physics, electrons are truly elementary particles with no internal structure, whereas protons and neutrons are composite particles composed of elementary particles called quarks. There are two types of quarks in atoms, each having a fractional electric charge. Protons are composed of two up quarks (each with charge +2/3 e) and one down quark (with a charge of −1/3 e). Neutrons consist of one up quark and two down quarks.
This distinction accounts for the difference in mass and charge between the two particles. The quarks are held together by the strong interaction (or strong force), which is mediated by gluons. The protons and neutrons, in turn, are held to each other in the nucleus by the nuclear force, which is a residuum of the strong force that has somewhat different range-properties (see the article on the nuclear force for more). The gluon is a member of the family of gauge bosons, which are elementary particles that mediate physical forces. Nucleus All the bound protons and neutrons in an atom make up a tiny atomic nucleus, and are collectively called nucleons. The radius of a nucleus is approximately equal to 1.07 × ∛A femtometres, where A is the total number of nucleons. This is much smaller than the radius of the atom, which is on the order of 10⁵ fm. The nucleons are bound together by a short-ranged attractive potential called the residual strong force. At distances smaller than 2.5 fm this force is much more powerful than the electrostatic force that causes positively charged protons to repel each other. Atoms of the same element have the same number of protons, called the atomic number. Within a single element, the number of neutrons may vary, determining the isotope of that element. The total number of protons and neutrons determines the nuclide. The number of neutrons relative to the protons determines the stability of the nucleus, with certain isotopes undergoing radioactive decay. The proton, the electron, and the neutron are classified as fermions. Fermions obey the Pauli exclusion principle, which prohibits identical fermions, such as multiple protons, from occupying the same quantum state at the same time. Thus, every proton in the nucleus must occupy a quantum state different from all other protons, and the same applies to all neutrons of the nucleus and to all electrons of the electron cloud.
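The empirical nuclear-radius rule r ≈ 1.07 × ∛A fm is easy to evaluate for a few nuclides, which makes the nucleus-versus-atom size contrast concrete:

```python
# Empirical nuclear radius: r ~ 1.07 * A**(1/3) femtometres,
# where A is the total number of nucleons.

def nuclear_radius_fm(A):
    """Approximate nuclear radius in femtometres for mass number A."""
    return 1.07 * A ** (1.0 / 3.0)

for name, A in [("hydrogen-1", 1), ("carbon-12", 12), ("uranium-238", 238)]:
    print(f"{name}: {nuclear_radius_fm(A):.2f} fm")

# Even uranium's nucleus (~6.6 fm) is more than ten thousand times
# smaller than the ~1e5 fm radius of the atom as a whole.
```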
A nucleus that has a different number of protons than neutrons can potentially drop to a lower energy state through a radioactive decay that causes the number of protons and neutrons to more closely match. As a result, atoms with matching numbers of protons and neutrons are more stable against decay, but with increasing atomic number, the mutual repulsion of the protons requires an increasing proportion of neutrons to maintain the stability of the nucleus. The number of protons and neutrons in the atomic nucleus can be modified, although this can require very high energies because of the strong force. Nuclear fusion occurs when multiple atomic particles join to form a heavier nucleus, such as through the energetic collision of two nuclei. For example, at the core of the Sun protons require energies of 3 to 10 keV to overcome their mutual repulsion—the coulomb barrier—and fuse together into a single nucleus. Nuclear fission is the opposite process, causing a nucleus to split into two smaller nuclei—usually through radioactive decay. The nucleus can also be modified through bombardment by high energy subatomic particles or photons. If this modifies the number of protons in a nucleus, the atom changes to a different chemical element. If the mass of the nucleus following a fusion reaction is less than the sum of the masses of the separate particles, then the difference between these two values can be emitted as a type of usable energy (such as a gamma ray, or the kinetic energy of a beta particle), as described by Albert Einstein's mass-energy equivalence formula, E = Δm c², where Δm is the mass loss and c is the speed of light. This deficit is part of the binding energy of the new nucleus, and it is the non-recoverable loss of the energy that causes the fused particles to remain together in a state that requires this energy to separate.
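The mass-deficit relation E = Δm c² can be checked numerically for the simplest fusion product, the deuteron. The particle masses below are standard reference values (not figures from this text); the result matches the deuteron binding energy quoted later in this article.

```python
# Binding energy of deuterium from its mass deficit, E = dm * c**2.
DA_TO_KG = 1.66054e-27      # kilograms per dalton
C = 2.99792458e8            # speed of light, m/s
J_TO_MEV = 1 / 1.602177e-13 # joules -> MeV

m_proton = 1.007276         # Da
m_neutron = 1.008665        # Da
m_deuteron = 2.013553       # Da (less than the sum of its parts)

dm = (m_proton + m_neutron - m_deuteron) * DA_TO_KG   # mass deficit, kg
energy_mev = dm * C**2 * J_TO_MEV
print(f"deuteron binding energy: {energy_mev:.2f} MeV")   # ~2.22 MeV
```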
The fusion of two nuclei that creates larger nuclei with lower atomic numbers than iron and nickel—a total nucleon number of about 60—is usually an exothermic process that releases more energy than is required to bring them together. It is this energy-releasing process that makes nuclear fusion in stars a self-sustaining reaction. For heavier nuclei, the binding energy per nucleon in the nucleus begins to decrease. That means fusion processes producing nuclei that have atomic numbers higher than about 26, and atomic masses higher than about 60, are endothermic. These more massive nuclei cannot undergo an energy-producing fusion reaction that can sustain the hydrostatic equilibrium of a star. Electron cloud The electrons in an atom are attracted to the protons in the nucleus by the electromagnetic force. This force binds the electrons inside an electrostatic potential well surrounding the smaller nucleus, which means that an external source of energy is needed for the electron to escape. The closer an electron is to the nucleus, the greater the attractive force. Hence electrons bound near the center of the potential well require more energy to escape than those at greater separations. Electrons, like other particles, have properties of both a particle and a wave. The electron cloud is a region inside the potential well where each electron forms a type of three-dimensional standing wave—a wave form that does not move relative to the nucleus. This behavior is defined by an atomic orbital, a mathematical function that characterises the probability that an electron appears to be at a particular location when its position is measured. Only a discrete (or quantized) set of these orbitals exist around the nucleus, as other possible wave patterns rapidly decay into a more stable form. Orbitals can have one or more ring or node structures, and differ from each other in size, shape and orientation.
Each atomic orbital corresponds to a particular energy level of the electron. The electron can change its state to a higher energy level by absorbing a photon with sufficient energy to boost it into the new quantum state. Likewise, through spontaneous emission, an electron in a higher energy state can drop to a lower energy state while radiating the excess energy as a photon. These characteristic energy values, defined by the differences in the energies of the quantum states, are responsible for atomic spectral lines. The amount of energy needed to remove or add an electron—the electron binding energy—is far less than the binding energy of nucleons. For example, it requires only 13.6 eV to strip a ground-state electron from a hydrogen atom, compared to 2.23 million eV for splitting a deuterium nucleus. Atoms are electrically neutral if they have an equal number of protons and electrons. Atoms that have either a deficit or a surplus of electrons are called ions. Electrons that are farthest from the nucleus may be transferred to other nearby atoms or shared between atoms. By this mechanism, atoms are able to bond into molecules and other types of chemical compounds like ionic and covalent network crystals. Properties Nuclear properties By definition, any two atoms with an identical number of protons in their nuclei belong to the same chemical element. Atoms with equal numbers of protons but a different number of neutrons are different isotopes of the same element. For example, all hydrogen atoms contain exactly one proton, but isotopes exist with no neutrons (hydrogen-1, by far the most common form, also called protium), one neutron (deuterium), two neutrons (tritium) and more than two neutrons. The known elements form a set of atomic numbers, from the single-proton element hydrogen up to the 118-proton element oganesson.
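The 13.6 eV figure for hydrogen falls out of the Bohr-model energy-level formula E(n) = −13.6 eV / n², a standard result rather than something stated in this text. A brief sketch:

```python
# Hydrogen energy levels in the Bohr model: E(n) = -13.6 eV / n**2.
# Removing the ground-state (n=1) electron therefore costs 13.6 eV.

def energy_ev(n):
    """Energy of hydrogen's n-th level in electronvolts (Bohr model)."""
    return -13.6 / n**2

ionization = -energy_ev(1)                  # 13.6 eV
balmer_alpha = energy_ev(3) - energy_ev(2)  # photon from an n=3 -> n=2 drop

print(f"ionization energy:  {ionization} eV")
print(f"n=3 -> n=2 photon:  {balmer_alpha:.2f} eV")
```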
All known isotopes of elements with atomic numbers greater than 82 are radioactive, although the radioactivity of element 83 (bismuth) is so slight as to be practically negligible. About 339 nuclides occur naturally on Earth, of which 252 (about 74%) have not been observed to decay, and are referred to as "stable isotopes". Only 90 nuclides are stable theoretically, while another 162 (bringing the total to 252) have not been observed to decay, even though in theory it is energetically possible. These are also formally classified as "stable". An additional 34 radioactive nuclides have half-lives longer than 100 million years, and are long-lived enough to have been present since the birth of the Solar System. This collection of 286 nuclides are known as primordial nuclides. Finally, an additional 53 short-lived nuclides are known to occur naturally, as daughter products of primordial nuclide decay (such as radium from uranium), or as products of natural energetic processes on Earth, such as cosmic ray bombardment (for example, carbon-14). For 80 of the chemical elements, at least one stable isotope exists. As a rule, there is only a handful of stable isotopes for each of these elements, the average being 3.2 stable isotopes per element. Twenty-six elements have only a single stable isotope, while the largest number of stable isotopes observed for any element is ten, for the element tin. Elements 43, 61, and all elements numbered 83 or higher have no stable isotopes. Stability of isotopes is affected by the ratio of protons to neutrons, and also by the presence of certain "magic numbers" of neutrons or protons that represent closed and filled quantum shells. These quantum shells correspond to a set of energy levels within the shell model of the nucleus; filled shells, such as the filled shell of 50 protons for tin, confer unusual stability on the nuclide.
Of the 252 known stable nuclides, only four have both an odd number of protons and odd number of neutrons: hydrogen-2 (deuterium), lithium-6, boron-10 and nitrogen-14. Also, only four naturally occurring, radioactive odd-odd nuclides have a half-life over a billion years: potassium-40, vanadium-50, lanthanum-138 and tantalum-180m. Most odd-odd nuclei are highly unstable with respect to beta decay, because the decay products are even-even, and are therefore more strongly bound, due to nuclear pairing effects. Mass The large majority of an atom's mass comes from the protons and neutrons that make it up. The total number of these particles (called "nucleons") in a given atom is called the mass number. It is a positive integer and dimensionless (instead of having dimension of mass), because it expresses a count. An example of use of a mass number is "carbon-12," which has 12 nucleons (six protons and six neutrons). The actual mass of an atom at rest is often expressed in daltons (Da), also called the unified atomic mass unit (u). This unit is defined as a twelfth of the mass of a free neutral atom of carbon-12, which is approximately 1.66×10⁻²⁷ kg. Hydrogen-1 (the lightest isotope of hydrogen which is also the nuclide with the lowest mass) has an atomic weight of 1.007825 Da. The value of this number is called the atomic mass. A given atom has an atomic mass approximately equal (within 1%) to its mass number times the atomic mass unit (for example the mass of a nitrogen-14 is roughly 14 Da), but this number will not be exactly an integer except (by definition) in the case of carbon-12. The heaviest stable atom is lead-208, with a mass of about 207.977 Da. As even the most massive atoms are far too light to work with directly, chemists instead use the unit of moles. One mole of atoms of any element always has the same number of atoms (about 6.022×10²³). This number was chosen so that if an element has an atomic mass of 1 u, a mole of atoms of that element has a mass close to one gram.
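The dalton-to-gram link via the mole can be demonstrated in a few lines; the hydrogen-1 mass is the 1.007825 Da value quoted above.

```python
# Counting atoms with the mole: an element's atomic mass in daltons is
# numerically its molar mass in grams, via Avogadro's number.
N_A = 6.02214e23            # atoms per mole

def atoms_in(mass_g, atomic_mass_da):
    """Number of atoms in a sample of the given mass."""
    return mass_g / atomic_mass_da * N_A

print(f"atoms in 12 g of carbon-12:  {atoms_in(12, 12):.3e}")      # one mole
print(f"atoms in 1 g of hydrogen-1:  {atoms_in(1, 1.007825):.3e}")
```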
Because of the definition of the unified atomic mass unit, each carbon-12 atom has an atomic mass of exactly 12 Da, and so a mole of carbon-12 atoms weighs exactly 0.012 kg. Shape and size Atoms lack a well-defined outer boundary, so their dimensions are usually described in terms of an atomic radius. This is a measure of the distance out to which the electron cloud extends from the nucleus. This assumes the atom to exhibit a spherical shape, which is only obeyed for atoms in vacuum or free space. Atomic radii may be derived from the distances between two nuclei when the two atoms are joined in a chemical bond. The radius varies with the location of an atom on the atomic chart, the type of chemical bond, the number of neighboring atoms (coordination number) and a quantum mechanical property known as spin. On the periodic table of the elements, atom size tends to increase when moving down columns, but decrease when moving across rows (left to right). Consequently, the smallest atom is helium with a radius of 32 pm, while one of the largest is caesium at 225 pm. When subjected to external forces, like electrical fields, the shape of an atom may deviate from spherical symmetry. The deformation depends on the field magnitude and the orbital type of outer shell electrons, as shown by group-theoretical considerations. Aspherical deviations might be elicited for instance in crystals, where large crystal-electrical fields may occur at low-symmetry lattice sites. Significant ellipsoidal deformations have been shown to occur for sulfur ions and chalcogen ions in pyrite-type compounds. Atomic dimensions are thousands of times smaller than the wavelengths of light (400–700 nm) so they cannot be viewed using an optical microscope, although individual atoms can be observed using a scanning tunneling microscope. To visualize the minuteness of the atom, consider that a typical human hair is about 1 million carbon atoms in width. 
A single drop of water contains about 2 sextillion (2×10²¹) atoms of oxygen, and twice the number of hydrogen atoms. A single carat diamond with a mass of 0.2 g contains about 10 sextillion (10²²) atoms of carbon. If an apple were magnified to the size of the Earth, then the atoms in the apple would be approximately the size of the original apple. Radioactive decay Every element has one or more isotopes that have unstable nuclei that are subject to radioactive decay, causing the nucleus to emit particles or electromagnetic radiation. Radioactivity can occur when the radius of a nucleus is large compared with the radius of the strong force, which only acts over distances on the order of 1 fm. The most common forms of radioactive decay are: Alpha decay: this process is caused when the nucleus emits an alpha particle, which is a helium nucleus consisting of two protons and two neutrons. The result of the emission is a new element with a lower atomic number. Beta decay (and electron capture): these processes are regulated by the weak force, and result from a transformation of a neutron into a proton, or a proton into a neutron. The neutron to proton transition is accompanied by the emission of an electron and an antineutrino, while proton to neutron transition (except in electron capture) causes the emission of a positron and a neutrino. The electron or positron emissions are called beta particles. Beta decay either increases or decreases the atomic number of the nucleus by one. Electron capture is more common than positron emission, because it requires less energy. In this type of decay, an electron is absorbed by the nucleus, rather than a positron emitted from the nucleus. A neutrino is still emitted in this process, and a proton changes to a neutron. Gamma decay: this process results from a change in the energy level of the nucleus to a lower state, resulting in the emission of electromagnetic radiation.
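Counts like "2 sextillion oxygen atoms" come straight from the mole arithmetic. A sketch, assuming a drop of roughly 0.05 mL (50 mg), which is an illustrative size rather than one stated in the text:

```python
# Order-of-magnitude atom count in a small (assumed 50 mg) drop of water.
N_A = 6.02214e23            # Avogadro's number, molecules per mole
M_WATER = 18.015            # molar mass of H2O, g/mol

drop_mass_g = 0.05          # assumed drop mass
molecules = drop_mass_g / M_WATER * N_A
oxygen_atoms = molecules         # one O atom per H2O molecule
hydrogen_atoms = 2 * molecules   # two H atoms per molecule

print(f"water molecules: {molecules:.1e}")
print(f"oxygen atoms:    {oxygen_atoms:.1e}")
print(f"hydrogen atoms:  {hydrogen_atoms:.1e}")
```

A somewhat larger drop gives the "2 sextillion" (2×10²¹) figure quoted above; either way the count lands in the sextillions.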
The excited state of a nucleus which results in gamma emission usually occurs following the emission of an alpha or a beta particle. Thus, gamma decay usually follows alpha or beta decay. Other more rare types of radioactive decay include ejection of neutrons or protons or clusters of nucleons from a nucleus, or more than one beta particle. An analog of gamma emission which allows excited nuclei to lose energy in a different way, is internal conversion—a process that produces high-speed electrons that are not beta rays, followed by production of high-energy photons that are not gamma rays. A few large nuclei explode into two or more charged fragments of varying masses plus several neutrons, in a decay called spontaneous nuclear fission. Each radioactive isotope has a characteristic decay time period—the half-life—that is determined by the amount of time needed for half of a sample to decay. This is an exponential decay process that steadily decreases the proportion of the remaining isotope by 50% every half-life. Hence after two half-lives have passed only 25% of the isotope is present, and so forth. Magnetic moment Elementary particles possess an intrinsic quantum mechanical property known as spin. This is analogous to the angular momentum of an object that is spinning around its center of mass, although strictly speaking these particles are believed to be point-like and cannot be said to be rotating. Spin is measured in units of the reduced Planck constant (ħ), with electrons, protons and neutrons all having spin ½ ħ, or "spin-½". In an atom, electrons in motion around the nucleus possess orbital angular momentum in addition to their spin, while the nucleus itself possesses angular momentum due to its nuclear spin. The magnetic field produced by an atom—its magnetic moment—is determined by these various forms of angular momentum, just as a rotating charged object classically produces a magnetic field, but the most dominant contribution comes from electron spin. 
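The half-life rule described above is a simple exponential: the fraction remaining after time t is (1/2)^(t/T). A sketch using carbon-14's well-known half-life of about 5,730 years (a standard value, not one given in this text):

```python
# Exponential radioactive decay: N(t)/N0 = (1/2) ** (t / half_life).

def fraction_remaining(t, half_life):
    """Fraction of the original isotope left after time t."""
    return 0.5 ** (t / half_life)

HALF_LIFE_C14 = 5730   # years, approximate half-life of carbon-14
for years in (5730, 11460, 17190):
    frac = fraction_remaining(years, HALF_LIFE_C14)
    print(f"after {years:>5} years: {frac:.3f} remaining")
# prints 0.500, 0.250, 0.125 -- halving once per half-life
```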
Because electrons obey the Pauli exclusion principle, which forbids two electrons from occupying the same quantum state, bound electrons pair up with each other, with one member of each pair in a spin up state and the other in the opposite, spin down state. Thus these spins cancel each other out, reducing the total magnetic dipole moment to zero in some atoms with an even number of electrons. In ferromagnetic elements such as iron, cobalt and nickel, an odd number of electrons leads to an unpaired electron and a net overall magnetic moment. The orbitals of neighboring atoms overlap and a lower energy state is achieved when the spins of unpaired electrons are aligned with each other, a spontaneous process known as an exchange interaction. When the magnetic moments of ferromagnetic atoms are lined up, the material can produce a measurable macroscopic field. Paramagnetic materials have atoms with magnetic moments that line up in random directions when no magnetic field is present, but the magnetic moments of the individual atoms line up in the presence of a field. The nucleus of an atom will have no spin when it has even numbers of both neutrons and protons, but for other cases of odd numbers, the nucleus may have a spin. Normally nuclei with spin are aligned in random directions because of thermal equilibrium, but for certain elements (such as xenon-129) it is possible to polarize a significant proportion of the nuclear spin states so that they are aligned in the same direction—a condition called hyperpolarization. This has important applications in magnetic resonance imaging. Energy levels The potential energy of an electron in an atom is negative relative to when the distance from the nucleus goes to infinity; its dependence on the electron's position reaches the minimum inside the nucleus, roughly in inverse proportion to the distance.
In the quantum-mechanical model, a bound electron can occupy only a set of states centered on the nucleus, and each state corresponds to a specific energy level; see time-independent Schrödinger equation for a theoretical explanation. An energy level can be measured by the amount of energy needed to unbind the electron from the atom, and is usually given in units of electronvolts (eV). The lowest energy state of a bound electron is called the ground state, i.e. stationary state, while an electron transition to a higher level results in an excited state. The electron's energy increases along with the principal quantum number n because the (average) distance to the nucleus increases. Dependence of the energy on ℓ is caused not by the electrostatic potential of the nucleus, but by interaction between electrons. For an electron to transition between two different states, e.g. ground state to first excited state, it must absorb or emit a photon at an energy matching the difference in the potential energy of those levels, according to the Niels Bohr model, which can be precisely calculated by the Schrödinger equation. Electrons jump between orbitals in a particle-like fashion. For example, if a single photon strikes the electrons, only a single electron changes states in response to the photon; see Electron properties. The energy of an emitted photon is proportional to its frequency, so these specific energy levels appear as distinct bands in the electromagnetic spectrum. Each element has a characteristic spectrum that can depend on the nuclear charge, subshells filled by electrons, the electromagnetic interactions between the electrons and other factors. When a continuous spectrum of energy is passed through a gas or plasma, some of the photons are absorbed by atoms, causing electrons to change their energy level. Those excited electrons that remain bound to their atom spontaneously emit this energy as a photon, traveling in a random direction, and so drop back to lower energy levels.
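The link between a level difference and a spectral line is E = h·c/λ. A sketch computing the wavelength of hydrogen's n=3 → n=2 transition from the standard Bohr-model level spacing (the 13.6 eV constant is the usual textbook value, not a figure from this passage):

```python
# Wavelength of the photon emitted when a hydrogen electron drops from
# n=3 to n=2, using E(n) = -13.6 eV / n**2 and E = h*c/lambda.
H = 6.62607e-34             # Planck constant, J s
C = 2.99792e8               # speed of light, m/s
EV = 1.602177e-19           # joules per electronvolt

e_photon = 13.6 * (1 / 2**2 - 1 / 3**2) * EV   # ~1.89 eV in joules
wavelength_nm = H * C / e_photon * 1e9
print(f"{wavelength_nm:.0f} nm")   # ~656 nm, the red Balmer-alpha line
```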
Thus the atoms behave like a filter that forms a series of dark absorption bands in the energy output. (An observer viewing the atoms from a view that does not include the continuous spectrum in the background, instead sees a series of emission lines from the photons emitted by the atoms.) Spectroscopic measurements of the strength and width of atomic spectral lines allow the composition and physical properties of a substance to be determined. Close examination of the spectral lines reveals that some display a fine structure splitting. This occurs because of spin-orbit coupling, which is an interaction between the spin and motion of the outermost electron. When an atom is in an external magnetic field, spectral lines become split into three or more components; a phenomenon called the Zeeman effect. This is caused by the interaction of the magnetic field with the magnetic moment of the atom and its electrons. Some atoms can have multiple electron configurations with the same energy level, which thus appear as a single spectral line. The interaction of the magnetic field with the atom shifts these electron configurations to slightly different energy levels, resulting in multiple spectral lines. The presence of an external electric field can cause a comparable splitting and shifting of spectral lines by modifying the electron energy levels, a phenomenon called the Stark effect. If a bound electron is in an excited state, an interacting photon with the proper energy can cause stimulated emission of a photon with a matching energy level. For this to occur, the electron must drop to a lower energy state that has an energy difference matching the energy of the interacting photon. The emitted photon and the interacting photon then move off in parallel and with matching phases. That is, the wave patterns of the two photons are synchronized. This physical property is used to make lasers, which can emit a coherent beam of light energy in a narrow frequency band. 
Valence and bonding behavior Valency is the combining power of an element. It is determined by the number of bonds it can form to other atoms or groups. The outermost electron shell of an atom in its uncombined state is known as the valence shell, and the electrons in that shell are called valence electrons. The number of valence electrons determines the bonding behavior with other atoms. Atoms tend to chemically react with each other in a manner that fills (or empties) their outer valence shells. For example, a transfer of a single electron between atoms is a useful approximation for bonds that form between atoms with one electron more than a filled shell, and others that are one electron short of a full shell, such as occurs in the compound sodium chloride and other chemical ionic salts. Many elements display multiple valences, or tendencies to share differing numbers of electrons in different compounds. Thus, chemical bonding between these elements takes many forms of electron-sharing that are more than simple electron transfers. Examples include the element carbon and the organic compounds. The chemical elements are often displayed in a periodic table that is laid out to display recurring chemical properties, and elements with the same number of valence electrons form a group that is aligned in the same column of the table. (The horizontal rows correspond to the filling of a quantum shell of electrons.) The elements at the far right of the table have their outer shell completely filled with electrons, which results in chemically inert elements known as the noble gases. States Quantities of atoms are found in different states of matter that depend on the physical conditions, such as temperature and pressure. By varying the conditions, materials can transition between solids, liquids, gases and plasmas. Within a state, a material can also exist in different allotropes. An example of this is solid carbon, which can exist as graphite or diamond. 
Gaseous allotropes exist as well, such as dioxygen and ozone. At temperatures close to absolute zero, atoms can form a Bose–Einstein condensate, at which point quantum mechanical effects, which are normally only observed at the atomic scale, become apparent on a macroscopic scale. This super-cooled collection of atoms then behaves as a single super atom, which may allow fundamental checks of quantum mechanical behavior. Identification While atoms are too small to be seen, devices such as the scanning tunneling microscope (STM) enable their visualization at the surfaces of solids. The microscope uses the quantum tunneling phenomenon, which allows particles to pass through a barrier that would be insurmountable in the classical perspective. Electrons tunnel through the vacuum between two biased electrodes, providing a tunneling current that is exponentially dependent on their separation. One electrode is a sharp tip ideally ending with a single atom. At each point of the scan of the surface the tip's height is adjusted so as to keep the tunneling current at a set value. How much the tip moves toward and away from the surface is interpreted as the height profile. For low bias, the microscope images the averaged electron orbitals across closely packed energy levels—the local density of the electronic states near the Fermi level. Because of the distances involved, both electrodes need to be extremely stable; only then can periodicities be observed that correspond to individual atoms. The method alone is not chemically specific, and cannot identify the atomic species present at the surface. Atoms can be easily identified by their mass. If an atom is ionized by removing one of its electrons, its trajectory when it passes through a magnetic field will bend. The radius of the arc into which a moving ion's trajectory is bent by the magnetic field is determined by the mass of the atom. The mass spectrometer uses this principle to measure the mass-to-charge ratio of ions. 
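The mass dependence of the bending radius can be sketched with the standard relation r = mv/(qB) for a charged particle in a uniform magnetic field; the chlorine isotope masses, speed, and field strength below are illustrative assumptions, not figures from the article.

```python
# Bending radius of an ion in a uniform magnetic field: r = m*v / (q*B).
# For ions with the same charge and speed, the radius scales with mass,
# which is how a mass spectrometer separates isotopes.

E_CHARGE = 1.602176634e-19  # elementary charge, C
AMU = 1.66053906660e-27     # atomic mass unit, kg

def bend_radius_m(mass_amu: float, speed_m_s: float, b_tesla: float,
                  charge_e: int = 1) -> float:
    """Radius of the circular path of an ion carrying charge_e elementary charges."""
    return (mass_amu * AMU * speed_m_s) / (charge_e * E_CHARGE * b_tesla)

# Two chlorine isotopes at the same speed in the same field (illustrative values):
r35 = bend_radius_m(35.0, 1.0e5, 0.5)
r37 = bend_radius_m(37.0, 1.0e5, 0.5)
print(r37 / r35)  # the ratio of radii equals the mass ratio 37/35
```

Because the two beams land at measurably different radii, the detector can record each isotope's intensity separately, which is the basis of the isotope-proportion measurement described next.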
If a sample contains multiple isotopes, the mass spectrometer can determine the proportion of each isotope in the sample by measuring the intensity of the different beams of ions. Techniques to vaporize atoms include inductively coupled plasma atomic emission spectroscopy and inductively coupled plasma mass spectrometry, both of which use a plasma to vaporize samples for analysis. The atom-probe tomograph has sub-nanometer resolution in 3-D and can chemically identify individual atoms using time-of-flight mass spectrometry. Electron emission techniques such as X-ray photoelectron spectroscopy (XPS) and Auger electron spectroscopy (AES), which measure the binding energies of the core electrons, are used to identify the atomic species present in a sample in a non-destructive way. With proper focusing both can be made area-specific. Another such method is electron energy loss spectroscopy (EELS), which measures the energy loss of an electron beam within a transmission electron microscope when it interacts with a portion of a sample. Spectra of excited states can be used to analyze the atomic composition of distant stars. Specific light wavelengths contained in the observed light from stars can be separated out and related to the quantized transitions in free gas atoms. These colors can be replicated using a gas-discharge lamp containing the same element. Helium was discovered in this way in the spectrum of the Sun 27 years before it was found on Earth. Origin and current state Baryonic matter forms about 4% of the total energy density of the observable Universe, with an average density of about 0.25 particles/m3 (mostly protons and electrons). Within a galaxy such as the Milky Way, particles have a much higher concentration, with the density of matter in the interstellar medium (ISM) ranging from 10^5 to 10^9 atoms/m3. The Sun is believed to be inside the Local Bubble, so the density in the solar neighborhood is only about 10^3 atoms/m3. 
Stars form from dense clouds in the ISM, and the evolutionary processes of stars result in the steady enrichment of the ISM with elements more massive than hydrogen and helium. Up to 95% of the Milky Way's baryonic matter is concentrated inside stars, where conditions are unfavorable for atomic matter. The total baryonic mass is about 10% of the mass of the galaxy; the remainder of the mass is an unknown dark matter. High temperature inside stars makes most "atoms" fully ionized, that is, separates all electrons from the nuclei. In stellar remnants, with the exception of their surface layers, an immense pressure makes electron shells impossible. Formation Electrons are thought to have existed in the Universe since the early stages of the Big Bang. Atomic nuclei form in nucleosynthesis reactions. In about three minutes Big Bang nucleosynthesis produced most of the helium, lithium, and deuterium in the Universe, and perhaps some of the beryllium and boron. The ubiquity and stability of atoms rely on their binding energy, which means that an atom has a lower energy than an unbound system of the nucleus and electrons. Where the temperature is much higher than the ionization potential, the matter exists in the form of plasma—a gas of positively charged ions (possibly, bare nuclei) and electrons. When the temperature drops below the ionization potential, atoms become statistically favorable. Atoms (complete with bound electrons) came to dominate over charged particles 380,000 years after the Big Bang—an epoch called recombination, when the expanding Universe cooled enough to allow electrons to become attached to nuclei. Since the Big Bang, which produced no carbon or heavier elements, atomic nuclei have been combined in stars through the process of nuclear fusion to produce more of the element helium, and (via the triple alpha process) the sequence of elements from carbon up to iron; see stellar nucleosynthesis for details. 
Isotopes such as lithium-6, as well as some beryllium and boron, are generated in space through cosmic ray spallation. This occurs when a high-energy proton strikes an atomic nucleus, causing large numbers of nucleons to be ejected. Elements heavier than iron were produced in supernovae and colliding neutron stars through the r-process, and in AGB stars through the s-process, both of which involve the capture of neutrons by atomic nuclei. Elements such as lead formed largely through the radioactive decay of heavier elements. Earth Most of the atoms that make up the Earth and its inhabitants were present in their current form in the nebula that collapsed out of a molecular cloud to form the Solar System. The rest are the result of radioactive decay, and their relative proportion can be used to determine the age of the Earth through radiometric dating. Most of the helium in the crust of the Earth (about 99% of the helium from gas wells, as shown by its lower abundance of helium-3) is a product of alpha decay. There are a few trace atoms on Earth that were not present at the beginning (i.e., not "primordial"), nor are they results of radioactive decay. Carbon-14 is continuously generated by cosmic rays in the atmosphere. Some atoms on Earth have been artificially generated either deliberately or as by-products of nuclear reactors or explosions. Of the transuranic elements—those with atomic numbers greater than 92—only plutonium and neptunium occur naturally on Earth. Transuranic elements have radioactive lifetimes shorter than the current age of the Earth and thus identifiable quantities of these elements have long since decayed, with the exception of traces of plutonium-244 possibly deposited by cosmic dust. Natural deposits of plutonium and neptunium are produced by neutron capture in uranium ore. The Earth contains approximately atoms. 
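The radiometric dating mentioned above reduces to solving the exponential decay law for elapsed time; the sketch below uses the standard uranium-238 half-life as an assumed illustrative value, not a figure from the article.

```python
import math

# Exponential decay: N(t) = N0 * (1/2)**(t / t_half), so the elapsed time
# follows from the fraction of the parent isotope that survives:
#   t = t_half * log2(N0 / N)

def age_from_fraction(surviving_fraction: float, half_life_years: float) -> float:
    """Elapsed time in years given the fraction of the parent isotope remaining."""
    return half_life_years * math.log2(1.0 / surviving_fraction)

U238_HALF_LIFE_YEARS = 4.468e9  # standard value, assumed here for illustration

# A rock retaining half of its original U-238 is exactly one half-life old:
print(age_from_fraction(0.5, U238_HALF_LIFE_YEARS))
```

In practice the surviving fraction is inferred from the measured ratio of parent isotope to its stable decay products, rather than from the parent alone.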
Although small numbers of independent atoms of noble gases exist, such as argon, neon, and helium, 99% of the atmosphere is bound in the form of molecules, including carbon dioxide and diatomic oxygen and nitrogen. At the surface of the Earth, an overwhelming majority of atoms combine to form various compounds, including water, salt, silicates and oxides. Atoms can also combine to create materials that do not consist of discrete molecules, including crystals and liquid or solid metals. This atomic matter forms networked arrangements that lack the particular type of small-scale interrupted order associated with molecular matter. Rare and theoretical forms Superheavy elements All nuclides with atomic numbers higher than 82 (lead) are known to be radioactive. No nuclide with an atomic number exceeding 92 (uranium) exists on Earth as a primordial nuclide, and heavier elements generally have shorter half-lives. Nevertheless, an "island of stability" encompassing relatively long-lived isotopes of superheavy elements with atomic numbers 110 to 114 might exist. Predictions for the half-life of the most stable nuclide on the island range from a few minutes to millions of years. In any case, superheavy elements (with Z > 104) would not exist due to increasing Coulomb repulsion (which results in spontaneous fission with increasingly short half-lives) in the absence of any stabilizing effects. Exotic matter Each particle of matter has a corresponding antimatter particle with the opposite electrical charge. Thus, the positron is a positively charged antielectron and the antiproton is a negatively charged equivalent of a proton. When a matter and corresponding antimatter particle meet, they annihilate each other. Because of this, along with an imbalance between the number of matter and antimatter particles, the latter are rare in the universe. The causes of this imbalance are not yet fully understood, although theories of baryogenesis may offer an explanation. 
As a result, no antimatter atoms have been discovered in nature. In 1996, the antimatter counterpart of the hydrogen atom (antihydrogen) was synthesized at the CERN laboratory in Geneva. Other exotic atoms have been created by replacing one of the protons, neutrons or electrons with other particles that have the same charge. For example, an electron can be replaced by a more massive muon, forming a muonic atom. These types of atoms can be used to test fundamental predictions of physics. See also Notes References Bibliography Further reading External links Chemistry Articles containing video clips
903
https://en.wikipedia.org/wiki/Arable%20land
Arable land
Arable land (from the Latin arabilis, "able to be ploughed") is any land capable of being ploughed and used to grow crops. Alternatively, for the purposes of agricultural statistics, the term often has a more precise definition. A more concise definition appearing in the Eurostat glossary similarly refers to actual rather than potential uses: "land worked (ploughed or tilled) regularly, generally under a system of crop rotation". Non-arable land can sometimes be converted to arable land through methods such as loosening and tilling (breaking up) of the soil, though in more extreme cases the degree of modification required to make certain types of land arable can become prohibitively expensive. In Britain, arable land has traditionally been contrasted with pasturable land such as heaths, which could be used for sheep-rearing but not as farmland. Arable land area According to the Food and Agriculture Organization of the United Nations, in 2013, the world's arable land amounted to 1.407 billion hectares, out of a total of 4.924 billion hectares of land used for agriculture. Arable land (hectares per person) Non-arable land Agricultural land that is not arable according to the FAO definition above includes: meadows and pastures (land used as pasture and grazed range, and those natural grasslands and sedge meadows that are used for hay production in some regions), and permanent cropland that produces crops from woody vegetation, e.g. orchard land, vineyards, coffee plantations, rubber plantations, and land producing nut trees. Other non-arable land includes land that is not suitable for any agricultural use. Land that is not arable, in the sense of lacking capability or suitability for cultivation for crop production, has one or more limitations: a lack of sufficient freshwater for irrigation, stoniness, steepness, adverse climate, excessive wetness with the impracticality of drainage, excessive salts, or a combination of these, among others. 
Although such limitations may preclude cultivation, and some will in some cases preclude any agricultural use, large areas unsuitable for cultivation may still be agriculturally productive. For example, United States NRCS statistics indicate that about 59 percent of US non-federal pasture and unforested rangeland is unsuitable for cultivation, yet such land has value for grazing of livestock. In British Columbia, Canada, 41 percent of the provincial Agricultural Land Reserve area is unsuitable for the production of cultivated crops, but is suitable for uncultivated production of forage usable by grazing livestock. Similar examples can be found in many rangeland areas elsewhere. Land incapable of being cultivated for the production of crops can sometimes be converted to arable land. New arable land produces more food and can reduce starvation. This outcome also makes a country more self-sufficient and politically independent, because food importation is reduced. Making non-arable land arable often involves digging new irrigation canals and new wells, aqueducts, desalination plants, planting trees for shade in the desert, hydroponics, fertilizer, nitrogen fertilizer, pesticides, reverse osmosis water processors, PET film insulation or other insulation against heat and cold, digging ditches and hills for protection against the wind, and installing greenhouses with internal light and heat for protection against the cold outside and to provide light in cloudy areas. Such modifications are often prohibitively expensive. An alternative is the seawater greenhouse, which desalinates water through evaporation and condensation using solar energy as the only energy input. This technology is optimized to grow crops on desert land close to the sea. The use of such artifices does not make the land truly arable: rock remains rock, and soil too shallow to be turned is still not considered tillable. Cultivation sustained by such artifice amounts, in effect, to open-air hydroponics without recycled water. 
Measures of this kind are limited in scope and duration, and tend to accumulate trace materials in the soil that, there or elsewhere, cause deoxygenation. The use of vast amounts of fertilizer may have unintended consequences for the environment by devastating rivers, waterways, and river endings through the accumulation of non-degradable toxins and nitrogen-bearing molecules that remove oxygen and cause non-aerobic processes to form. Examples of infertile non-arable land being turned into fertile arable land include: Aran Islands: These islands off the west coast of Ireland (not to be confused with the Isle of Arran in Scotland's Firth of Clyde) were unsuitable for arable farming because they were too rocky. The people covered the islands with a shallow layer of seaweed and sand from the ocean. Today, crops are grown there, even though the islands are still considered non-arable. Israel: The construction of desalination plants along Israel's coast allowed agriculture in some areas that were formerly desert. The desalination plants, which remove the salt from ocean water, have produced a new source of water for farming, drinking, and washing. Slash and burn agriculture uses nutrients in wood ash, but these are exhausted within a few years. Terra preta: fertile tropical soils produced by adding charcoal. Examples of fertile arable land being turned into infertile land include: Droughts such as the "Dust Bowl" of the Great Depression in the US turned farmland into desert. Each year, arable land is lost due to desertification and human-induced erosion. Improper irrigation of farmland can wick the sodium, calcium, and magnesium from the soil and water to the surface. This process steadily concentrates salt in the root zone, decreasing productivity for crops that are not salt-tolerant. Rainforest deforestation: The fertile tropical forests are converted into infertile desert land. 
For example, Madagascar's central highland plateau has become virtually totally barren (about ten percent of the country) as a result of slash-and-burn deforestation, an element of shifting cultivation practiced by many natives. See also Development easement Land use statistics by country List of environment topics Soil fertility References External links Article from Technorati on Shrinking Arable Farmland in the world Surface area of the Earth Agricultural land
904
https://en.wikipedia.org/wiki/Aluminium
Aluminium
Aluminium (or aluminum in American English and Canadian English) is a chemical element with the symbol Al and atomic number 13. Aluminium has a density lower than those of other common metals, at approximately one third that of steel. It has a great affinity towards oxygen, and forms a protective layer of oxide on the surface when exposed to air. Aluminium visually resembles silver, both in its color and in its great ability to reflect light. It is soft, non-magnetic and ductile. It has one stable isotope, 27Al; this isotope is very common, making aluminium the twelfth most common element in the Universe. The radioactivity of 26Al is used in radiodating. Chemically, aluminium is a post-transition metal in the boron group; as is common for the group, aluminium forms compounds primarily in the +3 oxidation state. The aluminium cation Al3+ is small and highly charged; as such, it is polarizing, and bonds aluminium forms tend towards covalency. The strong affinity towards oxygen leads to aluminium's common association with oxygen in nature in the form of oxides; for this reason, aluminium is found on Earth primarily in rocks in the crust, where it is the third most abundant element after oxygen and silicon, rather than in the mantle, and virtually never as the free metal. The discovery of aluminium was announced in 1825 by Danish physicist Hans Christian Ørsted. The first industrial production of aluminium was initiated by French chemist Henri Étienne Sainte-Claire Deville in 1856. Aluminium became much more available to the public with the Hall–Héroult process developed independently by French engineer Paul Héroult and American engineer Charles Martin Hall in 1886, and the mass production of aluminium led to its extensive use in industry and everyday life. In World Wars I and II, aluminium was a crucial strategic resource for aviation. In 1954, aluminium became the most produced non-ferrous metal, surpassing copper. 
In the 21st century, most aluminium was consumed in transportation, engineering, construction, and packaging in the United States, Western Europe, and Japan. Despite its prevalence in the environment, no living organism is known to use aluminium salts metabolically, but aluminium is well tolerated by plants and animals. Because of the abundance of these salts, the potential for a biological role for them is of continuing interest, and studies continue. Physical characteristics Isotopes Of aluminium isotopes, only 27Al is stable. This situation is common for elements with an odd atomic number. It is the only primordial aluminium isotope, i.e. the only one that has existed on Earth in its current form since the formation of the planet. Nearly all aluminium on Earth is present as this isotope, which makes it a mononuclidic element and means that its standard atomic weight is virtually the same as that of the isotope. This makes aluminium very useful in nuclear magnetic resonance (NMR), as its single stable isotope has a high NMR sensitivity. The standard atomic weight of aluminium is low in comparison with many other metals. All other isotopes of aluminium are radioactive. The most stable of these is 26Al: while it was present along with stable 27Al in the interstellar medium from which the Solar System formed, having been produced by stellar nucleosynthesis as well, its half-life is only 717,000 years and therefore a detectable amount has not survived since the formation of the planet. However, minute traces of 26Al are produced from argon in the atmosphere by spallation caused by cosmic ray protons. The ratio of 26Al to 10Be has been used for radiodating of geological processes over 105 to 106 year time scales, in particular transport, deposition, sediment storage, burial times, and erosion. 
Most meteorite scientists believe that the energy released by the decay of 26Al was responsible for the melting and differentiation of some asteroids after their formation 4.55 billion years ago. The remaining isotopes of aluminium, with mass numbers ranging from 22 to 43, all have half-lives well under an hour. Three metastable states are known, all with half-lives under a minute. Electron shell An aluminium atom has 13 electrons, arranged in an electron configuration of [Ne] 3s2 3p1, with three electrons beyond a stable noble gas configuration. Accordingly, the combined first three ionization energies of aluminium are far lower than the fourth ionization energy alone. Such an electron configuration is shared with the other well-characterized members of its group, boron, gallium, indium, and thallium; it is also expected for nihonium. Aluminium can relatively easily surrender its three outermost electrons in many chemical reactions (see below). The electronegativity of aluminium is 1.61 (Pauling scale). A free aluminium atom has a radius of 143 pm. With the three outermost electrons removed, the radius shrinks to 39 pm for a 4-coordinated atom or 53.5 pm for a 6-coordinated atom. At standard temperature and pressure, aluminium atoms (when not affected by atoms of other elements) form a face-centered cubic crystal system bound by metallic bonding provided by atoms' outermost electrons; hence aluminium (at these conditions) is a metal. This crystal system is shared by many other metals, such as lead and copper; the size of a unit cell of aluminium is comparable to that of those other metals. The system, however, is not shared by the other members of its group; boron has ionization energies too high to allow metallization, thallium has a hexagonal close-packed structure, and gallium and indium have unusual structures that are not close-packed like those of aluminium and thallium. 
The few electrons that are available for metallic bonding in aluminium metal are a probable cause for it being soft with a low melting point and low electrical resistivity. Bulk Aluminium metal has an appearance ranging from silvery white to dull gray, depending on the surface roughness. A fresh film of aluminium serves as a good reflector (approximately 92%) of visible light and an excellent reflector (as much as 98%) of medium and far infrared radiation. Aluminium mirrors are the most reflective of all metal mirrors for the near ultraviolet and far infrared light, and one of the most reflective in the visible spectrum, nearly on par with silver, and the two therefore look similar. Aluminium is also good at reflecting solar radiation, although prolonged exposure to sunlight in air adds wear to the surface of the metal; this may be prevented if aluminium is anodized, which adds a protective layer of oxide on the surface. The density of aluminium is 2.70 g/cm3, about 1/3 that of steel, much lower than other commonly encountered metals, making aluminium parts easily identifiable through their lightness. Aluminium's low density compared to most other metals arises from the fact that its nuclei are much lighter, while difference in the unit cell size does not compensate for this difference. The only lighter metals are the metals of groups 1 and 2, which apart from beryllium and magnesium are too reactive for structural use (and beryllium is very toxic). Aluminium is not as strong or stiff as steel, but the low density makes up for this in the aerospace industry and for many other applications where light weight and relatively high strength are crucial. Pure aluminium is quite soft and lacking in strength. In most applications various aluminium alloys are used instead because of their higher strength and hardness. The yield strength of pure aluminium is 7–11 MPa, while aluminium alloys have yield strengths ranging from 200 MPa to 600 MPa. 
Aluminium is ductile, with a percent elongation of 50-70%, and malleable, allowing it to be easily drawn and extruded. It is also easily machined and cast. Aluminium is an excellent thermal and electrical conductor, having around 60% of the conductivity of copper, both thermal and electrical, while having only 30% of copper's density. Aluminium is capable of superconductivity, with a superconducting critical temperature of 1.2 kelvin and a critical magnetic field of about 100 gauss (10 milliteslas). It is paramagnetic and thus essentially unaffected by static magnetic fields. The high electrical conductivity, however, means that it is strongly affected by alternating magnetic fields through the induction of eddy currents. Chemistry Aluminium combines characteristics of pre- and post-transition metals. Since it has few available electrons for metallic bonding, like its heavier group 13 congeners, it has the characteristic physical properties of a post-transition metal, with longer-than-expected interatomic distances. Furthermore, as Al3+ is a small and highly charged cation, it is strongly polarizing and bonding in aluminium compounds tends towards covalency; this behavior is similar to that of beryllium (Be2+), and the two display an example of a diagonal relationship. The underlying core under aluminium's valence shell is that of the preceding noble gas, whereas those of its heavier congeners gallium, indium, thallium, and nihonium also include a filled d-subshell and in some cases a filled f-subshell. Hence, the inner electrons of aluminium shield the valence electrons almost completely, unlike those of aluminium's heavier congeners. As such, aluminium is the most electropositive metal in its group, and its hydroxide is in fact more basic than that of gallium. 
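The conductor trade-off implied by the figures quoted earlier (roughly 60% of copper's conductivity at roughly 30% of its density) can be made concrete by comparing wires of equal length and equal resistance; the resistivity and density values below are standard handbook figures assumed for illustration, not taken from the article.

```python
# For wires of equal length and equal resistance, R = rho * L / A means the
# cross-section A scales with resistivity rho, so conductor mass scales with
# rho * density. Assumed handbook values (not from the text):

RHO_CU, RHO_AL = 1.68e-8, 2.65e-8   # electrical resistivity, ohm*m
DENS_CU, DENS_AL = 8960.0, 2700.0   # density, kg/m^3

def al_to_cu_mass_ratio() -> float:
    """Mass of an aluminium wire relative to a copper wire of the same
    length and resistance."""
    area_ratio = RHO_AL / RHO_CU             # aluminium needs a thicker wire...
    return area_ratio * (DENS_AL / DENS_CU)  # ...but still weighs less overall

print(round(al_to_cu_mass_ratio(), 2))  # about 0.48
```

By this estimate an aluminium conductor of equal resistance weighs roughly half as much as its copper counterpart, which is why aluminium dominates overhead power transmission lines.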
Aluminium also bears minor similarities to the metalloid boron in the same group: AlX3 compounds are valence isoelectronic to BX3 compounds (they have the same valence electronic structure), and both behave as Lewis acids and readily form adducts. Additionally, one of the main motifs of boron chemistry is regular icosahedral structures, and aluminium forms an important part of many icosahedral quasicrystal alloys, including the Al–Zn–Mg class. Aluminium has a high chemical affinity to oxygen, which renders it suitable for use as a reducing agent in the thermite reaction. A fine powder of aluminium metal reacts explosively on contact with liquid oxygen; under normal conditions, however, aluminium forms a thin oxide layer (~5 nm at room temperature) that protects the metal from further corrosion by oxygen, water, or dilute acid, a process termed passivation. Because of its general resistance to corrosion, aluminium is one of the few metals that retains silvery reflectance in finely powdered form, making it an important component of silver-colored paints. Aluminium is not attacked by oxidizing acids because of its passivation. This allows aluminium to be used to store reagents such as nitric acid, concentrated sulfuric acid, and some organic acids. In hot concentrated hydrochloric acid, aluminium reacts with water with evolution of hydrogen, and in aqueous sodium hydroxide or potassium hydroxide at room temperature to form aluminates—protective passivation under these conditions is negligible. Aqua regia also dissolves aluminium. Aluminium is corroded by dissolved chlorides, such as common sodium chloride, which is why household plumbing is never made from aluminium. The oxide layer on aluminium is also destroyed by contact with mercury due to amalgamation or with salts of some electropositive metals. 
As such, the strongest aluminium alloys are less corrosion-resistant due to galvanic reactions with alloyed copper, and aluminium's corrosion resistance is greatly reduced by aqueous salts, particularly in the presence of dissimilar metals. Aluminium reacts with most nonmetals upon heating, forming compounds such as aluminium nitride (AlN), aluminium sulfide (Al2S3), and the aluminium halides (AlX3). It also forms a wide range of intermetallic compounds involving metals from every group on the periodic table. Inorganic compounds The vast majority of compounds, including all aluminium-containing minerals and all commercially significant aluminium compounds, feature aluminium in the oxidation state 3+. The coordination number of such compounds varies, but generally Al3+ is either six- or four-coordinate. Almost all compounds of aluminium(III) are colorless. In aqueous solution, Al3+ exists as the hexaaqua cation [Al(H2O)6]3+, which has an approximate Ka of 10^-5. Such solutions are acidic as this cation can act as a proton donor and progressively hydrolyze until a precipitate of aluminium hydroxide, Al(OH)3, forms. This is useful for clarification of water, as the precipitate nucleates on suspended particles in the water, hence removing them. Increasing the pH even further leads to the hydroxide dissolving again as aluminate, [Al(H2O)2(OH)4]−, is formed. Aluminium hydroxide forms both salts and aluminates and dissolves in acid and alkali, as well as on fusion with acidic and basic oxides. This behavior of Al(OH)3 is termed amphoterism and is characteristic of weakly basic cations that form insoluble hydroxides and whose hydrated species can also donate their protons. One effect of this is that aluminium salts with weak acids are hydrolyzed in water to the aquated hydroxide and the corresponding nonmetal hydride: for example, aluminium sulfide yields hydrogen sulfide. 
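The acidity of the hexaaqua cation noted above (Ka of roughly 10^-5, i.e. pKa of about 5) can be turned into a quick estimate of how far the first hydrolysis step has proceeded at a given pH, using the standard Henderson–Hasselbalch relation; this is a one-step sketch, not a full speciation model of the progressive hydrolysis.

```python
# Henderson-Hasselbalch: [A-]/[HA] = 10**(pH - pKa). Applied to the first
# deprotonation of [Al(H2O)6]3+, it shows hydrolysis advancing as pH rises.

PKA = 5.0  # from Ka ~ 10^-5 for the hexaaqua cation, as quoted in the text

def deprotonated_fraction(ph: float) -> float:
    """Fraction of the aquo complex past its first deprotonation at this pH."""
    ratio = 10.0 ** (ph - PKA)
    return ratio / (1.0 + ratio)

# At pH = pKa the complex is half deprotonated; by pH 7 it is ~99% so,
# consistent with progressive hydrolysis toward Al(OH)3 as pH increases.
print(deprotonated_fraction(5.0), round(deprotonated_fraction(7.0), 2))
```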
However, some salts like aluminium carbonate exist in aqueous solution but are unstable as such; and only incomplete hydrolysis takes place for salts with strong acids, such as the halides, nitrate, and sulfate. For similar reasons, anhydrous aluminium salts cannot be made by heating their "hydrates": hydrated aluminium chloride is in fact not AlCl3·6H2O but [Al(H2O)6]Cl3, and the Al–O bonds are so strong that heating is not sufficient to break them and form Al–Cl bonds instead: 2 [Al(H2O)6]Cl3 → Al2O3 + 6 HCl + 9 H2O All four trihalides are well known. Unlike the structures of the three heavier trihalides, aluminium fluoride (AlF3) features six-coordinate aluminium, which explains its involatility and insolubility as well as high heat of formation. Each aluminium atom is surrounded by six fluorine atoms in a distorted octahedral arrangement, with each fluorine atom being shared between the corners of two octahedra. Such {AlF6} units also exist in complex fluorides such as cryolite, Na3AlF6. AlF3 melts at and is made by reaction of aluminium oxide with hydrogen fluoride gas at . With heavier halides, the coordination numbers are lower. The other trihalides are dimeric or polymeric with tetrahedral four-coordinate aluminium centers. Aluminium trichloride (AlCl3) has a layered polymeric structure below its melting point of but transforms on melting to Al2Cl6 dimers. At higher temperatures those increasingly dissociate into trigonal planar AlCl3 monomers similar to the structure of BCl3. Aluminium tribromide and aluminium triiodide form Al2X6 dimers in all three phases and hence do not show such significant changes of properties upon phase change. These materials are prepared by treating aluminium metal with the halogen. The aluminium trihalides form many addition compounds or complexes; their Lewis acidic nature makes them useful as catalysts for the Friedel–Crafts reactions. 
Aluminium trichloride has major industrial uses involving this reaction, such as in the manufacture of anthraquinones and styrene; it is also often used as the precursor for many other aluminium compounds and as a reagent for converting nonmetal fluorides into the corresponding chlorides (a transhalogenation reaction).

Aluminium forms one stable oxide with the chemical formula Al2O3, commonly called alumina. It can be found in nature in the mineral corundum, α-alumina; there is also a γ-alumina phase. Its crystalline form, corundum, is very hard (Mohs hardness 9), has a high melting point and very low volatility, is chemically inert, and is a good electrical insulator. It is often used in abrasives (such as toothpaste), as a refractory material, and in ceramics, as well as being the starting material for the electrolytic production of aluminium metal. Sapphire and ruby are impure corundum contaminated with trace amounts of other metals. The two main oxide-hydroxides, AlO(OH), are boehmite and diaspore. There are three main trihydroxides: bayerite, gibbsite, and nordstrandite, which differ in their crystalline structure (polymorphs). Many other intermediate and related structures are also known. Most are produced from ores by a variety of wet processes using acid and base. Heating the hydroxides leads to formation of corundum. These materials are of central importance to the production of aluminium and are themselves extremely useful. Some mixed oxide phases are also very useful, such as spinel (MgAl2O4), Na-β-alumina (NaAl11O17), and tricalcium aluminate (Ca3Al2O6, an important mineral phase in Portland cement).

The only stable chalcogenides under normal conditions are aluminium sulfide (Al2S3), selenide (Al2Se3), and telluride (Al2Te3). All three are prepared by direct reaction of their elements at high temperature and quickly hydrolyze completely in water to yield aluminium hydroxide and the respective hydrogen chalcogenide.
As aluminium is a small atom relative to these chalcogens, these have four-coordinate tetrahedral aluminium with various polymorphs having structures related to wurtzite, with two-thirds of the possible metal sites occupied either in an orderly (α) or random (β) fashion; the sulfide also has a γ form related to γ-alumina, and an unusual high-temperature hexagonal form where half the aluminium atoms have tetrahedral four-coordination and the other half have trigonal bipyramidal five-coordination. Four pnictides – aluminium nitride (AlN), aluminium phosphide (AlP), aluminium arsenide (AlAs), and aluminium antimonide (AlSb) – are known. They are all III-V semiconductors isoelectronic to silicon and germanium, all of which but AlN have the zinc blende structure. All four can be made by high-temperature (and possibly high-pressure) direct reaction of their component elements.

Aluminium alloys well with most other metals (with the exception of most alkali metals and group 13 metals), and over 150 intermetallics with other metals are known. Preparation involves heating the component metals together in fixed proportions, followed by gradual cooling and annealing. Bonding in them is predominantly metallic, and the crystal structure primarily depends on the efficiency of packing.

There are few compounds with lower oxidation states. A few aluminium(I) compounds exist: AlF, AlCl, AlBr, and AlI exist in the gaseous phase when the respective trihalide is heated with aluminium, and at cryogenic temperatures. A stable derivative of aluminium monoiodide is the cyclic adduct formed with triethylamine, Al4I4(NEt3)4. Al2O and Al2S also exist but are very unstable. Very simple aluminium(II) compounds are invoked or observed in the reactions of Al metal with oxidants. For example, aluminium monoxide, AlO, has been detected in the gas phase after explosion and in stellar absorption spectra.
More thoroughly investigated are compounds of the formula R4Al2, which contain an Al–Al bond and where R is a large organic ligand.

Organoaluminium compounds and related hydrides

A variety of compounds of empirical formula AlR3 and AlR1.5Cl1.5 exist. The aluminium trialkyls and triaryls are reactive, volatile, and colorless liquids or low-melting solids. They catch fire spontaneously in air and react with water, thus necessitating precautions when handling them. They often form dimers, unlike their boron analogues, but this tendency diminishes for branched-chain alkyls (e.g. iPr, iBu, Me3CCH2); for example, triisobutylaluminium exists as an equilibrium mixture of the monomer and dimer. These dimers, such as trimethylaluminium (Al2Me6), usually feature tetrahedral Al centers formed by dimerization with some alkyl group bridging between both aluminium atoms. They are hard acids and react readily with ligands, forming adducts. In industry, they are mostly used in alkene insertion reactions, as discovered by Karl Ziegler, most importantly in "growth reactions" that form long-chain unbranched primary alkenes and alcohols, and in the low-pressure polymerization of ethene and propene. There are also some heterocyclic and cluster organoaluminium compounds involving Al–N bonds.

The industrially most important aluminium hydride is lithium aluminium hydride (LiAlH4), which is used as a reducing agent in organic chemistry. It can be produced from lithium hydride and aluminium trichloride. The simplest hydride, aluminium hydride or alane, is not as important. It is a polymer with the formula (AlH3)n, in contrast to the corresponding boron hydride, which is a dimer with the formula (BH3)2.

Natural occurrence

Space

Aluminium's per-particle abundance in the Solar System is 3.15 ppm (parts per million). It is the twelfth most abundant of all elements and the third most abundant among the elements that have odd atomic numbers, after hydrogen and nitrogen.
The only stable isotope of aluminium, 27Al, is the eighteenth most abundant nucleus in the Universe. It is created almost entirely after fusion of carbon in massive stars that will later become Type II supernovas: this fusion creates 26Mg, which, upon capturing free protons and neutrons, becomes aluminium. Some smaller quantities of 27Al are created in hydrogen burning shells of evolved stars, where 26Mg can capture free protons. Essentially all aluminium now in existence is 27Al. 26Al was present in the early Solar System with an abundance of 0.005% relative to 27Al, but its half-life of 728,000 years is too short for any original nuclei to survive; 26Al is therefore extinct. Unlike for 27Al, hydrogen burning is the primary source of 26Al, with the nuclide emerging after a nucleus of 25Mg catches a free proton. However, the trace quantities of 26Al that do exist are the most common gamma ray emitter in the interstellar gas; if the original 26Al were still present, gamma ray maps of the Milky Way would be brighter.

Earth

Overall, the Earth is about 1.59% aluminium by mass (seventh in abundance by mass). Aluminium occurs in a greater proportion in the Earth's crust than in the Universe at large because aluminium easily forms the oxide and becomes bound into rocks and stays in the Earth's crust, while less reactive metals sink to the core. In the Earth's crust, aluminium is the most abundant metallic element (8.23% by mass) and the third most abundant of all elements (after oxygen and silicon). A large number of silicates in the Earth's crust contain aluminium. In contrast, the Earth's mantle is only 2.38% aluminium by mass. Aluminium also occurs in seawater at a concentration of 2 μg/kg. Because of its strong affinity for oxygen, aluminium is almost never found in the elemental state; instead it is found in oxides or silicates. Feldspars, the most common group of minerals in the Earth's crust, are aluminosilicates.
Aluminium also occurs in the minerals beryl, cryolite, garnet, spinel, and turquoise. Impurities in Al2O3, such as chromium and iron, yield the gemstones ruby and sapphire, respectively. Native aluminium metal is extremely rare and can only be found as a minor phase in low oxygen fugacity environments, such as the interiors of certain volcanoes. Native aluminium has been reported in cold seeps in the northeastern continental slope of the South China Sea. It is possible that these deposits resulted from bacterial reduction of tetrahydroxoaluminate Al(OH)4−. Although aluminium is a common and widespread element, not all aluminium minerals are economically viable sources of the metal. Almost all metallic aluminium is produced from the ore bauxite (AlOx(OH)3–2x). Bauxite occurs as a weathering product of bedrock low in iron and silica under tropical climatic conditions. In 2017, most bauxite was mined in Australia, China, Guinea, and India.

History

The history of aluminium has been shaped by the usage of alum. The first written record of alum, made by Greek historian Herodotus, dates back to the 5th century BCE. The ancients are known to have used alum as a dyeing mordant and for city defense. After the Crusades, alum, an indispensable good in the European fabric industry, was a subject of international commerce; it was imported to Europe from the eastern Mediterranean until the mid-15th century. The nature of alum remained unknown. Around 1530, Swiss physician Paracelsus suggested alum was a salt of an earth of alum. In 1595, German doctor and chemist Andreas Libavius experimentally confirmed this. In 1722, German chemist Friedrich Hoffmann announced his belief that the base of alum was a distinct earth. In 1754, German chemist Andreas Sigismund Marggraf synthesized alumina by boiling clay in sulfuric acid and subsequently adding potash. Attempts to produce aluminium metal date back to 1760.
The first successful attempt, however, was completed in 1824 by Danish physicist and chemist Hans Christian Ørsted. He reacted anhydrous aluminium chloride with potassium amalgam, yielding a lump of metal looking similar to tin. He presented his results and demonstrated a sample of the new metal in 1825. In 1827, German chemist Friedrich Wöhler repeated Ørsted's experiments but did not identify any aluminium. (The reason for this inconsistency was only discovered in 1921.) He conducted a similar experiment in the same year by mixing anhydrous aluminium chloride with potassium and produced a powder of aluminium. In 1845, he was able to produce small pieces of the metal and described some physical properties of this metal. For many years thereafter, Wöhler was credited as the discoverer of aluminium. As Wöhler's method could not yield great quantities of aluminium, the metal remained rare; its cost exceeded that of gold. The first industrial production of aluminium was established in 1856 by French chemist Henri Étienne Sainte-Claire Deville and companions. Deville had discovered that aluminium trichloride could be reduced by sodium, which was more convenient and less expensive than the potassium Wöhler had used. Even then, aluminium was still not of great purity, and samples of the produced metal differed in properties. The first industrial large-scale production method was independently developed in 1886 by French engineer Paul Héroult and American engineer Charles Martin Hall; it is now known as the Hall–Héroult process. The Hall–Héroult process converts alumina into the metal. Austrian chemist Carl Joseph Bayer discovered a way of purifying bauxite to yield alumina, now known as the Bayer process, in 1889. Modern production of aluminium metal is based on the Bayer and Hall–Héroult processes.
Prices of aluminium dropped, and aluminium became widely used in jewelry, everyday items, eyeglass frames, optical instruments, tableware, and foil in the 1890s and early 20th century. Aluminium's ability to form hard yet light alloys with other metals provided the metal with many uses at the time. During World War I, major governments demanded large shipments of aluminium for light, strong airframes; during World War II, demand by major governments for aviation was even higher. By the mid-20th century, aluminium had become a part of everyday life and an essential component of housewares. In 1954, production of aluminium surpassed that of copper, historically second in production only to iron, making it the most produced non-ferrous metal. During the mid-20th century, aluminium emerged as a civil engineering material, with building applications in both basic construction and interior finish work, and it was increasingly used in military engineering, for both airplanes and land armor vehicle engines. Earth's first artificial satellite, launched in 1957, consisted of two separate aluminium semi-spheres joined together, and all subsequent space vehicles have used aluminium to some extent. The aluminium can was invented in 1956 and first employed as a container for drinks in 1958. Throughout the 20th century, the production of aluminium rose rapidly: while the world production of aluminium in 1900 was 6,800 metric tons, the annual production first exceeded 100,000 metric tons in 1916; 1,000,000 tons in 1941; and 10,000,000 tons in 1971. In the 1970s, the increased demand for aluminium made it an exchange commodity; it entered the London Metal Exchange, the oldest industrial metal exchange in the world, in 1978. The output continued to grow: the annual production of aluminium exceeded 50,000,000 metric tons in 2013. The real price for aluminium declined from $14,000 per metric ton in 1900 to $2,340 in 1948 (in 1998 United States dollars).
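The production figures above (6,800 metric tons in 1900 rising past 50,000,000 metric tons by 2013) correspond to a remarkably steady compound growth rate, which is easy to verify; a small sketch (the helper name is illustrative):

```python
def cagr(start, end, years):
    """Compound annual growth rate between two production figures."""
    return (end / start) ** (1 / years) - 1

# World output: 6,800 t (1900) -> 50,000,000 t (2013), per the figures above.
growth = cagr(6_800, 50_000_000, 2013 - 1900)
print(f"{growth:.1%}")  # → 8.2%
```

Sustained over more than a century, roughly 8% annual growth is what turns a few thousand tons into tens of millions.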
Extraction and processing costs were lowered by technological progress and economies of scale. However, the need to exploit lower-grade, poorer-quality deposits and fast-increasing input costs (above all, energy) raised the net cost of aluminium; the real price began to grow in the 1970s with the rise of energy costs. Production moved from the industrialized countries to countries where production was cheaper. Production costs in the late 20th century changed because of advances in technology, lower energy prices, exchange rates of the United States dollar, and alumina prices. The BRIC countries' combined share in primary production and primary consumption grew substantially in the first decade of the 21st century. China is accumulating an especially large share of the world's production thanks to an abundance of resources, cheap energy, and governmental stimuli; it also increased its consumption share from 2% in 1972 to 40% in 2010. In the United States, Western Europe, and Japan, most aluminium was consumed in transportation, engineering, construction, and packaging. In 2021, prices for industrial metals such as aluminium soared to near-record levels as energy shortages in China drove up costs for electricity.

Etymology

The names aluminium and aluminum are derived from the word alumine, an obsolete term for alumina, a naturally occurring oxide of aluminium. Alumine was borrowed from French, which in turn derived it from alumen, the classical Latin name for alum, the mineral from which it was collected. The Latin word alumen stems from the Proto-Indo-European root *alu-, meaning "bitter" or "beer".

Coinage

British chemist Humphry Davy, who performed a number of experiments aimed at isolating the metal, is credited as the person who named the element.
The first name proposed for the metal to be isolated from alum was alumium, which Davy suggested in an 1808 article on his electrochemical research, published in Philosophical Transactions of the Royal Society. It appeared that the name was coined from the English word alum and the Latin suffix -ium; however, it was customary at the time that elements should have names originating in the Latin language, and as such, this name was not adopted universally. This name was criticized by contemporary chemists from France, Germany, and Sweden, who insisted the metal should be named for the oxide, alumina, from which it would be isolated. The English name alum does not directly reference the Latin language, whereas alumine/alumina easily references the Latin word alumen (upon declension, alumen changes to alumin-). One example was an essay in French by Swedish chemist Jöns Jacob Berzelius titled Essai sur la Nomenclature chimique, published in July 1811; in this essay, among other things, Berzelius used the name aluminium for the element that would be synthesized from alum. (Another article in the same journal issue also refers to the metal whose oxide forms the basis of sapphire as aluminium.) A January 1811 summary of one of Davy's lectures at the Royal Society mentioned the name aluminium as a possibility. The following year, Davy published a chemistry textbook in which he used the spelling aluminum. Both spellings have coexisted since; however, their usage has split by region: aluminum is the primary spelling in the United States and Canada, while aluminium is in the rest of the English-speaking world.

Spelling

In 1812, British scientist Thomas Young wrote an anonymous review of Davy's book, in which he proposed the name aluminium instead of aluminum, which he felt had a "less classical sound". This name did catch on: while the aluminum spelling was occasionally used in Britain, the American scientific language used aluminium from the start.
Most scientists throughout the world used aluminium in the 19th century, and it was entrenched in many other European languages, such as French, German, and Dutch. In 1828, American lexicographer Noah Webster used exclusively the aluminum spelling in his American Dictionary of the English Language. In the 1830s, the aluminum spelling started to gain usage in the United States; by the 1860s, it had become the more common spelling there outside science. In 1892, Hall used the aluminum spelling in his advertising handbill for his new electrolytic method of producing the metal, despite his constant use of the aluminium spelling in all the patents he filed between 1886 and 1903. It remains unknown whether this spelling was introduced by mistake or intentionally; however, Hall preferred aluminum since its introduction because it resembled platinum, the name of a prestigious metal. By 1890, both spellings had been common in the U.S. overall, the aluminium spelling being slightly more common; by 1895, the situation had reversed; by 1900, aluminum had become twice as common as aluminium; during the following decade, the aluminum spelling dominated American usage. In 1925, the American Chemical Society adopted this spelling.

The International Union of Pure and Applied Chemistry (IUPAC) adopted aluminium as the standard international name for the element in 1990. In 1993, they recognized aluminum as an acceptable variant; the most recent 2005 edition of the IUPAC nomenclature of inorganic chemistry acknowledges this spelling as well. IUPAC official publications use the aluminium spelling as primary but list both where appropriate.

Production and refinement

The production of aluminium starts with the extraction of bauxite rock from the ground. The bauxite is processed and transformed using the Bayer process into alumina, which is then processed using the Hall–Héroult process, resulting in the final aluminium metal.
Aluminium production is highly energy-consuming, and so producers tend to locate smelters in places where electric power is both plentiful and inexpensive. As of 2019, the world's largest smelters of aluminium are located in China, India, Russia, Canada, and the United Arab Emirates, while China is by far the top producer of aluminium, with a world share of fifty-five percent. According to the International Resource Panel's Metal Stocks in Society report, the global per capita stock of aluminium in use in society (i.e. in cars, buildings, electronics, etc.) is substantial, with much of it concentrated in more-developed countries rather than less-developed countries.

Bayer process

Bauxite is converted to alumina by the Bayer process. Bauxite is blended for uniform composition and then is ground. The resulting slurry is mixed with a hot solution of sodium hydroxide; the mixture is then treated in a digester vessel at a pressure well above atmospheric, dissolving the aluminium hydroxide in bauxite while converting impurities into relatively insoluble compounds. After this reaction, the slurry is at a temperature above its atmospheric boiling point. It is cooled by removing steam as pressure is reduced. The bauxite residue is separated from the solution and discarded. The solution, free of solids, is seeded with small crystals of aluminium hydroxide; this causes decomposition of the [Al(OH)4]− ions to aluminium hydroxide. After about half of the aluminium has precipitated, the mixture is sent to classifiers. Small crystals of aluminium hydroxide are collected to serve as seeding agents; coarse particles are converted to alumina by heating; the excess solution is removed by evaporation, (if needed) purified, and recycled.

Hall–Héroult process

The conversion of alumina to aluminium metal is achieved by the Hall–Héroult process.
In this energy-intensive process, a solution of alumina in a molten mixture of cryolite (Na3AlF6) with calcium fluoride is electrolyzed to produce metallic aluminium. The liquid aluminium metal sinks to the bottom of the solution and is tapped off, and usually cast into large blocks called aluminium billets for further processing. Anodes of the electrolysis cell are made of carbon—the most resistant material against fluoride corrosion—and are either baked in the process itself or prebaked. The former, also called Söderberg anodes, are less power-efficient, and fumes released during baking are costly to collect, which is why they are being replaced by prebaked anodes even though they save the power, energy, and labor needed to prebake the anodes. Carbon for anodes should preferably be pure so that neither aluminium nor the electrolyte is contaminated with ash. Despite carbon's resistance to corrosion, it is still consumed at a rate of 0.4–0.5 kg per kilogram of produced aluminium. Cathodes are made of anthracite; high purity for them is not required because impurities leach only very slowly. The cathode is consumed at a rate of 0.02–0.04 kg per kilogram of produced aluminium. A cell is usually terminated after 2–6 years following a failure of the cathode. The Hall–Héroult process produces aluminium with a purity of above 99%. Further purification can be done by the Hoopes process. This process involves the electrolysis of molten aluminium with a sodium, barium, and aluminium fluoride electrolyte. The resulting aluminium has a purity of 99.99%. Electric power represents about 20 to 40% of the cost of producing aluminium, depending on the location of the smelter. Aluminium production consumes roughly 5% of the electricity generated in the United States. Because of this, alternatives to the Hall–Héroult process have been researched, but none has turned out to be economically feasible.
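The carbon consumption rates quoted above (0.4–0.5 kg of anode carbon and 0.02–0.04 kg of cathode carbon per kilogram of aluminium) scale linearly with smelter output. A minimal sketch using the mid-points of those ranges (the function name and the 100,000 t/yr output are illustrative assumptions, not figures from the text):

```python
def carbon_consumption(al_tonnes, anode_rate=0.45, cathode_rate=0.03):
    """Tonnes of carbon consumed for a given aluminium output, using the
    mid-points of the per-kg rates quoted in the text (kg C per kg Al)."""
    return {
        "anode_carbon_t": al_tonnes * anode_rate,
        "cathode_carbon_t": al_tonnes * cathode_rate,
    }

# A hypothetical 100,000 t/yr smelter:
print(carbon_consumption(100_000))
# → {'anode_carbon_t': 45000.0, 'cathode_carbon_t': 3000.0}
```

The order-of-magnitude gap between the two figures reflects why anode replacement, not cathode wear, dominates a cell's carbon budget.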
Recycling

Recovery of the metal through recycling has become an important task of the aluminium industry. Recycling was a low-profile activity until the late 1960s, when the growing use of aluminium beverage cans brought it to public awareness. Recycling involves melting the scrap, a process that requires only 5% of the energy used to produce aluminium from ore, though a significant part (up to 15% of the input material) is lost as dross (ash-like oxide). An aluminium stack melter produces significantly less dross, with values reported below 1%. White dross from primary aluminium production and from secondary recycling operations still contains useful quantities of aluminium that can be extracted industrially. The process produces aluminium billets, together with a highly complex waste material. This waste is difficult to manage. It reacts with water, releasing a mixture of gases (including, among others, hydrogen, acetylene, and ammonia), which spontaneously ignites on contact with air; contact with damp air results in the release of copious quantities of ammonia gas. Despite these difficulties, the waste is used as a filler in asphalt and concrete.

Applications

Metal

The global production of aluminium in 2016 was 58.8 million metric tons. It exceeded that of any other metal except iron (1,231 million metric tons). Aluminium is almost always alloyed, which markedly improves its mechanical properties, especially when tempered. For example, the common aluminium foils and beverage cans are alloys of 92% to 99% aluminium. The main alloying agents are copper, zinc, magnesium, manganese, and silicon (e.g., duralumin), with the levels of other metals at a few percent by weight. Aluminium, both wrought and cast, has been alloyed with manganese, silicon, magnesium, copper, and zinc, among others. For example, the Kynal family of alloys was developed by the British chemical manufacturer Imperial Chemical Industries.
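The two recycling figures quoted above (remelting takes only about 5% of the primary-route energy, while up to 15% of the input is lost as dross) can be combined into the energy cost per kilogram of usable recycled metal. A small sketch, with illustrative naming:

```python
def recycled_energy_fraction(dross_loss=0.15, energy_ratio=0.05):
    """Energy per kg of usable recycled aluminium, as a fraction of the
    primary-route energy. To end up with 1 kg of metal, 1/(1 - dross_loss)
    kg of scrap must be melted."""
    return energy_ratio / (1 - dross_loss)

print(round(recycled_energy_fraction(), 4))  # → 0.0588
```

Even at the worst-case dross loss, recycling still needs under 6% of the primary-route energy per kilogram of recovered metal, which is why scrap recovery is so attractive to the industry.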
The major uses for aluminium metal are in:

- Transportation (automobiles, aircraft, trucks, railway cars, marine vessels, bicycles, spacecraft, etc.). Aluminium is used because of its low density;
- Packaging (cans, foil, frame, etc.). Aluminium is used because it is non-toxic (see below), non-adsorptive, and splinter-proof;
- Building and construction (windows, doors, siding, building wire, sheathing, roofing, etc.). Since steel is cheaper, aluminium is used when lightness, corrosion resistance, or engineering features are important;
- Electricity-related uses (conductor alloys, motors and generators, transformers, capacitors, etc.). Aluminium is used because it is relatively cheap, highly conductive, has adequate mechanical strength and low density, and resists corrosion;
- A wide range of household items, from cooking utensils to furniture. Low density, good appearance, ease of fabrication, and durability are the key factors of aluminium usage;
- Machinery and equipment (processing equipment, pipes, tools). Aluminium is used because of its corrosion resistance, non-pyrophoricity, and mechanical strength;
- Portable computer cases. Currently rarely used without alloying, but aluminium can be recycled, and clean aluminium has residual market value: for example, used beverage can (UBC) material was used to encase the electronic components of the MacBook Air laptop, the Pixel 5 smartphone, and the Summit Lite smartwatch.

Compounds

The great majority (about 90%) of aluminium oxide is converted to metallic aluminium. Being a very hard material (Mohs hardness 9), alumina is widely used as an abrasive; being extraordinarily chemically inert, it is useful in highly reactive environments such as high pressure sodium lamps. Aluminium oxide is commonly used as a catalyst for industrial processes; e.g. the Claus process to convert hydrogen sulfide to sulfur in refineries and to alkylate amines.
Many industrial catalysts are supported by alumina, meaning that the expensive catalyst material is dispersed over a surface of the inert alumina. Another principal use is as a drying agent or absorbent.

Several sulfates of aluminium have industrial and commercial application. Aluminium sulfate (in its hydrate form) is produced on an annual scale of several million metric tons. About two-thirds is consumed in water treatment. The next major application is in the manufacture of paper. It is also used as a mordant in dyeing, in pickling seeds, in deodorizing of mineral oils, in leather tanning, and in the production of other aluminium compounds. Two kinds of alum, ammonium alum and potassium alum, were formerly used as mordants and in leather tanning, but their use has significantly declined following the availability of high-purity aluminium sulfate. Anhydrous aluminium chloride is used as a catalyst in the chemical and petrochemical industries, the dyeing industry, and in the synthesis of various inorganic and organic compounds. Aluminium hydroxychlorides are used in purifying water, in the paper industry, and as antiperspirants. Sodium aluminate is used in treating water and as an accelerator of the solidification of cement.

Many aluminium compounds have niche applications, for example:

- Aluminium acetate in solution is used as an astringent.
- Aluminium phosphate is used in the manufacture of glass, ceramic, pulp and paper products, cosmetics, paints, varnishes, and dental cement.
- Aluminium hydroxide is used as an antacid and as a mordant; it is also used in water purification, the manufacture of glass and ceramics, and the waterproofing of fabrics.
- Lithium aluminium hydride is a powerful reducing agent used in organic chemistry.
- Organoaluminiums are used as Lewis acids and co-catalysts.
- Methylaluminoxane is a co-catalyst for Ziegler–Natta olefin polymerization to produce vinyl polymers such as polyethene.
- Aqueous aluminium ions (such as aqueous aluminium sulfate) are used against fish parasites such as Gyrodactylus salaris.
- In many vaccines, certain aluminium salts serve as an immune adjuvant (immune response booster) to allow the protein in the vaccine to achieve sufficient potency as an immune stimulant.

Biology

Despite its widespread occurrence in the Earth's crust, aluminium has no known function in biology. At pH 6–9 (relevant for most natural waters), aluminium precipitates out of water as the hydroxide and is hence not available; most elements behaving this way have no biological role or are toxic. Aluminium salts are nontoxic. Aluminium sulfate has an LD50 of 6207 mg/kg (oral, mouse), which corresponds to 435 grams for a 70 kg person, though acute lethality and long-term neurotoxicity differ in their implications. Andrási et al. discovered "significantly higher Aluminum" content in some brain regions when autopsies of subjects with Alzheimer's disease were compared to those of subjects without. Aluminium chelates with glyphosate.

Toxicity

Aluminium is classified as a non-carcinogen by the United States Department of Health and Human Services. A review published in 1988 said that there was little evidence that normal exposure to aluminium presents a risk to healthy adults, and a 2014 multi-element toxicology review was unable to find deleterious effects of aluminium consumed in amounts not greater than 40 mg/day per kg of body mass. Most aluminium consumed will leave the body in feces; most of the small part of it that enters the bloodstream will be excreted via urine; nevertheless some aluminium does pass the blood-brain barrier and is lodged preferentially in the brains of Alzheimer's patients. Evidence published in 1989 indicates that, for Alzheimer's patients, aluminium may act by electrostatically crosslinking proteins, thus down-regulating genes in the superior temporal gyrus.
Effects

Aluminium can, although rarely, cause vitamin D-resistant osteomalacia, erythropoietin-resistant microcytic anemia, and central nervous system alterations. People with kidney insufficiency are especially at risk. Chronic ingestion of hydrated aluminium silicates (for excess gastric acidity control) may result in aluminium binding to intestinal contents and increased elimination of other metals, such as iron or zinc; sufficiently high doses (>50 g/day) can cause anemia. During the 1988 Camelford water pollution incident, people in Camelford had their drinking water contaminated with aluminium sulfate for several weeks. A final report into the incident in 2013 concluded it was unlikely that this had caused long-term health problems. Aluminium has been suspected of being a possible cause of Alzheimer's disease, but research into this for over 40 years has found no good evidence of a causal effect. Aluminium increases estrogen-related gene expression in human breast cancer cells cultured in the laboratory. In very high doses, aluminium is associated with altered function of the blood–brain barrier. A small percentage of people have contact allergies to aluminium and experience itchy red rashes, headache, muscle pain, joint pain, poor memory, insomnia, depression, asthma, irritable bowel syndrome, or other symptoms upon contact with products containing aluminium. Exposure to powdered aluminium or aluminium welding fumes can cause pulmonary fibrosis. Fine aluminium powder can ignite or explode, posing another workplace hazard.

Exposure routes

Food is the main source of aluminium. Drinking water contains more aluminium than solid food; however, aluminium in food may be absorbed more than aluminium from water.
Major sources of human oral exposure to aluminium include food (due to its use in food additives, food and beverage packaging, and cooking utensils), drinking water (due to its use in municipal water treatment), and aluminium-containing medications (particularly antacid/antiulcer and buffered aspirin formulations). Dietary exposure in Europeans averages 0.2–1.5 mg/kg/week but can be as high as 2.3 mg/kg/week. Higher exposure levels of aluminium are mostly limited to miners, aluminium production workers, and dialysis patients. Consumption of antacids, antiperspirants, vaccines, and cosmetics provides possible routes of exposure. Consumption of acidic foods or liquids with aluminium enhances aluminium absorption, and maltol has been shown to increase the accumulation of aluminium in nerve and bone tissues. Treatment In cases of suspected sudden intake of a large amount of aluminium, the only treatment is deferoxamine mesylate, which may be given to help eliminate aluminium from the body by chelation. However, it should be applied with caution, as it reduces not only aluminium body levels but also those of other metals, such as copper or iron. Environmental effects High levels of aluminium occur near mining sites; small amounts of aluminium are released to the environment at coal-fired power plants or incinerators. Aluminium in the air is washed out by rain or normally settles, but small particles of aluminium remain in the air for a long time. Acidic precipitation is the main natural factor mobilizing aluminium from natural sources and the main reason for the environmental effects of aluminium; however, the main factor in the presence of aluminium in salt and fresh water is the industrial processes that also release aluminium into the air.
In water, aluminium acts as a toxic agent on gill-breathing animals such as fish when the water is acidic; aluminium may then precipitate on the gills, causing loss of plasma and hemolymph ions and leading to osmoregulatory failure. Organic complexes of aluminium may be easily absorbed and interfere with metabolism in mammals and birds, even though this rarely happens in practice. Aluminium is primary among the factors that reduce plant growth on acidic soils. Although it is generally harmless to plant growth in pH-neutral soils, in acid soils the concentration of toxic Al3+ cations increases and disturbs root growth and function. Wheat has developed a tolerance to aluminium, releasing organic compounds that bind to harmful aluminium cations. Sorghum is believed to have the same tolerance mechanism. Aluminium production presents its own environmental challenges at each step of the production process. The major challenge is greenhouse gas emissions. These gases result from the electricity consumption of the smelters and the byproducts of processing. The most potent of these gases are perfluorocarbons from the smelting process. Released sulfur dioxide is one of the primary precursors of acid rain. A Spanish scientific report from 2001 claimed that the fungus Geotrichum candidum consumes the aluminium in compact discs. Other reports all refer back to that report and there is no supporting original research. Better documented, the bacterium Pseudomonas aeruginosa and the fungus Cladosporium resinae are commonly detected in aircraft fuel tanks that use kerosene-based fuels (not avgas), and laboratory cultures can degrade aluminium. However, these life forms do not directly attack or consume the aluminium; rather, the metal is corroded by microbe waste products.
See also Aluminium granules Aluminium joining Aluminium–air battery Panel edge staining Quantum clock Notes References Bibliography Further reading Mimi Sheller, Aluminum Dreams: The Making of Light Modernity. Cambridge, Mass.: Massachusetts Institute of Technology Press, 2014. External links Aluminium at The Periodic Table of Videos (University of Nottingham) Toxic Substances Portal – Aluminum – from the Agency for Toxic Substances and Disease Registry, United States Department of Health and Human Services CDC – NIOSH Pocket Guide to Chemical Hazards – Aluminum World production of primary aluminium, by country Price history of aluminum, according to the IMF History of Aluminium – from the website of the International Aluminium Institute Emedicine – Aluminium Aluminium Electrical conductors Pyrotechnic fuels Airship technology Chemical elements Post-transition metals Reducing agents E-number additives Native element minerals Chemical elements with face-centered cubic structure
905
https://en.wikipedia.org/wiki/Advanced%20Chemistry
Advanced Chemistry
Advanced Chemistry is a German hip hop group from Heidelberg, a scenic city in Baden-Württemberg, South Germany. Advanced Chemistry was founded in 1987 by Toni L, Linguist, Gee-One, DJ Mike MD (Mike Dippon) and MC Torch. Each member of the group holds German citizenship, and Toni L, Linguist, and Torch are of Italian, Ghanaian, and Haitian backgrounds, respectively. Influenced by North American socially conscious rap and the Native Tongues movement, Advanced Chemistry is regarded as one of the main pioneers in German hip hop. They were one of the first groups to rap in German (although their name is in English). Furthermore, their songs tackled controversial social and political issues, distinguishing them from early German hip hop group "Die Fantastischen Vier" (The Fantastic Four), which had a more light-hearted, playful, party image. Career Advanced Chemistry frequently rapped about their lives and experiences as children of immigrants, exposing the marginalization experienced by most ethnic minorities in Germany, and the feelings of frustration and resentment that being denied a German identity can cause. The song "Fremd im eigenen Land" (Foreign in your own nation) was released by Advanced Chemistry in November 1992. The single became a staple in the German hip hop scene. It made a strong statement about the status of immigrants throughout Germany, as the group was composed of multi-national and multi-racial members. The video shows several members brandishing their German passports as a demonstration of their German citizenship to skeptical and unaccepting 'ethnic' Germans. This idea of national identity is important, as many rap artists in Germany have been of foreign origin. These so-called Gastarbeiter (guest workers) children saw breakdance, graffiti, rap music, and hip hop culture as a means of expressing themselves.
Since the release of "Fremd im eigenen Land", many other German-language rappers have also tried to confront anti-immigrant ideas and develop themes of citizenship. However, though many ethnic minority youth in Germany find these German identity themes appealing, others view the desire of immigrants to be seen as German negatively, and they have actively sought to revive and recreate concepts of identity in connection to traditional ethnic origins. Advanced Chemistry helped to found the German chapter of the Zulu Nation. The rivalry between Advanced Chemistry and Die Fantastischen Vier has served to highlight a dichotomy in the routes that hip hop has taken in becoming a part of the German soundscape. While Die Fantastischen Vier may be said to view hip hop primarily as an aesthetic art form, Advanced Chemistry understand hip hop as being inextricably linked to the social and political circumstances under which it is created. For Advanced Chemistry, hip hop is a “vehicle of general human emancipation”. In their undertaking of social and political issues, the band introduced the term "Afro-German" into the context of German hip hop, and the theme of race is highlighted in much of their music. With the release of the single “Fremd im eigenen Land”, Advanced Chemistry separated itself from the rest of the rap being produced in Germany. This single was the first of its kind to go beyond simply imitating US rap and address the current issues of the time. "Fremd im eigenen Land", which translates to “foreign in my own country”, dealt with the widespread racism that non-white German citizens faced. This change from simple imitation to political commentary was the start of German identification with rap. The sound of “Fremd im eigenen Land” was influenced by the 'wall of noise' created by Public Enemy's producers, The Bomb Squad.
After the reunification of Germany, an abundance of anti-immigrant sentiment emerged, as well as attacks on the homes of refugees in the early 1990s. Advanced Chemistry came to prominence in the wake of these actions because of their pro-multicultural society stance in their music. Advanced Chemistry's attitudes revolve around their attempts to create a distinct "Germanness" in hip hop, as opposed to imitating American hip hop as other groups had done. Torch has said, "What the Americans do is exotic for us because we don't live like they do. What they do seems to be more interesting and newer. But not for me. For me it's more exciting to experience my fellow Germans in new contexts...For me, it's interesting to see what the kids try to do that's different from what I know." Advanced Chemistry were the first to use the term "Afro-German" in a hip hop context. This was part of the pro-immigrant political message they sent via their music. While Advanced Chemistry's use of the German language in their rap allows them to make claims to authenticity and true German heritage, bolstering pro-immigration sentiment, their style can also be problematic for immigrant notions of any real ethnic roots. Indeed, part of the Turkish ethnic minority of Frankfurt views Advanced Chemistry's appeal to the German image as a "symbolic betrayal of the right of ethnic minorities to 'roots' or to any expression of cultural heritage." In this sense, their rap represents a complex social discourse internal to the German soundscape in which they attempt to negotiate immigrant assimilation into a xenophobic German culture with the maintenance of their own separate cultural traditions. It is quite possibly the feelings of alienation from the pure-blooded German demographic that drive Advanced Chemistry to attack nationalistic ideologies by asserting their "Germanness" as a group composed primarily of ethnic others. 
The response to this pseudo-German authenticity can be seen in what Andy Bennett refers to as "alternative forms of local hip hop culture which actively seek to rediscover and, in many cases, reconstruct notions of identity tied to cultural roots." These alternative local hip hop cultures include oriental hip hop, the members of which cling to their Turkish heritage and are confused by Advanced Chemistry's elicitation of a German identity politics to which they technically do not belong. This cultural binary illustrates that rap has taken different routes in Germany and that, even among an already isolated immigrant population, there is still disunity and, especially, disagreement on the relative importance of assimilation versus cultural defiance. According to German hip hop enthusiast 9@home, Advanced Chemistry is part of a "hip-hop movement [which] took a clear stance for the minorities and against the [marginalization] of immigrants who...might be German on paper, but not in real life," which speaks to the group's hope of actually being recognized as German citizens and not foreigners, despite their various other ethnic and cultural ties. Influences Advanced Chemistry's work was rooted in German history and the country's specific political realities. However, they also drew inspiration from African-American hip-hop acts like A Tribe Called Quest and Public Enemy, who had helped bring a soulful sound and political consciousness to American hip-hop. One member, Torch, later explicitly listed his influences on his solo song "Als" ("When I Was in School"), rapping that his quickly discovered favorite subject was poetry, which awakens the intellect, or politics with Chuck D, and that he would never forget the lyrics of Public Enemy. Torch goes on to list other American rappers like Biz Markie, Big Daddy Kane and Dr. Dre as influences.
Discography 1992 - "Fremd im eigenen Land" (12"/MCD, MZEE) 1993 - "Welcher Pfad führt zur Geschichte" (12"/MCD, MZEE) 1994 - "Operation § 3" (12"/MCD) 1994 - "Dir fehlt der Funk!" (12"/MCD) 1995 - Advanced Chemistry (2xLP/CD) External links Official Website of MC Torch Website of Toni L Official Website of Linguist Official Website DJ Mike MD (Mike Dippon) Website of 360° Records Bibliography El-Tayeb, Fatima. "'If You Cannot Pronounce My Name, You Can Just Call Me Pride': Afro-German Activism, Gender, and Hip Hop." Gender & History 15/3 (2003): 459–485. Felbert, Oliver von. "Die Unbestechlichen." Spex (March 1993): 50–53. Weheliye, Alexander G. Phonographies: Grooves in Sonic Afro-Modernity. Duke University Press, 2005. References German hip hop groups
909
https://en.wikipedia.org/wiki/Anglican%20Communion
Anglican Communion
The Anglican Communion is the third largest Christian communion after the Roman Catholic and Eastern Orthodox churches. Founded in 1867 in London, the communion has more than 85 million members within the Church of England and other autocephalous national and regional churches in full communion. The traditional origins of Anglican doctrine are summarised in the Thirty-nine Articles (1571). The Archbishop of Canterbury (currently Justin Welby) in England acts as a focus of unity, recognised as primus inter pares ("first among equals"), but does not exercise authority in Anglican provinces outside of the Church of England. Most, but not all, member churches of the communion are the historic national or regional Anglican churches. The Anglican Communion was officially and formally organised and recognised as such at the Lambeth Conference in 1867 in London under the leadership of Charles Longley, Archbishop of Canterbury. The churches of the Anglican Communion consider themselves to be part of the one, holy, catholic and apostolic church, and to be both catholic and reformed. As in the Church of England itself, the Anglican Communion includes the broad spectrum of beliefs and liturgical practices found in the Evangelical, Central and Anglo-Catholic traditions of Anglicanism. Each national or regional church is fully independent, retaining its own legislative process and episcopal polity under the leadership of local primates. For some adherents, Anglicanism represents a non-papal Catholicism, for others a form of Protestantism though without a guiding figure such as Luther, Knox, Calvin, Zwingli or Wesley, or for yet others a combination of the two. Most of its members live in the Anglosphere of former British territories. Full participation in the sacramental life of each church is available to all communicant members.
Because of their historical link to England (ecclesia anglicana means "English church"), some of the member churches are known as "Anglican", such as the Anglican Church of Canada. Others, for example the Church of Ireland and the Scottish and American Episcopal churches, have official names that do not include "Anglican". Additionally, some churches which use the name "Anglican" are not part of the communion. Ecclesiology, polity and ethos The Anglican Communion has no official legal existence nor any governing structure which might exercise authority over the member churches. There is an Anglican Communion Office in London, under the aegis of the Archbishop of Canterbury, but it only serves in a supporting and organisational role. The communion is held together by a shared history, expressed in its ecclesiology, polity and ethos, and also by participation in international consultative bodies. Three elements have been important in holding the communion together: first, the shared ecclesial structure of the component churches, manifested in an episcopal polity maintained through the apostolic succession of bishops and synodical government; second, the principle of belief expressed in worship, investing importance in approved prayer books and their rubrics; and third, the historical documents and the writings of early Anglican divines that have influenced the ethos of the communion. Originally, the Church of England was self-contained and relied for its unity and identity on its own history, its traditional legal and episcopal structure, and its status as an established church of the state. As such, Anglicanism was from the outset a movement with an explicitly episcopal polity, a characteristic that has been vital in maintaining the unity of the communion by conveying the episcopate's role in manifesting visible catholicity and ecumenism. 
Early in its development following the English Reformation, Anglicanism developed a vernacular prayer book, called the Book of Common Prayer. Unlike other traditions, Anglicanism has never been governed by a magisterium nor by appeal to one founding theologian, nor by an extra-credal summary of doctrine (such as the Westminster Confession of the Presbyterian churches). Instead, Anglicans have typically appealed to the Book of Common Prayer (1662) and its offshoots as a guide to Anglican theology and practice. This has had the effect of inculcating in Anglican identity and confession the principle of lex orandi, lex credendi ("the law of praying [is] the law of believing"). Protracted conflict through the 17th century, with radical Protestants on the one hand and Roman Catholics who recognised the primacy of the Pope on the other, resulted in an association of churches that was both deliberately vague about doctrinal principles, yet bold in developing parameters of acceptable deviation. These parameters were most clearly articulated in the various rubrics of the successive prayer books, as well as the Thirty-Nine Articles of Religion (1563). These articles have historically shaped and continue to direct the ethos of the communion, an ethos reinforced by its interpretation and expansion by such influential early theologians as Richard Hooker, Lancelot Andrewes and John Cosin. With the expansion of the British Empire and the growth of Anglicanism outside Great Britain and Ireland, the communion sought to establish new vehicles of unity. The first major expressions of this were the Lambeth Conferences of the communion's bishops, first convened in 1867 by Charles Longley, the Archbishop of Canterbury. From the beginning, these were not intended to displace the autonomy of the emerging provinces of the communion, but to "discuss matters of practical interest, and pronounce what we deem expedient in resolutions which may serve as safe guides to future action".
Chicago Lambeth Quadrilateral One of the enduringly influential early resolutions of the conference was the so-called Chicago-Lambeth Quadrilateral of 1888. Its intent was to provide the basis for discussions of reunion with the Roman Catholic and Orthodox churches, but it had the ancillary effect of establishing parameters of Anglican identity. It establishes four principles with these words: Instruments of communion As mentioned above, the Anglican Communion has no international juridical organisation. The Archbishop of Canterbury's role is strictly symbolic and unifying and the communion's three international bodies are consultative and collaborative, their resolutions having no legal effect on the autonomous provinces of the communion. Taken together, however, the four do function as "instruments of communion", since all churches of the communion participate in them. In order of antiquity, they are: The Archbishop of Canterbury functions as the spiritual head of the communion. The archbishop is the focus of unity, since no church claims membership in the Communion without being in communion with him. The present archbishop is Justin Welby. The Lambeth Conference (first held in 1867) is the oldest international consultation. It is a forum for bishops of the communion to reinforce unity and collegiality through manifesting the episcopate, to discuss matters of mutual concern, and to pass resolutions intended to act as guideposts. It is held roughly every 10 years and invitation is by the Archbishop of Canterbury. The Anglican Consultative Council (first met in 1971) was created by a 1968 Lambeth Conference resolution, and meets usually at three-yearly intervals. The council consists of representative bishops, other clergy and laity chosen by the 38 provinces. The body has a permanent secretariat, the Anglican Communion Office, of which the Archbishop of Canterbury is president. 
The Primates' Meeting (first met in 1979) is the most recent manifestation of international consultation and deliberation, having been first convened by Archbishop Donald Coggan as a forum for "leisurely thought, prayer and deep consultation". Since there is no binding authority in the Anglican Communion, these international bodies are a vehicle for consultation and persuasion. In recent times, persuasion has tipped over into debates over conformity in certain areas of doctrine, discipline, worship and ethics. The most notable example has been the objection of many provinces of the communion (particularly in Africa and Asia) to the changing acceptance of LGBTQ+ individuals in the North American churches (e.g., by blessing same-sex unions and by ordaining and consecrating people in same-sex relationships) and to the process by which changes were undertaken. (See Anglican realignment.) Those who objected condemned these actions as unscriptural, unilateral, and without the agreement of the communion prior to these steps being taken. In response, the American Episcopal Church and the Anglican Church of Canada answered that the actions had been undertaken after lengthy scriptural and theological reflection, legally in accordance with their own canons and constitutions and after extensive consultation with the provinces of the communion. The Primates' Meeting voted to request the two churches to withdraw their delegates from the 2005 meeting of the Anglican Consultative Council. Canada and the United States decided to attend the meeting but without exercising their right to vote. They have not been expelled or suspended, since there is no mechanism in this voluntary association to suspend or expel an independent province of the communion. Since membership is based on a province's communion with Canterbury, expulsion would require the Archbishop of Canterbury's refusal to be in communion with the affected jurisdictions.
In line with the suggestion of the Windsor Report, Rowan Williams (the then Archbishop of Canterbury) established a working group to examine the feasibility of an Anglican covenant which would articulate the conditions for communion in some fashion. Organisation Provinces The Anglican communion consists of forty-one autonomous provinces each with its own primate and governing structure. These provinces may take the form of national churches (such as in Canada, Uganda, or Japan) or a collection of nations (such as the West Indies, Central Africa, or Southeast Asia). Extraprovincial churches In addition to the forty-one provinces, there are five extraprovincial churches under the metropolitical authority of the Archbishop of Canterbury. Former provinces New provinces in formation At its Autumn 2020 meeting the provincial standing committee of the Church of Southern Africa approved a plan to form the dioceses in Mozambique and Angola into a separate autonomous province of the Anglican Communion, to be named the Anglican Church of Mozambique and Angola (IAMA). The plans were also outlined to the Mozambique and Angola Anglican Association (MANNA) at its September 2020 annual general meeting. The new province is Portuguese-speaking, and consists of twelve dioceses (four in Angola, and eight in Mozambique). The twelve proposed new dioceses have been defined and named, and each has a "Task Force Committee" working towards its establishment as a diocese. The plan received the consent of the bishops and diocesan synods of all four existing dioceses in the two nations, and was submitted to the Anglican Consultative Council. In September 2020 the Archbishop of Canterbury announced that he had asked the bishops of the Church of Ceylon to begin planning for the formation of an autonomous province of Ceylon, so as to end his current position as Metropolitan of the two dioceses in that country. 
Churches in full communion In addition to other member churches, the churches of the Anglican Communion are in full communion with the Old Catholic churches of the Union of Utrecht and the Scandinavian Lutheran churches of the Porvoo Communion in Europe, the India-based Malankara Mar Thoma Syrian and Malabar Independent Syrian churches and the Philippine Independent Church, also known as the Aglipayan Church. History The Anglican Communion traces much of its growth to the older mission organisations of the Church of England such as the Society for Promoting Christian Knowledge (founded 1698), the Society for the Propagation of the Gospel in Foreign Parts (founded 1701) and the Church Missionary Society (founded 1799). The Church of England (which until the 20th century included the Church in Wales) initially separated from the Roman Catholic Church in 1534 in the reign of Henry VIII, reunited in 1555 under Mary I and then separated again in 1570 under Elizabeth I (the Roman Catholic Church excommunicated Elizabeth I in 1570 in response to the Act of Supremacy 1559). The Church of England has always thought of itself not as a new foundation but rather as a reformed continuation of the ancient "English Church" (Ecclesia Anglicana) and a reassertion of that church's rights. As such it was a distinctly national phenomenon. The Church of Scotland was formed as a separate church from the Roman Catholic Church as a result of the Scottish Reformation in 1560; the Scottish Episcopal Church was later formed as a distinct church, beginning in 1582 in the reign of James VI, over disagreements about the role of bishops. The oldest-surviving Anglican church building outside the British Isles (Britain and Ireland) is St Peter's Church in St. George's, Bermuda, established in 1612 (though the actual building had to be rebuilt several times over the following century). This is also the oldest surviving non-Roman Catholic church in the New World.
It remained part of the Church of England until 1978, when the Anglican Church of Bermuda separated. The Church of England was the established church not only in England, but in its trans-oceanic colonies. Thus the only member churches of the present Anglican Communion existing by the mid-18th century were the Church of England, its closely linked sister church the Church of Ireland (which also separated from Roman Catholicism under Henry VIII) and the Scottish Episcopal Church, which for parts of the 17th and 18th centuries was partially underground (it was suspected of Jacobite sympathies). Global spread of Anglicanism The enormous expansion in the 18th and 19th centuries of the British Empire brought Anglicanism along with it. At first all these colonial churches were under the jurisdiction of the bishop of London. After the American Revolution, the parishes in the newly independent country found it necessary to break formally from a church whose supreme governor was (and remains) the British monarch. Thus they formed their own dioceses and national church, the Episcopal Church in the United States of America, in a mostly amicable separation. At about the same time, in the colonies which remained linked to the crown, the Church of England began to appoint colonial bishops. In 1787, a bishop of Nova Scotia was appointed with a jurisdiction over all of British North America; in time several more colleagues were appointed to other cities in present-day Canada. In 1814, a bishop of Calcutta was appointed; in 1824 the first bishop was sent to the West Indies and in 1836 to Australia. By 1840 there were still only ten colonial bishops for the Church of England; but even this small beginning greatly facilitated the growth of Anglicanism around the world. In 1841, a "Colonial Bishoprics Council" was set up and soon many more dioceses were created. In time, it became natural to group these into provinces and a metropolitan bishop was appointed for each province.
Although it had at first been somewhat established in many colonies, in 1861 it was ruled that, except where specifically established, the Church of England had just the same legal position as any other church. Thus a colonial bishop and colonial diocese was by nature quite a different thing from their counterparts back home. In time bishops came to be appointed locally rather than from England and eventually national synods began to pass ecclesiastical legislation independent of England. A crucial step in the development of the modern communion was the idea of the Lambeth Conferences (discussed above). These conferences demonstrated that the bishops of disparate churches could manifest the unity of the church in their episcopal collegiality despite the absence of universal legal ties. Some bishops were initially reluctant to attend, fearing that the meeting would declare itself a council with power to legislate for the church; but it agreed to pass only advisory resolutions. These Lambeth Conferences have been held roughly every 10 years since 1878 (the second such conference) and remain the most visible coming-together of the whole Communion. The Lambeth Conference of 1998 included what has been seen by Philip Jenkins and others as a "watershed in global Christianity". The 1998 conference considered the issue of the theology of same-sex attraction in relation to human sexuality. At this conference, for the first time in centuries, the Christians of developing regions, especially Africa, Asia, and Latin America, prevailed over the bishops of more prosperous countries (many from the US, Canada, and the UK) who supported a redefinition of Anglican doctrine. Seen in this light, 1998 is a date that marked the shift from a West-dominated Christianity to one wherein the growing churches of the two-thirds world are predominant, but the gay bishop controversy in subsequent years led to the reassertion of Western dominance, this time of the liberal variety.
Ecumenical relations Historic episcopate The churches of the Anglican Communion have traditionally held that ordination in the historic episcopate is a core element in the validity of clerical ordinations. The Roman Catholic Church, however, does not recognise Anglican orders (see Apostolicae curae). Some Eastern Orthodox churches have issued statements to the effect that Anglican orders could be accepted, yet have still reordained former Anglican clergy; other Eastern Orthodox churches have rejected Anglican orders altogether. Orthodox bishop Kallistos Ware explains this apparent discrepancy as follows: Controversies One effect of the Communion's dispersed authority has been the conflicts arising over divergent practices and doctrines in parts of the Communion. Disputes that had been confined to the Church of England could be dealt with legislatively in that realm, but as the Communion spread out into new nations and disparate cultures, such controversies multiplied and intensified. These controversies have generally been of two types: liturgical and social. Anglo-Catholicism The first such controversy of note concerned that of the growing influence of the Catholic Revival manifested in the Tractarian and so-called Ritualist controversies of the late nineteenth and early twentieth centuries. This controversy produced the Free Church of England and, in the United States and Canada, the Reformed Episcopal Church. Social changes Later, rapid social change and the dissipation of British cultural hegemony over its former colonies contributed to disputes over the role of women, the parameters of marriage and divorce, and the practices of contraception and abortion. In the late 1970s, the Continuing Anglican movement produced a number of new church bodies in opposition to women's ordination, prayer book changes, and the new understandings concerning marriage. 
Same-sex unions and LGBT clergy More recently, disagreements over homosexuality have strained the unity of the communion as well as its relationships with other Christian denominations, leading to another round of withdrawals from the Anglican Communion. Some churches were founded outside the Anglican Communion in the late 20th and early 21st centuries, largely in opposition to the ordination of openly homosexual bishops and other clergy, and are usually referred to as belonging to the Anglican realignment movement, or else as "orthodox" Anglicans. These disagreements were especially noted when the Episcopal Church (US) consecrated an openly gay bishop in a same-sex relationship, Gene Robinson, in 2003, which led some Episcopalians to defect and found the Anglican Church in North America (ACNA); the debate re-ignited in 2005 when the Church of England agreed to allow clergy to enter into same-sex civil partnerships, as long as they remained celibate. The Church of Nigeria opposed the Episcopal Church's decision as well as the Church of England's approval for celibate civil partnerships. "The more liberal provinces that are open to changing Church doctrine on marriage in order to allow for same-sex unions include Brazil, Canada, New Zealand, Scotland, South India, South Africa, the US and Wales". The Church of England does not allow same-gender marriages or blessing rites, but does permit special prayer services for same-sex couples following a civil marriage or partnership. The Church of England also permits clergy to enter into same-sex civil partnerships. The Church of Ireland has no official position on civil unions, and one senior cleric has entered into a same-sex civil partnership. The Church of Ireland has recognised that it will "treat civil partners the same as spouses". The Anglican Church of Australia does not have an official position on homosexuality. 
The conservative Anglican churches, encouraging the realignment movement, are more concentrated in the Global South. For example, the Anglican Church of Kenya, the Church of Nigeria and the Church of Uganda have opposed homosexuality. GAFCON, a fellowship of conservative Anglican churches, has appointed "missionary bishops" in response to the disagreements with the perceived liberalisation in the Anglican churches in North America and Europe. Debates about social theology and ethics have occurred at the same time as debates on prayer book revision and the acceptable grounds for achieving full communion with non-Anglican churches. See also Acts of Supremacy English Reformation Dissolution of the Monasteries Ritualism in the Church of England Apostolicae curae Affirming Catholicism Anglican ministry Anglo-Catholicism British Israelism Church Society Church's Ministry Among Jewish People Compass rose Evangelical Anglicanism Flag of the Anglican Communion Liberal Anglo-Catholicism List of conservative evangelical Anglican churches in England List of heroes of the Christian Church in the Anglican Communion List of the largest Protestant bodies Reform (Anglican) Anglican Use Notes References Citations Sources Further reading Buchanan, Colin. Historical Dictionary of Anglicanism (2nd ed. 2015) excerpt Hebert, A. G. The Form of the Church. London: Faber and Faber, 1944. Wild, John. What is the Anglican Communion?, in series, The Advent Papers. Cincinnati, Ohio: Forward Movement Publications, [196-]. Note.: Expresses the "Anglo-Catholic" viewpoint. External links Anglicans Online Project Canterbury Anglican historical documents from around the world Brief description and history of the Anglican Communion 1997 article from the Anglican Communion Office 1867 establishments in England Religious organizations established in 1867 Religion in the British Empire
910
https://en.wikipedia.org/wiki/Arne%20Kaijser
Arne Kaijser
Arne Kaijser (born 1950) is a professor emeritus of history of technology at the KTH Royal Institute of Technology in Stockholm, and a former president of the Society for the History of Technology. Kaijser has published two books in Swedish: Stadens ljus. Etableringen av de första svenska gasverken and I fädrens spår. Den svenska infrastrukturens historiska utveckling och framtida utmaningar, and has co-edited several anthologies. Kaijser has been a member of the Royal Swedish Academy of Engineering Sciences since 2007 and is also a member of the editorial board of two scientific journals: Journal of Urban Technology and Centaurus. Lately, he has been occupied with the history of Large Technical Systems. References External links Homepage Extended homepage 1950 births Living people Swedish historians KTH Royal Institute of Technology faculty Members of the Royal Swedish Academy of Engineering Sciences Historians of science Historians of technology Linköping University alumni
911
https://en.wikipedia.org/wiki/Archipelago
Archipelago
An archipelago, sometimes called an island group or island chain, is a chain, cluster, or collection of islands, or sometimes a sea containing a small number of scattered islands. Examples of archipelagos include: the Indonesian Archipelago, the Andaman and Nicobar Islands, the Lakshadweep Islands, the Galápagos Islands, the Japanese Archipelago, the Philippine Archipelago, the Maldives, the Balearic Isles, the Bahamas, the Aegean Islands, the Hawaiian Islands, the Canary Islands, Malta, the Azores, the Canadian Arctic Archipelago, the British Isles, the islands of the Archipelago Sea, and Shetland. They are sometimes defined by political boundaries. The Gulf archipelago off the north-eastern Pacific coast forms part of a larger archipelago that geographically includes Washington state's San Juan Islands. While the Gulf archipelago and San Juan Islands are geographically related, they are not technically included in the same archipelago due to manmade geopolitical borders. Etymology The word archipelago is derived from the Ancient Greek ἄρχι- (arkhi-, "chief") and πέλαγος (pélagos, "sea") through the Italian arcipelago. In antiquity, "Archipelago" (from medieval Greek *ἀρχιπέλαγος and Latin ) was the proper name for the Aegean Sea. Later, usage shifted to refer to the Aegean Islands (since the sea has a large number of islands). Geographic types Archipelagos may be found isolated in large bodies of water or neighbouring a large land mass. For example, Scotland has more than 700 islands surrounding its mainland which form an archipelago. Archipelagos are often volcanic, forming along island arcs generated by subduction zones or hotspots, but may also be the result of erosion, deposition, and land elevation. Depending on their geological origin, islands forming archipelagos can be referred to as oceanic islands, continental fragments, and continental islands. 
Oceanic islands Oceanic islands are mainly of volcanic origin, and widely separated from any adjacent continent. The Hawaiian Islands and Easter Island in the Pacific, and Île Amsterdam in the south Indian Ocean are examples. Continental fragments Continental fragments correspond to land masses that have separated from a continental mass due to tectonic displacement. The Farallon Islands off the coast of California are an example. Continental archipelagos Sets of islands formed close to the coast of a continent are considered continental archipelagos when they form part of the same continental shelf, as those islands are above-water extensions of the shelf. The islands of the Inside Passage off the coast of British Columbia and the Canadian Arctic Archipelago are examples. Artificial archipelagos Artificial archipelagos have been created in various countries for different purposes. Palm Islands and the World Islands off Dubai were or are being created for leisure and tourism purposes. Marker Wadden in the Netherlands is being built as a conservation area for birds and other wildlife. Further examples The largest archipelagic state in the world by area, and by population, is Indonesia. See also Island arc List of landforms List of archipelagos by number of islands List of archipelagos Archipelagic state List of islands Aquapelago References External links 30 Most Incredible Island Archipelagos Coastal and oceanic landforms Oceanographical terminology
914
https://en.wikipedia.org/wiki/Author
Author
An author is the creator or originator of any written work such as a book or play, and is also considered a writer or poet. More broadly defined, an author is "the person who originated or gave existence to anything" and whose authorship determines responsibility for what was created. Legal significance of authorship Typically, the first owner of a copyright is the person who created the work, i.e. the author. If more than one person created the work, then a case of joint authorship can be made provided some criteria are met. In the copyright laws of various jurisdictions, there is little flexibility regarding what constitutes authorship. The United States Copyright Office, for example, defines copyright as "a form of protection provided by the laws of the United States (title 17, U.S. Code) to authors of 'original works of authorship.'" Holding the title of "author" over any "literary, dramatic, musical, artistic, [or] certain other intellectual works" gives rights to this person, the owner of the copyright, especially the exclusive right to engage in or authorize any production or distribution of their work. Any person or entity wishing to use intellectual property held under copyright must receive permission from the copyright holder to use this work, and often will be asked to pay for the use of copyrighted material. After a fixed amount of time, the copyright expires on intellectual work and it enters the public domain, where it can be used without limit. Copyright laws in many jurisdictions – mostly following the lead of the United States, in which the entertainment and publishing industries have very strong lobbying power – have been amended repeatedly since their inception, to extend the length of this fixed period where the work is exclusively controlled by the copyright holder. However, copyright is merely the legal reassurance that one owns their work. Technically, someone owns their work from the time it's created. 
A notable aspect of authorship emerges with copyright in that, in many jurisdictions, it can be passed down to another upon one's death. The person who inherits the copyright is not the author, but enjoys the same legal benefits. Questions arise as to the application of copyright law. How does it, for example, apply to the complex issue of fan fiction? If the media agency responsible for the authorized production allows material from fans, what is the limit before legal constraints from actors, music, and other considerations come into play? Additionally, how does copyright apply to fan-generated stories for books? What powers do the original authors, as well as the publishers, have in regulating or even stopping the fan fiction? This particular sort of case also illustrates how complex intellectual property law can be, since such fiction may also involve trademark law (e.g. for names of characters in media franchises), likeness rights (such as for actors, or even entirely fictional entities), fair use rights held by the public (including the right to parody or satirize), and many other interacting complications. Authors may portion out different rights they hold to different parties, at different times, and for different purposes or uses, such as the right to adapt a plot into a film, but only with different character names, because the characters have already been optioned by another company for a television series or a video game. An author may also not have rights when working under contract that they would otherwise have, such as when creating a work for hire (e.g., hired to write a city tour guide by a municipal government that totally owns the copyright to the finished work), or when writing material using intellectual property owned by others (such as when writing a novel or screenplay that is a new installment in an already established media franchise). 
Philosophical views of the nature of authorship In literary theory, critics find complications in the term author beyond what constitutes authorship in a legal setting. In the wake of postmodern literature, critics such as Roland Barthes and Michel Foucault have examined the role and relevance of authorship to the meaning or interpretation of a text. Barthes challenges the idea that a text can be attributed to any single author. He writes, in his essay "Death of the Author" (1968), that "it is language which speaks, not the author." The words and language of a text itself determine and expose meaning for Barthes, and not someone possessing legal responsibility for the process of its production. Every line of written text is a mere reflection of references from any of a multitude of traditions, or, as Barthes puts it, "the text is a tissue of quotations drawn from the innumerable centres of culture"; it is never original. With this, the perspective of the author is removed from the text, and the limits formerly imposed by the idea of one authorial voice, one ultimate and universal meaning, are destroyed. The explanation and meaning of a work does not have to be sought in the one who produced it, "as if it were always in the end, through the more or less transparent allegory of the fiction, the voice of a single person, the author 'confiding' in us." The psyche, culture, and fanaticism of an author can be disregarded when interpreting a text, because the words are rich enough themselves with all of the traditions of language. To expose meanings in a written work without appealing to the celebrity of an author, their tastes, passions, vices, is, to Barthes, to allow language to speak, rather than the author. Michel Foucault argues in his essay "What is an author?" (1969) that all authors are writers, but not all writers are authors. He states that "a private letter may have a signatory—it does not have an author." 
For a reader to assign the title of author upon any written work is to attribute certain standards upon the text which, for Foucault, are working in conjunction with the idea of "the author function." Foucault's author function is the idea that an author exists only as a function of a written work, a part of its structure, but not necessarily part of the interpretive process. The author's name "indicates the status of the discourse within a society and culture," and at one time was used as an anchor for interpreting a text, a practice which Barthes would argue is not a particularly relevant or valid endeavour. Expanding upon Foucault's position, Alexander Nehamas writes that Foucault suggests "an author [...] is whoever can be understood to have produced a particular text as we interpret it," not necessarily who penned the text. It is this distinction between producing a written work and producing the interpretation or meaning in a written work that both Barthes and Foucault are interested in. Foucault warns of the risks of keeping the author's name in mind during interpretation, because it could affect the value and meaning with which one handles an interpretation. Literary critics Barthes and Foucault suggest that readers should not rely on or look for the notion of one overarching voice when interpreting a written work, because of the complications inherent with a writer's title of "author." They warn of the dangers interpretations could suffer from when associating the subject of inherently meaningful words and language with the personality of one authorial voice. Instead, readers should allow a text to be interpreted in terms of the language as "author." Relationship with publisher Self-publishing Self-publishing, independent publishing, or artisanal publishing is the "publication of any book, album or other media by its author without the involvement of a traditional publisher. It is the modern equivalent to traditional publishing." 
Types Unless a book is to be sold directly from the author to the public, an ISBN is required to uniquely identify the title. The ISBN is a global standard used for all titles worldwide. Most self-publishing companies either provide their own ISBN to a title or can provide direction; it may be in the best interest of the self-published author to retain ownership of ISBN and copyright instead of using a number owned by a vanity press. A separate ISBN is needed for each edition of the book. Electronic (e-book) publishing There are a variety of e-book formats and tools that can be used to create them. Because it is possible to create e-books with no up-front or per-book costs, this is a popular option for self-publishers. E-book publishing platforms include Pronoun, Smashwords, Blurb, Amazon Kindle Direct Publishing, CinnamonTeal Publishing, Papyrus Editor, ebook leap, Bookbaby, Pubit, Lulu, Llumina Press, and CreateSpace. E-book formats include epub, mobi, and PDF, among others. Print-on-demand Print-on-demand (POD) publishing refers to the ability to print high-quality books as needed. For self-published books, this is often a more economical option than conducting a print run of hundreds or thousands of books. Many companies, such as CreateSpace (owned by Amazon.com), Outskirts Press, Blurb, Lulu, Llumina Press, ReadersMagnet, and iUniverse, allow printing single books at per-book costs not much higher than those paid by publishing companies for large print runs. Traditional publishing With commissioned publishing, the publisher makes all the publication arrangements and the author covers all expenses. The author of a work may receive a percentage calculated on a wholesale or a specific price or a fixed amount on each book sold. Publishers, at times, reduced the risk of this type of arrangement by agreeing only to pay this after a certain number of copies had sold. In Canada, this practice occurred during the 1890s, but was not commonplace until the 1920s. 
Established and successful authors may receive advance payments, set against future royalties, but this is no longer common practice. Most independent publishers pay royalties as a percentage of net receipts – how net receipts are calculated varies from publisher to publisher. Under this arrangement, the author does not pay anything towards the expense of publication. The costs and financial risk are all carried by the publisher, who will then take the greatest percentage of the receipts. See Compensation for more. Vanity publishing This type of publisher normally charges a flat fee for arranging publication, offers a platform for selling, and then takes a percentage of the sale of every copy of a book. The author receives the rest of the money made. Relationship with editor The relationship between the author and the editor, often the author's only liaison to the publishing company, is often characterized as the site of tension. For the author to reach their audience, often through publication, the work usually must attract the attention of the editor. The idea of the author as the sole meaning-maker of necessity changes to include the influences of the editor and the publisher in order to engage the audience in writing as a social act. There are three principal areas covered by editors – Proofing (checking grammar and spelling, looking for typing errors), Story (potentially an area of deep angst for both author and publisher), and Layout (the setting of the final proof ready for publishing often requires minor text changes, so a layout editor is required to ensure that these do not alter the sense of the text). Pierre Bourdieu's essay "The Field of Cultural Production" depicts the publishing industry as a "space of literary or artistic position-takings," also called the "field of struggles," which is defined by the tension and movement inherent among the various positions in the field. Bourdieu claims that the "field of position-takings [...] 
is not the product of coherence-seeking intention or objective consensus," meaning that an industry characterized by position-takings is not one of harmony and neutrality. In particular for the writer, their authorship in their work makes their work part of their identity, and there is much at stake personally over the negotiation of authority over that identity. However, it is the editor who has "the power to impose the dominant definition of the writer and therefore to delimit the population of those entitled to take part in the struggle to define the writer". As "cultural investors," publishers rely on the editor position to identify a good investment in "cultural capital" which may grow to yield economic capital across all positions. According to the studies of James Curran, the system of shared values among editors in Britain has generated a pressure among authors to write to fit the editors' expectations, removing the focus from the reader-audience and putting a strain on the relationship between authors and editors and on writing as a social act. Even the book review by the editors has more significance than the readership's reception. Compensation Authors rely on advance fees, royalty payments, adaptation of work to a screenplay, and fees collected from giving speeches. A standard contract for an author will usually include provision for payment in the form of an advance and royalties. An advance is a lump sum paid in advance of publication. An advance must be earned out before royalties are payable. An advance may be paid in two lump sums: the first payment on contract signing, and the second on delivery of the completed manuscript or on publication. Royalty payment is the sum paid to authors for each copy of a book sold and is traditionally around 10–12%, but self-published authors can earn about 40–60% royalties on each book sale. An author's contract may specify, for example, that they will earn 10% of the retail price of each book sold. 
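The earn-out arithmetic described above is simple enough to sketch in a few lines of Python. This is an illustration only; the function name and the flat-rate assumption are ours, not drawn from any actual publishing contract:

```python
import math

def copies_to_earn_out(advance, royalty_rate, list_price):
    """Number of copies that must sell before an advance is earned out
    and further royalty payments become due (flat royalty rate assumed)."""
    royalty_per_copy = royalty_rate * list_price
    return math.ceil(advance / royalty_per_copy)

# A $2,000 advance against 10% of a $20 retail price earns $2 per copy,
# so 1,000 copies must sell before the author sees further payment.
print(copies_to_earn_out(2000, 0.10, 20))  # -> 1000
```

A real contract may instead use a tiered scale, where the royalty rate rises at sales thresholds; this flat-rate sketch ignores that complication.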
Some contracts specify a scale of royalties payable (for example, where royalties start at 10% for the first 10,000 sales, but then increase to a higher percentage rate at higher sale thresholds). An author's book must earn the advance before any further royalties are paid. For example, if an author is paid a modest advance of $2000, and their royalty rate is 10% of a book priced at $20 – that is, $2 per book – the book will need to sell 1000 copies before any further payment will be made. Publishers typically withhold payment of a percentage of royalties earned against returns. In some countries, authors also earn income from a government scheme such as the ELR (educational lending right) and PLR (public lending right) schemes in Australia. Under these schemes, authors are paid a fee for the number of copies of their books in educational and/or public libraries. These days, many authors supplement their income from book sales with public speaking engagements, school visits, residencies, grants, and teaching positions. Ghostwriters, technical writers, and textbook writers are typically paid in a different way: usually a set fee or a per-word rate rather than on a percentage of sales. In 2016, according to the U.S. Bureau of Labor Statistics, nearly 130,000 people worked in the U.S. as authors, earning an average of $61,240 per year. See also Academic authorship Auteur Authors' editor Distributive writing Lead author List of novelists Lists of poets Lists of writers Novelist Professional writing References Writing occupations Literary criticism
915
https://en.wikipedia.org/wiki/Andrey%20Markov
Andrey Markov
Andrey Andreyevich Markov (14 June 1856 – 20 July 1922) was a Russian mathematician best known for his work on stochastic processes. A primary subject of his research later became known as Markov chains or Markov processes. Markov and his younger brother Vladimir Andreevich Markov (1871–1897) proved the Markov brothers' inequality. His son, another Andrey Andreyevich Markov (1903–1979), was also a notable mathematician, making contributions to constructive mathematics and recursive function theory. Biography Andrey Markov was born on 14 June 1856 in Russia. He attended the St. Petersburg Grammar School, where some teachers saw him as a rebellious student. In his academics he performed poorly in most subjects other than mathematics. Later in life he attended Saint Petersburg Imperial University (now Saint Petersburg State University). Among his teachers were Yulian Sokhotski (differential calculus, higher algebra), Konstantin Posse (analytic geometry), Yegor Zolotarev (integral calculus), Pafnuty Chebyshev (number theory and probability theory), Aleksandr Korkin (ordinary and partial differential equations), Mikhail Okatov (mechanism theory), Osip Somov (mechanics), and Nikolai Budajev (descriptive and higher geometry). He completed his studies at the university and was later asked if he would like to stay and have a career as a mathematician. He later taught at high schools and continued his own mathematical studies. During this time he found a practical use for his mathematical skills: he found that he could use chains to model the alternation of vowels and consonants in Russian literature. He also contributed to many other areas of mathematics during his lifetime. He died at age 66 on 20 July 1922. Timeline In 1877, Markov was awarded a gold medal for his outstanding solution of the problem About Integration of Differential Equations by Continued Fractions with an Application to the Equation . 
During the following year, he passed the candidate's examinations, and he remained at the university to prepare for a lecturer's position. In April 1880, Markov defended his master's thesis "On the Binary Quadratic Forms with Positive Determinant", which was directed by Aleksandr Korkin and Yegor Zolotarev. Four years later, in 1884, he defended his doctoral thesis titled "On Certain Applications of the Algebraic Continued Fractions". His pedagogical work began after the defense of his master's thesis in autumn 1880. As a privatdozent he lectured on differential and integral calculus. Later he lectured alternately on "introduction to analysis", probability theory (succeeding Chebyshev, who had left the university in 1882) and the calculus of differences. From 1895 through 1905 he also lectured in differential calculus. One year after the defense of his doctoral thesis, Markov was appointed extraordinary professor (1886) and in the same year he was elected adjunct to the Academy of Sciences. In 1890, after the death of Viktor Bunyakovsky, Markov became an extraordinary member of the academy. His promotion to an ordinary professor of St. Petersburg University followed in the fall of 1894. In 1896, Markov was elected an ordinary member of the academy as the successor of Chebyshev. In 1905, he was appointed merited professor and was granted the right to retire, which he did immediately. Until 1910, however, he continued to lecture in the calculus of differences. In connection with student riots in 1908, professors and lecturers of St. Petersburg University were ordered to monitor their students. Markov refused to accept this decree, and he wrote an explanation in which he declined to be an "agent of the governance". Markov was removed from further teaching duties at St. Petersburg University, and hence he decided to retire from the university. Markov was an atheist. 
In 1912, he protested Leo Tolstoy's excommunication from the Russian Orthodox Church by requesting his own excommunication. The Church complied with his request. In 1913, the council of St. Petersburg elected nine scientists honorary members of the university. Markov was among them, but his election was not affirmed by the minister of education. The affirmation only occurred four years later, after the February Revolution in 1917. Markov then resumed his teaching activities and lectured on probability theory and the calculus of differences until his death in 1922. See also List of things named after Andrey Markov Chebyshev–Markov–Stieltjes inequalities Gauss–Markov theorem Gauss–Markov process Hidden Markov model Markov blanket Markov chain Markov decision process Markov's inequality Markov brothers' inequality Markov information source Markov network Markov number Markov property Markov process Stochastic matrix (also known as Markov matrix) Subjunctive possibility Notes References Further reading А. А. Марков. "Распространение закона больших чисел на величины, зависящие друг от друга". "Известия Физико-математического общества при Казанском университете", 2-я серия, том 15, с. 135–156, 1906. A. A. Markov. "Extension of the limit theorems of probability theory to a sum of variables connected in a chain". reprinted in Appendix B of: R. Howard. Dynamic Probabilistic Systems, volume 1: Markov Chains. John Wiley and Sons, 1971. External links Markov, Andrei Andreyevich Markov, Andrei Andreyevich 19th-century Russian mathematicians 20th-century Russian mathematicians Russian atheists Former Russian Orthodox Christians Probability theorists Saint Petersburg State University alumni Full members of the Saint Petersburg Academy of Sciences Full Members of the Russian Academy of Sciences (1917–1925) People from Ryazan Russian statisticians
921
https://en.wikipedia.org/wiki/Angst
Angst
Angst is fear or anxiety (anguish is its Latinate equivalent, and the words anxious and anxiety are of similar origin). The dictionary definition for angst is a feeling of anxiety, apprehension, or insecurity. Etymology The word angst was introduced into English from the Danish, Norwegian, and Dutch word and the German word . It is attested since the 19th century in English translations of the works of Kierkegaard and Freud. It is used in English to describe an intense feeling of apprehension, anxiety, or inner turmoil. In other languages (with words from the Latin for "fear" or "panic"), the derived words differ in meaning; for example, as in the French and . The word angst has existed since the 8th century, from the Proto-Indo-European root , "restraint" from which Old High German developed. It is pre-cognate with the Latin , "tensity, tightness" and , "choking, clogging"; compare to the Ancient Greek () "strangle". Existentialist angst In existentialist philosophy, the term angst carries a specific conceptual meaning. The use of the term was first attributed to Danish philosopher Søren Kierkegaard (1813–1855). In The Concept of Anxiety (also known as The Concept of Dread, depending on the translation), Kierkegaard used the word Angest (in common Danish, angst, meaning "dread" or "anxiety") to describe a profound and deep-seated condition. Where non-human animals are guided solely by instinct, said Kierkegaard, human beings enjoy a freedom of choice that we find both appealing and terrifying. It is the anxiety of understanding of being free when considering undefined possibilities of one's life and the immense responsibility of having the power of choice over them. Kierkegaard's concept of angst reappeared in the works of existentialist philosophers who followed, such as Friedrich Nietzsche, Jean-Paul Sartre, and Martin Heidegger, each of whom developed the idea further in individual ways. 
While Kierkegaard's angst referred mainly to ambiguous feelings about moral freedom within a religious personal belief system, later existentialists discussed conflicts of personal principles, cultural norms, and existential despair. Music Existential angst makes its appearance in classical musical composition in the early twentieth century as a result of both philosophical developments and as a reflection of the war-torn times. Notable composers whose works are often linked with the concept include Gustav Mahler, Richard Strauss (operas Elektra and Salome), Claude-Achille Debussy (opera Pelleas et Melisande, ballet Jeux, other works), Jean Sibelius (especially the Fourth Symphony), Arnold Schoenberg (A Survivor from Warsaw, other works), Alban Berg, Francis Poulenc (opera Dialogues of the Carmelites), Dmitri Shostakovich (opera Lady Macbeth of the Mtsensk District, symphonies and chamber music), Béla Bartók (opera Bluebeard's Castle, other works), and Krzysztof Penderecki (especially Threnody to the Victims of Hiroshima). Angst began to be discussed in reference to popular music in the mid- to late 1950s amid widespread concerns over international tensions and nuclear proliferation. Jeff Nuttall's book Bomb Culture (1968) traced angst in popular culture to Hiroshima. Dread was expressed in works of folk rock such as Bob Dylan's "Masters of War" (1963) and "A Hard Rain's a-Gonna Fall". The term often makes an appearance in reference to punk rock, grunge, nu metal, and works of emo where expressions of melancholy, existential despair, or nihilism predominate. See also References External links Anxiety Emotions Existentialist concepts
922
https://en.wikipedia.org/wiki/Anxiety
Anxiety
Anxiety is an emotion characterized by an unpleasant state of inner turmoil and includes subjectively unpleasant feelings of dread over anticipated events. It is often accompanied by nervous behavior such as pacing back and forth, somatic complaints, and rumination. Anxiety is a feeling of uneasiness and worry, usually generalized and unfocused, as an overreaction to a situation that is only subjectively seen as menacing. It is often accompanied by muscular tension, restlessness, fatigue, inability to catch one's breath, tightness in the abdominal region, and problems in concentration. Anxiety is closely related to fear, which is a response to a real or perceived immediate threat; anxiety involves the expectation of a future threat, including dread. People facing anxiety may withdraw from situations which have provoked anxiety in the past. Though anxiety is a normal human response, when excessive or persisting beyond developmentally appropriate periods it may be diagnosed as an anxiety disorder. There are multiple forms of anxiety disorder (such as generalized anxiety disorder and obsessive-compulsive disorder) with specific clinical definitions. Part of the definition of an anxiety disorder, which distinguishes it from everyday anxiety, is that it is persistent, typically lasting six months or more, although the criterion for duration is intended as a general guide, with allowance for some degree of flexibility, and is sometimes of shorter duration in children. Anxiety vs. fear Anxiety is distinguished from fear, which is an appropriate cognitive and emotional response to a perceived threat. Fear is related to the specific behaviors of fight-or-flight responses, defensive behavior, or escape. There is a false presumption that often circulates that anxiety only occurs in situations perceived as uncontrollable or unavoidable, but this is not always so.
David Barlow defines anxiety as "a future-oriented mood state in which one is not ready or prepared to attempt to cope with upcoming negative events," and argues that it is the distinction between future and present dangers that divides anxiety from fear. Another description of anxiety is agony, dread, terror, or even apprehension. In positive psychology, anxiety is described as the mental state that results from a difficult challenge for which the subject has insufficient coping skills. Fear and anxiety can be differentiated along four domains: (1) duration of emotional experience, (2) temporal focus, (3) specificity of the threat, and (4) motivated direction. Fear is short-lived, present-focused, geared towards a specific threat, and facilitates escape from that threat. Anxiety, on the other hand, is long-acting, future-focused, broadly directed towards a diffuse threat, and promotes excessive caution while approaching a potential threat, interfering with constructive coping. Joseph E. LeDoux and Lisa Feldman Barrett have both sought to separate automatic threat responses from the additional associated cognitive activity within anxiety.
Other effects may include changes in sleeping patterns, changes in habits, increase or decrease in food intake, and increased motor tension (such as foot tapping). The emotional effects of anxiety may include "feelings of apprehension or dread, trouble concentrating, feeling tense or jumpy, anticipating the worst, irritability, restlessness, watching (and waiting) for signs (and occurrences) of danger, and, feeling like your mind's gone blank" as well as "nightmares/bad dreams, obsessions about sensations, déjà vu, a trapped-in-your-mind feeling, and feeling like everything is scary." It may include a vague experience and feeling of helplessness. The cognitive effects of anxiety may include thoughts about suspected dangers, such as fear of dying: "You may ... fear that the chest pains are a deadly heart attack or that the shooting pains in your head are the result of a tumor or an aneurysm. You feel an intense fear when you think of dying, or you may think of it more often than normal, or can't get it out of your mind." The physiological symptoms of anxiety may include:
Neurological: headache, paresthesias, fasciculations, vertigo, or presyncope.
Digestive: abdominal pain, nausea, diarrhea, indigestion, dry mouth, or bolus. Stress hormones released in an anxious state have an impact on bowel function and can manifest physical symptoms that may contribute to or exacerbate IBS.
Respiratory: shortness of breath or sighing breathing.
Cardiac: palpitations, tachycardia, or chest pain.
Muscular: fatigue, tremors, or tetany.
Cutaneous: perspiration, or itchy skin.
Uro-genital: frequent urination, urinary urgency, dyspareunia, impotence, or chronic pelvic pain syndrome.
Types There are various types of anxiety. Existential anxiety can occur when a person faces angst, an existential crisis, or nihilistic feelings. People can also face mathematical anxiety, somatic anxiety, stage fright, or test anxiety.
Social anxiety refers to a fear of rejection and negative evaluation (being judged) by other people. Existential The philosopher Søren Kierkegaard, in The Concept of Anxiety (1844), described anxiety or dread associated with the "dizziness of freedom" and suggested the possibility for positive resolution of anxiety through the self-conscious exercise of responsibility and choosing. In Art and Artist (1932), the psychologist Otto Rank wrote that the psychological trauma of birth was the pre-eminent human symbol of existential anxiety and encompasses the creative person's simultaneous fear of – and desire for – separation, individuation, and differentiation. The theologian Paul Tillich characterized existential anxiety as "the state in which a being is aware of its possible nonbeing" and he listed three categories for the nonbeing and resulting anxiety: ontic (fate and death), moral (guilt and condemnation), and spiritual (emptiness and meaninglessness). According to Tillich, the last of these three types of existential anxiety, i.e. spiritual anxiety, is predominant in modern times while the others were predominant in earlier periods. Tillich argues that this anxiety can be accepted as part of the human condition or it can be resisted but with negative consequences. In its pathological form, spiritual anxiety may tend to "drive the person toward the creation of certitude in systems of meaning which are supported by tradition and authority" even though such "undoubted certitude is not built on the rock of reality". According to Viktor Frankl, the author of Man's Search for Meaning, when a person is faced with extreme mortal dangers, the most basic of all human wishes is to find a meaning of life to combat the "trauma of nonbeing" as death is near. 
Depending on the source of the threat, psychoanalytic theory distinguishes the following types of anxiety: realistic, neurotic, and moral. Test and performance According to the Yerkes-Dodson law, an optimal level of arousal is necessary to best complete a task such as an exam, performance, or competitive event. However, when the anxiety or level of arousal exceeds that optimum, the result is a decline in performance. Test anxiety is the uneasiness, apprehension, or nervousness felt by students who have a fear of failing an exam. Students who have test anxiety may experience any of the following: the association of grades with personal worth; fear of embarrassment by a teacher; fear of alienation from parents or friends; time pressures; or feeling a loss of control. Sweating, dizziness, headaches, racing heartbeats, nausea, fidgeting, uncontrollable crying or laughing, and drumming on a desk are all common. Because test anxiety hinges on fear of negative evaluation, debate exists as to whether test anxiety is itself a unique anxiety disorder or whether it is a specific type of social phobia. The DSM-IV classifies test anxiety as a type of social phobia. While the term "test anxiety" refers specifically to students, many workers share the same experience with regard to their career or profession. The fear of failing at a task and being negatively evaluated for failure can have a similarly negative effect on the adult. Management of test anxiety focuses on achieving relaxation and developing mechanisms to manage anxiety. Stranger, social, and intergroup anxiety Humans generally require social acceptance and thus sometimes dread the disapproval of others. Apprehension of being judged by others may cause anxiety in social environments. Anxiety during social interactions, particularly between strangers, is common among young people. It may persist into adulthood and become social anxiety or social phobia. "Stranger anxiety" in small children is not considered a phobia.
In adults, an excessive fear of other people is not a developmentally common stage; it is called social anxiety. According to Cutting, social phobics do not fear the crowd but the fact that they may be judged negatively. Social anxiety varies in degree and severity. For some people, it is characterized by experiencing discomfort or awkwardness during physical social contact (e.g. embracing, shaking hands, etc.), while in other cases it can lead to a fear of interacting with unfamiliar people altogether. Those suffering from this condition may restrict their lifestyles to accommodate the anxiety, minimizing social interaction whenever possible. Social anxiety also forms a core aspect of certain personality disorders, including avoidant personality disorder. Beyond a general fear of social encounters with unfamiliar others, some people may experience anxiety particularly during interactions with outgroup members, or people who hold different group memberships (i.e., by race, ethnicity, class, gender, etc.). Depending on the nature of the antecedent relations, cognitions, and situational factors, intergroup contact may be stressful and lead to feelings of anxiety. This apprehension or fear of contact with outgroup members is often called interracial or intergroup anxiety. As is the case with the more generalized forms of social anxiety, intergroup anxiety has behavioral, cognitive, and affective effects. For instance, increases in schematic processing and simplified information processing can occur when anxiety is high. This is consistent with related work on attentional bias in implicit memory. Additionally, recent research has found that implicit racial evaluations (i.e. automatic prejudiced attitudes) can be amplified during intergroup interaction. Negative experiences have been shown to produce not only negative expectations but also avoidant, or antagonistic, behavior such as hostility.
Furthermore, when compared to anxiety levels and cognitive effort (e.g., impression management and self-presentation) in intragroup contexts, levels and depletion of resources may be exacerbated in the intergroup situation. Trait Anxiety can be either a short-term "state" or a long-term personality "trait." Trait anxiety reflects a stable tendency across the lifespan of responding with acute state anxiety in the anticipation of threatening situations (whether they are actually deemed threatening or not). A meta-analysis showed that a high level of neuroticism is a risk factor for the development of anxiety symptoms and disorders. Such anxiety may be conscious or unconscious. Personality traits can also predispose a person to anxiety and depression; through experience, many find it difficult to collect themselves because of their own personal nature. Choice or decision Anxiety induced by the need to choose between similar options is increasingly being recognized as a problem for individuals and for organizations. In 2004, Capgemini wrote: "Today we're all faced with greater choice, more competition and less time to consider our options or seek out the right advice." In a decision context, unpredictability or uncertainty may trigger emotional responses in anxious individuals that systematically alter decision-making. There are primarily two forms of this anxiety type. The first form refers to a choice in which there are multiple potential outcomes with known or calculable probabilities. The second form refers to the uncertainty and ambiguity related to a decision context in which there are multiple possible outcomes with unknown probabilities. Panic disorder Panic disorder may share symptoms of stress and anxiety, but it is actually very different. Panic disorder is an anxiety disorder that occurs without any triggers. According to the U.S. Department of Health and Human Services, this disorder can be distinguished by unexpected and repeated episodes of intense fear.
Someone who suffers from panic disorder will eventually develop a constant fear of another attack and, as this progresses, it will begin to affect daily functioning and an individual's general quality of life. It is reported by the Cleveland Clinic that panic disorder affects 2 to 3 percent of adult Americans and can begin during the teenage and early adult years. Some symptoms include: difficulty breathing, chest pain, dizziness, trembling or shaking, feeling faint, nausea, and fear that you are losing control or are about to die. Even though sufferers experience these symptoms during an attack, the main symptom is the persistent fear of having future panic attacks. Anxiety disorders Anxiety disorders are a group of mental disorders characterized by exaggerated feelings of anxiety and fear responses. Anxiety is a worry about future events, and fear is a reaction to current events. These feelings may cause physical symptoms, such as a fast heart rate and shakiness. There are a number of anxiety disorders, including generalized anxiety disorder, specific phobia, social anxiety disorder, separation anxiety disorder, agoraphobia, panic disorder, and selective mutism. The disorders differ by what triggers the symptoms. People often have more than one anxiety disorder. Anxiety disorders are caused by a complex combination of genetic and environmental factors. To be diagnosed, symptoms typically need to be present for at least six months, be more than would be expected for the situation, and decrease a person's ability to function in daily life. Other problems that may result in similar symptoms include hyperthyroidism, heart disease, caffeine, alcohol, or cannabis use, and withdrawal from certain drugs, among others. Without treatment, anxiety disorders tend to persist. Treatment may include lifestyle changes, counselling, and medications. Counselling is typically with a type of cognitive behavioural therapy.
Medications, such as antidepressants or beta blockers, may improve symptoms. About 12% of people are affected by an anxiety disorder in a given year, and between 5% and 30% are affected at some point in their life. They occur about twice as often in women as in men and generally begin before the age of 25. The most common are specific phobia, which affects nearly 12% of people, and social anxiety disorder, which affects 10% at some point in their life. They affect those between the ages of 15 and 35 the most and become less common after the age of 55. Rates appear to be higher in the United States and Europe. Short- and long-term anxiety Anxiety can be either a short-term "state" or a long-term "trait." Whereas trait anxiety represents worrying about future events, anxiety disorders are a group of mental disorders characterized by feelings of anxiety and fears. Four Ways to Be Anxious In his book Anxious: The Modern Mind in the Age of Anxiety, Joseph LeDoux examines four experiences of anxiety through a brain-based lens: In the presence of an existing or imminent external threat, you worry about the event and its implications for your physical and/or psychological well-being. When a threat signal occurs, it signifies either that danger is present or near in space and time or that it might be coming in the future. Nonconscious threat processing by the brain activates defensive survival circuits, resulting in changes in information processing in the brain, controlled in part by increases in arousal, and in behavioral and physiological responses in the body that then produce signals that feed back to the brain, complementing and intensifying the physiological changes there and extending their duration. When you notice body sensations, you worry about what they might mean for your physical and/or psychological well-being. The trigger stimulus does not have to be an external stimulus but can be an internal one, as some people are particularly sensitive to body signals.
Thoughts and memories may lead you to worry about your physical and/or psychological well-being. We do not need to be in the presence of an external or internal stimulus to be anxious. An episodic memory of a past trauma or of a past panic attack is sufficient to activate the defense circuits. Thoughts and memories may result in existential dread, such as worry about leading a meaningful life or the eventuality of death. Examples are contemplations of whether one's life has been meaningful, the inevitability of death, or the difficulty of making decisions that have a moral value. These do not necessarily activate defensive systems; they are more or less pure forms of cognitive anxiety. Co-morbidity Anxiety disorders often occur with other mental health disorders, particularly major depressive disorder, bipolar disorder, eating disorders, or certain personality disorders. Anxiety also commonly occurs with personality traits such as neuroticism. This observed co-occurrence is partly due to genetic and environmental influences shared between these traits and anxiety. It is common for those with obsessive-compulsive disorder to experience anxiety. Anxiety is also commonly found in those who experience panic disorders, phobic anxiety disorders, severe stress, dissociative disorders, somatoform disorders, and some neurotic disorders. Risk factors Anxiety disorders are partly genetic, with twin studies suggesting 30–40% genetic influence on individual differences in anxiety. Environmental factors are also important. Twin studies show that individual-specific environments have a large influence on anxiety, whereas shared environmental influences (environments that affect twins in the same way) operate during childhood but decline through adolescence. Specific measured ‘environments’ that have been associated with anxiety include child abuse, family history of mental health disorders, and poverty.
Anxiety is also associated with drug use, including alcohol, caffeine, and benzodiazepines (which are often prescribed to treat anxiety). Neuroanatomy Neural circuitry involving the amygdala (which regulates emotions like anxiety and fear, stimulating the HPA axis and sympathetic nervous system) and the hippocampus (which, along with the amygdala, is implicated in emotional memory) is thought to underlie anxiety. People who have anxiety tend to show high activity in response to emotional stimuli in the amygdala. Some writers believe that excessive anxiety can lead to an overpotentiation of the limbic system (which includes the amygdala and nucleus accumbens), leading to increased future anxiety, but this does not appear to have been proven. Research on adolescents who as infants had been highly apprehensive, vigilant, and fearful finds that their nucleus accumbens is more sensitive than that of other people when deciding to make an action that determines whether they receive a reward. This suggests a link between circuits responsible for fear and for reward in anxious people. As researchers note, "a sense of 'responsibility', or self-agency, in a context of uncertainty (probabilistic outcomes) drives the neural system underlying appetitive motivation (i.e., nucleus accumbens) more strongly in temperamentally inhibited than noninhibited adolescents". The gut-brain axis The microbes of the gut can connect with the brain to affect anxiety. There are various pathways along which this communication can take place. One is through the major neurotransmitters. Gut microbes such as Bifidobacterium and Bacillus produce the neurotransmitters GABA and dopamine, respectively. The neurotransmitters signal to the nervous system of the gastrointestinal tract, and those signals are carried to the brain through the vagus nerve or the spinal system.
This is demonstrated by the fact that altering the microbiome has shown anxiety- and depression-reducing effects in mice, but not in subjects without vagus nerves. Another key pathway is the HPA axis, as mentioned above. The microbes can control the levels of cytokines in the body, and altering cytokine levels creates direct effects on areas of the brain such as the hypothalamus, the area that triggers HPA axis activity. The HPA axis regulates production of cortisol, a hormone that takes part in the body's stress response; when HPA activity spikes, cortisol levels increase, helping the body process and reduce anxiety in stressful situations. These pathways, as well as the specific effects of individual taxa of microbes, are not yet completely clear, but the communication between the gut microbiome and the brain is undeniable, as is the ability of these pathways to alter anxiety levels. With this communication comes the potential to treat anxiety. Prebiotics and probiotics have been shown to reduce anxiety. For example, experiments in which mice were given fructo- and galacto-oligosaccharide prebiotics and Lactobacillus probiotics have both demonstrated a capability to reduce anxiety. In humans, results are not as conclusive, but promising. Genetics Genetics and family history (e.g. parental anxiety) may put an individual at increased risk of an anxiety disorder, but generally external stimuli will trigger its onset or exacerbation. Estimates of genetic influence on anxiety, based on studies of twins, range from 25 to 40% depending on the specific type and age group under study. For example, genetic differences account for about 43% of variance in panic disorder and 28% in generalized anxiety disorder. Longitudinal twin studies have shown that the moderate stability of anxiety from childhood through to adulthood is mainly influenced by stability in genetic influence.
When investigating how anxiety is passed on from parents to children, it is important to account for the sharing of genes as well as environments, for example by using the intergenerational children-of-twins design. Many studies in the past used a candidate gene approach to test whether single genes were associated with anxiety. These investigations were based on hypotheses about how certain known genes influence neurotransmitters (such as serotonin and norepinephrine) and hormones (such as cortisol) that are implicated in anxiety. None of these findings are well replicated, with the possible exception of TMEM132D, COMT and MAO-A. The epigenetic signature of BDNF, a gene that codes for a protein called brain-derived neurotrophic factor that is found in the brain, has also been associated with anxiety and specific patterns of neural activity, and a receptor gene for BDNF called NTRK2 was associated with anxiety in a large genome-wide investigation. The reason that most candidate gene findings have not replicated is that anxiety is a complex trait that is influenced by many genomic variants, each of which has a small effect on its own. Increasingly, studies of anxiety are using a hypothesis-free approach to look for parts of the genome that are implicated in anxiety, using samples big enough to find associations with variants that have small effects. The largest explorations of the common genetic architecture of anxiety have been facilitated by the UK Biobank, the ANGST consortium and the CRC Fear, Anxiety and Anxiety Disorders. Medical conditions Many medical conditions can cause anxiety. These include conditions that affect the ability to breathe, like COPD and asthma, and the difficulty in breathing that often occurs near death. Conditions that cause abdominal pain or chest pain can cause anxiety and may in some cases be a somatization of anxiety; the same is true for some sexual dysfunctions.
Conditions that affect the face or the skin can cause social anxiety, especially among adolescents, and developmental disabilities often lead to social anxiety for children as well. Life-threatening conditions like cancer also cause anxiety. Furthermore, certain organic diseases may present with anxiety or symptoms that mimic anxiety. These disorders include certain endocrine diseases (hypo- and hyperthyroidism, hyperprolactinemia), metabolic disorders (diabetes), deficiency states (low levels of vitamin D, B2, B12, folic acid), gastrointestinal diseases (celiac disease, non-celiac gluten sensitivity, inflammatory bowel disease), heart diseases, blood diseases (anemia), cerebrovascular accidents (transient ischemic attack, stroke), and brain degenerative diseases (Parkinson's disease, dementia, multiple sclerosis, Huntington's disease), among others. Substance-induced Several drugs can cause or worsen anxiety, whether in intoxication, withdrawal, or as a side effect. These include alcohol, tobacco, sedatives (including prescription benzodiazepines), opioids (including prescription pain killers and illicit drugs like heroin), stimulants (such as caffeine, cocaine and amphetamines), hallucinogens, and inhalants. While many often report self-medicating anxiety with these substances, improvements in anxiety from drugs are usually short-lived (with worsening of anxiety in the long term, sometimes with acute anxiety as soon as the drug effects wear off) and tend to be exaggerated. Acute exposure to toxic levels of benzene may cause euphoria, anxiety, and irritability lasting up to two weeks after the exposure. Psychological Poor coping skills (e.g., rigidity/inflexible problem solving, denial, avoidance, impulsivity, extreme self-expectation, negative thoughts, affective instability, and inability to focus on problems) are associated with anxiety.
Anxiety is also linked to and perpetuated by the person's own pessimistic outcome expectancy and how they cope with feedback negativity. Temperament (e.g., neuroticism) and attitudes (e.g., pessimism) have been found to be risk factors for anxiety. Cognitive distortions such as overgeneralizing, catastrophizing, mind reading, emotional reasoning, the binocular trick, and mental filter can result in anxiety. For example, an overgeneralized belief that something bad "always" happens may lead someone to have excessive fears of even minimally risky situations and to avoid benign social situations due to anticipatory anxiety of embarrassment. In addition, those who have high anxiety can also create future stressful life events. Together, these findings suggest that anxious thoughts can lead to anticipatory anxiety as well as stressful events, which in turn cause more anxiety. Such unhealthy thoughts can be targets for successful treatment with cognitive therapy. Psychodynamic theory posits that anxiety is often the result of opposing unconscious wishes or fears that manifest via maladaptive defense mechanisms (such as suppression, repression, anticipation, regression, somatization, passive aggression, dissociation) that develop to adapt to problems with early objects (e.g., caregivers) and empathic failures in childhood. For example, persistent parental discouragement of anger may result in repression/suppression of angry feelings, which manifests as gastrointestinal distress (somatization) when provoked by another, while the anger remains unconscious and outside the individual's awareness. Such conflicts can be targets for successful treatment with psychodynamic therapy. While psychodynamic therapy tends to explore the underlying roots of anxiety, cognitive behavioral therapy has also been shown to be a successful treatment for anxiety by altering irrational thoughts and unwanted behaviors.
Evolutionary psychology An evolutionary psychology explanation is that increased anxiety serves the purpose of increased vigilance regarding potential threats in the environment as well as an increased tendency to take proactive actions regarding such possible threats. This may cause false positive reactions, but an individual suffering from anxiety may also avoid real threats, which may explain why anxious people are less likely to die in accidents. There is ample empirical evidence that anxiety can have adaptive value. Within a school of fish, timid fish are more likely than bold ones to survive a predator. When people are confronted with unpleasant and potentially harmful stimuli such as foul odors or tastes, PET scans show increased blood flow in the amygdala. In these studies, the participants also reported moderate anxiety. This might indicate that anxiety is a protective mechanism designed to prevent the organism from engaging in potentially harmful behaviors. Social Social risk factors for anxiety include a history of trauma (e.g., physical, sexual or emotional abuse or assault), bullying, early life experiences and parenting factors (e.g., rejection, lack of warmth, high hostility, harsh discipline, high parental negative affect, anxious childrearing, modelling of dysfunctional and drug-abusing behaviour, discouragement of emotions, poor socialization, poor attachment, and child abuse and neglect), cultural factors (e.g., stoic families/cultures, persecuted minorities including the disabled), and socioeconomic factors (e.g., lack of education, unemployment, and poverty, although developed countries have higher rates of anxiety disorders than developing countries). A 2019 comprehensive systematic review of over 50 studies showed that food insecurity in the United States is strongly associated with depression, anxiety, and sleep disorders. Food-insecure individuals had an almost threefold increase in risk of testing positive for anxiety when compared to food-secure individuals.
Gender socialization Contextual factors that are thought to contribute to anxiety include gender socialization and learning experiences. In particular, learning mastery (the degree to which people perceive their lives to be under their own control) and instrumentality, which includes such traits as self-confidence, self-efficacy, independence, and competitiveness, fully mediate the relation between gender and anxiety. That is, though gender differences in anxiety exist, with higher levels of anxiety in women compared to men, gender socialization and learning mastery explain these gender differences. Treatment The first step in the management of a person with anxiety symptoms involves evaluating the possible presence of an underlying medical cause, the recognition of which is essential in order to decide the correct treatment. Anxiety symptoms may mask an organic disease, or appear associated with or as a result of a medical disorder. Cognitive behavioral therapy (CBT) is effective for anxiety disorders and is a first-line treatment. CBT appears to be equally effective when carried out via the internet. While evidence for mental health apps is promising, it is preliminary. Psychopharmacological treatment can be used in parallel to CBT or can be used alone. As a general rule, most anxiety disorders respond well to first-line agents. Such drugs, also used as antidepressants, are the selective serotonin reuptake inhibitors and serotonin-norepinephrine reuptake inhibitors, which work by blocking the reuptake of specific neurotransmitters, thereby increasing the availability of these neurotransmitters. Additionally, benzodiazepines are often prescribed to individuals with anxiety disorder. Benzodiazepines produce an anxiolytic response by modulating GABA and increasing its receptor binding. A third common treatment involves a category of drug known as serotonin agonists.
This category of drug works by initiating a physiological response at the 5-HT1A receptor by increasing the action of serotonin at this receptor. Other treatment options include pregabalin, tricyclic antidepressants, and moclobemide, among others. Prevention The above risk factors give natural avenues for prevention. A 2017 review found that psychological or educational interventions have a small yet statistically significant benefit for the prevention of anxiety in varied population types. Pathophysiology Anxiety disorder appears to be a genetically inherited neurochemical dysfunction that may involve autonomic imbalance; decreased GABA-ergic tone; allelic polymorphism of the catechol-O-methyltransferase (COMT) gene; increased adenosine receptor function; and increased cortisol levels. In the central nervous system (CNS), the major mediators of the symptoms of anxiety disorders appear to be norepinephrine, serotonin, dopamine, and gamma-aminobutyric acid (GABA). Other neurotransmitters and peptides, such as corticotropin-releasing factor, may be involved. Peripherally, the autonomic nervous system, especially the sympathetic nervous system, mediates many of the symptoms. Increased blood flow in the right parahippocampal region and reduced serotonin type 1A receptor binding in the anterior and posterior cingulate and raphe are findings associated with anxiety disorders. The amygdala is central to the processing of fear and anxiety, and its function may be disrupted in anxiety disorders. Anxiety processing in the basolateral amygdala has been linked to expansion of the dendritic arborization of amygdaloid neurons. SK2 potassium channels mediate an inhibitory influence on action potentials and reduce arborization. See also List of people with an anxiety disorder References External links Emotions
924
https://en.wikipedia.org/wiki/A.%20A.%20Milne
A. A. Milne
Alan Alexander Milne (; 18 January 1882 – 31 January 1956) was an English author, best known for his books about the teddy bear Winnie-the-Pooh and for various poems. Milne was a noted writer, primarily as a playwright, before the huge success of Pooh overshadowed all his previous work. Milne served in both World Wars, joining the British Army in World War I, and as a captain of the British Home Guard in World War II. He was the father of bookseller Christopher Robin Milne, upon whom the character Christopher Robin is based. Early life and military career Alan Alexander Milne was born in Kilburn, London, to John Vine Milne, who was born in England, and Sarah Marie Milne (née Heginbotham). He grew up at Henley House School, 6/7 Mortimer Road (now Crescent), Kilburn, a small independent school run by his father. One of his teachers was H. G. Wells, who taught there in 1889–90. Milne attended Westminster School and Trinity College, Cambridge, where he studied on a mathematics scholarship, graduating with a B.A. in Mathematics in 1903. He edited and wrote for Granta, a student magazine. He collaborated with his brother Kenneth and their articles appeared over the initials AKM. Milne's work came to the attention of the leading British humour magazine Punch, where Milne was to become a contributor and later an assistant editor. Considered a talented cricket fielder, Milne played for two amateur teams that were largely composed of British writers: the Allahakbarries and the Authors XI. His teammates included fellow writers J. M. Barrie, Arthur Conan Doyle and P. G. Wodehouse. Milne joined the British Army in World War I and served as an officer in the Royal Warwickshire Regiment and later, after a debilitating illness, the Royal Corps of Signals. He was commissioned into the 4th Battalion, Royal Warwickshire Regiment, on 1 February 1915 as a second lieutenant (on probation). His commission was confirmed on 20 December 1915. 
On 7 July 1916, he was injured in the Battle of the Somme and invalided back to England. Having recuperated, he was recruited into Military Intelligence to write propaganda articles for MI7 (b) between 1916 and 1918. He was discharged on 14 February 1919, and settled in Mallord Street, Chelsea. He relinquished his commission on 19 February 1920, retaining the rank of lieutenant. After the war, he wrote a denunciation of war titled Peace with Honour (1934), which he retracted somewhat with 1940's War with Honour. During World War II, Milne was one of the most prominent critics of fellow English writer (and Authors XI cricket teammate) P. G. Wodehouse, who was captured at his country home in France by the Nazis and imprisoned for a year. Wodehouse made radio broadcasts about his internment, which were broadcast from Berlin. Although the light-hearted broadcasts made fun of the Germans, Milne accused Wodehouse of committing an act of near treason by cooperating with his country's enemy. Wodehouse got some revenge on his former friend (e.g. in The Mating Season) by creating fatuous parodies of the Christopher Robin poems in some of his later stories, and claiming that Milne "was probably jealous of all other writers.... But I loved his stuff." Milne married Dorothy "Daphne" de Sélincourt (1890–1971) in 1913 and their son Christopher Robin Milne was born in 1920. In 1925, Milne bought a country home, Cotchford Farm, in Hartfield, East Sussex. During World War II, Milne was a captain in the British Home Guard in Hartfield & Forest Row, insisting on being plain "Mr. Milne" to the members of his platoon. He retired to the farm after a stroke and brain surgery in 1952 left him an invalid, and by August 1953, "he seemed very old and disenchanted." Milne died in January 1956, aged 74. Literary career 1903 to 1925 After graduating from Cambridge University in 1903, A. A. 
Milne contributed humorous verse and whimsical essays to Punch, joining the staff in 1906 and becoming an assistant editor. During this period he published 18 plays and three novels, including the murder mystery The Red House Mystery (1922). His son was born in August 1920 and in 1924 Milne produced a collection of children's poems, When We Were Very Young, which were illustrated by Punch staff cartoonist E. H. Shepard. A collection of short stories for children, A Gallery of Children, and other stories that became part of the Winnie-the-Pooh books were first published in 1925. Milne was an early screenwriter for the nascent British film industry, writing four stories filmed in 1920 for the company Minerva Films (founded in 1920 by the actor Leslie Howard and his friend and story editor Adrian Brunel). These were The Bump, starring C. Aubrey Smith; Twice Two; Five Pound Reward; and Bookworms. Some of these films survive in the archives of the British Film Institute. Milne had met Howard when the actor starred in Milne's play Mr Pim Passes By in London. Looking back on this period (in 1926), Milne observed that when he told his agent that he was going to write a detective story, he was told that what the country wanted from a "Punch humorist" was a humorous story; when two years later he said he was writing nursery rhymes, his agent and publisher were convinced he should write another detective story; and after another two years, he was being told that writing a detective story would be in the worst of taste given the demand for children's books. He concluded that "the only excuse which I have yet discovered for writing anything is that I want to write it; and I should be as proud to be delivered of a Telephone Directory con amore as I should be ashamed to create a Blank Verse Tragedy at the bidding of others." 
1926 to 1928 Milne is most famous for his two Pooh books about a boy named Christopher Robin after his son, Christopher Robin Milne (1920–1996), and various characters inspired by his son's stuffed animals, most notably the bear named Winnie-the-Pooh. Christopher Robin Milne's stuffed bear, originally named Edward, was renamed Winnie after a Canadian black bear named Winnie (after Winnipeg), which was used as a military mascot in World War I, and left to London Zoo during the war. "The Pooh" comes from a swan the young Milne named "Pooh". E. H. Shepard illustrated the original Pooh books, using his own son's teddy Growler ("a magnificent bear") as the model. The rest of Christopher Robin Milne's toys, Piglet, Eeyore, Kanga, Roo and Tigger, were incorporated into A. A. Milne's stories, and two more characters – Rabbit and Owl – were created by Milne's imagination. Christopher Robin Milne's own toys are now on display in New York, where 750,000 people visit them every year. The fictional Hundred Acre Wood of the Pooh stories derives from Five Hundred Acre Wood in Ashdown Forest in East Sussex, South East England, where the Pooh stories were set. Milne lived on the northern edge of the forest at Cotchford Farm, and took his son walking there. E. H. Shepard drew on the landscapes of Ashdown Forest as inspiration for many of the illustrations he provided for the Pooh books. The adult Christopher Robin commented: "Pooh's Forest and Ashdown Forest are identical." Popular tourist locations at Ashdown Forest include Galleon's Lap, The Enchanted Place, the Heffalump Trap and Lone Pine, Eeyore's Sad and Gloomy Place, and the wooden Pooh Bridge where Pooh and Piglet invented Poohsticks. Not yet known as Pooh, he made his first appearance in a poem, "Teddy Bear", published in Punch magazine in February 1924 and republished in When We Were Very Young. Pooh first appeared in the London Evening News on Christmas Eve, 1925, in a story called "The Wrong Sort of Bees". 
Winnie-the-Pooh was published in 1926, followed by The House at Pooh Corner in 1928. A second collection of nursery rhymes, Now We Are Six, was published in 1927. All four books were illustrated by E. H. Shepard. Milne also published four plays in this period. He also "gallantly stepped forward" to contribute a quarter of the costs of dramatising P. G. Wodehouse's A Damsel in Distress. The World of Pooh won the Lewis Carroll Shelf Award in 1958. 1929 onwards The success of his children's books was to become a source of considerable annoyance to Milne, whose self-avowed aim was to write whatever he pleased and who had, until then, found a ready audience for each change of direction: he had freed pre-war Punch from its ponderous facetiousness; he had made a considerable reputation as a playwright (like his idol J. M. Barrie) on both sides of the Atlantic; he had produced a witty piece of detective writing in The Red House Mystery (although this was severely criticised by Raymond Chandler for the implausibility of its plot in his essay The Simple Art of Murder in the eponymous collection that appeared in 1950). But once Milne had, in his own words, "said goodbye to all that in 70,000 words" (the approximate length of his four principal children's books), he had no intention of producing any reworkings lacking in originality, given that one of the sources of inspiration, his son, was growing older. Another reason Milne stopped writing children's books, and especially about Winnie-the-Pooh, was that he felt "amazement and disgust" over the fame his son was exposed to, and said that "I feel that the legal Christopher Robin has already had more publicity than I want for him. I do not want CR Milne to ever wish that his name were Charles Robert." 
In his literary home, Punch, where the When We Were Very Young verses had first appeared, Methuen continued to publish whatever Milne wrote, including the long poem "The Norman Church" and an assembly of articles entitled Year In, Year Out (which Milne likened to a benefit night for the author). In 1930, Milne adapted Kenneth Grahame's novel The Wind in the Willows for the stage as Toad of Toad Hall. The title was an implicit admission that such chapters as Chapter 7, "The Piper at the Gates of Dawn," could not survive translation to the theatre. A special introduction written by Milne is included in some editions of Grahame's novel. Milne and his wife became estranged from their son, who came to resent what he saw as his father's exploitation of his childhood and came to hate the books that had thrust him into the public eye. Christopher's marriage to his first cousin, Lesley de Sélincourt, distanced him still further from his parents – Lesley's father and Christopher's mother had not spoken to each other for 30 years. Death and legacy Commemoration A. A. Milne died at his home in Hartfield, Sussex, on 31 January 1956, nearly two weeks after his 74th birthday. After a memorial service in London, his ashes were scattered in a crematorium's memorial garden in Brighton. The rights to A. A. Milne's Pooh books were left to four beneficiaries: his family, the Royal Literary Fund, Westminster School and the Garrick Club. After Milne's death, his widow sold her rights to the Pooh characters to Stephen Slesinger, whose widow sold the rights after Slesinger's death to the Walt Disney Company, which has made many Pooh cartoon movies, a Disney Channel television show, as well as Pooh-related merchandise. In 2001, the other beneficiaries sold their interest in the estate to the Disney Corporation for $350m. Previously Disney had been paying twice-yearly royalties to these beneficiaries. The estate of E. H. 
Shepard also received a sum in the deal. The UK copyright on the text of the original Winnie the Pooh books expires on 1 January 2027, at the beginning of the year after the 70th anniversary of the author's death (PMA-70); it has already expired in those countries with a PMA-50 rule. This applies to all of Milne's works except those first published posthumously. The illustrations in the Pooh books will remain under copyright until the same period has passed after the illustrator's death; in the UK, this will be on 1 January 2047. In the United States, copyright will not expire until 95 years after publication for each of Milne's books first published before 1978, but this includes the illustrations. In 2008, a collection of original illustrations featuring Winnie-the-Pooh and his animal friends sold for more than £1.2 million at auction at Sotheby's, London. Forbes magazine ranked Winnie the Pooh the most valuable fictional character in 2002; Winnie the Pooh merchandising products alone had annual sales of more than $5.9 billion. In 2005, Winnie the Pooh generated $6 billion, a figure surpassed only by Mickey Mouse. A memorial plaque in Ashdown Forest, unveiled by Christopher Robin in 1979, commemorates the work of A. A. Milne and Shepard in creating the world of Pooh. Milne once wrote of Ashdown Forest: "In that enchanted place on the top of the forest a little boy and his bear will always be playing." In 2003, Winnie the Pooh was listed at number 7 in the BBC's poll The Big Read, which determined the UK's "best-loved novels" of all time. In 2006, Winnie the Pooh received a star on the Hollywood Walk of Fame, marking the 80th birthday of Milne's creation. That same year a UK poll saw Winnie the Pooh voted onto the list of icons of England. Marking the 90th anniversary of Milne's creation of the character, and the 90th birthday of Elizabeth II, in 2016 a new story saw Winnie the Pooh meet the Queen at Buckingham Palace. 
The illustrated and audio adventure is titled Winnie-the-Pooh Meets the Queen, and is narrated by the actor Jim Broadbent. Also in 2016, a new character, a Penguin, was unveiled in The Best Bear in All the World, inspired by a long-lost photograph of Milne and his son Christopher with a toy penguin. Several of Milne's children's poems were set to music by the composer Harold Fraser-Simson. His poems have been parodied many times, including in the books When We Were Rather Older and Now We Are Sixty. The 1963 film The King's Breakfast was based on Milne's poem of the same name. The Pooh books were used as the basis for two academic satires by Frederick C. Crews: The Pooh Perplex (1963–4) and Postmodern Pooh (2002). An exhibition entitled "Winnie-the-Pooh: Exploring a Classic" appeared at the V&A from 9 December 2017 to 8 April 2018. An elementary school in Houston, Texas, United States, operated by the Houston Independent School District (HISD), is named after Milne. The school, A. A. Milne Elementary School in Brays Oaks, opened in 1991. Archive The bulk of A. A. Milne's papers are housed at the Harry Ransom Center at the University of Texas at Austin. The collection, established at the center in 1964, consists of manuscript drafts and fragments for over 150 of Milne's works, as well as correspondence, legal documents, genealogical records, and some personal effects. The library division holds several books formerly belonging to Milne and his wife Dorothy. The Harry Ransom Center also has small collections of correspondence from Christopher Robin Milne and Milne's frequent illustrator Ernest Shepard. The original manuscripts for Winnie the Pooh and The House at Pooh Corner are archived separately at Trinity College Library, Cambridge. 
Religious views Milne did not speak out much on the subject of religion, although he used religious terms to explain his decision, while remaining a pacifist, to join the British Home Guard: "In fighting Hitler," he wrote, "we are truly fighting the Devil, the Anti-Christ ... Hitler was a crusader against God." His best known comment on the subject was recalled on his death: He wrote in the poem "Explained": He also wrote in the poem "Vespers": Works Novels Lovers in London (1905; some consider this more of a short story collection; Milne did not like it and considered The Day's Play his first book) Once on a Time (1917) Mr. Pim (1921) (A novelisation of his 1919 play Mr. Pim Passes By) The Red House Mystery (1922). Serialised: London (Daily News), serialised daily from 3 to 28 August 1921 Two People (1931) (Inside jacket claims this is Milne's first attempt at a novel.) Four Days' Wonder (1933) Chloe Marr (1946) Non-fiction Peace With Honour (1934) It's Too Late Now: The Autobiography of a Writer (1939) War With Honour (1940) War Aims Unlimited (1941) Year In, Year Out (1952) (illustrated by E. H. Shepard) Punch articles The Day's Play (1910) The Holiday Round (1912) Once a Week (1914) The Sunny Side (1921) Those Were the Days (1929) [The four volumes above, compiled] Newspaper articles and book introductions The Chronicles of Clovis by "Saki" (1911) [Introduction to] Not That It Matters (1919) If I May (1920) By Way of Introduction (1929) "Women and Children First!", John Bull, 10 November 1934 It Depends on the Book (1943, in September issue of Red Cross Newspaper The Prisoner of War) Story collections for children A Gallery of Children (1925) Winnie-the-Pooh (1926) (illustrated by Ernest H. Shepard) The House at Pooh Corner (1928) (illustrated by E. H. Shepard) Short Stories Poetry collections for children When We Were Very Young (1924) (illustrated by E. H. Shepard) Now We Are Six (1927) (illustrated by E. H. 
Shepard) Story collections The Secret and other stories (1929) The Birthday Party (1948) A Table Near the Band (1950) Poetry When We Were Very Young (1924) (illustrated by E. H. Shepard) For the Luncheon Interval (1925) [poems from Punch] Now We Are Six (1927) (illustrated by E. H. Shepard) Behind the Lines (1940) The Norman Church (1948) Screenplays and plays Wurzel-Flummery (1917) Belinda (1918) The Boy Comes Home (1918) Make-Believe (1918) (children's play) The Camberley Triangle (1919) Mr. Pim Passes By (1919) The Red Feathers (1920) The Romantic Age (1920) The Stepmother (1920) The Truth About Blayds (1920) The Bump (1920, Minerva Films), starring C. Aubrey Smith and Faith Celli Twice Two (1920, Minerva Films) Five Pound Reward (1920, Minerva Films) Bookworms (1920, Minerva Films) The Great Broxopp (1921) The Dover Road (1921) The Lucky One (1922) The Truth About Blayds (1922) The Artist: A Duologue (1923) Give Me Yesterday (1923) (a.k.a. Success in the UK) Ariadne (1924) The Man in the Bowler Hat: A Terribly Exciting Affair (1924) To Have the Honour (1924) Portrait of a Gentleman in Slippers (1926) Success (1926) Miss Marlow at Play (1927) Winnie the Pooh. Written specially by Milne for a 'Winnie the Pooh Party' in aid of the National Mother-Saving Campaign, and performed once at Seaford House on 17 March 1928 The Fourth Wall or The Perfect Alibi (1928) (later adapted for the film Birds of Prey (1930), directed by Basil Dean) The Ivory Door (1929) Toad of Toad Hall (1929) (adaptation of The Wind in the Willows) Michael and Mary (1930) Other People's Lives (1933) (a.k.a. They Don't Mean Any Harm) Miss Elizabeth Bennet (1936) [based on Pride and Prejudice] Sarah Simple (1937) Gentleman Unknown (1938) The General Takes Off His Helmet (1939) in The Queen's Book of the Red Cross The Ugly Duckling (1941) Before the Flood (1951). Portrayal Milne is portrayed by Domhnall Gleeson in Goodbye Christopher Robin, a 2017 film. 
In the 2018 fantasy film Christopher Robin, an extension of the Disney Winnie the Pooh franchise, Tristan Sturrock plays A.A. Milne. References Further reading Thwaite, Ann. A.A. Milne: His Life. London: Faber and Faber, 1990. Toby, Marlene. A.A. Milne, Author of Winnie-the-Pooh. Chicago: Children's Press, 1995. External links A. A. Milne Papers at the Harry Ransom Center Works by A. A. Milne at BiblioWiki (Canada) includes the complete text of the four Pooh books Portraits of A. A. Milne in the National Portrait Gallery Essays by Milne at Quotidiana.org Milne extract in The Guardian Profile at Just-Pooh.com A. A. Milne at poeticous.com AA Milne | Books | The Guardian Finding aid to the A.A. Milne letters at Columbia University Rare Book & Manuscript Library 1882 births 1956 deaths English people of Scottish descent People from Hampstead People from Kilburn, London 20th-century British dramatists and playwrights 20th-century British short story writers 20th-century English novelists 20th-century English poets Alumni of Trinity College, Cambridge British Army personnel of World War I British Home Guard officers Royal Warwickshire Fusiliers officers English children's writers Members of the Detection Club People educated at Westminster School, London Punch (magazine) people English male poets Winnie-the-Pooh Writers from London English male novelists Children's poets Royal Corps of Signals officers Military personnel from London
925
https://en.wikipedia.org/wiki/Asociaci%C3%B3n%20Alumni
Asociación Alumni
Asociación Alumni, usually just Alumni, is an Argentine rugby union club located in Tortuguitas, Greater Buenos Aires. The senior squad currently competes in the Top 12, the first division of the Unión de Rugby de Buenos Aires league system. The club has ties with the former football club Alumni because both were established by Buenos Aires English High School students. History Background The first club with the name "Alumni" played association football, having been founded in 1898 by students of Buenos Aires English High School (BAEHS) along with director Alexander Watson Hutton. Originally under the name "English High School A.C.", the team would later be obliged by the Association to change its name, and therefore "Alumni" was chosen, following a proposal by Carlos Bowers, a former student of the school. Alumni was the most successful team during the first years of Argentine football, winning 10 of the 14 league championships contested. Alumni is still considered the first great football team in the country. Alumni was reorganised in 1908, "in order to encourage people to practise all kinds of sports, especially football". This was the last attempt to develop Alumni as a sports club rather than just a football team, as Lomas, Belgrano and Quilmes had successfully done in the past, but the efforts were not enough. Alumni played its last game in 1911 and was definitively dissolved on April 24, 1913. Rebirth through rugby In 1951, two guards of the BAEHS, Daniel Ginhson (also a former player of Buenos Aires F.C.) and Guillermo Cubelli, supported by the school's alumni and the fathers of its students, decided to establish a club focused exclusively on rugby union. Former players of the Alumni football club still alive, and descendants of players already dead, gave their permission to use the name "Alumni". 
On December 13, in a meeting presided over by Carlos Bowers himself (who had proposed the name "Alumni" for the original football team 50 years before), the club was officially established under the name "Asociación Juvenil Alumni", also adopting the same colors as its predecessor. The team achieved good results, and in 1960 the club fielded a team that won the third division of the Buenos Aires league, reaching the second division. Since then, Alumni has played at the highest level of Argentine rugby, and its rivalry with Belgrano Athletic Club is one of the fiercest local derbies in Buenos Aires. Alumni would later climb to the first division, winning 5 titles: 4 consecutive between 1989 and 1992, and another in 2001. In 2002, Alumni won its first Nacional de Clubes title, defeating Jockey Club de Rosario 23–21 in the final. Players Current roster As of January 2018: Federico Lucca Gaspar Baldunciel Guido Cambareri Iñaki Etchegaray Bernardo Quaranta Tobias Moyano Mariano Romanini Santiago Montagner Tomas Passerotti Lucas Frana Luca Sabato Franco Batezzatti Franco Sabato Rafael Desanto Nito Provenzano Tomas Bivort Juan P. Ceraso Santiago Alduncin Juan P. Anderson Lucas Magnasco Joaquin Diaz Luzzi Felipe Martignone Tomas Corneille Honours Nacional de Clubes (1): 2002 Torneo de la URBA (6): 1989, 1990, 1991, 1992, 2001, 2018 See also Buenos Aires English High School Alumni Athletic Club References External links Rugby clubs established in 1951 1951 establishments in Argentina
928
https://en.wikipedia.org/wiki/Axiom
Axiom
An axiom, postulate, or assumption is a statement that is taken to be true, to serve as a premise or starting point for further reasoning and arguments. The word comes from the Ancient Greek word (axíōma), meaning 'that which is thought worthy or fit' or 'that which commends itself as evident'. The term has subtle differences in definition when used in the context of different fields of study. As defined in classic philosophy, an axiom is a statement that is so evident or well-established that it is accepted without controversy or question. As used in modern logic, an axiom is a premise or starting point for reasoning. As used in mathematics, the term axiom is used in two related but distinguishable senses: "logical axioms" and "non-logical axioms". Logical axioms are usually statements that are taken to be true within the system of logic they define and are often shown in symbolic form (e.g., (A and B) implies A), while non-logical axioms (e.g., a + b = b + a) are actually substantive assertions about the elements of the domain of a specific mathematical theory (such as arithmetic). When used in the latter sense, "axiom", "postulate", and "assumption" may be used interchangeably. In most cases, a non-logical axiom is simply a formal logical expression used in deduction to build a mathematical theory, and might or might not be self-evident in nature (e.g., the parallel postulate in Euclidean geometry). To axiomatize a system of knowledge is to show that its claims can be derived from a small, well-understood set of sentences (the axioms), and there may be multiple ways to axiomatize a given mathematical domain. Any axiom is a statement that serves as a starting point from which other statements are logically derived. Whether it is meaningful (and, if so, what it means) for an axiom to be "true" is a subject of debate in the philosophy of mathematics. 
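The two senses can be set side by side, taking the propositional tautology mentioned in the text as the logical axiom, and commutativity of addition (a standard axiom of arithmetic, used here as a representative example) as the non-logical one:

```latex
\begin{align*}
&\text{Logical axiom (true under every interpretation of } A \text{ and } B\text{):}\\
&\qquad (A \land B) \to A\\[4pt]
&\text{Non-logical axiom (a substantive claim about a particular domain, here arithmetic):}\\
&\qquad \forall a\,\forall b\;\, (a + b = b + a)
\end{align*}
```

The first holds in any system of propositional logic regardless of subject matter; the second constrains what the symbols of arithmetic may mean.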
Etymology The word axiom comes from the Greek word (axíōma), a verbal noun from the verb (axioein), meaning "to deem worthy", but also "to require", which in turn comes from (áxios), meaning "being in balance", and hence "having (the same) value (as)", "worthy", "proper". Among the ancient Greek philosophers an axiom was a claim which could be seen to be self-evidently true without any need for proof. The root meaning of the word postulate is to "demand"; for instance, Euclid demands that one agree that some things can be done (e.g., any two points can be joined by a straight line). Ancient geometers maintained some distinction between axioms and postulates. While commenting on Euclid's books, Proclus remarks that "Geminus held that this [4th] Postulate should not be classed as a postulate but as an axiom, since it does not, like the first three Postulates, assert the possibility of some construction but expresses an essential property." Boethius translated 'postulate' as petitio and called the axioms notiones communes but in later manuscripts this usage was not always strictly kept. Historical development Early Greeks The logico-deductive method whereby conclusions (new knowledge) follow from premises (old knowledge) through the application of sound arguments (syllogisms, rules of inference) was developed by the ancient Greeks, and has become the core principle of modern mathematics. Tautologies excluded, nothing can be deduced if nothing is assumed. Axioms and postulates are thus the basic assumptions underlying a given body of deductive knowledge. They are accepted without demonstration. All other assertions (theorems, in the case of mathematics) must be proven with the aid of these basic assumptions. However, the interpretation of mathematical knowledge has changed from ancient times to the modern, and consequently the terms axiom and postulate hold a slightly different meaning for the present day mathematician, than they did for Aristotle and Euclid. 
The ancient Greeks considered geometry as just one of several sciences, and held the theorems of geometry on par with scientific facts. As such, they developed and used the logico-deductive method as a means of avoiding error, and for structuring and communicating knowledge. Aristotle's Posterior Analytics is a definitive exposition of the classical view. An "axiom", in classical terminology, referred to a self-evident assumption common to many branches of science. A good example would be the assertion that When an equal amount is taken from equals, an equal amount results. At the foundation of the various sciences lay certain additional hypotheses that were accepted without proof. Such a hypothesis was termed a postulate. While the axioms were common to many sciences, the postulates of each particular science were different. Their validity had to be established by means of real-world experience. Aristotle warns that the content of a science cannot be successfully communicated if the learner is in doubt about the truth of the postulates. The classical approach is well-illustrated by Euclid's Elements, where a list of postulates is given (common-sensical geometric facts drawn from our experience), followed by a list of "common notions" (very basic, self-evident assertions). Postulates It is possible to draw a straight line from any point to any other point. It is possible to extend a line segment continuously in both directions. It is possible to describe a circle with any center and any radius. It is true that all right angles are equal to one another. ("Parallel postulate") It is true that, if a straight line falling on two straight lines make the interior angles on the same side less than two right angles, the two straight lines, if produced indefinitely, intersect on that side on which are the angles less than the two right angles. Common notions Things which are equal to the same thing are also equal to one another. 
If equals are added to equals, the wholes are equal. If equals are subtracted from equals, the remainders are equal. Things which coincide with one another are equal to one another. The whole is greater than the part. Modern development A lesson learned by mathematics in the last 150 years is that it is useful to strip the meaning away from the mathematical assertions (axioms, postulates, propositions, theorems) and definitions. One must concede the need for primitive notions, or undefined terms or concepts, in any study. Such abstraction or formalization makes mathematical knowledge more general, capable of multiple different meanings, and therefore useful in multiple contexts. Alessandro Padoa, Mario Pieri, and Giuseppe Peano were pioneers in this movement. Structuralist mathematics goes further, and develops theories and axioms (e.g. field theory, group theory, topology, vector spaces) without any particular application in mind. The distinction between an "axiom" and a "postulate" disappears. The postulates of Euclid are profitably motivated by saying that they lead to a great wealth of geometric facts. The truth of these complicated facts rests on the acceptance of the basic hypotheses. However, by throwing out Euclid's fifth postulate, one can get theories that have meaning in wider contexts (e.g., hyperbolic geometry). As such, one must simply be prepared to use labels such as "line" and "parallel" with greater flexibility. The development of hyperbolic geometry taught mathematicians that it is useful to regard postulates as purely formal statements, and not as facts based on experience. When mathematicians employ the field axioms, the intentions are even more abstract. The propositions of field theory do not concern any one particular application; the mathematician now works in complete abstraction. There are many examples of fields; field theory gives correct knowledge about them all. 
It is not correct to say that the axioms of field theory are "propositions that are regarded as true without proof." Rather, the field axioms are a set of constraints. If any given system of addition and multiplication satisfies these constraints, then one is in a position to instantly know a great deal of extra information about this system. Modern mathematics formalizes its foundations to such an extent that mathematical theories can be regarded as mathematical objects, and mathematics itself can be regarded as a branch of logic. Frege, Russell, Poincaré, Hilbert, and Gödel are some of the key figures in this development. Another lesson learned in modern mathematics is to examine purported proofs carefully for hidden assumptions. In the modern understanding, a set of axioms is any collection of formally stated assertions from which other formally stated assertions follow – by the application of certain well-defined rules. In this view, logic becomes just another formal system. A set of axioms should be consistent; it should be impossible to derive a contradiction from the axioms. A set of axioms should also be non-redundant; an assertion that can be deduced from other axioms need not be regarded as an axiom. It was the early hope of modern logicians that various branches of mathematics, perhaps all of mathematics, could be derived from a consistent collection of basic axioms. An early success of the formalist program was Hilbert's formalization of Euclidean geometry, and the related demonstration of the consistency of those axioms. In a wider context, there was an attempt to base all of mathematics on Cantor's set theory. Here, the emergence of Russell's paradox and similar antinomies of naïve set theory raised the possibility that any such system could turn out to be inconsistent. 
The formalist project suffered a decisive setback, when in 1931 Gödel showed that it is possible, for any sufficiently large set of axioms (Peano's axioms, for example), to construct a statement whose truth is independent of that set of axioms. As a corollary, Gödel proved that the consistency of a theory like Peano arithmetic is an unprovable assertion within the scope of that theory. It is reasonable to believe in the consistency of Peano arithmetic because it is satisfied by the system of natural numbers, an infinite but intuitively accessible formal system. However, at present, there is no known way of demonstrating the consistency of the modern Zermelo–Fraenkel axioms for set theory. Furthermore, using techniques of forcing (Cohen) one can show that the continuum hypothesis (Cantor) is independent of the Zermelo–Fraenkel axioms. Thus, even this very general set of axioms cannot be regarded as the definitive foundation for mathematics. Other sciences Experimental sciences, as opposed to mathematics and logic, also have general founding assertions from which a deductive reasoning can be built so as to express propositions that predict properties, either still general or much more specialized to a specific experimental context. For instance, Newton's laws in classical mechanics, Maxwell's equations in classical electromagnetism, Einstein's equation in general relativity, Mendel's laws of genetics, Darwin's law of natural selection, etc. These founding assertions are usually called principles or postulates so as to distinguish them from mathematical axioms. As a matter of fact, the role of axioms in mathematics and postulates in experimental sciences is different. In mathematics one neither "proves" nor "disproves" an axiom. A set of mathematical axioms gives a set of rules that fix a conceptual realm, in which the theorems logically follow. 
In contrast, in experimental sciences, a set of postulates must allow deducing results that match or do not match experimental results. If postulates do not allow deducing experimental predictions, they do not set a scientific conceptual framework and have to be completed or made more accurate. If the postulates allow deducing predictions of experimental results, the comparison with experiments allows falsifying the theory that the postulates establish. A theory is considered valid as long as it has not been falsified. The transition between mathematical axioms and scientific postulates is, however, always slightly blurred, especially in physics. This is due to the heavy use of mathematical tools to support the physical theories. For instance, the introduction of Newton's laws rarely establishes as a prerequisite either the Euclidean geometry or the differential calculus that they imply. This became more apparent when Albert Einstein first introduced special relativity, where the invariant quantity is no longer the Euclidean length l (defined as l^2 = x^2 + y^2 + z^2) but the Minkowski spacetime interval s (defined as s^2 = c^2 t^2 - x^2 - y^2 - z^2), and then general relativity, where flat Minkowskian geometry is replaced with pseudo-Riemannian geometry on curved manifolds. In quantum physics, two sets of postulates have coexisted for some time, which provide a very nice example of falsification. The 'Copenhagen school' (Niels Bohr, Werner Heisenberg, Max Born) developed an operational approach with a complete mathematical formalism that involves the description of a quantum system by vectors ('states') in a separable Hilbert space, and physical quantities as linear operators that act in this Hilbert space. This approach is fully falsifiable and has so far produced the most accurate predictions in physics. But it has the unsatisfactory aspect of not allowing answers to questions one would naturally ask. 
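The special-relativity point above can be illustrated numerically: under a Lorentz boost the Euclidean length of the spatial part of an event changes, while the Minkowski spacetime interval is preserved. A minimal sketch in units where c = 1, with an arbitrarily chosen event and boost velocity:

```python
import math

# Under a Lorentz boost along x with velocity v (units c = 1):
#   t' = gamma * (t - v*x),  x' = gamma * (x - v*t),  y' = y,  z' = z
def boost(t, x, y, z, v):
    g = 1.0 / math.sqrt(1.0 - v * v)   # Lorentz factor gamma
    return g * (t - v * x), g * (x - v * t), y, z

def interval(t, x, y, z):
    # Minkowski interval s^2 = t^2 - (x^2 + y^2 + z^2)
    return t * t - (x * x + y * y + z * z)

t, x, y, z = 2.0, 1.0, 0.5, 0.25           # an arbitrary event
tb, xb, yb, zb = boost(t, x, y, z, v=0.6)  # an arbitrary boost

print(interval(t, x, y, z), interval(tb, xb, yb, zb))   # equal: invariant
print(x*x + y*y + z*z, xb*xb + yb*yb + zb*zb)           # unequal: not invariant
```

The Euclidean length is the invariant of rotations; the interval is the invariant of the larger Lorentz group, which is why it replaces the Euclidean length as the founding geometric quantity.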
For this reason, another 'hidden variables' approach was developed for some time by Albert Einstein, Erwin Schrödinger, and David Bohm. It was created so as to try to give a deterministic explanation to phenomena such as entanglement. This approach assumed that the Copenhagen school description was not complete, and postulated that some yet unknown variable was to be added to the theory so as to allow answering some of the questions it does not answer (the founding elements of which were discussed as the EPR paradox in 1935). Taking these ideas seriously, John Bell derived in 1964 a prediction that would lead to different experimental results (Bell's inequalities) in the Copenhagen and the hidden-variable cases. The experiment was conducted first by Alain Aspect in the early 1980s, and the result excluded the simple hidden-variable approach (sophisticated hidden variables could still exist, but their properties would still be more disturbing than the problems they try to solve). This does not mean that the conceptual framework of quantum physics can be considered as complete now, since some open questions still exist (the limit between the quantum and classical realms, what happens during a quantum measurement, what happens in a completely closed quantum system such as the universe itself, etc.). Mathematical logic In the field of mathematical logic, a clear distinction is made between two notions of axioms: logical and non-logical (somewhat similar to the ancient distinction between "axioms" and "postulates" respectively). Logical axioms These are certain formulas in a formal language that are universally valid, that is, formulas that are satisfied by every assignment of values. Usually one takes as logical axioms at least some minimal set of tautologies that is sufficient for proving all tautologies in the language; in the case of predicate logic more logical axioms than that are required, in order to prove logical truths that are not tautologies in the strict sense. 
Examples Propositional logic In propositional logic it is common to take as logical axioms all formulae of the following forms, where φ, ψ, and χ can be any formulae of the language and where the included primitive connectives are only "¬" for negation of the immediately following proposition and "→" for implication from antecedent to consequent propositions:
1. φ → (ψ → φ)
2. (φ → (ψ → χ)) → ((φ → ψ) → (φ → χ))
3. (¬φ → ¬ψ) → (ψ → φ)
Each of these patterns is an axiom schema, a rule for generating an infinite number of axioms. For example, if A, B, and C are propositional variables, then A → (B → A) and (A → ¬B) → (C → (A → ¬B)) are both instances of axiom schema 1, and hence are axioms. It can be shown that with only these three axiom schemata and modus ponens, one can prove all tautologies of the propositional calculus. It can also be shown that no pair of these schemata is sufficient for proving all tautologies with modus ponens. Other axiom schemata involving the same or different sets of primitive connectives can be alternatively constructed. These axiom schemata are also used in the predicate calculus, but additional logical axioms are needed to include a quantifier in the calculus. First-order logic Axiom of Equality. Let L be a first-order language. For each variable x, the formula x = x is universally valid. This means that, for any variable symbol x, the formula x = x can be regarded as an axiom. Also, in this example, for this not to fall into vagueness and a never-ending series of "primitive notions", either a precise notion of what we mean by x = x (or, for that matter, "to be equal") has to be well established first, or a purely formal and syntactical usage of the symbol = has to be enforced, only regarding it as a string and only a string of symbols, and mathematical logic does indeed do that. Another, more interesting example axiom scheme, is that which provides us with what is known as Universal Instantiation: Axiom scheme for Universal Instantiation. Given a formula φ in a first-order language L, a variable x and a term t that is substitutable for x in φ, the formula ∀x φ → φ[t/x] is universally valid. 
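Because a logical axiom must be satisfied by every assignment of truth values, instances of the axiom schemata can be verified mechanically. The sketch below assumes the three standard Łukasiewicz schemata (φ → (ψ → φ); (φ → (ψ → χ)) → ((φ → ψ) → (φ → χ)); (¬φ → ¬ψ) → (ψ → φ)) and checks each over all eight truth-value assignments:

```python
from itertools import product

# Truth-table check that every instance of the three axiom schemata is a
# tautology, i.e. satisfied by every assignment of truth values.
def imp(p, q):
    """Material implication p -> q."""
    return (not p) or q

def schema1(p, q, r):
    return imp(p, imp(q, p))

def schema2(p, q, r):
    return imp(imp(p, imp(q, r)), imp(imp(p, q), imp(p, r)))

def schema3(p, q, r):
    return imp(imp(not p, not q), imp(q, p))

for schema in (schema1, schema2, schema3):
    assert all(schema(p, q, r) for p, q, r in product((True, False), repeat=3))
print("all three schemata are tautologies")
```

An exhaustive check over assignments is exactly what "universally valid" means for propositional formulas; it does not, of course, replace the deductive role the schemata play together with modus ponens.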
Where φ[t/x] stands for the formula φ with the term t substituted for x. (See Substitution of variables.) In informal terms, this example allows us to state that, if we know that a certain property P holds for every x and that t stands for a particular object in our structure, then we should be able to claim P(t). Again, we are claiming that the formula ∀x φ → φ[t/x] is valid, that is, we must be able to give a "proof" of this fact, or more properly speaking, a metaproof. These examples are metatheorems of our theory of mathematical logic since we are dealing with the very concept of proof itself. Aside from this, we can also have Existential Generalization: Axiom scheme for Existential Generalization. Given a formula φ in a first-order language L, a variable x and a term t that is substitutable for x in φ, the formula φ[t/x] → ∃x φ is universally valid. Non-logical axioms Non-logical axioms are formulas that play the role of theory-specific assumptions. Reasoning about two different structures, for example, the natural numbers and the integers, may involve the same logical axioms; the non-logical axioms aim to capture what is special about a particular structure (or set of structures, such as groups). Thus non-logical axioms, unlike logical axioms, are not tautologies. Another name for a non-logical axiom is postulate. Almost every modern mathematical theory starts from a given set of non-logical axioms, and it was thought that in principle every theory could be axiomatized in this way and formalized down to the bare language of logical formulas. Non-logical axioms are often simply referred to as axioms in mathematical discourse. This does not mean that it is claimed that they are true in some absolute sense. 
For example, in some groups, the group operation is commutative, and this can be asserted with the introduction of an additional axiom, but without this axiom, we can do quite well developing (the more general) group theory, and we can even take its negation as an axiom for the study of non-commutative groups. Thus, an axiom is an elementary basis for a formal logic system that together with the rules of inference define a deductive system. Examples This section gives examples of mathematical theories that are developed entirely from a set of non-logical axioms (axioms, henceforth). A rigorous treatment of any of these topics begins with a specification of these axioms. Basic theories, such as arithmetic, real analysis and complex analysis are often introduced non-axiomatically, but implicitly or explicitly there is generally an assumption that the axioms being used are the axioms of Zermelo–Fraenkel set theory with choice, abbreviated ZFC, or some very similar system of axiomatic set theory like Von Neumann–Bernays–Gödel set theory, a conservative extension of ZFC. Sometimes slightly stronger theories, such as Morse–Kelley set theory or set theory with a strongly inaccessible cardinal allowing the use of a Grothendieck universe, are used, but in fact, most mathematicians can actually prove all they need in systems weaker than ZFC, such as second-order arithmetic. The study of topology in mathematics extends across point-set topology, algebraic topology, differential topology, and all the related paraphernalia, such as homology theory and homotopy theory. The development of abstract algebra brought with it group theory, rings, fields, and Galois theory. This list could be expanded to include most fields of mathematics, including measure theory, ergodic theory, probability, representation theory, and differential geometry. Arithmetic The Peano axioms are the most widely used axiomatization of first-order arithmetic. 
They are a set of axioms strong enough to prove many important facts about number theory and they allowed Gödel to establish his famous second incompleteness theorem. We have a language where 0 is a constant symbol and S is a unary function, and the following axioms:
1. S(x) ≠ 0 for every x.
2. S(x) = S(y) → x = y for every x and y.
3. (φ(0) ∧ ∀x (φ(x) → φ(S(x)))) → ∀x φ(x) for any formula φ with one free variable.
The standard structure is ⟨N, 0, S⟩ where N is the set of natural numbers, S is the successor function and 0 is naturally interpreted as the number 0. Euclidean geometry Probably the oldest, and most famous, list of axioms is the 4 + 1 postulates of Euclid's plane geometry. The axioms are referred to as "4 + 1" because for nearly two millennia the fifth (parallel) postulate ("through a point outside a line there is exactly one parallel") was suspected of being derivable from the first four. Ultimately, the fifth postulate was found to be independent of the first four. One can assume that exactly one parallel through a point outside a line exists, or that infinitely many exist. This choice gives us two alternative forms of geometry in which the interior angles of a triangle add up to exactly 180 degrees or less, respectively, and are known as Euclidean and hyperbolic geometries. If one also removes the second postulate ("a line can be extended indefinitely") then elliptic geometry arises, where there is no parallel through a point outside a line, and in which the interior angles of a triangle add up to more than 180 degrees. Real analysis The objects of study lie within the domain of real numbers. The real numbers are uniquely picked out (up to isomorphism) by the properties of a Dedekind complete ordered field, meaning that any nonempty set of real numbers with an upper bound has a least upper bound. However, expressing these properties as axioms requires the use of second-order logic. 
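The standard structure for the Peano axioms above can be toyed with in code: modeling numbers as nested applications of a successor constructor makes the first two axioms hold by construction (the names ZERO, S, and to_int are illustrative, not from any library):

```python
# Natural numbers as nested successor applications: a finite sketch of the
# standard structure for the Peano axioms.
ZERO = ()

def S(n):
    """Successor: wrap the term in one more layer."""
    return (n,)

def to_int(n):
    """Interpret a successor term in the standard model (count the layers)."""
    count = 0
    while n != ZERO:
        n, count = n[0], count + 1
    return count

two = S(S(ZERO))
three = S(two)
assert S(two) != ZERO       # S(x) is never 0: a wrapped term is never empty
assert S(two) != S(three)   # injectivity: distinct terms have distinct successors
print(to_int(three))        # 3
```

Only the induction schema resists such a finite check, since it quantifies over all formulas rather than over individual numbers.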
The Löwenheim–Skolem theorems tell us that if we restrict ourselves to first-order logic, any axiom system for the reals admits other models, including both models that are smaller than the reals and models that are larger. Some of the latter are studied in non-standard analysis. Role in mathematical logic Deductive systems and completeness A deductive system consists of a set of logical axioms, a set of non-logical axioms, and a set of rules of inference. A desirable property of a deductive system is that it be complete. A system is said to be complete if, for all formulas φ that are logical consequences of the set of axioms Σ, there actually exists a deduction of φ from Σ. This is sometimes expressed as "everything that is true is provable", but it must be understood that "true" here means "made true by the set of axioms", and not, for example, "true in the intended interpretation". Gödel's completeness theorem establishes the completeness of a certain commonly used type of deductive system. Note that "completeness" has a different meaning here than it does in the context of Gödel's first incompleteness theorem, which states that no recursive, consistent set of non-logical axioms of the Theory of Arithmetic is complete, in the sense that there will always exist an arithmetic statement φ such that neither φ nor ¬φ can be proved from the given set of axioms. There is thus, on the one hand, the notion of completeness of a deductive system and on the other hand that of completeness of a set of non-logical axioms. The completeness theorem and the incompleteness theorem, despite their names, do not contradict one another. Further discussion Early mathematicians regarded axiomatic geometry as a model of physical space, and obviously, there could only be one such model. 
The idea that alternative mathematical systems might exist was very troubling to mathematicians of the 19th century and the developers of systems such as Boolean algebra made elaborate efforts to derive them from traditional arithmetic. Galois showed just before his untimely death that these efforts were largely wasted. Ultimately, the abstract parallels between algebraic systems were seen to be more important than the details, and modern algebra was born. In the modern view, axioms may be any set of formulas, as long as they are not known to be inconsistent. See also Axiomatic system Dogma First principle, axiom in science and philosophy List of axioms Model theory Regulæ Juris Theorem Presupposition Physical law Principle Notes References Further reading Mendelson, Elliot (1987). Introduction to mathematical logic. Belmont, California: Wadsworth & Brooks. External links Metamath axioms page Ancient Greek philosophy Concepts in ancient Greek metaphysics Concepts in epistemology Concepts in ethics Concepts in logic Concepts in metaphysics Concepts in the philosophy of science Deductive reasoning Formal systems History of logic History of mathematics History of philosophy History of science Intellectual history Logic Mathematical logic Mathematical terminology Philosophical terminology Reasoning
929
https://en.wikipedia.org/wiki/Alpha
Alpha
Alpha (uppercase Α, lowercase α; Ancient Greek ἄλφα, álpha) is the first letter of the Greek alphabet. In the system of Greek numerals, it has a value of one. Alpha is derived from the Phoenician letter aleph, which is the West Semitic word for "ox". Letters that arose from alpha include the Latin letter A and the Cyrillic letter А. Uses Greek In Ancient Greek, alpha was pronounced [a] and could be either phonemically long ([aː]) or short ([a]). Where there is ambiguity, long and short alpha are sometimes written with a macron and breve today: Ᾱᾱ, Ᾰᾰ. ὥρα = ὥρᾱ hōrā "a time" γλῶσσα = γλῶσσᾰ glôssa "tongue" In Modern Greek, vowel length has been lost, and all instances of alpha simply represent [a]. In the polytonic orthography of Greek, alpha, like other vowel letters, can occur with several diacritic marks: any of three accent symbols (ά, ὰ, ᾶ), and either of two breathing marks (ἁ, ἀ), as well as combinations of these. It can also combine with the iota subscript (ᾳ). Greek grammar In the Attic–Ionic dialect of Ancient Greek, long alpha fronted to [ɛː] (eta). In Ionic, the shift took place in all positions. In Attic, the shift did not take place after epsilon, iota, and rho (ε, ι, ρ; e, i, r). In Doric and Aeolic, long alpha is preserved in all positions. Doric, Aeolic, Attic chṓrā – Ionic chṓrē, "country" Doric, Aeolic phā́mā – Attic, Ionic phḗmē, "report" Privative a is the Ancient Greek prefix ἀ- or ἀν- a-, an-, added to words to negate them. It originates from the Proto-Indo-European *n̥- (syllabic nasal) and is cognate with English un-. Copulative a is the Greek prefix ἁ- or ἀ- ha-, a-. It comes from Proto-Indo-European *sm̥. Mathematics and science The letter alpha represents various concepts in physics and chemistry, including alpha radiation, angular acceleration, alpha particles, alpha carbon and the strength of electromagnetic interaction (as the fine-structure constant). Alpha also stands for the thermal expansion coefficient of a compound in physical chemistry. 
It is also commonly used in mathematics in algebraic solutions representing quantities such as angles. Furthermore, in statistics, the letter alpha is used to denote the significance level (the area underneath the tail of a normal curve) when testing null and alternative hypotheses. In ethology, it is used to name the dominant individual in a group of animals. In aerodynamics, the letter is used as a symbol for the angle of attack of an aircraft and the word "alpha" is used as a synonym for this property. In mathematical logic, α is sometimes used as a placeholder for ordinal numbers. The proportionality operator "∝" (in Unicode: U+221D) is sometimes mistaken for alpha. The uppercase letter alpha is not generally used as a symbol because it tends to be rendered identically to the uppercase Latin A. International Phonetic Alphabet In the International Phonetic Alphabet, the letter ɑ, which looks similar to the lower-case alpha, represents the open back unrounded vowel. History and symbolism Origin The Phoenician alphabet was adopted for Greek in the early 8th century BC, perhaps in Euboea. The majority of the letters of the Phoenician alphabet were adopted into Greek with much the same sounds as they had had in Phoenician, but ʼāleph, the Phoenician letter representing the glottal stop [ʔ], was adopted as representing the vowel [a]; similarly, hē and ʽayin are Phoenician consonants that became Greek vowels, epsilon and omicron, respectively. Plutarch Plutarch, in Moralia, presents a discussion on why the letter alpha stands first in the alphabet. Ammonius asks Plutarch what he, being a Boeotian, has to say for Cadmus, the Phoenician who reputedly settled in Thebes and introduced the alphabet to Greece, placing alpha first because it is the Phoenician name for ox—which, unlike Hesiod, the Phoenicians considered not the second or third, but the first of all necessities. "Nothing at all," Plutarch replied. 
He then added that he would rather be assisted by Lamprias, his own grandfather, than by Dionysus' grandfather, i.e. Cadmus. For Lamprias had said that the first articulate sound made is "alpha", because it is very plain and simple—the air coming off the mouth does not require any motion of the tongue—and therefore this is the first sound that children make. According to Plutarch's natural order of attribution of the vowels to the planets, alpha was connected with the Moon. Alpha and Omega As the first letter of the alphabet, Alpha as a Greek numeral came to represent the number 1. Therefore, Alpha, both as a symbol and term, is used to refer to the "first", or "primary", or "principal" (most significant) occurrence or status of a thing. The New Testament has God declaring himself to be the "Alpha and Omega, the beginning and the end, the first and the last." (Revelation 22:13, KJV, and see also 1:8). Consequently, the term "alpha" has also come to be used to denote "primary" position in social hierarchy, examples being "alpha males" or pack leaders. Computer encodings Greek alpha / Coptic alfa For accented Greek characters, see Greek diacritics: Computer encoding. Latin / IPA alpha Mathematical / Technical alpha References Greek letters Vowel letters
930
https://en.wikipedia.org/wiki/Alvin%20Toffler
Alvin Toffler
Alvin Toffler (October 4, 1928 – June 27, 2016) was an American writer, futurist, and businessman known for his works discussing modern technologies, including the digital revolution and the communication revolution, with emphasis on their effects on cultures worldwide. He is regarded as one of the world's outstanding futurists. Toffler was an associate editor of Fortune magazine. In his early works he focused on technology and its impact, which he termed "information overload." In 1970, his first major book about the future, Future Shock, became a worldwide best-seller and has sold over 6 million copies. He and his wife Heidi Toffler, who collaborated with him for most of his writings, moved on to examining the reaction to changes in society with another best-selling book, The Third Wave in 1980. In it, he foresaw such technological advances as cloning, personal computers, the Internet, cable television and mobile communication. His later focus, via their other best-seller, Powershift, (1990), was on the increasing power of 21st-century military hardware and the proliferation of new technologies. He founded Toffler Associates, a management consulting company, and was a visiting scholar at the Russell Sage Foundation, visiting professor at Cornell University, faculty member of the New School for Social Research, a White House correspondent, and a business consultant. Toffler's ideas and writings were a significant influence on the thinking of business and government leaders worldwide, including China's Zhao Ziyang, and AOL founder Steve Case. Early life Alvin Toffler was born on October 4, 1928, in New York City, and raised in Brooklyn. He was the son of Rose (Albaum) and Sam Toffler, a furrier, both Jewish immigrants from Poland. He had one younger sister. He was inspired to become a writer at the age of 7 by his aunt and uncle, who lived with the Tofflers. 
"They were Depression-era literary intellectuals," Toffler said, "and they always talked about exciting ideas." Toffler graduated from New York University in 1950 as an English major, though by his own account he was more focused on political activism than grades. He met his future wife, Adelaide Elizabeth Farrell (nicknamed "Heidi"), when she was starting a graduate course in linguistics. Being radical students, they decided against further graduate work and moved to the Midwest, where they married on April 29, 1950. Career Seeking experiences to write about, Alvin and Heidi Toffler spent the next five years as blue collar workers on assembly lines while studying industrial mass production in their daily work. He compared his own desire for experience to other writers, such as Jack London, who in his quest for subjects to write about sailed the seas, and John Steinbeck, who went to pick grapes with migrant workers. In their first factory jobs, Heidi became a union shop steward in the aluminum foundry where she worked. Alvin became a millwright and welder. In the evenings Alvin would write poetry and fiction, but discovered he was proficient at neither. His hands-on practical labor experience helped Alvin Toffler land a position at a union-backed newspaper, a transfer to its Washington bureau in 1957, then three years as a White House correspondent, covering Congress and the White House for a Pennsylvania daily newspaper. They returned to New York City in 1959 when Fortune magazine invited Alvin to become its labor columnist, later having him write about business and management. After leaving Fortune magazine in 1962, Toffler began a freelance career, writing long form articles for scholarly journals and magazines. His 1964 Playboy interviews with Russian novelist Vladimir Nabokov and Ayn Rand were considered among the magazine's best. 
His interview with Rand was the first time the magazine had given such a platform to a female intellectual, leading one commentator to remark that "the real bird of paradise Toffler captured for Playboy in 1964 was Ayn Rand." Toffler was hired by IBM to conduct research and write a paper on the social and organizational impact of computers, leading to his contact with the earliest computer "gurus" and artificial intelligence researchers and proponents. Xerox invited him to write about its research laboratory and AT&T consulted him for strategic advice. This AT&T work led to a study of telecommunications, which advised the company's top management to break up the company more than a decade before the government forced it to do so. In the mid-1960s, the Tofflers began five years of research on what would become Future Shock, published in 1970. It has sold over 6 million copies worldwide, according to the New York Times, or over 15 million copies according to the Tofflers' Web site. Toffler coined the term "future shock" to refer to what happens to a society when change happens too fast, which results in social confusion and normal decision-making processes breaking down. The book has never been out of print and has been translated into dozens of languages. He continued the theme in The Third Wave in 1980. While he describes the first and second waves as the agricultural and industrial revolutions, the "third wave," a phrase he coined, represents the current information, computer-based revolution. He forecast the spread of the Internet and email, interactive media, cable television, cloning, and other digital advancements. He claimed that one of the side effects of the digital age has been "information overload," another term he coined. In 1990, he wrote Powershift, also with the help of his wife, Heidi. 
In 1996, with American business consultant Tom Johnson, they co-founded Toffler Associates, an advisory firm designed to implement many of the ideas the Tofflers had written on. The firm worked with businesses, NGOs, and governments in the United States, South Korea, Mexico, Brazil, Singapore, Australia, and other countries. During this period in his career, Toffler lectured worldwide, taught at several schools and met world leaders, such as Mikhail Gorbachev, along with key executives and military officials. Ideas and opinions Toffler stated many of his ideas during an interview with the Australian Broadcasting Corporation in 1998. "Society needs people who take care of the elderly and who know how to be compassionate and honest," he said. "Society needs people who work in hospitals. Society needs all kinds of skills that are not just cognitive; they're emotional, they're affectional. You can't run the society on data and computers alone." His opinions about the future of education, many of which were in Future Shock, have often been quoted. An often misattributed quote, however, is that of psychologist Herbert Gerjuoy: "Tomorrow's illiterate will not be the man who can't read; he will be the man who has not learned how to learn." Early in his career, after traveling to other countries, he became aware of the new and myriad inputs that visitors received from these other cultures. He explained during an interview that some visitors would become "truly disoriented and upset" by the strange environment, which he described as a reaction to culture shock. From that issue, he foresaw another problem for the future, when a culturally "new environment comes to you ... and comes to you rapidly." That kind of sudden cultural change within one's own country, which he felt many would not understand, would lead to a similar reaction, one of "future shock", which he wrote about in his book by that title. 
In The Third Wave, Toffler describes three types of societies, based on the concept of "waves"—each wave pushes the older societies and cultures aside. He describes the "First Wave" as the society that emerged after the agrarian revolution and replaced the first hunter-gatherer cultures. The "Second Wave," he labels society during the Industrial Revolution (ca. late 17th century through the mid-20th century). That period saw the increase of urban industrial populations, which undermined the traditional nuclear family, initiated a factory-like education system, and fostered the growth of the corporation. The "Third Wave" was a term he coined to describe the post-industrial society, which began in the late 1950s. His description of this period dovetails with other futurist writers, who also wrote about the Information Age, Space Age, Electronic Era, and Global Village, terms which highlighted a scientific-technological revolution. The Tofflers claimed to have predicted a number of geopolitical events, such as the collapse of the Soviet Union, the fall of the Berlin Wall and the future economic growth in the Asia-Pacific region. Influences and popular culture Toffler often visited with dignitaries in Asia, including China's Zhao Ziyang, Singapore's Lee Kuan Yew and South Korea's Kim Dae Jung, all of whom were influenced by his views as Asia's emerging markets increased in global significance during the 1980s and 1990s. Although they had originally censored some of his books and ideas, China's government cited him along with Franklin Roosevelt and Bill Gates as being among the Westerners who had most influenced their country. The Third Wave along with a video documentary based on it became best-sellers in China and were widely distributed to schools. The video's success inspired the marketing of videos on related themes in the late 1990s by Infowars, whose name is derived from the term coined by Toffler in the book. 
Toffler's influence on Asian thinkers was summed up in an article in Daedalus, published by the American Academy of Arts & Sciences. U.S. House Speaker Newt Gingrich publicly lauded his ideas about the future, and urged members of Congress to read Toffler's book, Creating a New Civilization (1995). Others, such as AOL founder Steve Case, cited Toffler's The Third Wave as a formative influence on his thinking, which inspired him to write The Third Wave: An Entrepreneur's Vision of the Future in 2016. Case said that Toffler was a "real pioneer in helping people, companies and even countries lean into the future." In 1980, Ted Turner founded CNN, which he said was inspired by Toffler's forecasting the end of the dominance of the three main television networks. Turner's company, Turner Broadcasting, published Toffler's Creating a New Civilization in 1995. Shortly after the book was released, former Soviet president Mikhail Gorbachev hosted the Global Governance Conference in San Francisco with the theme "Toward a New Civilization," which was attended by dozens of world figures, including the Tofflers, George H. W. Bush, Margaret Thatcher, Carl Sagan, Abba Eban and Turner with his then-wife, actress Jane Fonda. Mexican billionaire Carlos Slim was influenced by his works, and became a friend of the writer. Global marketer J.D. Power also said he was inspired by Toffler's works. Since the 1960s, people had tried to make sense of the effects of new technologies and social change, a problem which made Toffler's writings widely influential beyond scientific, economic, and public policy circles. His works and ideas have been subject to various criticisms, usually with the same argumentation used against futurology: that foreseeing the future is nigh impossible. 
Techno music pioneer Juan Atkins cites Toffler's phrase "techno rebels" in The Third Wave as inspiring him to use the word "techno" to describe the musical style he helped to create. Musician Curtis Mayfield released a disco song called "Future Shock," later covered in an electro version by Herbie Hancock. Science fiction author John Brunner wrote The Shockwave Rider, drawing on the concept of "future shock." The nightclub Toffler, in Rotterdam, is named after him. In the song "Victoria" by The Exponents, the protagonist's daily routine and cultural interests are described: "She's up in time to watch the soap operas, reads Cosmopolitan and Alvin Toffler". Critical assessment Accenture, the management consultancy firm, identified Toffler in 2002 as being among the most influential voices for business leaders, along with Bill Gates and Peter Drucker. Toffler has also been described in a Financial Times interview as the "world's most famous futurologist". In 2006, the People's Daily classed him among the 50 foreigners who shaped modern China, which one U.S. newspaper noted made him a "guru of sorts to world statesmen." Chinese Premier and General Secretary Zhao Ziyang was greatly influenced by Toffler. He convened conferences to discuss The Third Wave in the early 1980s, and in 1985 the book was the No. 2 best seller in China. Author Mark Satin characterizes Toffler as an important early influence on radical centrist political thought. Newt Gingrich became close to the Tofflers in the 1970s and said The Third Wave had immensely influenced his own thinking and was "one of the great seminal works of our time." Selected awards Toffler received several prestigious prizes and awards, including the McKinsey Foundation Book Award for Contributions to Management Literature and Officier de L'Ordre des Arts et Lettres, and appointments, including Fellow of the American Association for the Advancement of Science and the International Institute for Strategic Studies. 
In 2006, Alvin and Heidi Toffler were recipients of Brown University's Independent Award. Personal life Toffler was married to Heidi Toffler, also a writer and futurist. They lived in the Bel Air section of Los Angeles, California, and previously lived in Redding, Connecticut. The couple's only child, Karen Toffler (1954–2000), died at age 46 after more than a decade suffering from Guillain–Barré syndrome. Alvin Toffler died in his sleep on June 27, 2016, at his home in Los Angeles. No cause of death was given. He is buried at Westwood Memorial Park. Bibliography Alvin Toffler co-wrote his books with his wife Heidi.
The Culture Consumers (1964) St. Martin's Press
The Schoolhouse in the City (1968) Praeger (editors)
Future Shock (1970) Bantam Books
The Futurists (1972) Random House (editors)
Learning for Tomorrow (1974) Random House (editors)
The Eco-Spasm Report (1975) Bantam Books
The Third Wave (1980) Bantam Books
Previews & Premises (1983) William Morrow & Co
The Adaptive Corporation (1985) McGraw-Hill
Powershift: Knowledge, Wealth and Violence at the Edge of the 21st Century (1990) Bantam Books
War and Anti-War (1993) Warner Books
Creating a New Civilization (1995) Turner Pub
Revolutionary Wealth (2006) Knopf
See also Daniel Bell Norman Swan Human nature John Naisbitt References External links  – official Alvin Toffler site Toffler Associates Interview with Alvin Toffler by the World Affairs Council Discuss Alvin Toffler's Future Shock with other readers, BookTalk.org Alvin Toffler at Find a Grave Future Shock Forum 2018 Finding aid to the Alvin and Heidi Toffler papers at Columbia University. 
Rare Book & Manuscript Library
https://en.wikipedia.org/wiki/The%20Amazing%20Spider-Man
The Amazing Spider-Man
The Amazing Spider-Man is an American comic book series published by Marvel Comics, featuring the fictional superhero Spider-Man as its main protagonist. Part of the mainstream continuity of the franchise, it began publication in 1963 as a bimonthly periodical (as Amazing Fantasy had been), quickly increased to monthly, and was published continuously, with a brief interruption in 1995, until its second volume began with a new numbering order in 1999. In 2003, the series reverted to the numbering order of the first volume. The title has occasionally been published biweekly, and was published three times a month from 2008 to 2010. After DC Comics' relaunch of Action Comics and Detective Comics with new No. 1 issues in 2011, it had been the highest-numbered American comic still in circulation until it was cancelled. The title ended its 50-year run as a continuously published comic with the landmark issue #700 in December 2012. It was replaced by The Superior Spider-Man as part of the Marvel NOW! relaunch of Marvel's comic lines. Volume 3 of The Amazing Spider-Man was published in April 2014, following the conclusion of The Superior Spider-Man story arc. In late 2015, the series was relaunched with a 4th volume, following the 2015 Secret Wars event. The 5th and current volume began in 2018, as part of Marvel's Fresh Start series of comic relaunches. Publication history Writer-editor Stan Lee and artist and co-plotter Steve Ditko created the character of Spider-Man, and the pair produced 38 issues from March 1963 to July 1966. Ditko left after the 38th issue, while Lee remained as writer until issue 100. Since then, many writers and artists have taken over the monthly comic through the years, chronicling the adventures of Marvel's most identifiable hero. 
The Amazing Spider-Man has been the character's flagship series for his first fifty years in publication, and was the only monthly series to star Spider-Man until Peter Parker, The Spectacular Spider-Man, in 1976, although 1972 saw the debut of Marvel Team-Up, with the vast majority of issues featuring Spider-Man along with a rotating cast of other Marvel characters. Most of the major characters and villains of the Spider-Man saga have been introduced in Amazing, and with few exceptions, it is where most key events in the character's history have occurred. The title was published continuously until No. 441 (Nov. 1998) when Marvel Comics relaunched it as vol. 2 No. 1 (Jan. 1999), but on Spider-Man's 40th anniversary, this new title reverted to using the numbering of the original series, beginning again with issue No. 500 (Dec. 2003) and lasting until the final issue, No. 700 (Feb. 2013). 1960s Due to strong sales on the character's first appearance in Amazing Fantasy No. 15, Spider-Man was given his own ongoing series in March 1963. The initial years of the series, under Lee and Ditko, chronicled Spider-Man's nascent career as a masked superhuman vigilante alongside his civilian life as the hard-luck yet perpetually good-humored and well-meaning teenager Peter Parker. Peter balanced his career as Spider-Man with his job as a freelance photographer for The Daily Bugle under the bombastic editor-publisher J. Jonah Jameson to support himself and his frail Aunt May. At the same time, Peter dealt with public hostility towards Spider-Man and the antagonism of his classmates Flash Thompson and Liz Allan at Midtown High School, while embarking on a tentative, ill-fated romance with Jameson's secretary, Betty Brant. By focusing on Parker's everyday problems, Lee and Ditko created a groundbreakingly flawed, self-doubting superhero, and the first major teenaged superhero to be a protagonist rather than a sidekick. 
Ditko's quirky art provided a stark contrast to the more cleanly dynamic stylings of Marvel's most prominent artist, Jack Kirby, and combined with the humor and pathos of Lee's writing to lay the foundation for what became an enduring mythos. Most of Spider-Man's key villains and supporting characters were introduced during this time. Issue No. 1 (March 1963) featured the first appearances of J. Jonah Jameson and his astronaut son John Jameson, and the supervillain the Chameleon. It included the hero's first encounter with the superhero team the Fantastic Four. Issue No. 2 (May 1963) featured the first appearance of the Vulture and the Tinkerer as well as the beginning of Parker's freelance photography career at the newspaper The Daily Bugle. The Lee-Ditko era continued to usher in a significant number of villains and supporting characters, including Doctor Octopus in No. 3 (July 1963); the Sandman and Betty Brant in No. 4 (Sept. 1963); the Lizard in No. 6 (Nov. 1963); the Living Brain in No. 8 (Jan. 1964); Electro in No. 9 (March 1964); Mysterio in No. 13 (June 1964); the Green Goblin in No. 14 (July 1964); Kraven the Hunter in No. 15 (Aug. 1964); reporter Ned Leeds in No. 18 (Nov. 1964); and the Scorpion in No. 20 (Jan. 1965). The Molten Man was introduced in No. 28 (Sept. 1965), which also featured Parker's graduation from high school. Peter began attending Empire State University in No. 31 (Dec. 1965), the issue which featured the first appearances of friends and classmates Gwen Stacy and Harry Osborn. Harry's father, Norman Osborn, first appeared in No. 23 (April 1965) as a member of Jameson's country club but was neither named nor revealed as Harry's father until No. 37 (June 1966). One of the most celebrated issues of the Lee-Ditko run is No. 33 (Feb. 1966), the third part of the story arc "If This Be My Destiny...!", which features the dramatic scene of Spider-Man, through force of will and thoughts of family, escaping from being pinned by heavy machinery. 
Comics historian Les Daniels noted that "Steve Ditko squeezes every ounce of anguish out of Spider-Man's predicament, complete with visions of the uncle he failed and the aunt he has sworn to save." Peter David observed that "After his origin, this two-page sequence from Amazing Spider-Man No. 33 is perhaps the best-loved sequence from the Stan Lee/Steve Ditko era." Steve Saffel stated that the "full page Ditko image from The Amazing Spider-Man No. 33 is one of the most powerful ever to appear in the series and influenced writers and artists for many years to come," and Matthew K. Manning wrote that "Ditko's illustrations for the first few pages of this Lee story included what would become one of the most iconic scenes in Spider-Man's history." The story was chosen as No. 15 in the 100 Greatest Marvels of All Time poll of Marvel's readers in 2001. Editor Robert Greenberger wrote in his introduction to the story that "These first five pages are a modern-day equivalent to Shakespeare as Parker's soliloquy sets the stage for his next action. And with dramatic pacing and storytelling, Ditko delivers one of the great sequences in all comics." Although credited only as artist for most of his run, Ditko would eventually plot the stories as well as draw them, leaving Lee to script the dialogue. A rift between Ditko and Lee developed, and the two men were not on speaking terms long before Ditko completed his last issue, The Amazing Spider-Man No. 38 (July 1966). The exact reasons for the Ditko-Lee split have never been fully explained. Spider-Man successor artist John Romita Sr., in a 2010 deposition, recalled that Lee and Ditko "ended up not being able to work together because they disagreed on almost everything, cultural, social, historically, everything, they disagreed on characters..." In Romita's first issue, No. 39 (Aug. 1966), nemesis the Green Goblin discovers Spider-Man's secret identity and reveals his own to the captive hero. 
Romita's Spider-Man – more polished and heroic-looking than Ditko's – became the model for two decades. The Lee-Romita era saw the introduction of such characters as Daily Bugle managing editor Robbie Robertson in No. 52 (Sept. 1967) and NYPD Captain George Stacy, father of Parker's girlfriend Gwen Stacy, in No. 56 (Jan. 1968). The most important supporting character to be introduced during the Romita era was Mary Jane Watson, who made her first full appearance in No. 42 (Nov. 1966), although she first appeared in No. 25 (June 1965) with her face obscured and had been mentioned since No. 15 (Aug. 1964). Peter David wrote in 2010 that Romita "made the definitive statement of his arrival by pulling Mary Jane out from behind the oversized potted plant [that blocked the readers' view of her face in issue #25] and placing her on panel in what would instantly become an iconic moment." Romita has stated that in designing Mary Jane, he "used Ann-Margret from the movie Bye Bye Birdie as a guide, using her coloring, the shape of her face, her red hair and her form-fitting short skirts." Lee and Romita toned down the prevalent sense of antagonism in Parker's world by improving Parker's relationship with the supporting characters and having stories focused as much on the social and college lives of the characters as they did on Spider-Man's adventures. The stories became more topical, addressing issues such as civil rights, racism, prisoners' rights, the Vietnam War, and political elections. Issue No. 50 (June 1967) introduced the highly enduring criminal mastermind the Kingpin, who would become a major force as well in the superhero series Daredevil. Other notable first appearances in the Lee-Romita era include the Rhino in No. 41 (Oct. 1966), the Shocker in No. 46 (March 1967), the Prowler in No. 78 (Nov. 1969), and the Kingpin's son, Richard Fisk, in No. 83 (April 1970). 
1970s Several spin-off series debuted in the 1970s: Marvel Team-Up in 1972, and The Spectacular Spider-Man in 1976. A short-lived series titled Giant-Size Spider-Man began in July 1974 and ran six issues through 1975. Spidey Super Stories, a series aimed at children ages 6–10, ran for 57 issues from October 1974 through 1982. The flagship title's second decade took a grim turn with a story in #89–90 (Oct.–Nov. 1970) featuring the death of Captain George Stacy. This was the first Spider-Man story to be penciled by Gil Kane, who would alternate drawing duties with Romita for the next year-and-a-half and would draw several landmark issues. One such story took place in the controversial issues #96–98 (May–July 1971). Writer-editor Lee defied the Comics Code Authority with this story, in which Parker's friend Harry Osborn was hospitalized after overdosing on pills. Lee wrote this story upon a request from the U.S. Department of Health, Education, and Welfare for a story about the dangers of drugs. Citing its dictum against depicting drug use, even in an anti-drug context, the CCA refused to put its seal on these issues. With the approval of Marvel publisher Martin Goodman, Lee had the comics published without the seal. The comics sold well and Marvel won praise for its socially conscious efforts. The CCA subsequently loosened the Code to permit negative depictions of drugs, among other new freedoms. "The Six Arms Saga" of #100–102 (Sept.–Nov. 1971) introduced Morbius, the Living Vampire. The second installment was the first Amazing Spider-Man story not written by co-creator Lee, with Roy Thomas taking over writing the book for several months before Lee returned to write #105–110 (Feb.–July 1972). Lee, who was going on to become Marvel Comics' publisher, with Thomas becoming editor-in-chief, then turned writing duties over to 19-year-old Gerry Conway, who scripted the series through 1975. 
Romita penciled Conway's first half-dozen issues, which introduced the gangster Hammerhead in No. 113 (Oct. 1972). Kane then succeeded Romita as penciler, although Romita would continue inking Kane for a time. Issues #121–122 (June–July 1973, by Conway, Kane, and Romita) featured the death of Gwen Stacy at the hands of the Green Goblin in "The Night Gwen Stacy Died" in issue No. 121. Her demise and the Goblin's apparent death one issue later formed a story arc widely considered the most defining in the history of Spider-Man. The aftermath of the story deepened both the characterization of Mary Jane Watson and her relationship with Parker. In 1973, Gil Kane was succeeded by Ross Andru, whose run lasted from issue No. 125 (October 1973) to No. 185 (October 1978). Issue No. 129 (Feb. 1974) introduced the Punisher, who would become one of Marvel Comics' most popular characters. The Conway-Andru era featured the first appearances of the Man-Wolf in #124–125 (Sept.–Oct. 1973); the near-marriage of Doctor Octopus and Aunt May in No. 131 (April 1974); Harry Osborn stepping into his father's role as the Green Goblin in #135–137 (Aug.–Oct. 1974); and the original "Clone Saga", containing the introduction of Spider-Man's clone, in #147–149 (Aug.–Oct. 1975). Archie Goodwin and Gil Kane produced the title's 150th issue (Nov. 1975) before Len Wein became writer with issue No. 151. During Wein's tenure, Harry Osborn and Liz Allan dated and became engaged; J. Jonah Jameson was introduced to his eventual second wife, Marla Madison; and Aunt May suffered a heart attack. Wein's last story on Amazing was a five-issue arc in #176–180 (Jan.–May 1978) featuring a third Green Goblin (Harry Osborn's psychiatrist, Bart Hamilton). Marv Wolfman, Marvel's editor-in-chief from 1975 to 1976, succeeded Wein as writer, and in his first issue, No. 182 (July 1978), had Parker propose marriage to Watson, who refused in the following issue. 
Keith Pollard succeeded Ross Andru as artist shortly afterward, and with Wolfman introduced the likable rogue the Black Cat (Felicia Hardy) in No. 194 (July 1979). As a love interest for Spider-Man, the Black Cat would go on to be an important supporting character for the better part of the next decade, and remain a friend and occasional lover into the 2010s. 1980s The Amazing Spider-Man No. 200 (Jan. 1980) featured the return and death of the burglar who killed Spider-Man's Uncle Ben. Writer Marv Wolfman and penciler Keith Pollard both left the title by mid-year, succeeded by Dennis O'Neil, a writer known for groundbreaking 1970s work at rival DC Comics, and penciler John Romita Jr. O'Neil wrote two issues of The Amazing Spider-Man Annual which were both drawn by Frank Miller. The 1980 Annual featured a team-up with Doctor Strange while the 1981 Annual showcased a meeting with the Punisher. Roger Stern, who had written nearly 20 issues of sister title The Spectacular Spider-Man, took over Amazing with issue No. 224 (January 1982). During his two years on the title, Stern augmented the backgrounds of long-established Spider-Man villains, and with Romita Jr. created the mysterious supervillain the Hobgoblin in #238–239 (March–April 1983). Fans engaged with the mystery of the Hobgoblin's secret identity, which continued throughout #244–245 and 249–251 (Sept.-Oct. 1983 and Feb.-April 1984). One lasting change was the reintroduction of Mary Jane Watson as a more serious, mature woman who becomes Peter's confidante after she reveals that she knows his secret identity. Stern also wrote "The Kid Who Collects Spider-Man" in The Amazing Spider-Man No. 248 (January 1984), a story which ranks among his most popular. By mid-1984, Tom DeFalco and Ron Frenz took over scripting and penciling. DeFalco helped establish Parker and Watson's mature relationship, laying the foundation for the characters' wedding in 1987. Notably, in No. 257 (Oct. 
1984), Watson tells Parker that she knows he is Spider-Man, and in No. 259 (Dec. 1984), she reveals to Parker the extent of her troubled childhood. Other notable issues of the DeFalco-Frenz era include No. 252 (May 1984), with the first appearance of Spider-Man's black costume, which the hero would wear almost exclusively for the next four years' worth of comics; the debut of criminal mastermind the Rose, in No. 253 (June 1984); the revelation in No. 258 (Nov. 1984) that the black costume is a living being, a symbiote; and the introduction of the female mercenary Silver Sable in No. 265 (June 1985). Tom DeFalco and Ron Frenz were both removed from The Amazing Spider-Man in 1986 by editor Jim Owsley under acrimonious circumstances. A succession of artists including Alan Kupperberg, John Romita Jr., and Alex Saviuk penciled the series from 1987 to 1988; Owsley wrote the book for the first half of 1987, scripting the five-part "Gang War" story (#284–288) that DeFalco plotted. Former Spectacular Spider-Man writer Peter David scripted No. 289 (June 1987), which revealed Ned Leeds as the Hobgoblin, although Roger Stern retconned this in 1996, establishing that Leeds had not been the original Hobgoblin after all. David Michelinie took over as writer in the next issue, for a story arc in #290–292 (July–Sept. 1987) that led to the marriage of Peter Parker and Mary Jane Watson in Amazing Spider-Man Annual No. 21. The "Kraven's Last Hunt" storyline by writer J.M. DeMatteis and artists Mike Zeck and Bob McLeod crossed over into The Amazing Spider-Man Nos. 293 and 294. Issue No. 298 (March 1988) was the first Spider-Man comic to be drawn by future industry star Todd McFarlane, the first regular artist on The Amazing Spider-Man since Frenz's departure. McFarlane revolutionized Spider-Man's look. 
His depiction – "Ditko-esque" poses, large-eyed, with wiry, contorted limbs, and messy, knotted, convoluted webbing – influenced the way virtually all subsequent artists would draw the character. McFarlane's other significant contribution to the Spider-Man canon was the design for what would become one of Spider-Man's most wildly popular antagonists, the supervillain Venom. Issue No. 299 (April 1988) featured Venom's first appearance (a last-page cameo) before his first full appearance in No. 300 (May 1988). The latter issue featured Spider-Man reverting to his original red-and-blue costume. Other notable issues of the Michelinie-McFarlane era include No. 312 (Feb. 1989), featuring the Green Goblin vs. the Hobgoblin; and #315–317 (May–July 1989), with the return of Venom. In July 2012, Todd McFarlane's original cover art for The Amazing Spider-Man No. 328 sold for $657,250, making it the most expensive American comic book art ever sold at auction. 1990s With a civilian life as a married man, the Spider-Man of the 1990s was different from the superhero of the previous three decades. McFarlane left the title in 1990 to write and draw a new series titled simply Spider-Man. His successor, Erik Larsen, penciled the book from early 1990 to mid-1991. After issue No. 350, Larsen was succeeded by Mark Bagley, who had won the 1986 Marvel Tryout Contest and was assigned a number of low-profile penciling jobs followed by a run on New Warriors in 1990. Bagley penciled the flagship Spider-Man title from 1991 to 1996. During that time, Bagley's rendition of Spider-Man was used extensively for licensed material and merchandise. Issues #361–363 (April–June 1992) introduced Carnage, a second symbiote nemesis for Spider-Man. The series' 30th-anniversary issue, No. 365 (Aug. 1992), was a double-sized, hologram-cover issue with the cliffhanger ending of Peter Parker's parents, long thought dead, reappearing alive. 
It would be close to two years before they were revealed to be impostors, who were killed in No. 388 (April 1994), scripter Michelinie's last issue. His 1987–1994 stint gave him the second-longest run as writer on the title, behind Stan Lee. Issue No. 375 was released with a gold foil cover; a printing error left some copies missing the majority of the foil. With No. 389, writer J. M. DeMatteis, whose Spider-Man credits included the 1987 "Kraven's Last Hunt" story arc and a 1991–1993 run on The Spectacular Spider-Man, took over the title. From October 1994 to June 1996, Amazing stopped running stories exclusive to it, and ran installments of multi-part stories that crossed over into all the Spider-Man books. One of the few self-contained stories during this period was in No. 400 (April 1995), which featured the death of Aunt May – later revealed to have been faked (although the death still stands in the MC2 continuity). The "Clone Saga" culminated with the revelation that the Spider-Man who had appeared in the previous 20 years of comics was a clone of the real Spider-Man. This plot twist was massively unpopular with many readers, and was later reversed in the "Revelations" story arc that crossed over the Spider-Man books in late 1996. The Clone Saga tied into a publishing gap after No. 406 (Oct. 1995), when the title was temporarily replaced by The Amazing Scarlet Spider #1–2 (Nov.–Dec. 1995), featuring Ben Reilly. The series picked up again with No. 407 (Jan. 1996), with Tom DeFalco returning as writer. Bagley completed his 5½-year run by September 1996. A succession of artists, including Ron Garney, Steve Skroce, Joe Bennett, Rafael Kayanan and John Byrne penciled the book until the final issue, No. 441 (Nov. 1998), after which Marvel rebooted the title with vol. 2, No. 1 (Jan. 1999). Relaunch and the 2000s Marvel relaunched the comic book series as The Amazing Spider-Man (vol. 2) #1 (Jan. 1999). 
Howard Mackie wrote the first 29 issues. The relaunch included the Sandman being regressed to his criminal ways and the "death" of Mary Jane, which was ultimately reversed. Other elements included the introduction of a new Spider-Woman (who was spun off into her own short-lived series) and references to John Byrne's miniseries Spider-Man: Chapter One, which was launched at the same time as the reboot. Byrne also penciled issues #1–18 (from 1999 to 2000) and wrote #13–14; John Romita Jr. took his place soon after, in October 2000. Mackie's run ended with The Amazing Spider-Man Annual 2001, which saw the return of Mary Jane, who then left Parker upon reuniting with him. With issue #30 (June 2001), J. Michael Straczynski took over as writer and oversaw additional storylines – most notably his lengthy "Spider-Totem" arc, which raised the issue of whether Spider-Man's powers were magic-based rather than the result of a radioactive spider's bite. Additionally, Straczynski resurrected the plot point of Aunt May discovering her nephew was Spider-Man, and returned Mary Jane, with the couple reuniting in The Amazing Spider-Man (vol. 2) #50. Straczynski gave Spider-Man a new profession, having Parker teach at his former high school. Issue #30 began a dual numbering system, with the original series numbering (#471) restored and placed alongside the volume two number on the cover. Other longtime, rebooted Marvel Comics titles, including Fantastic Four, likewise were given the dual numbering around this time. After (vol. 2) #58 (Nov. 2003), the title reverted completely to its original numbering for issue #500 (Dec. 2003). Mike Deodato, Jr. penciled the series from mid-2004 until 2006. That year Peter Parker revealed his Spider-Man identity on live television in the company-crossover storyline "Civil War", in which the superhero community is split over whether to conform to the federal government's new Superhuman Registration Act. 
This knowledge was erased from the world with the event of the four-part, crossover story arc "One More Day", written partially by J. Michael Straczynski and illustrated by Joe Quesada, running through The Amazing Spider-Man #544–545 (Nov.–Dec. 2007), Friendly Neighborhood Spider-Man No. 24 (Nov. 2007) and The Sensational Spider-Man No. 41 (Dec. 2007), the final issues of those two titles. Here, the demon Mephisto makes a Faustian bargain with Parker and Mary Jane, offering to save Parker's dying Aunt May if the couple will allow their marriage to have never existed, rewriting that portion of their pasts. This story arc marked the end of Straczynski's work on the title. Following this, Marvel made The Amazing Spider-Man the company's sole Spider-Man title, increasing its frequency of publication to three issues monthly, and inaugurating the series with a sequence of "back to basics" story arcs under the banner of "Brand New Day". Parker now exists in a changed world where he and Mary Jane had never married, and Parker has no memory of being married to her, with domino-effect differences in their immediate world. The most notable of these revisions to Spider-Man continuity are the return of Harry Osborn, whose death in The Spectacular Spider-Man No. 200 (May 1993) is erased; and the reestablishment of Spider-Man's secret identity, with no one except Mary Jane able to recall that Parker is Spider-Man (although he soon reveals his secret identity to the New Avengers and the Fantastic Four). Under the banner of Brand New Day, Marvel tried to use only newly created villains instead of relying on older ones. New characters included Mister Negative and Overdrive (both in Free Comic Book Day 2007 Spider-Man, July 2007), Menace in No. 549 (March 2008), and Ana and Sasha Kravinoff in No. 565 (September 2008) and No. 567 (October 2008), respectively, among several others. 
The alternating regular writers were initially Dan Slott, Bob Gale, Marc Guggenheim, and Zeb Wells, joined by a rotation of artists that included Steve McNiven, Salvador Larroca, Phil Jimenez, Barry Kitson, Chris Bachalo, Mike McKone, Marcos Martín, and John Romita Jr. Joe Kelly, Mark Waid, Fred Van Lente and Roger Stern later joined the writing team, and Paolo Rivera, Lee Weeks and Marco Checchetto the artist roster. Waid's work on the series included a meeting between Spider-Man and Stephen Colbert in The Amazing Spider-Man No. 573 (Dec. 2008). Issue No. 583 (March 2009) included a back-up story in which Spider-Man meets President Barack Obama. 2010s and temporary end of publication Mark Waid scripted the opening of "The Gauntlet" storyline in issue No. 612 (Jan. 2010). The Gauntlet story was concluded by "Grim Hunt" (#634–637), which saw the resurrection of the long-dead Spider-Man villain Kraven the Hunter. The series became a twice-monthly title with Dan Slott as sole writer at issue No. 648 (Jan. 2011), launching the "Big Time" storyline, and eight additional pages were added per issue. Big Time saw major changes in Peter Parker's life: Peter started working at Horizon Labs and began a relationship with Carlie Cooper (his first serious relationship since his marriage to Mary Jane), Mac Gargan returned as the Scorpion after spending the past few years as Venom, Phil Urich took up the mantle of the Hobgoblin, and J. Jonah Jameson's wife, Marla Jameson, died. Issues 654 and 654.1 saw the birth of Agent Venom, as Flash Thompson bonded with the Venom symbiote, which would lead to Venom getting his own series, Venom (volume 2). Starting in No. 659 and running through No. 665, the series built up to the Spider-Island event, which officially started in No. 666 and ended in No. 673. "Ends of the Earth" was the next event, running from No. 682 through No. 687. This publishing format lasted until issue No. 
700, which concluded the "Dying Wish" storyline, in which Parker and Doctor Octopus swapped bodies, with the latter taking on the mantle of Spider-Man after Parker apparently died in Doctor Octopus' body. The Amazing Spider-Man ended with this issue, with the story continuing in the new series The Superior Spider-Man. Although The Superior Spider-Man is considered a different series from The Amazing Spider-Man, its initial 33-issue run counts towards the legacy numbering of The Amazing Spider-Man, acting as issues 701–733. In December 2013, the series returned for five issues, numbered 700.1 through 700.5, with the first two written by David Morrell and drawn by Klaus Janson. 2014 relaunch In January 2014, Marvel confirmed that The Amazing Spider-Man would be relaunched on April 30, 2014, starting from issue No. 1, with Peter Parker as Spider-Man once again. The first issue of this new version of The Amazing Spider-Man was, according to Diamond Comics Distributors, the "best-selling comic book... in over a decade." Issues #1–6 were a story arc called "Lucky to be Alive", taking place immediately after "Goblin Nation", with issues No. 4 and No. 5 being a crossover with the Original Sin storyline. Issue No. 4 introduced Silk, a new heroine who was bitten by the same spider as Peter Parker. Issues #7–8 featured a team-up between Ms. Marvel and Spider-Man, and had backup stories that tied into "Edge of Spider-Verse". The next major plot arc, titled "Spider-Verse", began in issue No. 9 and ended in No. 15; it featured every Spider-Man from across the dimensions being hunted by Morlun and teaming up to stop him, with Peter Parker of Earth-616 in command of the Spider-Men's Alliance. The Amazing Spider-Man Annual No. 1 of the relaunched series was released in December 2014, featuring stories unrelated to "Spider-Verse".
The Amazing Spider-Man: Renew Your Vows In 2015, Marvel started the universe-wide Secret Wars event, in which the core Marvel universe and several others were combined into one big planet called Battleworld. Battleworld was divided into sections, most of them self-contained universes. Marvel announced that several of these self-contained universes would get their own tie-in series, one of which was Amazing Spider-Man: Renew Your Vows, written by Dan Slott and set in an alternate universe in which Peter Parker and Mary Jane are still married and have a daughter, Annie May Parker. Although the series is considered separate from the main Amazing Spider-Man series, its original five-issue run counts towards the legacy numbering, acting as issues No. 752–756. 2015 relaunch Following the 2015 Secret Wars event, a number of Spider-Man-related titles were either relaunched or created as part of the "All-New, All-Different Marvel" event. Among them, The Amazing Spider-Man was relaunched as well; it primarily focuses on Peter Parker continuing to run Parker Industries and becoming a successful businessman operating worldwide. It also tied in with Civil War II (involving Ulysses Cain, an Inhuman who can predict possible futures), Dead No More (in which Ben Reilly, the original Scarlet Spider, is revealed to have been revived and serves as one of the antagonists), and Secret Empire (set during the reign of Hydra under a Hydra-influenced Captain America/Steve Rogers, during which Peter Parker dissolves Parker Industries to stop Otto Octavius). In September 2017, Marvel launched the Marvel Legacy event, which restored several Marvel series to their original numbering; The Amazing Spider-Man returned to its original numbering with issue 789. Issues 789 through 791 focused on the aftermath of Peter destroying Parker Industries and his fall from grace. Issues 792 and 793 were part of the Venom Inc. story.
Threat Level: Red was the story for the next three issues, which saw Norman Osborn obtain and bond with the Carnage symbiote. Go Down Swinging saw Osborn's goblin serum combine with the Carnage symbiote to create the Red Goblin. Issue 801 was Dan Slott's goodbye issue. 2018 relaunch In March 2018, it was announced that writer Nick Spencer would be writing the main twice-monthly The Amazing Spider-Man series beginning with a new No. 1, replacing long-time writer Dan Slott, as part of the Fresh Start relaunch that July. The first five-issue story arc was titled "Back to Basics". During the Back to Basics story, Kindred, a mysterious villain with some relation to Peter's past, was introduced. The first major story under Spencer was Hunted, which ran from issue 16 through 23; the story also included four ".HU" issues for issues 16, 18, 19, and 20. The end of the story saw the death of the long-running Spider-Man villain Kraven the Hunter, who was replaced by his clone son, the Last Son of Kraven. 2020s Issue 45 kicked off the Sins Rising story, which saw the resurrected Sin-Eater carry out the plans of Kindred to cleanse the world of sin, particularly that of Norman Osborn. The story concluded with issue 49 (issue 850 in legacy numbering), which saw Spider-Man and the Green Goblin team up to defeat Sin-Eater. Last Remains started in issue 50 and concluded in issue 55; the story saw Kindred's plans come to fruition as he tormented Spider-Man. The story also included five ".LR" issues, for issues 50, 51, 52, 53, and 54, which focused on the Order of the Web, a new faction of Spider-People consisting of Julia Carpenter (Madame Web), Miles Morales (Spider-Man), Gwen Stacy (Ghost-Spider), Cindy Moon (Silk), Jessica Drew (Spider-Woman), and Anya Corazon (Spider-Girl). The story also revealed that Kindred is Harry Osborn. Last Remains also received two fallout issues called Last Remains Post-Mortem. Nick Spencer concluded his run with the Sinister War story, which wrapped up in No.
74 (legacy numbering 875). The story introduced several retcons to the Spider-Man mythos: Kindred was Gabriel and Sarah Stacy all along; the Stacy twins were actually genetically engineered beings created from Norman Osborn's and Gwen Stacy's DNA; the Harry Osborn who returned in Brand New Day was actually a clone; and Norman had made a deal with Mephisto in which he sold Harry's soul to the demon. The story ended with the deaths of the Harry clone, Gabriel, and Sarah, and with the real Harry's soul being freed from Mephisto's grasp. After Spencer left the book, Marvel announced the "Beyond" era of Spider-Man, which would start in No. 75. The book would move back to the format it had during Brand New Day, with a rotating cast of writers including Kelly Thompson, Saladin Ahmed, Cody Ziglar, Patrick Gleason, and Zeb Wells. The book would also be released three times a month. Beyond would focus on Ben Reilly taking up the mantle of Spider-Man once again, but backed by the Beyond Corporation. Peter also falls ill and cannot be Spider-Man, so he gives Ben his blessing to carry on as the main Spider-Man. Collected editions Black-and-white Essential Spider-Man Vol. 1 [#1–20, Annual #1; Amazing Fantasy #15] () Essential Spider-Man Vol. 2 [#21–43, Annual #2–3] () Essential Spider-Man Vol. 3 [#44–65, Annual #4] () Essential Spider-Man Vol. 4 [#66–89, Annual #5] () Essential Spider-Man Vol. 5 [#90–113] () Essential Spider-Man Vol. 6 [#114–137; Giant-Size Super Heroes #1; Giant-Size Spider-Man #1–2] () Essential Spider-Man Vol. 7 [#138–160, Annual #10; Giant-Size Spider-Man #4–5] () Essential Spider-Man Vol. 8 [#161–185, Annual #11; Giant-Size Spider-Man #6; Nova #12] () Essential Spider-Man Vol. 9 [#186–210, Annual #13–14; Peter Parker: Spectacular Spider-Man Annual #1] () Essential Spider-Man Vol. 10 [#211–230, Annual #15] () Essential Spider-Man Vol.
11 [#231–248, Annual #16–17] () Major story arcs/artist runs Marvel Visionaries: John Romita Sr. [#39–40, 42, 50, 108–109, 365; Daredevil #16–17; Untold Tales of Spider-Man #-1] () Spider-Man: The Death of Captain Stacy [#88–90] () Spider-Man: The Death of Gwen Stacy [#96–98, 121–122; Webspinners: Tales of Spider-Man #1] () Spider-Man: Death of the Stacys [#88–92, 121–122] () A New Goblin [#176–180] () Spider-Man vs. the Black Cat [#194–195, 204–205, 226–227] () Spider-Man: Origin of The Hobgoblin [#238–239, 244–245, 249–251, Spectacular Spider-Man (vol. 1) #85] () Spider-Man: Birth of Venom [#252–259, 298–300, 315–317, Annual #25; Fantastic Four #274; Secret Wars #8; Web of Spider-Man #1] () The Amazing Spider-Man: The Wedding [#290–292, Annual #2, Not Brand Echh #6] () Spider-Man: Kraven's Last Hunt [#293–294; Web of Spider-Man #31–32; The Spectacular Spider-Man #131–132] () Visionaries: Todd McFarlane [#298–305] () Legends, Vol. 2: Todd McFarlane [#306–314; The Spectacular Spider-Man Annual #10] () Legends, Vol. 3: Todd McFarlane [#315–323, 325, 328] () Spider-Man: Venom Returns [#330–333, 344–347;Annual #25] () Spider-Man: Carnage [#344–345, 359–363] () Collections Vol. 1: Coming Home [#30-35/471-476] () Vol. 2: Revelations [#36-39/477-480] () Vol. 3: Until the Stars Turn Cold [#40-45/481-486] () Vol. 4: The Life and Death of Spiders [#46-50/487-491] () Vol. 5: Unintended Consequences [#51-56/492-497] () Vol. 6: Happy Birthday [#57–58,500-502/498-502] () Vol. 7: The Book of Ezekiel [#503–508] () Vol. 8: Sins Past [#509–514] () Vol. 9: Skin Deep [#515–518] () Vol. 10: New Avengers [#519–524] () Spider-Man: The Other [#525–528; Friendly Neighborhood Spider-Man #1–4; Marvel Knights Spider-Man #19–22] () Civil War: The Road to Civil War [#529–531; New Avengers: Illuminati (one-shot); Fantastic Four #536–537] () Vol. 11: Civil War [#532–538] () Vol. 
12: Back in Black [#539–543; Friendly Neighborhood Spider-Man #17–23, Annual #1] () Spider-Man: One More Day [#544–545; Friendly Neighborhood Spider-Man #24; The Sensational Spider-Man #41; Marvel Spotlight: Spider-Man – One More Day/Brand New Day] () Brand New Day Vol. 1 [#546–551; The Amazing Spider-Man: Swing Shift (Director's Cut); Venom Super-Special] () Brand New Day Vol. 2 [#552–558] () Brand New Day Vol. 3 [#559–563] () Kraven's First Hunt [#564–567; The Amazing Spider-Man: Extra! #1 (story #2)] () New Ways to Die [#568–573; Marvel Spotlight: Spider-Man – Brand New Day] () Crime and Punisher [#574–577; The Amazing Spider-Man: Extra! #1 (story #1)] () Death and Dating [#578–583, Annual #35/1] () Election Day [#584–588; The Amazing Spider-Man: Extra! #1 (story #3), 3 (story #1); The Amazing Spider-Man Presidents' Day Special] () 24/7 [#589–594; The Amazing Spider-Man: Extra! #2] () American Son [#595–599; material from The Amazing Spider-Man: Extra! #3] () Died in Your Arms Tonight [#600–601, Annual #36; material from Amazing Spider-Man Family #7] () Red-Headed Stranger [#602–605] () Return of the Black Cat [#606–611; material from Web of Spider-Man (vol. 2) #1] () The Gauntlet Book 1: Electro and Sandman [#612–616; Dark Reign: The List – The Amazing Spider-Man; Web of Spider-Man (vol. 2) #2 (Electro story)] () The Gauntlet Book 2: Rhino and Mysterio [#617–621; Web of Spider-Man (vol. 2) #3–4] () The Gauntlet Book 3: Vulture and Morbius [#622–625; Web of Spider-Man (vol. 2) #2, 5 (Vulture story)] () The Gauntlet Book 4: Juggernaut [#229–230, 626–629] () The Gauntlet Book 5: Lizard [#629–633; Web of Spider-Man (vol. 2) #6] () Spider-Man: Grim Hunt [#634–637; The Amazing Spider-Man: Extra! #3; Spider-Man: Grim Hunt – The Kraven Saga; Web of Spider-Man (vol. 2) #7] () One Moment in Time [#638–641] () Origin of the Species [#642–647; Spider-Man Saga; Web of Spider-Man (vol. 
2) #12] () Big Time [#648–651] () Matters of Life and Death [#652–657, 654.1] () Spider-Man: The Fantastic Spider-Man [#658–662] () Spider-Man: The Return Of Anti-Venom [#663–665; Free Comic Book Day 2011: Spider-Man] () Spider-Man: Spider-Island [#666–673; Venom (2011) #6–8, Spider-Island: Deadly Foes; Infested prologues from #659–660 and 662–665] () Spider-Man: Flying Blind [#674–677; Daredevil #8] () Spider-Man: Trouble on the Horizon [#678–681, 679.1] () Spider-Man: Ends of the Earth [#682–687; Amazing Spider-Man: Ends of the Earth #1; Avenging Spider-Man #8] () Spider-Man: Lizard – No Turning Back [#688–691; Untold Tales of Spider-Man #9] () Spider-Man: Danger Zone [#692–697; Avenging Spider-Man #11] () Spider-Man: Dying Wish [#698–700] () The Amazing Spider-Man Omnibus Vol. 1 [#1–38, Annual #1–2; Amazing Fantasy #15; Strange Tales Annual #2; Fantastic Four Annual #1] () The Amazing Spider-Man Omnibus Vol. 2 [#39–67, Annual #3–5; Spectacular Spider-Man #1–2] () Marvel Masterworks: The Amazing Spider-Man Vol. 1 [#1–10; Amazing Fantasy #15] () Marvel Masterworks: The Amazing Spider-Man Vol. 2 [#11–19, Annual #1] () Marvel Masterworks: The Amazing Spider-Man Vol. 3 [#20–30, Annual #2] () Marvel Masterworks: The Amazing Spider-Man Vol. 4 [#31–40] () Marvel Masterworks: The Amazing Spider-Man Vol. 5 [#41–50, Annual #3] () Marvel Masterworks: The Amazing Spider-Man Vol. 6 [#51–61, Annual #4] () Marvel Masterworks: The Amazing Spider-Man Vol. 7 [#62–67, Annual #5; The Spectacular Spider-Man #1–2 (magazine)] () Marvel Masterworks: The Amazing Spider-Man Vol. 8 [#68–77; Marvel Super Heroes #14] () Marvel Masterworks: The Amazing Spider-Man Vol. 9 [#78–87] () Marvel Masterworks: The Amazing Spider-Man Vol. 10 [#88–99] () Marvel Masterworks: The Amazing Spider-Man Vol. 11 [#100–109] () Marvel Masterworks: The Amazing Spider-Man Vol. 12 [#110–120] () Marvel Masterworks: The Amazing Spider-Man Vol. 13 [#121–131] () Marvel Masterworks: The Amazing Spider-Man Vol. 
14 [#132–142; Giant-Size Super-Heroes #1] () Marvel Masterworks: The Amazing Spider-Man Vol. 15 [#143–155; Marvel Special Edition Treasury #1] () Marvel Masterworks: The Amazing Spider-Man Vol. 16 [#156–168; Annual #10] () Marvel Masterworks: The Amazing Spider-Man Vol. 17 [#169–180; Annual #11; Nova #12; Marvel Treasury Edition #14] () Marvel Masterworks: The Amazing Spider-Man Vol. 18 [#181–192; Mighty Marvel Comics Calendar 1978; material From Annual #12] () Marvel Masterworks: The Amazing Spider-Man Vol. 19 [#193–202; Annual #13; Peter Parker, the Spectacular Spider-Man Annual #1] () Marvel Masterworks: The Amazing Spider-Man Vol. 20 [#203–212; Annual #14] () Marvel Masterworks: The Amazing Spider-Man Vol. 21 [#213–223; Annual #15] () Amazing Spider-Man Vol. 1: (The) Parker Luck [Vol. 3 #1 - 6 (e.g. legacy #732 - 737)] () Amazing Spider-Man Vol. 2: Spider-Verse Prelude [#7 - 8 (e.g. legacy #738 - 739); Superior Spider-Man #32 - 33; Free Comic Book Day 2014 (Guardians of the Galaxy) #1] () Amazing Spider-Man Vol. 3: Spider-Verse [#09 - 15 (e.g. legacy #740 - 746)] () Amazing Spider-Man Vol. 4: Graveyard Shift [#16 - 18 (e.g. legacy #747 - 749); Annual 2015] () Amazing Spider-Man Vol. 5: Spiral [#16.1-20.1(e.g. legacy #750 - 751)] () Amazing Spider-Man: Renew Your Vows [#1 - 5 (e.g. legacy #752 - 756)] Amazing Spider-Man Worldwide Vol. 1 [Vol. 4 #1 – 5] Amazing Spider-Man Worldwide Vol. 2 [#6 – 11] Amazing Spider-Man Worldwide Vol. 3 [#12 – 15] Amazing Spider-Man Worldwide Vol. 4 [#16 – 19] Amazing Spider-Man Worldwide Vol. 5 [#20 – 24, Annual #1] Amazing Spider-Man Worldwide Vol. 6 [#25 – 28] Amazing Spider-Man Worldwide Vol. 7 [#29 – 32 (e.g. legacy #785 - 788), #789 - 791] Amazing Spider-Man: Venom Inc. [Venom Inc. Alpha, Venom Inc. Omega, #792 - 793, Venom #159 - 160] Amazing Spider-Man Worldwide Vol. 8 [#794-796, Annual] Amazing Spider-Man Worldwide Vol. 9 [#797-801] Amazing Spider-Man: Red Goblin [#794-801] Amazing Spider-Man Vol. 
1: Back to Basics [#1-5, FCBD 2018: Amazing Spider-Man] Amazing Spider-Man Vol. 2: Friends and Foes [#6-10] Amazing Spider-Man Vol. 3: Lifetime Achievement [#11-15] Amazing Spider-Man Vol. 4: Hunted [#16-23, #16.1, #18.1-20.1] Amazing Spider-Man Vol. 5: Behind the Scenes [#24-28] Amazing Spider-Man Vol. 6: Absolute Carnage [#29-31] Amazing Spider-Man Vol. 7: 2099 [#32-36] Amazing Spider-Man Vol. 8: Threats & Menaces [#37 - 43 (e.g. legacy #838 - 844)] Amazing Spider-Man Vol. 9: Sins Rising [#44-47, Amazing Spider-Man: Sins Rising #1] Amazing Spider-Man Vol. 10: Green Goblin Returns [#48-49, Amazing Spider-Man: The Sins of Norman Osborn #1, FCBD 2020: Spider-Man/Venom] Amazing Spider-Man Vol. 11: Last Remains [#50-55] Amazing Spider-Man: Last Remains Companion [#50.1-54.1] Amazing Spider-Man Vol. 12: Shattered Web [#56-60] Amazing Spider-Man Vol. 13: King's Ransom [#61-65, Giant Size Amazing Spider-Man: King's Ransom #1] Amazing Spider-Man Vol. 14: Chameleon Conspiracy [#66-69, Giant Size Amazing Spider-Man: Chameleon Conspiracy #1] Amazing Spider-Man Vol. 15: What Cost Victory? [#70-74] Amazing Spider-Man: Beyond Vol. 1 [#75-80] See also References External links The Amazing Spider-Man comic book sales figures from 1966–present at The Comics Chronicles Spider-Man at Marvel Comics wikia The Amazing Spider-Man cover gallery 1963 comics debuts Comics by Archie Goodwin (comics) Comics by Dennis O'Neil Comics by Gerry Conway Comics by J. M. DeMatteis Comics by J. Michael Straczynski Comics by John Byrne (comics) Comics by Len Wein Comics by Mark Waid Comics by Marv Wolfman Comics by Stan Lee Comics by Steve Ditko Spider-Man titles
933
https://en.wikipedia.org/wiki/AM
AM
AM may refer to: Arts and entertainment Music Skengdo & AM, British rap duo AM (musician), American musician A.M. (musician), Canadian musician DJ AM, American DJ and producer AM (Abraham Mateo album) A.M. (Wilco album) A.M. (Chris Young album) AM (Arctic Monkeys album) Am, the A minor chord symbol A minor, a minor scale in music Armeemarschsammlung, Prussian Army March Collection (Preußische Armeemarschsammlung) Television and radio AM (ABC Radio), Australian radio programme American Morning, American television program Am, Antes del Mediodia, Argentine television program Other media Allied Mastercomputer, the antagonist of the short story "I Have No Mouth, and I Must Scream" Education Master of Arts, an academic degree Arts et Métiers ParisTech, a French engineering school Active Minds, a mental health awareness charity Science Americium, a chemical element Attometre, a unit of length Adrenomedullin, a protein Air mass (astronomy) attomolar (aM), a unit of molar concentration Am, tropical monsoon climate in the Köppen climate classification AM, a complexity class related to Arthur–Merlin protocol Technology .am, Internet domain for Armenia .am, a file extension associated with Automake software Agile modeling, a software engineering methodology for modeling and documenting software systems Amplitude modulation, an electronic communication technique Additive Manufacturing, a process of making a three-dimensional solid object of virtually any shape from a digital model. AM broadcasting, radio broadcasting using amplitude modulation Anti-materiel rifle Automated Mathematician, an artificial intelligence program Timekeeping ante meridiem, Latin for "before midday" Anno Mundi, a calendar era based on the biblical creation of the world Anno Martyrum, a method of numbering years in the Coptic calendar Transportation A.M. 
(automobile), a 1906 French car Aeroméxico (IATA airline code AM) Arkansas and Missouri Railroad All-mountain, a discipline of mountain biking Military AM, the United States Navy hull classification symbol for "minesweeper" Air marshal, a senior air officer rank used in Commonwealth countries Anti-materiel rifle Aviation Structural Mechanic, a U.S. Navy occupational rating Other uses Am (cuneiform), a written syllable Member of the Order of Australia, postnominal letters which can be used by a Member of the Order Assembly Member (disambiguation), a political office Member of the National Assembly for Wales Member of the London Assembly Amharic language (ISO 639-1 language code am) Armenia (ISO country code AM) Attacking midfielder, a position in association football First person singular present of the copula verb to be. See also Pro–am `am (disambiguation) A&M (disambiguation) AM2 (disambiguation) AMS (disambiguation)
951
https://en.wikipedia.org/wiki/Antigua%20and%20Barbuda
Antigua and Barbuda
Antigua and Barbuda is a sovereign island country in the West Indies in the Americas, lying between the Caribbean Sea and the Atlantic Ocean. It consists of two major islands, Antigua and Barbuda, separated by around , and smaller islands (including Great Bird, Green, Guiana, Long, Maiden, Prickly Pear, and York Islands, and Redonda). The permanent population is about 97,120 (2019 estimate), with 97% residing on Antigua. The capital and largest port and city is St. John's on Antigua, with Codrington being the largest town on Barbuda. Lying near each other, Antigua and Barbuda are in the middle of the Leeward Islands, part of the Lesser Antilles, roughly 17°N of the equator. The island of Antigua was explored by Christopher Columbus in 1493 and named for the Church of Santa María La Antigua. Antigua was colonised by Britain in 1632; Barbuda island was first colonised in 1678. Having been part of the Federal Colony of the Leeward Islands from 1871, Antigua and Barbuda joined the West Indies Federation in 1958. With the breakup of the federation, it became one of the West Indies Associated States in 1967. Following self-governance in its internal affairs, independence was granted from the United Kingdom on 1 November 1981. Antigua and Barbuda is a member of the Commonwealth, and Elizabeth II is the country's queen and head of state. The economy of Antigua and Barbuda is particularly dependent on tourism, which accounts for 80% of GDP. Like other island nations, Antigua and Barbuda is particularly vulnerable to the effects of climate change, such as sea level rise, and increased intensity of extreme weather like hurricanes, which have direct impacts on the island through coastal erosion, water scarcity, and other challenges. As of 2019, Antigua and Barbuda has a 0% individual income tax rate, as does neighboring St. Kitts and Nevis. Etymology Antigua is Spanish for 'ancient' and barbuda is Spanish for 'bearded'.
The island of Antigua was originally called by Arawaks and is locally known by that name today; Caribs possibly called Barbuda . Christopher Columbus, while sailing by in 1493, may have named it , after an icon in the Spanish Seville Cathedral. The "bearded" of Barbuda is thought to refer either to the male inhabitants of the island, or the bearded fig trees present there. History Pre-colonial period Antigua was first settled by archaic-age hunter-gatherer Amerindians called the Ciboney. Carbon dating has established that the earliest settlements started around 3100 BC. They were succeeded by the ceramic-age pre-Columbian Arawak-speaking Saladoid people, who migrated from the lower Orinoco River. They introduced agriculture, raising, among other crops, the famous Antigua black pineapple (Ananas comosus), corn, sweet potatoes, chiles, guava, tobacco, and cotton. Later on, the more bellicose Caribs also settled the island, possibly by force. European arrival and settlement Christopher Columbus was the first European to sight the islands, in 1493. The Spanish did not colonise Antigua until after a combination of European and African diseases, malnutrition, and slavery eventually extirpated most of the native population; smallpox was probably the greatest killer. The English settled on Antigua in 1632; Christopher Codrington settled on Barbuda in 1685. Tobacco and then sugar were grown, worked by a large population of slaves from West Africa who soon came to vastly outnumber the European settlers. Colonial era The English maintained control of the islands, repulsing an attempted French attack in 1666. The brutal conditions endured by the slaves led to revolts in 1701 and 1729 and a planned revolt in 1736, the last led by Prince Klaas, though it was discovered before it began and the ringleaders were executed. Slavery was abolished in the British Empire in 1833, affecting the economy. This was exacerbated by natural disasters such as the 1843 earthquake and the 1847 hurricane.
Mining occurred on the isle of Redonda; however, this ceased in 1929, and the island has since remained uninhabited. Part of the Leeward Islands colony, Antigua and Barbuda became part of the short-lived West Indies Federation from 1958 to 1962. Antigua and Barbuda subsequently became an associated state of the United Kingdom with full internal autonomy on 27 February 1967. The 1970s were dominated by discussions as to the islands' future and the rivalry between Vere Bird of the Antigua and Barbuda Labour Party (ABLP) (Premier from 1967 to 1971 and 1976 to 1981) and the Progressive Labour Movement (PLM) of George Walter (Premier 1971–1976). Eventually, Antigua and Barbuda gained full independence on 1 November 1981; Vere Bird became Prime Minister of the new country. The country opted to remain within the Commonwealth, retaining Queen Elizabeth as head of state, with the last Governor, Sir Wilfred Jacobs, as Governor-General. Independence era The first two decades of Antigua's independence were dominated politically by the Bird family and the ABLP, with Vere Bird ruling from 1981 to 1994, followed by his son Lester Bird from 1994 to 2004. Though providing a degree of political stability and boosting tourism to the country, the Bird governments were frequently accused of corruption, cronyism and financial malfeasance. Vere Bird Jr., the elder son, was forced to leave the cabinet in 1990 following a scandal in which he was accused of smuggling Israeli weapons to Colombian drug-traffickers. Another son, Ivor Bird, was convicted of selling cocaine in 1995. In 1995, Hurricane Luis caused severe damage on Barbuda. The ABLP's dominance of Antiguan politics ended with the 2004 Antiguan general election, which was won by Winston Baldwin Spencer's United Progressive Party (UPP). Winston Baldwin Spencer was Prime Minister of Antigua and Barbuda from 2004 to 2014. However, the UPP lost the 2014 Antiguan general election, with the ABLP returning to power under Gaston Browne.
ABLP won 15 of the 17 seats in the 2018 snap election under the leadership of incumbent Prime Minister Gaston Browne. Most of Barbuda was devastated in early September 2017 by Hurricane Irma, which brought winds with speeds reaching 295 km/h (185 mph). The storm damaged or destroyed 95% of the island's buildings and infrastructure, leaving Barbuda "barely habitable" according to Prime Minister Gaston Browne. Nearly everyone on the island was evacuated to Antigua. Amidst the subsequent rebuilding efforts on Barbuda, estimated to cost at least $100 million, the government announced plans to revoke a century-old law of communal land ownership by allowing residents to buy land, a move that has been criticised as promoting "disaster capitalism". Geography Antigua and Barbuda are both generally low-lying islands whose terrain has been influenced more by limestone formations than volcanic activity. The highest point on Antigua and Barbuda is Boggy Peak, located in southwestern Antigua, which is the remnant of a volcanic crater rising . The shorelines of both islands are greatly indented with beaches, lagoons, and natural harbours. The islands are rimmed by reefs and shoals. There are few streams, as rainfall is slight. Both islands lack adequate amounts of fresh groundwater. About south-west of Antigua lies the small, rocky island of Redonda, which is uninhabited. Cities and villages The most populous cities in Antigua and Barbuda are mostly on Antigua, namely Saint John's, All Saints, Piggotts, and Liberta. The most populous city on Barbuda is Codrington. It is estimated that 25% of the population lives in an urban area, which is much lower than the international average of 55%. Islands Antigua and Barbuda consists mostly of its two namesake islands, Antigua and Barbuda. Beyond those, its biggest islands are Guiana Island and Long Island off the coast of Antigua, and Redonda, which is far from both of the main islands.
Climate Rainfall averages per year, with the amount varying widely from season to season. In general, the wettest period is between September and November. The islands generally experience low humidity and recurrent droughts. Temperatures average , ranging from to in the winter and from to in the summer and autumn. The coolest period is between December and February. Hurricanes strike on average once a year, including the powerful Category 5 Hurricane Irma, on 6 September 2017, which damaged 95% of the structures on Barbuda. Some 1,800 people were evacuated to Antigua. An estimate published by Time indicated that over $100 million would be required to rebuild homes and infrastructure. Philmore Mullin, Director of Barbuda's National Office of Disaster Services, said that "all critical infrastructure and utilities are non-existent – food supply, medicine, shelter, electricity, water, communications, waste management". He summarised the situation as follows: "Public utilities need to be rebuilt in their entirety... It is optimistic to think anything can be rebuilt in six months ... In my 25 years in disaster management, I have never seen something like this." Environmental issues Demographics Ethnic groups Antigua has a population of , mostly made up of people of West African, British, and Madeiran descent. The ethnic distribution consists of 91% Black, 4.4% mixed race, 1.7% White, and 2.9% other (primarily East Indian). Most Whites are of British descent. Christian Levantine Arabs and a small number of East Asians and Sephardic Jews make up the remainder of the population. An increasingly large percentage of the population lives abroad, most notably in the United Kingdom (Antiguan Britons), the United States and Canada. A minority of Antiguan residents are immigrants from other countries, particularly from Dominica, Guyana and Jamaica, and, increasingly, from the Dominican Republic, St. Vincent and the Grenadines and Nigeria.
An estimated 4,500 American citizens also make their home in Antigua and Barbuda, making their numbers one of the largest American populations in the English-speaking Eastern Caribbean. Languages English is the official language. The Barbudan accent is slightly different from the Antiguan. In the years before Antigua and Barbuda's independence, Standard English was widely spoken in preference to Antiguan Creole. Generally, the upper and middle classes shun Antiguan Creole. The educational system dissuades the use of Antiguan Creole and instruction is done in Standard (British) English. Many of the words used in the Antiguan dialect are derived from British as well as African languages. This can be easily seen in phrases such as: "Ent it?" meaning "Ain't it?" which is itself dialectal and means "Isn't it?". Common island proverbs can often be traced to Africa. Spanish is spoken by around 10,000 inhabitants. Religion A majority (77%) of Antiguans are Christians, with the Anglicans (17.6%) being the largest single denomination. Other Christian denominations present are Seventh-day Adventist Church (12.4%), Pentecostalism (12.2%), Moravian Church (8.3%), Roman Catholics (8.2%), Methodist Church (5.6%), Wesleyan Holiness Church (4.5%), Church of God (4.1%), Baptists (3.6%), Mormonism (<1.0%), as well as Jehovah's Witnesses. Non-Christian religions practiced in the islands include the Rastafari, Islam, and Baháʼí Faith. Governance Political system The politics of Antigua and Barbuda take place within a framework of a unitary, parliamentary, representative democratic monarchy, in which the head of State is the monarch who appoints the Governor-General as vice-regal representative. Elizabeth II is the present Queen of Antigua and Barbuda, having served in that position since the islands' independence from the United Kingdom in 1981. The Queen is currently represented by Governor-General Sir Rodney Williams. 
A council of ministers is appointed by the governor-general on the advice of the prime minister, currently Gaston Browne (2014–). The prime minister is the head of government. Executive power is exercised by the government, while legislative power is vested in both the government and the two chambers of Parliament. The bicameral Parliament consists of the Senate (17 members appointed by members of the government and the opposition party, and approved by the Governor-General) and the House of Representatives (17 members elected by first past the post) to serve five-year terms. The current Leader of Her Majesty's Loyal Opposition is the United Progressive Party Member of Parliament (MP), the Honourable Baldwin Spencer. Elections The last election was held on 21 March 2018. The Antigua Barbuda Labour Party (ABLP), led by Prime Minister Gaston Browne, won 15 of the 17 seats in the House of Representatives. The previous election was on 12 June 2014, in which the Antigua Labour Party won 14 seats, and the United Progressive Party 3 seats. Since 1951, elections have been won by the populist Antigua Labour Party. However, the Antigua and Barbuda legislative election of 2004 saw the defeat of the longest-serving elected government in the Caribbean. Vere Bird was Prime Minister from 1981 to 1994 and Chief Minister of Antigua from 1960 to 1981, except for the 1971–1976 period when the Progressive Labour Movement (PLM) defeated his party. Bird, the nation's first Prime Minister, is credited with having brought Antigua and Barbuda and the Caribbean into a new era of independence. Prime Minister Lester Bryant Bird succeeded the elder Bird in 1994. Party elections Gaston Browne defeated his predecessor Lester Bryant Bird at the Antigua Labour Party's biennial convention in November 2012, held to elect a political leader and other officers. The party then altered its name from the Antigua Labour Party (ALP) to the Antigua and Barbuda Labour Party (ABLP).
This was done to officially include the party's presence on the sister island of Barbuda in its organisation; it is the only political party on the mainland with a physical branch in Barbuda. Judiciary The Judicial branch is the Eastern Caribbean Supreme Court (based in Saint Lucia; one judge of the Supreme Court is a resident of the islands and presides over the High Court of Justice). Antigua is also a member of the Caribbean Court of Justice. The Judicial Committee of the Privy Council serves as its Supreme Court of Appeal. Foreign relations Antigua and Barbuda is a member of the United Nations, the Bolivarian Alliance for the Americas, the Commonwealth of Nations, the Caribbean Community, the Organization of Eastern Caribbean States, the Organization of American States, the World Trade Organization and the Eastern Caribbean's Regional Security System. Antigua and Barbuda is also a member of the International Criminal Court (with a Bilateral Immunity Agreement of Protection for the US military as covered under Article 98 of the Rome Statute). In 2013, Antigua and Barbuda called for reparations for slavery at the United Nations. Prime Minister Baldwin Spencer said "We have recently seen a number of leaders apologising", and that they should now "match their words with concrete and material benefits." Military The Royal Antigua and Barbuda Defence Force has around 260 members dispersed between the line infantry regiment, service and support unit and coast guard. There is also the Antigua and Barbuda Cadet Corps made up of 200 teenagers between the ages of 12 and 18. In 2018, Antigua and Barbuda signed the UN Treaty on the Prohibition of Nuclear Weapons. Administrative divisions Antigua and Barbuda is divided into six parishes and two dependencies: Note: Though Barbuda and Redonda are called dependencies, they are integral parts of the state, making them essentially administrative divisions; "dependency" is simply a title. 
Human rights Antigua and Barbuda prohibits discrimination in employment, bans child labor and human trafficking, and has laws against domestic abuse and child abuse. As on other Caribbean islands, same-sex sexual activity is illegal in Antigua and Barbuda and punishable by prison time, although the law has not been enforced or a case brought to trial in many years. There are several current movements under way to repeal the buggery laws. Economy Tourism dominates the economy, accounting for more than half of the gross domestic product (GDP). Antigua is famous for its many luxury resorts as an ultra-high-end travel destination. Weakened tourist activity in the lower and middle market segments since early 2000 has slowed the economy, however, and squeezed the government into a tight fiscal corner. Antigua and Barbuda has enacted policies to attract high-net-worth citizens and residents, such as enacting a 0% personal income tax rate in 2019. Investment banking and financial services also make up an important part of the economy. Major world banks with offices in Antigua include the Royal Bank of Canada (RBC) and Scotiabank. Financial-services corporations with offices in Antigua include PriceWaterhouseCoopers. The US Securities and Exchange Commission has accused the Antigua-based Stanford International Bank, owned by Texas billionaire Allen Stanford, of orchestrating a huge fraud which may have bilked investors of some $8 billion. The twin-island nation's agricultural production is focused on its domestic market and constrained by a limited water supply and a labour shortage stemming from the lure of higher wages in tourism and construction work. Manufacturing is made up of enclave-type assembly for export, the major products being bedding, handicrafts and electronic components. 
Prospects for economic growth in the medium term will continue to depend on income growth in the industrialised world, especially in the United States, from which about one-third of all tourists come. Access to biocapacity is lower than the world average. In 2016, Antigua and Barbuda had 0.8 global hectares of biocapacity per person within its territory, much less than the world average of 1.6 global hectares per person. In 2016, Antigua and Barbuda used 4.3 global hectares of biocapacity per person – its ecological footprint of consumption. This means it uses more biocapacity than its territory contains. As a result, Antigua and Barbuda is running a biocapacity deficit. Following the opening of the American University of Antigua College of Medicine by investor and attorney Neil Simon in 2003, a new source of revenue was established. The university employs many local Antiguans, and the approximately 1,000 students consume a large amount of goods and services. Antigua and Barbuda also uses an economic citizenship program to spur investment into the country. Transport Education Culture The culture is predominantly a mixture of West African and British cultural influences. Cricket is the national sport. Other popular sports include football, boat racing and surfing. Antigua Sailing Week attracts locals and visitors from all over the world. Music Festivals The national Carnival held each August commemorates the abolition of slavery in the British West Indies, although on some islands, Carnival may celebrate the coming of Lent. Its festive pageants, shows, contests and other activities are a major tourist attraction. Cuisine Media There are three newspapers: the Antigua Daily Observer, Antigua News Room and The Antiguan Times. The Antigua Observer is the only daily printed newspaper. The local television channel ABS TV 10 is available (it is the only station that shows exclusively local programs). 
There are also several local and regional radio stations, such as V2C-AM 620, ZDK-AM 1100, VYBZ-FM 92.9, ZDK-FM 97.1, Observer Radio 91.1 FM, DNECA Radio 90.1 FM, Second Advent Radio 101.5 FM, Abundant Life Radio 103.9 FM, Crusader Radio 107.3 FM, Nice FM 104.3. Literature Antiguan author Jamaica Kincaid has published over 20 works of literature. Sports The Antigua and Barbuda national cricket team represented the country at the 1998 Commonwealth Games, but Antiguan cricketers otherwise play for the Leeward Islands cricket team in domestic matches and the West Indies cricket team internationally. The 2007 Cricket World Cup was hosted in the West Indies from 11 March to 28 April 2007. Antigua hosted eight matches at the Sir Vivian Richards Stadium, which was completed on 11 February 2007 and can hold up to 20,000 people. Antigua also hosted the Stanford Twenty20, a Twenty20 cricket tournament started by Allen Stanford in 2006 as a regional competition with almost all Caribbean islands taking part. The Sir Vivian Richards Stadium is set to host the 2022 ICC Under-19 Cricket World Cup. Rugby and netball are popular as well. Association football, or soccer, is also a very popular sport. Antigua has a national football team which entered World Cup qualification for the 1974 tournament and for 1986 and beyond. A professional team was formed in 2011, Antigua Barracuda FC, which played in the USL Pro, a lower professional league in the USA. The nation's team had a major achievement in 2012, getting out of its preliminary group for the 2014 World Cup, notably due to a victory over powerful Haiti. In its first game in the next CONCACAF group play on 8 June 2012 in Tampa, FL, Antigua and Barbuda, comprising 17 Barracuda players and 7 from the lower English professional leagues, scored a goal against the United States. However, the team lost 3–1 to the US. 
Daniel Bailey became the first Antiguan to reach a world indoor final, where he won a bronze medal at the 2010 IAAF World Indoor Championships. He was also the first Antiguan to make a 100m final at the 2009 World Championships in Athletics, and the first Antiguan to run under 10 seconds over 100m. Brendan Christian won a gold medal in the 200m and bronze medal in the 100m at the 2007 Pan American Games. James Grayman won a bronze medal at the same games in the men's high jump. Miguel Francis is the first Antiguan to run under 20 seconds in the 200m. Heather Samuel won a bronze medal at the 1995 Pan American Games over 100m. Olympic 400m hurdles medallist Rai Benjamin previously represented Antigua and Barbuda before representing the United States. His silver-medal run at the 2020 Olympic Games made him the second-fastest person in history over the 400m hurdles, with a time of 46.17 seconds. Notable people Symbols The national bird is the frigate bird, and the national tree is the Bucida buceras (Whitewood tree). Clare Waight Keller included Agave karatto to represent Antigua and Barbuda in Meghan Markle's wedding veil, which included the distinctive flora of each Commonwealth country. Despite being an introduced species, the European fallow deer (Dama dama) is the national animal. In 1992, the government ran a national competition to design a new national dress for the country; this was won by artist Heather Doram. See also Geology of Antigua and Barbuda Outline of Antigua and Barbuda Index of Antigua and Barbuda–related articles Transport in Antigua and Barbuda References Works cited Further reading Nicholson, Desmond V., Antigua, Barbuda, and Redonda: A Historical Sketch, St. Johns, Antigua: Antigua and Barbuda Museum, 1991. Dyde, Brian, A History of Antigua: The Unsuspected Isle, London: Macmillan Caribbean, 2000. Gaspar, David Barry – Bondmen & Rebels: A Study of Master-Slave Relations in Antigua, with Implications for Colonial America. Harris, David R. 
– Plants, Animals, and Man in the Outer Leeward Islands, West Indies. An Ecological Study of Antigua, Barbuda, and Anguilla. Henry, Paget – Peripheral Capitalism and Underdevelopment in Antigua. Lazarus-Black, Mindie – Legitimate Acts and Illegal Encounters: Law and Society in Antigua and Barbuda. Riley, J. H. – Catalogue of a Collection of Birds from Barbuda and Antigua, British West Indies. Rouse, Irving and Birgit Faber Morse – Excavations at the Indian Creek Site, Antigua, West Indies. Thomas Hearne. Southampton. External links Antigua and Barbuda, United States Library of Congress Antigua and Barbuda. The World Factbook. Central Intelligence Agency. Antigua and Barbuda from UCB Libraries GovPubs Antigua and Barbuda from the BBC News World Bank's country data profile for Antigua and Barbuda ArchaeologyAntigua.org – a source of archaeological information for Antigua and Barbuda Countries in the Caribbean Island countries Commonwealth realms Countries in North America English-speaking countries and territories Member states of the Caribbean Community Member states of the Commonwealth of Nations Member states of the Organisation of Eastern Caribbean States Current member states of the United Nations Small Island Developing States British Leeward Islands Former British colonies and protectorates in the Americas Former colonies in North America 1630s establishments in the Caribbean 1632 establishments in the British Empire 1981 disestablishments in the United Kingdom States and territories established in 1981
953
https://en.wikipedia.org/wiki/Azincourt
Azincourt
Azincourt, historically known in English as Agincourt, is a commune in the Pas-de-Calais department in northern France. It is situated north-west of Saint-Pol-sur-Ternoise on the D71 road between Hesdin and Fruges. The late medieval Battle of Agincourt between the English and the French took place in the commune in 1415. Toponym The name is attested as Aisincurt in 1175, derived from a Germanic masculine name Aizo, Aizino and the early Northern French word curt (which meant a farm with a courtyard; derived from the Late Latin cortem). The name has no etymological link with Agincourt, Meurthe-et-Moselle (attested as Egincourt in 875), which is derived separately from another Germanic male name *Ingin-. History Azincourt is famous as being near the site of the battle fought on 25 October 1415 in which the army led by King Henry V of England defeated the forces led by Charles d'Albret on behalf of Charles VI of France, which has gone down in history as the Battle of Agincourt. According to M. Forrest, the French knights were so encumbered by their armour that they were exhausted even before the start of the battle. Later on, when he became king in 1509, Henry VIII is supposed to have commissioned an English translation of a Life of Henry V so that he could emulate him, on the grounds that he thought that launching a campaign against France would help him to impose himself on the European stage. In 1513, Henry VIII crossed the English Channel, stopping at Azincourt. The battle, as was the tradition, was named after a nearby castle called Azincourt. The castle has since disappeared and the settlement now known as Azincourt adopted the name in the seventeenth century. 
John Cassell wrote in 1857 that "the village of Azincourt itself is now a group of dirty farmhouses and wretched cottages, but where the hottest of the battle raged, between that village and the commune of Tramecourt, there still remains a wood precisely corresponding with the one in which Henry placed his ambush; and there are yet existing the foundations of the castle of Azincourt, from which the king named the field." Population Sights The original battlefield museum in the village featured model knights made out of Action Man figures. This has now been replaced by the Centre historique médiéval d'Azincourt (CHM), a more professional museum, conference centre and exhibition space incorporating laser, video, slide shows, audio commentaries, and some interactive elements. The museum building is shaped like a longbow, similar to those used at the battle by archers under King Henry. Since 2004 a large medieval festival has been held in the village, organised by the local community, the CHM, the Azincourt Alliance, and various other UK societies; it commemorates the battle, local history, and medieval life, arts and crafts. The festival was previously held in October but, because the inclement weather and heavy local clay soil (as at the battle itself) made it difficult to stage, it was moved to the last Sunday in July. International relations Azincourt is twinned with Middleham, United Kingdom. See also Communes of the Pas-de-Calais department The neighbourhood of Agincourt, Toronto, Canada, named for Azincourt, not Agincourt, Meurthe-et-Moselle References INSEE commune file Communes of Pas-de-Calais
954
https://en.wikipedia.org/wiki/Albert%20Speer
Albert Speer
Berthold Konrad Hermann Albert Speer (19 March 1905 – 1 September 1981) was a German architect who served as the Minister of Armaments and War Production in Nazi Germany during most of World War II. A close ally of Adolf Hitler, he was convicted at the Nuremberg trials and sentenced to 20 years in prison. An architect by training, Speer joined the Nazi Party in 1931. His architectural skills made him increasingly prominent within the Party, and he became a member of Hitler's inner circle. Hitler commissioned him to design and construct structures including the Reich Chancellery and the Nazi party rally grounds in Nuremberg. In 1937, Hitler appointed Speer as General Building Inspector for Berlin. In this capacity he was responsible for the Central Department for Resettlement that evicted Jewish tenants from their homes in Berlin. In February 1942, Speer was appointed as Reich Minister of Armaments and War Production. Using misleading statistics, he promoted himself as having performed an "armaments miracle" that was widely credited with keeping Germany in the war. In 1944, Speer established a task force to increase production of fighter aircraft. It became instrumental in the exploitation of slave labor for the benefit of the German war effort. After the war, Speer was among the 24 "major war criminals" arrested and charged with the crimes of the Nazi regime at the Nuremberg trials. He was found guilty of war crimes and crimes against humanity, principally for the use of slave labor, narrowly avoiding a death sentence. Having served his full term, Speer was released in 1966. He used his writings from the time of imprisonment as the basis for two autobiographical books, Inside the Third Reich and Spandau: The Secret Diaries. Speer's books were a success; the public was fascinated by an inside view of the Third Reich. Speer died of a stroke in 1981. Little remains of his personal architectural work. 
Through his autobiographies and interviews, Speer carefully constructed an image of himself as a man who deeply regretted having failed to discover the monstrous crimes of the Third Reich. He continued to deny explicit knowledge of, and responsibility for, the Holocaust. This image dominated his historiography in the decades following the war, giving rise to the "Speer Myth": the perception of him as an apolitical technocrat responsible for revolutionizing the German war machine. The myth began to fall apart in the 1980s, when the armaments miracle was attributed to Nazi propaganda. Adam Tooze wrote in The Wages of Destruction that the idea that Speer was an apolitical technocrat was "absurd". Martin Kitchen, writing in Speer: Hitler's Architect, stated that much of the increase in Germany's arms production was actually due to systems instituted by Speer's predecessor (Fritz Todt) and furthermore that Speer was intimately involved in the "Final Solution". Early years and personal life Speer was born in Mannheim, into an upper-middle-class family. He was the second of three sons of Luise Mathilde Wilhelmine (Hommel) and Albert Friedrich Speer. In 1918, the family leased their Mannheim residence and moved to a home they had in Heidelberg. Henry T. King, deputy prosecutor at the Nuremberg trials, who later wrote a book about Speer, said: "Love and warmth were lacking in the household of Speer's youth." His brothers, Ernst and Hermann, bullied him throughout his childhood. Speer was active in sports, taking up skiing and mountaineering. He followed in the footsteps of his father and grandfather and studied architecture. Speer began his architectural studies at the University of Karlsruhe instead of a more highly acclaimed institution because the hyperinflation crisis of 1923 limited his parents' income. In 1924, when the crisis had abated, he transferred to the "much more reputable" Technical University of Munich. 
In 1925, he transferred again, this time to the Technical University of Berlin where he studied under Heinrich Tessenow, whom Speer greatly admired. After passing his exams in 1927, Speer became Tessenow's assistant, a high honor for a man of 22. As such, Speer taught some of his classes while continuing his own postgraduate studies. In Munich Speer began a close friendship, ultimately spanning over 50 years, with Rudolf Wolters, who also studied under Tessenow. In mid-1922, Speer began courting Margarete (Margret) Weber (1905–1987), the daughter of a successful craftsman who employed 50 workers. The relationship was frowned upon by Speer's class-conscious mother, who felt the Webers were socially inferior. Despite this opposition, the two married in Berlin on 28 August 1928; seven years elapsed before Margarete was invited to stay at her in-laws' home. The couple would have six children together, but Albert Speer grew increasingly distant from his family after 1933. He remained so even after his release from imprisonment in 1966, despite their efforts to forge closer bonds. Party architect and government functionary Joining the Nazis (1931–1934) In January 1931, Speer applied for Nazi Party membership, and on 1 March 1931, he became member number 474,481. The same year, with stipends shrinking amid the Depression, Speer surrendered his position as Tessenow's assistant and moved to Mannheim, hoping to make a living as an architect. After he failed to do so, his father gave him a part-time job as manager of his properties. In July 1932, the Speers visited Berlin to help out the Party before the Reichstag elections. While they were there his friend, Nazi Party official Karl Hanke recommended the young architect to Joseph Goebbels to help renovate the Party's Berlin headquarters. When the commission was completed, Speer returned to Mannheim and remained there as Hitler took office in January 1933. 
The organizers of the 1933 Nuremberg Rally asked Speer to submit designs for the rally, bringing him into contact with Hitler for the first time. Neither the organizers nor Rudolf Hess were willing to decide whether to approve the plans, and Hess sent Speer to Hitler's Munich apartment to seek his approval. This work won Speer his first national post, as Nazi Party "Commissioner for the Artistic and Technical Presentation of Party Rallies and Demonstrations". Shortly after coming to power, Hitler began making plans to rebuild the chancellery. At the end of 1933, he contracted Paul Troost to renovate the entire building. Hitler appointed Speer, whose work for Goebbels had impressed him, to manage the building site for Troost. As Chancellor, Hitler had a residence in the building and came by every day to be briefed by Speer and the building supervisor on the progress of the renovations. After one of these briefings, Hitler invited Speer to lunch, to the architect's great excitement. Speer quickly became part of Hitler's inner circle; he was expected to call on him in the morning for a walk or chat, to provide consultation on architectural matters, and to discuss Hitler's ideas. Most days he was invited to dinner. In the English version of his memoirs, Speer says that his political commitment merely consisted of paying his "monthly dues". He assumed his German readers would not be so gullible and told them the Nazi Party offered a "new mission". He was more forthright in an interview with William Hamsher in which he said he joined the party in order to save "Germany from Communism". After the war, he claimed to have had little interest in politics at all and had joined almost by chance. Like many of those in power in the Third Reich, he was not an ideologue, "nor was he anything more than an instinctive anti-Semite." 
The historian Magnus Brechtken, discussing Speer, said he did not give anti-Jewish public speeches and that his anti-Semitism can best be understood through his actions—which were anti-Semitic. Brechtken added that, throughout Speer's life, his central motives were to gain power, rule, and acquire wealth. Nazi architect (1934–1937) When Troost died on 21 January 1934, Speer effectively replaced him as the Party's chief architect. Hitler appointed Speer as head of the Chief Office for Construction, which placed him nominally on Hess's staff. One of Speer's first commissions after Troost's death was the Zeppelinfeld stadium in Nuremberg. It was used for Nazi propaganda rallies and can be seen in Leni Riefenstahl's propaganda film Triumph of the Will. The building was able to hold 340,000 people. Speer insisted that as many events as possible be held at night, both to give greater prominence to his lighting effects and to hide the overweight Nazis. Nuremberg was the site of many official Nazi buildings. Many more buildings were planned. If built, the German Stadium would have accommodated 400,000 spectators. Speer modified Werner March's design for the Olympic Stadium being built for the 1936 Summer Olympics. He added a stone exterior that pleased Hitler. Speer designed the German Pavilion for the 1937 international exposition in Paris. Berlin's General Building Inspector (1937–1942) On 30 January 1937, Hitler appointed Speer as General Building Inspector for the Reich Capital. This carried with it the rank of State Secretary in the Reich government and gave him extraordinary powers over the Berlin city government. He was to report directly to Hitler, and was independent of both the mayor and the Gauleiter of Berlin. Hitler ordered Speer to develop plans to rebuild Berlin. These centered on a three-mile-long grand boulevard running from north to south, which Speer called the Prachtstrasse, or Street of Magnificence; he also referred to it as the "North–South Axis". 
At the northern end of the boulevard, Speer planned to build the Volkshalle, a huge domed assembly hall with floor space for 180,000 people. At the southern end of the avenue, a great triumphal arch, able to fit the Arc de Triomphe inside its opening, was planned. The existing Berlin railroad termini were to be dismantled, and two large new stations built. Speer hired Wolters as part of his design team, with special responsibility for the Prachtstrasse. The outbreak of World War II in 1939 led to the postponement, and later the abandonment, of these plans. Plans to build a new Reich chancellery had been underway since 1934. Land had been purchased by the end of 1934, and starting in March 1936 the first buildings were demolished to create space at Voßstraße. Speer was involved virtually from the beginning. In the aftermath of the Night of the Long Knives, he had been commissioned to renovate the Borsig Palace on the corner of Voßstraße and Wilhelmstraße as headquarters of the Sturmabteilung (SA). He completed the preliminary work for the new chancellery by May 1936. In June 1936 he charged a personal honorarium of 30,000 Reichsmark and estimated the chancellery would be completed within three to four years. Detailed plans were completed in July 1937 and the first shell of the new chancellery was complete on 1 January 1938. On 27 January 1938, Speer received plenipotentiary powers from Hitler to finish the new chancellery by 1 January 1939. For propaganda purposes, Hitler claimed during the topping-out ceremony on 2 August 1938 that he had ordered Speer to complete the new chancellery that year. Shortages of labor meant the construction workers had to work in ten-to-twelve-hour shifts. The Schutzstaffel (SS) built two concentration camps in 1938 and used the inmates to quarry stone for its construction. 
A brick factory was built near the Oranienburg concentration camp at Speer's behest; when someone commented on the poor conditions there, Speer stated, "The Yids got used to making bricks while in Egyptian captivity". The chancellery was completed in early January 1939. The building itself was hailed by Hitler as the "crowning glory of the greater German political empire". During the Chancellery project, the pogrom of Kristallnacht took place. Speer made no mention of it in the first draft of Inside the Third Reich. It was only on the urgent advice of his publisher that he added a mention of seeing the ruins of the Central Synagogue in Berlin from his car. Kristallnacht accelerated Speer's ongoing efforts to dispossess Berlin's Jews from their homes. From 1939 on, Speer's Department used the Nuremberg Laws to evict Jewish tenants of non-Jewish landlords in Berlin, to make way for non-Jewish tenants displaced by redevelopment or bombing. Eventually, 75,000 Jews were displaced by these measures. Speer denied he knew they were being put on Holocaust trains and claimed that those displaced were, "Completely free and their families were still in their apartments". He also said: " ... en route to my ministry on the city highway, I could see ... crowds of people on the platform of nearby Nikolassee Railroad Station. I knew that these must be Berlin Jews who were being evacuated. I am sure that an oppressive feeling struck me as I drove past. I presumably had a sense of somber events." Matthias Schmidt said Speer had personally inspected concentration camps and described his comments as an "outright farce". Martin Kitchen described Speer's often repeated line that he knew nothing of the "dreadful things" as hollow—because not only was he fully aware of the fate of the Jews he was actively participating in their persecution. 
As Germany started World War II in Europe, Speer instituted quick-reaction squads to construct roads or clear away debris; before long, these units would be used to clear bomb sites. Speer used forced Jewish labor on these projects, in addition to regular German workers. Construction stopped on the Berlin and Nuremberg plans at the outbreak of war. Though stockpiling of materials and other work continued, this slowed to a halt as more resources were needed for the armament industry. Speer's offices undertook building work for each branch of the military, and for the SS, using slave labor. Speer's building work made him among the wealthiest of the Nazi elite. Minister of Armaments Appointment and increasing power In 1941, Speer was elected to the Reichstag from electoral constituency 2 (Berlin-West). On 8 February 1942, Reich Minister of Armaments and Munitions Fritz Todt died in a plane crash shortly after taking off from Hitler's eastern headquarters at Rastenburg. Speer had arrived there the previous evening and had accepted Todt's offer to fly with him to Berlin, but cancelled some hours before take-off because he had been up late the previous night in a meeting with Hitler. Hitler appointed Speer in Todt's place. Martin Kitchen, a British historian, says that the choice was not surprising. Speer was loyal to Hitler, and his experience building prisoner of war camps and other structures for the military qualified him for the job. Speer succeeded Todt not only as Reich Minister but in all his other powerful positions, including Inspector General of German Roadways, Inspector General for Water and Energy and Head of the Nazi Party's Office of Technology. At the same time, Hitler also appointed Speer as head of the Organisation Todt, a massive, government-controlled construction company. Characteristically, Hitler did not give Speer any clear remit; he was left to fight his contemporaries in the regime for power and control. 
As an example, he wanted to be given power over all armaments issues under Hermann Göring's Four Year Plan. Göring was reluctant to grant this. However Speer secured Hitler's support, and on 1 March 1942, Göring signed a decree naming Speer "General Plenipotentiary for Armament Tasks" in the Four Year Plan. Speer proved to be ambitious, unrelenting and ruthless. Speer set out to gain control not just of armaments production in the army, but in the whole armed forces. It did not immediately dawn on his political rivals that his calls for rationalization and reorganization were hiding his desire to sideline them and take control. By April 1942, Speer had persuaded Göring to create a three-member Central Planning Board within the Four Year Plan, which he used to obtain supreme authority over procurement and allocation of raw materials and scheduling of production in order to consolidate German war production in a single agency. Speer was fêted at the time, and in the post-war era, for performing an "armaments miracle" in which German war production dramatically increased. This "miracle" was brought to a halt in the summer of 1943 by, among other factors, the first sustained Allied bombing. Other factors probably contributed to the increase more than Speer himself. Germany's armaments production had already begun to result in increases under his predecessor, Todt. Naval armaments were not under Speer's supervision until October 1943, nor the Luftwaffe's armaments until June of the following year. Yet each showed comparable increases in production despite not being under Speer's control. Another factor that produced the boom in ammunition was the policy of allocating more coal to the steel industry. Production of every type of weapon peaked in June and July 1944, but there was now a severe shortage of fuel. After August 1944, oil from the Romanian fields was no longer available. 
Oil production became so low that any possibility of offensive action became impossible and weaponry lay idle. As Minister of Armaments, Speer was responsible for supplying weapons to the army. With Hitler's full agreement, he decided to prioritize tank production, and he was given unrivaled power to ensure success. Hitler was closely involved with the design of the tanks, but kept changing his mind about the specifications. This delayed the program, and Speer was unable to remedy the situation. In consequence, despite tank production having the highest priority, relatively little of the armaments budget was spent on it. This led to a significant German Army failure at the Battle of Prokhorovka, a major turning point on the Eastern Front against the Soviet Red Army. As head of Organisation Todt, Speer was directly involved in the construction and alteration of concentration camps. He agreed to expand Auschwitz and some other camps, allocating 13.7 million Reichsmarks for the work to be carried out. This allowed an extra 300 huts to be built at Auschwitz, increasing the total human capacity to 132,000. Included in the building works was material to build gas chambers, crematoria and morgues. The SS called this "Professor Speer's Special Programme". Speer realized that with six million workers drafted into the armed forces, there was a labor shortage in the war economy, and not enough workers for his factories. In response, Hitler appointed Fritz Sauckel as a "manpower dictator" to obtain new workers. Speer and Sauckel cooperated closely to meet Speer's labor demands. Hitler gave Sauckel a free hand to obtain labor, something that delighted Speer, who had requested 1,000,000 "voluntary" laborers to meet the need for armament workers. Sauckel had whole villages in France, Holland and Belgium forcibly rounded up and shipped to Speer's factories. Sauckel obtained new workers often using the most brutal methods. 
In occupied areas of the Soviet Union that had been subject to partisan action, civilian men and women were rounded up en masse and sent to work forcibly in Germany. By April 1943, Sauckel had supplied 1,568,801 "voluntary" laborers, forced laborers, prisoners of war and concentration camp prisoners to Speer for use in his armaments factories. It was for the maltreatment of these people that Speer was principally convicted at the Nuremberg trials.

Consolidation of arms production

Following his appointment as Minister of Armaments, Speer was in control of armaments production solely for the Army. He coveted control of the production of armaments for the Luftwaffe and Kriegsmarine as well. He set about extending his power and influence with unexpected ambition. His close relationship with Hitler provided him with political protection, and he was able to outwit and outmaneuver his rivals in the regime. Hitler's cabinet was dismayed at his tactics but, regardless, he was able to accumulate new responsibilities and more power. By July 1943, he had gained control of armaments production for the Luftwaffe and Kriegsmarine. In August 1943, he took control of most of the Ministry of Economics, to become, in Admiral Dönitz's words, "Europe's economic dictator". His formal title was changed on 2 September 1943 to "Reich Minister for Armaments and War Production". He had become one of the most powerful people in Nazi Germany. Speer and his hand-picked director of submarine construction, Otto Merker, believed that the shipbuilding industry was being held back by outdated methods, and that revolutionary new approaches imposed by outsiders would dramatically improve output. This belief proved incorrect, and Speer and Merker's attempt to build the Kriegsmarine's new generation of submarines, the Type XXI and Type XXIII, as prefabricated sections at different facilities rather than at single dockyards contributed to the failure of this strategically important program.
The designs were rushed into production, and the completed submarines were crippled by flaws which resulted from the way they had been constructed. While dozens of submarines were built, few ever entered service. In December 1943, Speer visited Organisation Todt workers in Lapland; while there, he seriously damaged his knee and was incapacitated for several months. He was under the dubious care of Professor Karl Gebhardt at a medical clinic called Hohenlychen, where patients "mysteriously failed to survive". In mid-January 1944, Speer had a lung embolism and fell seriously ill. Concerned about retaining power, he did not appoint a deputy and continued to direct the work of the Armaments Ministry from his bedside. Speer's illness coincided with the Allied "Big Week", a series of bombing raids on the German aircraft factories that were a devastating blow to aircraft production. His political rivals used the opportunity to undermine his authority and damage his reputation with Hitler. He lost Hitler's unconditional support and began to lose power. In response to the Allied Big Week, Adolf Hitler authorized the creation of a Fighter Staff committee. Its aim was to ensure the preservation and growth of fighter aircraft production. The task force was established on 1 March 1944 by order of Speer, with support from Erhard Milch of the Reich Aviation Ministry. Production of German fighter aircraft more than doubled between 1943 and 1944. The growth, however, consisted in large part of models that were becoming obsolescent and proved easy prey for Allied aircraft. On 1 August 1944, Speer merged the Fighter Staff into a newly formed Armament Staff committee. The Fighter Staff committee was instrumental in bringing about the increased exploitation of slave labor in the war economy. The SS provided 64,000 prisoners for 20 separate projects from various concentration camps, including Mittelbau-Dora. Prisoners worked for Junkers, Messerschmitt, Henschel and BMW, among others.
To increase production, Speer introduced a system of punishments for his workforce. Those who feigned illness, slacked off, sabotaged production or tried to escape were denied food or sent to concentration camps. By 1944, such punishments had become endemic; over half a million workers were arrested. By this time, 140,000 people were working in Speer's underground factories. These factories were death traps; discipline was brutal, with regular executions. There were so many corpses at the Dora underground factory, for example, that the crematorium was overwhelmed. Speer's own staff described the conditions there as "hell". The largest technological advance under Speer's command came through the rocket program. It began in 1932 but had not supplied any weaponry. Speer enthusiastically supported the program and in March 1942 placed an order for the A4 rocket, the world's first ballistic missile, later known as the V-2. The rockets were researched at a facility in Peenemünde along with the V-1 flying bomb. The V-2's first target was Paris on 8 September 1944. The program, while advanced, proved to be an impediment to the war economy; the large capital investment was not repaid in military effectiveness. The rockets were built at an underground factory at Mittelwerk. Labor to build the A4 rockets came from the Mittelbau-Dora concentration camp. Of the 60,000 people who ended up at the camp, 20,000 died due to the appalling conditions. On 14 April 1944, Speer lost control of Organisation Todt to his deputy, Franz Xaver Dorsch. He opposed the assassination attempt against Hitler on 20 July 1944. He was not involved in the plot, and played a minor role in the regime's efforts to regain control over Berlin after Hitler survived. After the plot, Speer's rivals attacked some of his closest allies, and his management system fell out of favor with radicals in the party. He lost yet more authority.
Defeat of Nazi Germany

Losses of territory and a dramatic expansion of the Allied strategic bombing campaign caused the collapse of the German economy from late 1944. Air attacks on the transport network were particularly effective, as they cut the main centres of production off from essential coal supplies. In January 1945, Speer told Goebbels that armaments production could be sustained for at least a year. However, he concluded that the war was lost after Soviet forces captured the important Silesian industrial region later that month. Nevertheless, Speer believed that Germany should continue the war for as long as possible with the goal of winning better conditions from the Allies than the unconditional surrender they insisted upon. During January and February, Speer claimed that his ministry would deliver "decisive weapons" and a large increase in armaments production which would "bring about a dramatic change on the battlefield". Speer gained control over the railways in February, and asked Heinrich Himmler to supply concentration camp prisoners to work on their repair. By mid-March, Speer had accepted that Germany's economy would collapse within the next eight weeks. While he sought to frustrate directives to destroy industrial facilities in areas at risk of capture, so that they could be used after the war, he still supported the war's continuation. Speer provided Hitler with a memorandum on 15 March, which detailed Germany's dire economic situation and sought approval to cease demolitions of infrastructure. Three days later, he also proposed to Hitler that Germany's remaining military resources be concentrated along the Rhine and Vistula rivers in an attempt to prolong the fighting. This ignored military realities, as the German armed forces were unable to match the Allies' firepower and were facing total defeat. Hitler rejected Speer's proposal to cease demolitions.
Instead, he issued the "Nero Decree" on 19 March, which called for the destruction of all infrastructure as the army retreated. Speer was appalled by this order, and persuaded several key military and political leaders to ignore it. During a meeting with Speer on 28/29 March, Hitler rescinded the decree and gave him authority over demolitions. Speer ended them, though the army continued to blow up bridges. By April, little was left of the armaments industry, and Speer had few official duties. Speer visited the Führerbunker on 22 April for the last time. He met Hitler and toured the damaged Chancellery before leaving Berlin to return to Hamburg. On 29 April, the day before committing suicide, Hitler dictated a final political testament which dropped Speer from the successor government. Speer was to be replaced by his subordinate, Karl-Otto Saur. Speer was disappointed that Hitler had not selected him as his successor. After Hitler's death, Speer offered his services to the so-called Flensburg Government, headed by Hitler's successor, Karl Dönitz. He took a role in that short-lived regime as Minister of Industry and Production. Beginning on 10 May, Speer provided information to the Allies on the effects of the air war and on a broad range of other subjects. On 23 May, two weeks after the surrender of German forces, British troops arrested the members of the Flensburg Government and brought Nazi Germany to a formal end.

Post-war

Nuremberg trial

Speer was taken to several internment centres for Nazi officials and interrogated. In September 1945, he was told that he would be tried for war crimes, and several days later, he was moved to Nuremberg and incarcerated there. Speer was indicted on four counts: participating in a common plan or conspiracy for the accomplishment of crimes against peace; planning, initiating and waging wars of aggression and other crimes against peace; war crimes; and crimes against humanity. The chief United States prosecutor, Robert H.
Jackson, of the U.S. Supreme Court, said, "Speer joined in planning and executing the program to dragoon prisoners of war and foreign workers into German war industries, which waxed in output while the workers waned in starvation." Speer's attorney, Hans Flächsner, presented Speer as an artist thrust into political life who had always remained a non-ideologue. Speer was found guilty of war crimes and crimes against humanity, principally for the use of slave labor and forced labor. He was acquitted on the other two counts. He had claimed that he was unaware of Nazi extermination plans, and the Allies had no proof that he was aware. His claim was revealed to be false in private correspondence written in 1971 and publicly disclosed in 2007. On 1 October 1946, he was sentenced to 20 years' imprisonment. While three of the eight judges (the two Soviet judges and the American, Francis Biddle) advocated the death penalty for Speer, the other judges did not, and a compromise sentence was reached after two days of discussions.

Imprisonment

On 18 July 1947, Speer was transferred to Spandau Prison in Berlin to serve his prison term. There he was known as Prisoner Number Five. Speer's parents died while he was incarcerated. His father, who died in 1947, despised the Nazis and was silent upon meeting Hitler. His mother died in 1952. As a Nazi Party member, she had greatly enjoyed dining with Hitler. Wolters and longtime Speer secretary Annemarie Kempf, while not permitted direct communication with Speer in Spandau, did what they could to help his family and carry out the requests Speer put in letters to his wife—the only written communication he was officially allowed. Beginning in 1948, Speer had the services of Toni Proost, a sympathetic Dutch orderly, to smuggle mail and his writings. In 1949, Wolters opened a bank account for Speer and began fundraising among those architects and industrialists who had benefited from Speer's activities during the war.
Initially, the funds were used only to support Speer's family, but increasingly the money was used for other purposes, paying for Toni Proost to go on holiday and for bribes to those who might be able to secure Speer's release. Once Speer became aware of the existence of the fund, he sent detailed instructions about what to do with the money. Wolters raised a total of DM158,000 for Speer over the final seventeen years of his sentence. The prisoners were forbidden to write memoirs. Speer was able to have his writings sent to Wolters, however, and they eventually amounted to 20,000 pages. He had completed his memoirs by November 1953; they became the basis of Inside the Third Reich. In his Spandau diaries, Speer aimed to present himself as a tragic hero who had made a Faustian bargain for which he endured a harsh prison sentence. Much of Speer's energy was dedicated to keeping fit, both physically and mentally, during his long confinement. Spandau had a large enclosed yard where inmates were allocated plots of land for gardening. Speer created an elaborate garden complete with lawns, flower beds, shrubbery, and fruit trees. To make his daily walks around the garden more engaging, Speer embarked on an imaginary trip around the globe. Carefully measuring the distance travelled each day, he mapped it onto real-world geography; by the end of his sentence, his imaginary journey had taken him near Guadalajara, Mexico. Speer also read, studied architectural journals, and brushed up on his English and French. In his writings, Speer claimed to have finished five thousand books while in prison, a gross exaggeration: his sentence amounted to 7,300 days, which would have allowed only about a day and a half per book. Speer's supporters maintained calls for his release. Among those who pledged support for his sentence to be commuted were Charles de Gaulle and US diplomat George Wildman Ball.
Willy Brandt was an advocate of his release, putting an end to the de-Nazification proceedings against him, which could have caused his property to be confiscated. Speer's efforts for an early release came to naught. The Soviet Union, having demanded a death sentence at trial, was unwilling to entertain a reduced sentence. Speer served his full term and was released at midnight on 1 October 1966.

Release and later life

Speer's release from prison was a worldwide media event. Reporters and photographers crowded both the street outside Spandau and the lobby of the Hotel Berlin, where Speer spent the night. He said little, reserving most comments for a major interview published in Der Spiegel in November 1966. Although he stated he hoped to resume an architectural career, his sole project, a collaboration for a brewery, was unsuccessful. Instead, he revised his Spandau writings into two autobiographical books, and later published a work about Himmler and the SS. His books included Inside the Third Reich (in German, Erinnerungen, or Reminiscences) and Spandau: The Secret Diaries. Speer was aided in shaping the works by Joachim Fest and Wolf Jobst Siedler from the publishing house Ullstein. He found himself unable to re-establish a relationship with his children, even with his son Albert, who had also become an architect. According to Speer's daughter Hilde Schramm, "One by one my sister and brothers gave up. There was no communication." He supported his brother Hermann financially after the war. His other brother, Ernst, had died in the Battle of Stalingrad, despite repeated requests from his parents for Speer to repatriate him. Following his release from Spandau, Speer donated the Chronicle, his personal diary, to the German Federal Archives. It had been edited by Wolters and made no mention of the Jews. David Irving discovered discrepancies between the deceptively edited Chronicle and independent documents.
Speer asked Wolters to destroy the material he had omitted from his donation, but Wolters refused and retained an original copy. Wolters' friendship with Speer deteriorated, and one year before Speer's death Wolters gave Matthias Schmidt access to the unedited Chronicle. Schmidt authored the first book that was highly critical of Speer. Speer's memoirs were a phenomenal success. The public was fascinated by an inside view of the Third Reich, and a major war criminal became a popular figure almost overnight. Importantly, he provided an alibi to older Germans who had been Nazis. If Speer, who had been so close to Hitler, had not known the full extent of the crimes of the Nazi regime and had just been "following orders", then they could tell themselves and others that they too had done the same. So great was the need to believe this "Speer Myth" that Fest and Siedler were able to strengthen it—even in the face of mounting historical evidence to the contrary.

Death

Speer made himself widely available to historians and other enquirers. In October 1973, he made his first trip to Britain, flying to London to be interviewed on the BBC Midweek programme. In the same year, he appeared on the television programme The World at War. Speer returned to London in 1981 to participate in the BBC Newsnight programme. He suffered a stroke and died in London on 1 September. He had remained married to his wife, but he had formed a relationship with a German woman living in London and was with her at the time of his death. His daughter, Margret Nissen, wrote in her 2005 memoirs that after his release from Spandau he spent all of his time constructing the "Speer Myth".

The Speer myth

The Good Nazi

After his release from Spandau, Speer portrayed himself as the "good Nazi". He was well-educated, middle class, and bourgeois, and could contrast himself with those who, in the popular mind, typified "bad Nazis".
In his memoirs and interviews, he had distorted the truth and made so many major omissions that his lies became known as "myths". Speer took his myth-making to a mass media level and his "cunning apologies" were reproduced countless times in post-war Germany. Isabell Trommer writes in her biography of Speer that Fest and Siedler were co-authors of Speer's memoirs and co-creators of his myths. In return they were paid handsomely in royalties and other financial inducements. Speer, Siedler and Fest had constructed a masterpiece; the image of the "good Nazi" remained in place for decades, despite historical evidence indicating that it was false. Speer had carefully constructed an image of himself as an apolitical technocrat who deeply regretted having failed to discover the monstrous crimes of the Third Reich. This construction was accepted almost at face value by historian Hugh Trevor-Roper when investigating the death of Adolf Hitler for British Intelligence and in writing The Last Days of Hitler. Trevor-Roper frequently refers to Speer as "a technocrat [who] nourished a technocrat's philosophy", one who cared only for his building projects or his ministerial duties, and who thought that politics was irrelevant, at least until Hitler's Nero Decree which Speer, according to his own telling, worked assiduously to counter. Trevor-Roper – who calls Speer an administrative genius whose basic instincts were peaceful and constructive – does take Speer to task, however, for his failure to recognize the immorality of Hitler and Nazism, calling him "the real criminal of Nazi Germany": For ten years he sat at the very centre of political power; his keen intelligence diagnosed the nature and observed the mutations of Nazi government and policy; he saw and despised the personalities around him; he heard their outrageous orders and understood their fantastic ambitions; but he did nothing. 
Supposing politics to be irrelevant, he turned aside and built roads and bridges and factories, while the logical consequences of government by madmen emerged. Ultimately, when their emergence involved the ruin of all his work, Speer accepted the consequences and acted. Then it was too late; Germany had been destroyed. After Speer's death, Matthias Schmidt published a book that demonstrated that Speer had ordered the eviction of Jews from their Berlin homes. By 1999, historians had amply demonstrated that he had lied extensively. Even so, public perceptions of Speer did not change substantially until Heinrich Breloer aired a biographical film on TV in 2004. The film began a process of demystification and critical reappraisal. Adam Tooze in his book The Wages of Destruction said Speer had manoeuvred himself through the ranks of the regime skillfully and ruthlessly and that the idea he was a technocrat blindly carrying out orders was "absurd". Trommer said he was not an apolitical technocrat; instead, he was one of the most powerful and unscrupulous leaders in the Nazi regime. Kitchen said he had deceived the Nuremberg Tribunal and post-war Germany. Brechtken said that if his extensive involvement in the Holocaust had been known at the time of his trial he would have been sentenced to death. The image of the good Nazi was supported by numerous Speer myths. In addition to the myth that he was an apolitical technocrat, he claimed he did not have full knowledge of the Holocaust or the persecution of the Jews. Another myth posits that Speer revolutionized the German war machine after his appointment as Minister of Armaments. He was credited with a dramatic increase in the shipment of arms that was widely reported as keeping Germany in the war. Another myth centered around a faked plan to assassinate Hitler with poisonous gas. The idea for this myth came to him after he recalled the panic when car fumes came through an air ventilation system. 
He fabricated the additional details. Brechtken wrote that his most brazen lie was fabricated during an interview with a French journalist in 1952. The journalist described an invented scenario in which Speer had refused Hitler's orders and Hitler had left with tears in his eyes. Speer liked the scenario so much that he wrote it into his memoirs. The journalist had unwittingly collaborated in one of his myths. Speer also sought to portray himself as an opponent of Hitler's leadership. Despite his opposition to the 20 July plot, he falsely claimed in his memoirs to have been sympathetic to the plotters. He maintained that Hitler was cool towards him for the remainder of his life after learning that the plotters had included him on a list of potential ministers. This formed a key element of the myths Speer encouraged. Speer also falsely claimed that he had realised the war was lost at an early stage, and thereafter worked to preserve the resources needed for the civilian population's survival. In reality, he had sought to prolong the war until further resistance was impossible, thus contributing to the large number of deaths and the extensive destruction Germany suffered in the conflict's final months.

Denial of responsibility

Speer maintained at the Nuremberg trials and in his memoirs that he had no direct knowledge of the Holocaust. In the published version of the Spandau diaries, he admitted only to being uncomfortable around Jews. More broadly, Speer accepted responsibility for the Nazi regime's actions. Historian Martin Kitchen states that Speer was actually "fully aware of what had happened to the Jews" and was "intimately involved in the 'Final Solution'". Brechtken said Speer only admitted to a generalized responsibility for the Holocaust to hide his direct and actual responsibility. Speer was photographed with slave laborers at Mauthausen concentration camp during a visit on 31 March 1943; he also visited Gusen concentration camp.
Although survivor Francisco Boix testified at the Nuremberg trials about Speer's visit, Taylor writes that, had the photo been available, Speer would have been hanged. In 2005, The Daily Telegraph reported that documents had surfaced indicating that Speer had approved the allocation of materials for the expansion of Auschwitz concentration camp after two of his assistants inspected the facility on a day when almost a thousand Jews were massacred. Heinrich Breloer, discussing the construction of Auschwitz, said Speer was not just a cog in the work—he was the "terror itself". Speer did not deny being present at the Posen speeches to Nazi leaders at a conference in Posen (Poznań) on 6 October 1943, but claimed to have left the auditorium before Himmler said during his speech: "The grave decision had to be taken to cause this people to vanish from the earth", and later, "The Jews must be exterminated". Speer is mentioned several times in the speech, and Himmler addresses him directly. In 2007, The Guardian reported that a letter from Speer dated 23 December 1971 had been found in a collection of his correspondence with Hélène Jeanty, the widow of a Belgian resistance fighter. In the letter, Speer says, "There is no doubt—I was present as Himmler announced on October 6, 1943, that all Jews would be killed."

Armaments "miracle"

Speer was credited with an "armaments miracle". During the winter of 1941–42, in the light of Germany's disastrous defeat in the Battle of Moscow, German leaders including Friedrich Fromm, Georg Thomas and Fritz Todt had come to the conclusion that the war could not be won. The rational position, they believed, was to seek a political solution that would end the war without defeat. In response, Speer used his propaganda expertise to project a new dynamism in the war economy.
He produced spectacular statistics, claiming a sixfold increase in munitions production and a fourfold increase in artillery production, and he supplied further propaganda to the country's newsreels. In this way he was able to curtail discussion of ending the war. The armaments "miracle" was a myth; Speer had used statistical manipulation to support his claims. The production of armaments did go up, but this was due to the normal effects of reorganization begun before Speer came to office, the relentless mobilization of slave labor, and a deliberate reduction in the quality of output to favor quantity. By July 1943, Speer's armaments propaganda had become irrelevant, because a catalogue of dramatic defeats on the battlefield meant the prospect of losing the war could no longer be hidden from the German public. Brechtken writes that Speer knew Germany was going to lose the war and deliberately extended its length, thus causing the deaths of millions of people in the death camps and on the battlefield who would otherwise have lived. Kitchen said, "There can be no doubt that Speer did indeed help to prolong the war longer than many thought possible, as a result of which millions were killed and Germany reduced to a pile of rubble".

Architectural legacy

Little remains of Speer's personal architectural works other than the plans and photographs. No buildings designed by Speer during the Nazi era are extant in Berlin, other than the four entrance pavilions and underpasses leading to the Victory Column (Siegessäule), and the Schwerbelastungskörper, a heavy load-bearing body built around 1941. The concrete cylinder was used to measure ground subsidence as part of feasibility studies for a massive triumphal arch and other large structures proposed as part of Welthauptstadt Germania, Hitler's planned post-war renewal project for the city. The cylinder is now a protected landmark and is open to the public.
The tribune of the Zeppelinfeld stadium in Nuremberg, though partly demolished, can also be seen. During the war, the Speer-designed Reich Chancellery was largely destroyed by air raids and in the Battle of Berlin. The exterior walls survived, but they were eventually dismantled by the Soviets. Unsubstantiated rumors have claimed that the remains were used for other building projects, such as the Humboldt University, Mohrenstraße metro station and Soviet war memorials in Berlin.

See also
Speer Goes to Hollywood
Downfall (2004 German film in which Speer was portrayed by Heino Ferch)
Legion Speer
Transportflotte Speer
Transportkorps Speer
Hermann Giesler
Asteraceae
The family Asteraceae, alternatively Compositae, consists of over 32,000 known species of flowering plants in over 1,900 genera within the order Asterales. Commonly referred to as the aster, daisy, composite, or sunflower family, Compositae were first described in 1740. The number of species in Asteraceae is rivaled only by the Orchidaceae; which family is larger is unclear, as the exact number of extant species in each is unknown. Most species of Asteraceae are annual, biennial, or perennial herbaceous plants, but there are also shrubs, vines, and trees. The family has a widespread distribution, from subpolar to tropical regions, in a wide variety of habitats. Most occur in hot desert and cold or hot semi-desert climates, and they are found on every continent but Antarctica. The primary common characteristic is the existence of sometimes hundreds of tiny individual florets held together by protective involucres in flower heads, or more technically, capitula. The oldest known fossils are pollen grains from the Late Cretaceous (Campanian to Maastrichtian) of Antarctica, dated to c. 76–66 million years ago (Mya). It is estimated that the crown group of Asteraceae evolved at least 85.9 Mya (Late Cretaceous, Santonian), with a stem node age of 88–89 Mya (Late Cretaceous, Coniacian). Asteraceae is an economically important family, providing food staples, garden plants, and herbal medicines. Species outside of their native ranges can be considered weedy or invasive.

Description

Members of the Asteraceae are mostly herbaceous plants, but some shrubs, vines, and trees (such as Lachanodes arborea) do exist. Asteraceae species are generally easy to distinguish from other plants because of their unique inflorescence and other shared characteristics, such as the joined anthers of the stamens. However, determining genera and species of some groups such as Hieracium is notoriously difficult (see "damned yellow composite" for example).
Roots

Members of the family Asteraceae generally produce taproots, but sometimes they possess fibrous root systems. Some species have underground stems in the form of caudices or rhizomes. These can be fleshy or woody depending on the species.

Stems

Stems are herbaceous, aerial, branched, and cylindrical with glandular hairs, generally erect, but can be prostrate to ascending. The stems can contain secretory canals with resin or latex, which is particularly common among the Cichorioideae.

Leaves

Leaves can be alternate, opposite, or whorled. They may be simple, but are often deeply lobed or otherwise incised, often conduplicate or revolute. The margins also can be entire or toothed. Resin or latex also can be present in the leaves.

Inflorescences

Nearly all Asteraceae bear their flowers in dense flower heads called capitula. They are surrounded by involucral bracts, and when viewed from a distance, each capitulum may appear to be a single flower. Enlarged outer (peripheral) flowers in the capitulum may resemble petals, and the involucral bracts may look like a calyx.

Floral heads

In plants of the family Asteraceae, what appears to be a single flower is actually a cluster of much smaller flowers. The overall appearance of the cluster, as a single flower, functions in attracting pollinators in the same way as the structure of an individual flower in some other plant families. The older family name, Compositae, comes from the fact that what appears to be a single flower is actually a composite of smaller flowers. The "petals" or "sunrays" in a sunflower head are actually individual strap-shaped flowers called ray flowers, and the "sun disk" is made of smaller circular-shaped individual flowers called disc flowers. The word "aster" means "star" in Greek, referring to the appearance of some family members, as a "star" surrounded by "rays". The cluster of flowers that may appear to be a single flower is called a head.
The entire head may move, tracking the sun, like a "smart" solar panel, which maximizes reflectivity of the whole unit and can thereby attract more pollinators. On the outside of the flower heads are small bracts that look like scales. These are called phyllaries, and together they form the involucre that protects the individual flowers in the head before they open. The individual heads have the smaller individual flowers arranged on a round or dome-like structure called the receptacle. The flowers mature first at the outside, moving toward the center, with the youngest in the middle. The individual flowers in a head have 5 fused petals (rarely 4), but instead of sepals, have threadlike, hairy, or bristly structures, known singly as a pappus (plural pappi), which surround the fruit and can stick to animal fur or be lifted by wind, aiding in seed dispersal. The whitish fluffy head of a dandelion, commonly blown on by children, is made of pappi with tiny seeds attached at the ends. The pappi provide a parachute-like structure to help the seed be carried away in the wind. A ray flower is a 3-tipped (3-lobed), strap-shaped, individual flower in the head of some members of the family Asteraceae. Sometimes a ray flower is 2-tipped (2-lobed). The corolla of the ray flower may have 2 tiny teeth opposite the 3-lobed strap, or tongue, indicating evolution by fusion from an originally 5-part corolla. Sometimes, the 3:2 arrangement is reversed, with 2 tips on the tongue, and 0 or 3 tiny teeth opposite the tongue. A ligulate flower is a 5-tipped, strap-shaped, individual flower in the heads of other members. A ligule is the strap-shaped tongue of the corolla of either a ray flower or of a ligulate flower. A disk flower (or disc flower) is a radially symmetric (i.e., with identically shaped petals arranged in a circle around the center) individual flower in the head, which is ringed by ray flowers when both are present. 
Sometimes ray flowers may be slightly off from radial symmetry, or weakly bilaterally symmetric, as in the case of the desert pincushion Chaenactis fremontii. A radiate head has disc flowers surrounded by ray flowers. A ligulate head has all ligulate flowers. When a sunflower family flower head has only disc flowers that are sterile, male, or have both male and female parts, it is a discoid head. Disciform heads have only disc flowers, but may have two kinds (male flowers and female flowers) in one head, or may have different heads of two kinds (all male, or all female). Pistillate heads have all female flowers. Staminate heads have all male flowers. Sometimes, but rarely, the head contains only a single flower, or has a single-flowered pistillate (female) head and a multi-flowered staminate (male) head. Floral structures The distinguishing characteristic of Asteraceae is their inflorescence, a type of specialised, composite flower head or pseudanthium, technically called a calathium or capitulum, that may look superficially like a single flower. The capitulum is a contracted raceme composed of numerous individual sessile flowers, called florets, all sharing the same receptacle. A set of bracts forms an involucre surrounding the base of the capitulum. These are called "phyllaries", or "involucral bracts". They may simulate the sepals of the pseudanthium. These are mostly herbaceous but can also be brightly coloured (e.g. Helichrysum) or have a scarious (dry and membranous) texture. The phyllaries can be free or fused, and arranged in one to many rows, overlapping like the tiles of a roof (imbricate) or not (this variation is important in identification of tribes and genera). Each floret may be subtended by a bract, called a "palea" or "receptacular bract". These bracts are often called "chaff". 
The presence or absence of these bracts, their distribution on the receptacle, and their size and shape are all important diagnostic characteristics for genera and tribes. The florets have five petals fused at the base to form a corolla tube and they may be either actinomorphic or zygomorphic. Disc florets are usually actinomorphic, with five petal lips on the rim of the corolla tube. The petal lips may be either very short, or long, in which case they form deeply lobed petals. The latter is the only kind of floret in the Carduoideae, while the first kind is more widespread. Ray florets are always highly zygomorphic and are characterised by the presence of a ligule, a strap-shaped structure on the edge of the corolla tube consisting of fused petals. In the Asteroideae and other minor subfamilies these are usually borne only on florets at the circumference of the capitulum and have a 3+2 scheme – above the fused corolla tube, three very long fused petals form the ligule, with the other two petals being inconspicuously small. The Cichorioideae has only ray florets, with a 5+0 scheme – all five petals form the ligule. A 4+1 scheme is found in the Barnadesioideae. The tip of the ligule is often divided into teeth, each one representing a petal. Some marginal florets may have no petals at all (filiform floret). The calyx of the florets may be absent, but when present is always modified into a pappus of two or more teeth, scales or bristles and this is often involved in the dispersion of the seeds. As with the bracts, the nature of the pappus is an important diagnostic feature. There are usually five stamens. The filaments are fused to the corolla, while the anthers are generally connate (syngenesious anthers), thus forming a sort of tube around the style (theca). They commonly have basal and/or apical appendages. Pollen is released inside the tube and is collected around the growing style, and then, as the style elongates, is pushed out of the tube (nüdelspritze). 
The pistil consists of two connate carpels. The style has two lobes. Stigmatic tissue may be located in the interior surface or form two lateral lines. The ovary is inferior and has only one ovule, with basal placentation. Fruits and seeds In members of the Asteraceae the fruit is achene-like, and is called a cypsela (plural cypselae). Although there are two fused carpels, there is only one locule, and only one seed per fruit is formed. It may sometimes be winged or spiny because the pappus, which is derived from calyx tissue, often remains on the fruit (for example in dandelion). In some species, however, the pappus falls off (for example in Helianthus). Cypsela morphology is often used to help determine plant relationships at the genus and species level. The mature seeds usually have little endosperm or none. Pollen The pollen of composites is typically echinolophate, a morphological term meaning "with elaborate systems of ridges and spines dispersed around and between the apertures." Metabolites In Asteraceae, the energy store is generally in the form of inulin rather than starch. They produce iso/chlorogenic acid, sesquiterpene lactones, pentacyclic triterpene alcohols, various alkaloids, acetylenes (cyclic, aromatic, with vinyl end groups), and tannins. They have terpenoid essential oils which never contain iridoids. Asteraceae produce secondary metabolites, such as flavonoids and terpenoids. Some of these molecules can inhibit protozoan parasites such as Plasmodium, Trypanosoma, Leishmania and parasitic intestinal worms, and thus have potential in medicine. Taxonomy History Compositae, the original name for Asteraceae, were first described in 1740 by Dutch botanist Adriaan van Royen. Traditionally, two subfamilies were recognised: Asteroideae (or Tubuliflorae) and Cichorioideae (or Liguliflorae). The latter has been shown to be extensively paraphyletic, and has now been divided into 12 subfamilies, but the former still stands. 
The study of this family is known as synantherology. Phylogeny The phylogenetic tree presented below is based on Panero & Funk (2002) updated in 2014, and now also includes the monotypic Famatinanthoideae. The diamond (♦) denotes a very poorly supported node (<50% bootstrap support), the dot (•) a poorly supported node (<80%). The family includes over 32,000 currently accepted species, in over 1,900 genera in 13 subfamilies. The number of species in the family Asteraceae is rivaled only by Orchidaceae. Which is the larger family is unclear, because of the uncertainty about how many extant species each family includes. The four subfamilies Asteroideae, Cichorioideae, Carduoideae and Mutisioideae contain 99% of the species diversity of the whole family (approximately 70%, 14%, 11% and 3% respectively). Because of the morphological complexity exhibited by this family, agreeing on generic circumscriptions has often been difficult for taxonomists. As a result, several of these genera have required multiple revisions. Paleontology and evolutionary processes The oldest known fossils of members of Asteraceae are pollen grains from the Late Cretaceous of Antarctica, dated to ∼76–66 myr ago (Campanian to Maastrichtian) and assigned to the extant genus Dasyphyllum. Barreda et al. (2015) estimated that the crown group of Asteraceae evolved at least 85.9 myr ago (Late Cretaceous, Santonian) with a stem node age of 88–89 myr (Late Cretaceous, Coniacian). It is not known whether the precise cause of their great success was the development of the highly specialised capitulum, their ability to store energy as fructans (mainly inulin), which is an advantage in relatively dry zones, or some combination of these and possibly other factors. Heterocarpy, or the ability to produce different fruit morphs, has evolved and is common in Asteraceae. It allows seeds to be dispersed over varying distances, with each morph adapted to a different environment, increasing the chances of survival. 
Etymology and pronunciation The name Asteraceae comes to international scientific vocabulary from New Latin, from Aster, the type genus, + -aceae, a standardized suffix for plant family names in modern taxonomy. The genus name comes from the Classical Latin word aster, "star", which came from Ancient Greek ἀστήρ (astḗr), "star". It refers to the star-like form of the inflorescence. The original name Compositae is still valid under the International Code of Nomenclature for algae, fungi, and plants. It refers to the "composite" nature of the capitula, which consist of a few or many individual flowers. The vernacular name daisy, widely applied to members of this family, is derived from the Old English name of the daisy (Bellis perennis): dæges ēage, meaning "day's eye". This is because the petals open at dawn and close at dusk. Distribution and habitat Asteraceae species have a widespread distribution, from subpolar to tropical regions in a wide variety of habitats. Most occur in hot desert and cold or hot semi-desert climates, and they are found on every continent but Antarctica. They are especially numerous in tropical and subtropical regions (notably Central America, eastern Brazil, the Mediterranean, the Levant, southern Africa, central Asia, and southwestern China). The largest proportion of the species occur in the arid and semi-arid regions of subtropical and lower temperate latitudes. The Asteraceae family comprises 10% of all flowering plant species. Ecology Asteraceae are especially common in open and dry environments. Many members of Asteraceae are pollinated by insects, which explains their value in attracting beneficial insects, but anemophily is also present (e.g. Ambrosia, Artemisia). There are many apomictic species in the family. Seeds are ordinarily dispersed intact with the fruiting body, the cypsela. Anemochory (wind dispersal) is common, assisted by a hairy pappus. Epizoochory is another common method, in which the dispersal unit, a single cypsela (e.g. 
Bidens) or entire capitulum (e.g. Arctium) has hooks, spines or some other structure to attach to the fur or plumage (or even clothes) of an animal, only to fall off later far from its parent plant. Some members of Asteraceae are economically important as weeds. Notable in the United States are Senecio jacobaea (ragwort), Senecio vulgaris (groundsel), and Taraxacum (dandelion). Some are invasive species in particular regions, often having been introduced by human agency. Examples include various tumbleweeds, Bidens, ragweeds, thistles, and dandelion. Dandelion was introduced into North America by European settlers who used the young leaves as a salad green. Uses Asteraceae is an economically important family, providing products such as cooking oils, leaf vegetables like lettuce, sunflower seeds, artichokes, sweetening agents, coffee substitutes and herbal teas. Several genera are of horticultural importance, including pot marigold (Calendula officinalis), Echinacea (coneflowers), various daisies, fleabane, chrysanthemums, dahlias, zinnias, and heleniums. Asteraceae are important in herbal medicine, including Grindelia, yarrow, and many others. Commercially important plants in Asteraceae include the food crops Lactuca sativa (lettuce), Cichorium (chicory), Cynara scolymus (globe artichoke), Helianthus annuus (sunflower), Smallanthus sonchifolius (yacón), Carthamus tinctorius (safflower) and Helianthus tuberosus (Jerusalem artichoke). Plants are used as herbs and in herbal teas and other beverages. Chamomile, for example, comes from two different species: the annual Matricaria chamomilla (German chamomile) and the perennial Chamaemelum nobile (Roman chamomile). Calendula (known as pot marigold) is grown commercially for herbal teas and potpourri. Echinacea is used as a medicinal tea. The wormwood genus Artemisia includes absinthe (A. absinthium) and tarragon (A. dracunculus). 
Winter tarragon (Tagetes lucida) is commonly grown and used as a tarragon substitute in climates where tarragon will not survive. Many members of the family are grown as ornamental plants for their flowers, and some are important ornamental crops for the cut flower industry. Some examples are Chrysanthemum, Gerbera, Calendula, Dendranthema, Argyranthemum, Dahlia, Tagetes, Zinnia, and many others. Many species of this family possess medicinal properties and are used as traditional antiparasitic medicine. Members of the family are also commonly featured in medical and phytochemical journals because the sesquiterpene lactone compounds contained within them are an important cause of allergic contact dermatitis. Allergy to these compounds is the leading cause of allergic contact dermatitis in florists in the US. Pollen from ragweed Ambrosia is among the main causes of so-called hay fever in the United States. Asteraceae are also used for some industrial purposes. French marigold (Tagetes patula) is common in commercial poultry feed, and its oil is extracted for use in the cola and cigarette industries. The genera Chrysanthemum, Pulicaria, Tagetes, and Tanacetum contain species with useful insecticidal properties. Parthenium argentatum (guayule) is a source of hypoallergenic latex. Several members of the family are copious nectar producers and are useful for evaluating pollinator populations during their bloom. Centaurea (knapweed), Helianthus annuus (domestic sunflower), and some species of Solidago (goldenrod) are major "honey plants" for beekeepers. Solidago produces relatively high-protein pollen, which helps honey bees over winter. References Bibliography External links Asteraceae at the Angiosperm Phylogeny Website Compositae.org – Compositae Working Group (CWG) and Global Compositae Database (GCD)
957
https://en.wikipedia.org/wiki/Apiaceae
Apiaceae
Apiaceae or Umbelliferae is a family of mostly aromatic flowering plants named after the type genus Apium and commonly known as the celery, carrot or parsley family, or simply as umbellifers. It is the 16th-largest family of flowering plants, with more than 3,700 species in 434 genera, including such well-known and economically important plants as ajwain, angelica, anise, asafoetida, caraway, carrot, celery, chervil, coriander, cumin, dill, fennel, lovage, cow parsley, parsley, parsnip and sea holly, as well as silphium, a plant whose identity is unclear and which may be extinct. The family Apiaceae includes a significant number of phototoxic species, such as giant hogweed, and a smaller number of highly poisonous species, such as poison hemlock, water hemlock, spotted cowbane, fool's parsley, and various species of water dropwort. Description Most Apiaceae are annual, biennial or perennial herbs (frequently with the leaves aggregated toward the base), though a minority are woody shrubs or small trees such as Bupleurum fruticosum. Their leaves are of variable size and alternately arranged, or with the upper leaves becoming nearly opposite. The leaves may be petiolate or sessile. There are no stipules but the petioles are frequently sheathing and the leaves may be perfoliate. The leaf blade is usually dissected, ternate, or pinnatifid, but simple and entire in some genera, e.g. Bupleurum. Commonly, their leaves emit a marked smell when crushed, aromatic to foetid, but absent in some species. The defining characteristic of this family is the inflorescence, the flowers nearly always aggregated in terminal umbels, that may be simple or more commonly compound, often umbelliform cymes. The flowers are usually perfect (hermaphroditic) and actinomorphic, but there may be zygomorphic flowers at the edge of the umbel, as in carrot (Daucus carota) and coriander, with petals of unequal size, the ones pointing outward from the umbel larger than the ones pointing inward. 
Some are andromonoecious, polygamomonoecious, or even dioecious (as in Acronema), with a distinct calyx and corolla, but the calyx is often highly reduced, to the point of being undetectable in many species, while the corolla can be white, yellow, pink or purple. The flowers are nearly perfectly pentamerous, with five petals and five stamens. There is often variation in the functionality of the stamens even within a single inflorescence. Some flowers are functionally staminate (where a pistil may be present but has no ovules capable of being fertilized) while others are functionally pistillate (where stamens are present but their anthers do not produce viable pollen). Pollination of one flower by the pollen of a different flower of the same plant (geitonogamy) is common. The gynoecium consists of two carpels fused into a single, bicarpellate pistil with an inferior ovary. Stylopodia support two styles and secrete nectar, attracting pollinators like flies, mosquitoes, gnats, beetles, moths, and bees. The fruit is a schizocarp consisting of two fused carpels that separate at maturity into two mericarps, each containing a single seed. The fruits of many species are dispersed by wind but others such as those of Daucus spp., are covered in bristles, which may be hooked in sanicle Sanicula europaea and thus catch in the fur of animals. The seeds have an oily endosperm and often contain essential oils, containing aromatic compounds that are responsible for the flavour of commercially important umbelliferous seed such as anise, cumin and coriander. The shape and details of the ornamentation of the ripe fruits are important for identification to species level. Taxonomy Apiaceae was first described by John Lindley in 1836. The name is derived from the type genus Apium, which was originally used by Pliny the Elder circa 50 AD for a celery-like plant. 
The alternative name for the family, Umbelliferae, derives from the inflorescence being generally in the form of a compound umbel. The family was one of the first to be recognized as a distinct group in Jacques Dalechamps' 1586 Historia generalis plantarum. With Robert Morison's 1672 Plantarum umbelliferarum distributio nova it became the first group of plants for which a systematic study was published. The family is solidly placed within the Apiales order in the APG III system. It is closely related to Araliaceae and the boundaries between these families remain unclear. Traditionally, groups within the family have been delimited largely based on fruit morphology, and the results from this have not been congruent with the more recent molecular phylogenetic analyses. The subfamilial and tribal classification for the family is currently in a state of flux, with many of the groups being found to be grossly paraphyletic or polyphyletic. General According to the Angiosperm Phylogeny Website, 434 genera are in the family Apiaceae. Ecology The black swallowtail butterfly, Papilio polyxenes, uses the family Apiaceae for food and host plants for oviposition. The 22-spot ladybird is also commonly found eating mildew on these plants. Uses Many members of this family are cultivated for various purposes. Parsnip (Pastinaca sativa), carrot (Daucus carota) and Hamburg parsley (Petroselinum crispum) produce tap roots that are large enough to be useful as food. Many species produce essential oils in their leaves or fruits and as a result are flavourful aromatic herbs. Examples are parsley (Petroselinum crispum), coriander (Coriandrum sativum), culantro, and dill (Anethum graveolens). The seeds may be used in cuisine, as with coriander (Coriandrum sativum), fennel (Foeniculum vulgare), cumin (Cuminum cyminum), and caraway (Carum carvi). 
Other notable cultivated Apiaceae include chervil (Anthriscus cerefolium), angelica (Angelica spp.), celery (Apium graveolens), arracacha (Arracacia xanthorrhiza), sea holly (Eryngium spp.), asafoetida (Ferula asafoetida), galbanum (Ferula gummosa), cicely (Myrrhis odorata), anise (Pimpinella anisum), lovage (Levisticum officinale), and hacquetia (Hacquetia epipactis). Cultivation Generally, all members of this family are best cultivated in the cool-season garden; indeed, they may not grow at all if the soils are too warm. Almost every widely cultivated plant of this group is considered useful as a companion plant. One reason is that the tiny flowers, clustered into umbels, are well suited for ladybugs, parasitic wasps, and predatory flies, which actually drink nectar when not reproducing. They then prey upon insect pests on nearby plants. Some of the members of this family considered "herbs" produce scents that are believed to mask the odours of nearby plants, thus making them harder for insect pests to find. Other uses The poisonous members of the Apiaceae have been used for a variety of purposes globally. The poisonous Oenanthe crocata has been used to stupefy fish, Cicuta douglasii has been used as an aid in suicides, and arrow poisons have been made from various other family species. Daucus carota has been used as coloring for butter. Dorema ammoniacum, Ferula galbaniflua, and Ferula moschata (sumbul) are sources of incense. The woody Azorella compacta Phil. has been used in South America for fuel. Toxicity Many species in the family Apiaceae produce phototoxic substances (called furanocoumarins) that sensitize human skin to sunlight. Contact with plant parts that contain furanocoumarins, followed by exposure to sunlight, may cause phytophotodermatitis, a serious skin inflammation. 
Phototoxic species include Ammi majus, Notobubon galbanum, the parsnip (Pastinaca sativa) and numerous species of the genus Heracleum, especially the giant hogweed (Heracleum mantegazzianum). Of all the plant species that have been reported to induce phytophotodermatitis, approximately half belong to the family Apiaceae. The family Apiaceae also includes a smaller number of poisonous species, including poison hemlock, water hemlock, spotted cowbane, fool's parsley, and various species of water dropwort. Some members of the family Apiaceae, including carrot, celery, fennel, parsley and parsnip, contain polyynes, an unusual class of organic compounds that exhibit cytotoxic effects. References 
External links Umbelliferae at The Families of Flowering Plants (DELTA) Apiaceae at Discover Life Umbellifer Resource Centre at the Royal Botanic Garden Edinburgh Umbellifer Information Server at Moscow State University
958
https://en.wikipedia.org/wiki/Axon
Axon
An axon (from Greek ἄξων áxōn, axis), or nerve fiber (or nerve fibre: see spelling differences), is a long, slender projection of a nerve cell, or neuron, in vertebrates, that typically conducts electrical impulses known as action potentials away from the nerve cell body. The function of the axon is to transmit information to different neurons, muscles, and glands. In certain sensory neurons (pseudounipolar neurons), such as those for touch and warmth, the axons are called afferent nerve fibers and the electrical impulse travels along these from the periphery to the cell body and from the cell body to the spinal cord along another branch of the same axon. Axon dysfunction can cause many inherited and acquired neurological disorders that affect both the peripheral and central neurons. Nerve fibers are classed into three types – group A nerve fibers, group B nerve fibers, and group C nerve fibers. Groups A and B are myelinated, and group C are unmyelinated. These groups include both sensory fibers and motor fibers. Another classification groups only the sensory fibers as Type I, Type II, Type III, and Type IV. An axon is one of two types of cytoplasmic protrusions from the cell body of a neuron; the other type is a dendrite. Axons are distinguished from dendrites by several features, including shape (dendrites often taper while axons usually maintain a constant radius), length (dendrites are restricted to a small region around the cell body while axons can be much longer), and function (dendrites receive signals whereas axons transmit them). Some types of neurons have no axon and transmit signals from their dendrites. In some species, axons can emanate from dendrites known as axon-carrying dendrites. No neuron ever has more than one axon; however, in invertebrates such as insects or leeches the axon sometimes consists of several regions that function more or less independently of each other. 
Axons are covered by a membrane known as an axolemma; the cytoplasm of an axon is called axoplasm. Most axons branch, in some cases very profusely. The end branches of an axon are called telodendria. The swollen end of a telodendron is known as the axon terminal, which joins the dendron or cell body of another neuron, forming a synaptic connection. Axons make contact with other cells—usually other neurons but sometimes muscle or gland cells—at junctions called synapses. In some circumstances, the axon of one neuron may form a synapse with the dendrites of the same neuron, resulting in an autapse. At a synapse, the membrane of the axon closely adjoins the membrane of the target cell, and special molecular structures serve to transmit electrical or electrochemical signals across the gap. Some synaptic junctions appear along the length of an axon as it extends—these are called en passant ("in passing") synapses and can be in the hundreds or even the thousands along one axon. Other synapses appear as terminals at the ends of axonal branches. A single axon, with all its branches taken together, can innervate multiple parts of the brain and generate thousands of synaptic terminals. A bundle of axons makes a nerve tract in the central nervous system, and a fascicle in the peripheral nervous system. In placental mammals the largest white matter tract in the brain is the corpus callosum, formed of some 200 million axons in the human brain. Anatomy Axons are the primary transmission lines of the nervous system, and as bundles they form nerves. Some axons can extend up to one meter or more while others extend as little as one millimeter. The longest axons in the human body are those of the sciatic nerve, which run from the base of the spinal cord to the big toe of each foot. The diameter of axons is also variable. Most individual axons are microscopic in diameter (typically about one micrometer (µm) across). The largest mammalian axons can reach a diameter of up to 20 µm. 
The squid giant axon, which is specialized to conduct signals very rapidly, is close to 1 millimetre in diameter, the size of a small pencil lead. The numbers of axonal telodendria (the branching structures at the end of the axon) can also differ from one nerve fiber to the next. Axons in the central nervous system (CNS) typically show multiple telodendria, with many synaptic end points. In comparison, the cerebellar granule cell axon is characterized by a single T-shaped branch node from which two parallel fibers extend. Elaborate branching allows for the simultaneous transmission of messages to a large number of target neurons within a single region of the brain. There are two types of axons in the nervous system: myelinated and unmyelinated axons. Myelin is a layer of a fatty insulating substance, which is formed by two types of glial cells: Schwann cells and oligodendrocytes. In the peripheral nervous system Schwann cells form the myelin sheath of a myelinated axon. In the central nervous system oligodendrocytes form the insulating myelin. Along myelinated nerve fibers, gaps in the myelin sheath known as nodes of Ranvier occur at evenly spaced intervals. The myelination enables an especially rapid mode of electrical impulse propagation called saltatory conduction. The myelinated axons from the cortical neurons form the bulk of the neural tissue called white matter in the brain. The myelin gives the white appearance to the tissue in contrast to the grey matter of the cerebral cortex which contains the neuronal cell bodies. A similar arrangement is seen in the cerebellum. Bundles of myelinated axons make up the nerve tracts in the CNS. Where these tracts cross the midline of the brain to connect opposite regions they are called commissures. The largest of these is the corpus callosum that connects the two cerebral hemispheres, and this has around 200 million axons. 
The structure of a neuron is seen to consist of two separate functional regions, or compartments – the cell body together with the dendrites as one region, and the axonal region as the other. Axonal region The axonal region, or compartment, includes the axon hillock, the initial segment, the rest of the axon, the axon telodendria, and the axon terminals. It also includes the myelin sheath. The Nissl bodies that produce the neuronal proteins are absent in the axonal region. Proteins needed for the growth of the axon, and the removal of waste materials, need a framework for transport. This axonal transport is provided for in the axoplasm by arrangements of microtubules and intermediate filaments known as neurofilaments. Axon hillock The axon hillock is the area formed from the cell body of the neuron as it extends to become the axon. It precedes the initial segment. The received action potentials that are summed in the neuron are transmitted to the axon hillock for the generation of an action potential from the initial segment. Axonal initial segment The axonal initial segment (AIS) is a structurally and functionally separate microdomain of the axon. One function of the initial segment is to separate the main part of an axon from the rest of the neuron; another function is to help initiate action potentials. Both of these functions support neuron cell polarity, in which dendrites (and, in some cases, the soma) of a neuron receive input signals at the basal region, and at the apical region the neuron's axon provides output signals. The axon initial segment is unmyelinated and contains a specialized complex of proteins. It is between approximately 20 and 60 µm in length and functions as the site of action potential initiation. Both the position on the axon and the length of the AIS can change, showing a degree of plasticity that can fine-tune the neuronal output. A longer AIS is associated with a greater excitability. 
Plasticity is also seen in the ability of the AIS to change its distribution and to maintain the activity of neural circuitry at a constant level. The AIS is highly specialized for the fast conduction of nerve impulses. This is achieved by a high concentration of voltage-gated sodium channels in the initial segment, where the action potential is initiated. The ion channels are accompanied by a high number of cell adhesion molecules and scaffolding proteins that anchor them to the cytoskeleton. Interactions with ankyrin G are important, as it is the major organizer in the AIS. Axonal transport The axoplasm is the equivalent of cytoplasm in the cell. Microtubules form in the axoplasm at the axon hillock. They are arranged along the length of the axon, in overlapping sections, and all point in the same direction – towards the axon terminals, as indicated by their plus ends. This overlapping arrangement provides the routes for the transport of different materials from the cell body. Studies on the axoplasm have shown the movement of numerous vesicles of all sizes along cytoskeletal filaments – the microtubules and neurofilaments – in both directions between the axon and its terminals and the cell body. Outgoing anterograde transport from the cell body along the axon carries mitochondria and membrane proteins needed for growth to the axon terminal. Ingoing retrograde transport carries cell waste materials from the axon terminal to the cell body. Outgoing and ingoing tracks use different sets of motor proteins: outgoing transport is provided by kinesin, and ingoing return traffic is provided by dynein, which is minus-end directed. There are many forms of kinesin and dynein motor proteins, and each is thought to carry a different cargo. The studies on transport in the axon led to the naming of kinesin. Myelination In the nervous system, axons may be myelinated or unmyelinated. 
This is the provision of an insulating layer, called a myelin sheath. The myelin membrane is unique in its relatively high lipid to protein ratio. In the peripheral nervous system axons are myelinated by glial cells known as Schwann cells. In the central nervous system the myelin sheath is provided by another type of glial cell, the oligodendrocyte. Schwann cells myelinate a single axon. An oligodendrocyte can myelinate up to 50 axons. The composition of myelin is different in the two types. In the CNS the major myelin protein is proteolipid protein, and in the PNS it is myelin basic protein. Nodes of Ranvier Nodes of Ranvier (also known as myelin sheath gaps) are short unmyelinated segments of a myelinated axon, which are found periodically interspersed between segments of the myelin sheath. Therefore, at the point of the node of Ranvier, the axon is reduced in diameter. These nodes are areas where action potentials can be generated. In saltatory conduction, electrical currents produced at each node of Ranvier are conducted with little attenuation to the next node in line, where they remain strong enough to generate another action potential. Thus in a myelinated axon, action potentials effectively "jump" from node to node, bypassing the myelinated stretches in between, resulting in a propagation speed much faster than even the fastest unmyelinated axon can sustain. Axon terminals An axon can divide into many branches called telodendria (Greek–end of tree). At the end of each telodendron is an axon terminal (also called a synaptic bouton, or terminal bouton). Axon terminals contain synaptic vesicles that store the neurotransmitter for release at the synapse. This makes multiple synaptic connections with other neurons possible. Sometimes the axon of a neuron may synapse onto dendrites of the same neuron, when it is known as an autapse. 
Action potentials Most axons carry signals in the form of action potentials, which are discrete electrochemical impulses that travel rapidly along an axon, starting at the cell body and terminating at points where the axon makes synaptic contact with target cells. The defining characteristic of an action potential is that it is "all-or-nothing" — every action potential that an axon generates has essentially the same size and shape. This all-or-nothing characteristic allows action potentials to be transmitted from one end of a long axon to the other without any reduction in size. There are, however, some types of neurons with short axons that carry graded electrochemical signals, of variable amplitude. When an action potential reaches a presynaptic terminal, it activates the synaptic transmission process. The first step is rapid opening of calcium ion channels in the membrane of the axon, allowing calcium ions to flow inward across the membrane. The resulting increase in intracellular calcium concentration causes synaptic vesicles (tiny containers enclosed by a lipid membrane) filled with a neurotransmitter chemical to fuse with the axon's membrane and empty their contents into the extracellular space. The neurotransmitter is released from the presynaptic nerve through exocytosis. The neurotransmitter chemical then diffuses across to receptors located on the membrane of the target cell. The neurotransmitter binds to these receptors and activates them. Depending on the type of receptors that are activated, the effect on the target cell can be to excite the target cell, inhibit it, or alter its metabolism in some way. This entire sequence of events often takes place in less than a thousandth of a second. Afterward, inside the presynaptic terminal, a new set of vesicles is moved into position next to the membrane, ready to be released when the next action potential arrives. 
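The "all-or-nothing" behaviour described above can be illustrated with a toy leaky integrate-and-fire simulation. This is a standard textbook simplification, not a biophysical model of a real axon, and the parameter values are arbitrary: any input strong enough to drive the membrane variable past threshold produces a spike of the same fixed character, while subthreshold input produces none.

```python
# Toy leaky integrate-and-fire neuron illustrating "all-or-nothing" spiking.
# All parameters are illustrative, not measured physiological values.

def count_spikes(input_current, threshold=1.0, leak=0.1, dt=1.0, steps=100):
    """Integrate a constant input; emit a fixed, identical spike at threshold."""
    v = 0.0
    spikes = 0
    for _ in range(steps):
        v += dt * (input_current - leak * v)  # leaky integration of input
        if v >= threshold:
            spikes += 1      # every spike is identical in size and shape
            v = 0.0          # reset after firing
    return spikes

weak = count_spikes(0.05)    # subthreshold: the leak wins, no spikes at all
strong = count_spikes(0.5)   # suprathreshold: regular, identical spikes
print(weak, strong)
```

Note that a stronger stimulus changes only the *rate* of firing, never the size of an individual spike, which is the essence of the all-or-nothing property.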
The action potential is the final electrical step in the integration of synaptic messages at the scale of the neuron. Extracellular recordings of action potential propagation in axons have been demonstrated in freely moving animals. While extracellular somatic action potentials have been used to study cellular activity in freely moving animals such as place cells, axonal activity in both white and gray matter can also be recorded. Extracellular recordings of axonal action potential propagation are distinct from somatic action potentials in three ways: 1. The signal has a shorter peak-trough duration (~150 μs) than that of pyramidal cells (~500 μs) or interneurons (~250 μs). 2. The voltage change is triphasic. 3. Activity recorded on a tetrode is seen on only one of the four recording wires. In recordings from freely moving rats, axonal signals have been isolated in white matter tracts, including the alveus and the corpus callosum, as well as in hippocampal gray matter. In fact, the generation of action potentials in vivo is sequential in nature, and these sequential spikes constitute the digital codes in the neurons. Although previous studies indicate an axonal origin of a single spike evoked by short-term pulses, physiological signals in vivo trigger the initiation of sequential spikes at the cell bodies of the neurons. In addition to propagating action potentials to axonal terminals, the axon is able to amplify the action potentials, which ensures secure propagation of sequential action potentials toward the axonal terminal. In terms of molecular mechanisms, voltage-gated sodium channels in the axons possess a lower threshold and a shorter refractory period in response to short-term pulses. Development and growth Development The development of the axon to its target is one of the six major stages in the overall development of the nervous system. 
Studies done on cultured hippocampal neurons suggest that neurons initially produce multiple neurites that are equivalent, yet only one of these neurites is destined to become the axon. It is unclear whether axon specification precedes axon elongation or vice versa, although recent evidence points to the latter. If an axon that is not fully developed is cut, the polarity can change and other neurites can potentially become the axon. This alteration of polarity only occurs when the axon is cut at least 10 μm shorter than the other neurites. After the incision is made, the longest neurite will become the future axon and all the other neurites, including the original axon, will turn into dendrites. Imposing an external force on a neurite, causing it to elongate, will make it become an axon. Nonetheless, axonal development is achieved through a complex interplay between extracellular signaling, intracellular signaling and cytoskeletal dynamics. Extracellular signaling The extracellular signals that propagate through the extracellular matrix surrounding neurons play a prominent role in axonal development. These signaling molecules include proteins, neurotrophic factors, and extracellular matrix and adhesion molecules. Netrin (also known as UNC-6), a secreted protein, functions in axon formation. When the UNC-5 netrin receptor is mutated, several neurites are irregularly projected out of neurons and finally a single axon is extended anteriorly. The neurotrophic factors – nerve growth factor (NGF), brain-derived neurotrophic factor (BDNF) and neurotrophin-3 (NTF3) – are also involved in axon development and bind to Trk receptors. The ganglioside-converting enzyme plasma membrane ganglioside sialidase (PMGS), which is involved in the activation of TrkA at the tip of neurites, is required for the elongation of axons. PMGS asymmetrically distributes to the tip of the neurite that is destined to become the future axon. 
Intracellular signaling During axonal development, the activity of PI3K is increased at the tip of the destined axon. Disrupting the activity of PI3K inhibits axonal development. Activation of PI3K results in the production of phosphatidylinositol (3,4,5)-trisphosphate (PtdIns), which can cause significant elongation of a neurite, converting it into an axon. As such, the overexpression of phosphatases that dephosphorylate PtdIns leads to the failure of polarization. Cytoskeletal dynamics The neurite with the lowest actin filament content will become the axon. PMGS concentration and f-actin content are inversely correlated; when PMGS becomes enriched at the tip of a neurite, its f-actin content is substantially decreased. In addition, exposure to actin-depolymerizing drugs and toxin B (which inactivates Rho-signaling) causes the formation of multiple axons. Consequently, the interruption of the actin network in a growth cone will promote its neurite to become the axon. Growth Growing axons move through their environment via the growth cone, which is at the tip of the axon. The growth cone has a broad sheet-like extension called a lamellipodium, which contains protrusions called filopodia. The filopodia are the mechanism by which the entire process adheres to surfaces and explores the surrounding environment. Actin plays a major role in the mobility of this system. Environments with high levels of cell adhesion molecules (CAMs) create an ideal environment for axonal growth, seeming to provide a "sticky" surface for axons to grow along. Examples of CAMs specific to neural systems include N-CAM, TAG-1 (an axonal glycoprotein) and MAG, all of which are part of the immunoglobulin superfamily. Another set of molecules, called extracellular matrix-adhesion molecules, also provide a sticky substrate for axons to grow along. Examples of these molecules include laminin, fibronectin, tenascin, and perlecan. 
Some of these are surface-bound to cells and thus act as short-range attractants or repellents. Others are diffusible ligands and thus can have long-range effects. Cells called guidepost cells assist in the guidance of neuronal axon growth. These cells are typically other neurons, which are sometimes immature. When the axon has completed its growth at its connection to the target, the diameter of the axon can increase by up to five times, depending on the speed of conduction required. It has also been discovered through research that if the axons of a neuron were damaged, as long as the soma (the cell body of a neuron) is not damaged, the axons would regenerate and remake the synaptic connections with neurons with the help of guidepost cells. This is also referred to as neuroregeneration. Nogo-A is a type of neurite outgrowth inhibitory component that is present in the central nervous system myelin membranes (found in an axon). It has a crucial role in restricting axonal regeneration in the adult mammalian central nervous system. In recent studies, if Nogo-A is blocked and neutralized, it is possible to induce long-distance axonal regeneration, which leads to enhancement of functional recovery in rat and mouse spinal cord. This has yet to be done on humans. A recent study has also found that macrophages activated through a specific inflammatory pathway activated by the Dectin-1 receptor are capable of promoting axon recovery, though they also cause neurotoxicity in the neuron. Length regulation Axons vary greatly in length, from a few micrometers up to meters in some animals. This emphasizes that there must be a cellular length-regulation mechanism allowing the neurons both to sense the length of their axons and to control their growth accordingly. It was discovered that motor proteins play an important role in regulating the length of axons. 
Based on this observation, researchers developed an explicit model for axonal growth describing how motor proteins could affect the axon length on the molecular level. These studies suggest that motor proteins carry signaling molecules from the soma to the growth cone and vice versa, whose concentration oscillates in time with a length-dependent frequency. Classification The axons of neurons in the human peripheral nervous system can be classified based on their physical features and signal conduction properties. Axons were known to have different thicknesses (from 0.1 to 20 µm), and these differences were thought to relate to the speed at which an action potential could travel along the axon – its conduction velocity. Erlanger and Gasser proved this hypothesis and identified several types of nerve fiber, establishing a relationship between the diameter of an axon and its nerve conduction velocity. They published their findings in 1941, giving the first classification of axons. Axons are classified in two systems. The first, introduced by Erlanger and Gasser, grouped the fibers into three main groups using the letters A, B, and C. These groups – group A, group B, and group C – include both the sensory fibers (afferents) and the motor fibers (efferents). The first group, A, was subdivided into alpha, beta, gamma, and delta fibers – Aα, Aβ, Aγ, and Aδ. The motor neurons of the different motor fibers were the lower motor neurons – alpha motor neuron, beta motor neuron, and gamma motor neuron, having the Aα, Aβ, and Aγ nerve fibers respectively. Later findings by other researchers identified two groups of Aα fibers that were sensory fibers. These were then introduced into a system that only included sensory fibers (though some of these were mixed nerves and were also motor fibers). This system refers to the sensory groups as Types and uses Roman numerals: Type Ia, Type Ib, Type II, Type III, and Type IV. 
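The Erlanger–Gasser letter groups can be summarised programmatically. The diameter and conduction-velocity ranges below are widely quoted textbook approximations, not figures taken from this article, and they vary somewhat between sources:

```python
# Approximate Erlanger–Gasser fiber classes.
# Ranges are common textbook values and vary by source.
FIBER_CLASSES = {
    "Aα": {"myelinated": True,  "diameter_um": (13, 20),   "velocity_m_s": (80, 120)},
    "Aβ": {"myelinated": True,  "diameter_um": (6, 12),    "velocity_m_s": (33, 75)},
    "Aγ": {"myelinated": True,  "diameter_um": (5, 8),     "velocity_m_s": (4, 24)},
    "Aδ": {"myelinated": True,  "diameter_um": (1, 5),     "velocity_m_s": (3, 30)},
    "B":  {"myelinated": True,  "diameter_um": (1, 3),     "velocity_m_s": (3, 15)},
    "C":  {"myelinated": False, "diameter_um": (0.2, 1.5), "velocity_m_s": (0.5, 2.0)},
}

def classify_by_velocity(v_m_s):
    """Return the fiber classes whose velocity range contains v_m_s."""
    return [name for name, props in FIBER_CLASSES.items()
            if props["velocity_m_s"][0] <= v_m_s <= props["velocity_m_s"][1]]

print(classify_by_velocity(100))   # a fast myelinated fiber → ['Aα']
print(classify_by_velocity(1.0))   # a slow unmyelinated fiber → ['C']
```

The table also makes the relationship described above concrete: the largest-diameter, myelinated Aα fibers conduct fastest, while the thin unmyelinated C fibers conduct slowest.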
Lower motor neurons have two kinds of fibers. Different sensory receptors innervate different types of nerve fibers: proprioceptors are innervated by type Ia, Ib and II sensory fibers, mechanoreceptors by type II and III sensory fibers, and nociceptors and thermoreceptors by type III and IV sensory fibers. Autonomic The autonomic nervous system has two kinds of peripheral fibers. Clinical significance In order of degree of severity, injury to a nerve can be described as neurapraxia, axonotmesis, or neurotmesis. Concussion is considered a mild form of diffuse axonal injury. Axonal injury can also cause central chromatolysis. The dysfunction of axons in the nervous system is one of the major causes of many inherited neurological disorders that affect both peripheral and central neurons. When an axon is crushed, an active process of axonal degeneration takes place at the part of the axon furthest from the cell body. This degeneration takes place quickly following the injury, with the part of the axon being sealed off at the membranes and broken down by macrophages. This is known as Wallerian degeneration. Dying back of an axon can also take place in many neurodegenerative diseases, particularly when axonal transport is impaired; this is known as Wallerian-like degeneration. Studies suggest that the degeneration happens as a result of the axonal protein NMNAT2 being prevented from reaching all of the axon. Demyelination of axons causes the multitude of neurological symptoms found in the disease multiple sclerosis. Dysmyelination is the abnormal formation of the myelin sheath. This is implicated in several leukodystrophies, and also in schizophrenia. A severe traumatic brain injury can result in widespread lesions to nerve tracts, damaging the axons in a condition known as diffuse axonal injury. This can lead to a persistent vegetative state. 
It has been shown in studies on the rat that axonal damage from a single mild traumatic brain injury can leave a susceptibility to further damage after repeated mild traumatic brain injuries. A nerve guidance conduit is an artificial means of guiding axon growth to enable neuroregeneration, and is one of the many treatments used for different kinds of nerve injury. History The German anatomist Otto Friedrich Karl Deiters is generally credited with the discovery of the axon by distinguishing it from the dendrites. The Swiss Rudolf Albert von Kölliker and the German Robert Remak were the first to identify and characterize the axon initial segment. Kölliker named the axon in 1896. Louis-Antoine Ranvier was the first to describe the gaps or nodes found on axons, and for this contribution these axonal features are now commonly referred to as the nodes of Ranvier. Santiago Ramón y Cajal, a Spanish anatomist, proposed that axons were the output components of neurons, describing their functionality. Joseph Erlanger and Herbert Gasser earlier developed the classification system for peripheral nerve fibers, based on axonal conduction velocity, myelination, fiber size, etc. Alan Hodgkin and Andrew Huxley also employed the squid giant axon (1939), and by 1952 they had obtained a full quantitative description of the ionic basis of the action potential, leading to the formulation of the Hodgkin–Huxley model. Hodgkin and Huxley were jointly awarded the Nobel Prize for this work in 1963. The formulae detailing axonal conductance were extended to vertebrates in the Frankenhaeuser–Huxley equations. The understanding of the biochemical basis for action potential propagation has advanced further, and includes many details about individual ion channels. Other animals The axons in invertebrates have been extensively studied. The longfin inshore squid, often used as a model organism, has the longest known axon. The giant squid has the largest axon known. 
Its size ranges from 0.5 mm (typically) to 1 mm in diameter and is used in the control of its jet propulsion system. The fastest recorded conduction speed of 210 m/s is found in the ensheathed axons of some pelagic penaeid shrimps, and the usual range is between 90 and 200 m/s (cf. 100–120 m/s for the fastest myelinated vertebrate axon). In many cases, an axon originates at an axon hillock on the soma; such axons are said to have "somatic origin". Some axons with somatic origin have a "proximal" initial segment adjacent to the axon hillock, while others have a "distal" initial segment, separated from the soma by an extended axon hillock. In other cases, as seen in rat studies, an axon originates from a dendrite; such axons are said to have "dendritic origin". Some axons with dendritic origin similarly have a "proximal" initial segment that starts directly at the axon origin, while others have a "distal" initial segment, discernibly separated from the axon origin. In many species, some of the neurons have axons that emanate from the dendrite and not from the cell body, and these are known as axon-carrying dendrites. See also Electrophysiology Ganglionic eminence Giant axonal neuropathy Neuronal tracing Pioneer axon References External links  — "Slide 3 Spinal cord" Neurohistology
960
https://en.wikipedia.org/wiki/Aramaic%20alphabet
Aramaic alphabet
The ancient Aramaic alphabet was adapted by Arameans from the Phoenician alphabet and became a distinct script by the 8th century BC. It was used to write the Aramaic language and had displaced the Paleo-Hebrew alphabet, itself a derivative of the Phoenician alphabet, for the writing of Hebrew. The letters all represent consonants, some of which are also used as matres lectionis to indicate long vowels. The Aramaic alphabet is historically significant since virtually all modern Middle Eastern writing systems can be traced back to it as well as numerous non-Chinese writing systems of Central and East Asia. That is primarily from the widespread usage of the Aramaic language as both a lingua franca and the official language of the Neo-Assyrian and Neo-Babylonian Empires, and their successor, the Achaemenid Empire. Among the scripts in modern use, the Hebrew alphabet bears the closest relation to the Imperial Aramaic script of the 5th century BC, with an identical letter inventory and, for the most part, nearly identical letter shapes. The Aramaic alphabet was an ancestor to the Nabataean alphabet and the later Arabic alphabet. Writing systems (like the Aramaic one) that indicate consonants but do not indicate most vowels other than by means of matres lectionis or added diacritical signs, have been called abjads by Peter T. Daniels to distinguish them from alphabets, such as the Greek alphabet, which represent vowels more systematically. The term was coined to avoid the notion that a writing system that represents sounds must be either a syllabary or an alphabet, which would imply that a system like Aramaic must be either a syllabary (as argued by Ignace Gelb) or an incomplete or deficient alphabet (as most other writers have said). Rather, it is a different type. Origins The earliest inscriptions in the Aramaic language use the Phoenician alphabet. Over time, the alphabet developed into the form shown below. 
Aramaic gradually became the lingua franca throughout the Middle East, with the script at first complementing and then displacing Assyrian cuneiform as the predominant writing system. Achaemenid Empire (The First Persian Empire) Around 500 BC, following the Achaemenid conquest of Mesopotamia under Darius I, Old Aramaic was adopted by the Persians as the "vehicle for written communication between the different regions of the vast Persian empire with its different peoples and languages. The use of a single official language, which modern scholarship has dubbed as Official Aramaic, Imperial Aramaic or Achaemenid Aramaic, can be assumed to have greatly contributed to the astonishing success of the Achaemenid Persians in holding their far-flung empire together for as long as they did." Imperial Aramaic was highly standardised; its orthography was based more on historical roots than any spoken dialect and was inevitably influenced by Old Persian. The Aramaic glyph forms of the period are often divided into two main styles: the "lapidary" form, usually inscribed on hard surfaces like stone monuments, and a cursive form. The lapidary form tended to be more conservative, remaining more visually similar to Phoenician and early Aramaic. Both were in use through the Achaemenid Persian period, but the cursive form steadily gained ground over the lapidary, which had largely disappeared by the 3rd century BC. For centuries after the fall of the Achaemenid Empire in 331 BC, Imperial Aramaic, or something near enough to it to be recognisable, would remain an influence on the various native Iranian languages. The Aramaic script would survive as the essential characteristics of the Iranian Pahlavi writing system. Thirty Aramaic documents from Bactria have recently been discovered, and an analysis of them was published in November 2006. 
The texts, which were rendered on leather, reflect the use of Aramaic in the 4th century BC in the Persian Achaemenid administration of Bactria and Sogdiana. The widespread usage of Achaemenid Aramaic in the Middle East led to the gradual adoption of the Aramaic alphabet for writing Hebrew. Formerly, Hebrew had been written using an alphabet closer in form to that of Phoenician, the Paleo-Hebrew alphabet. Aramaic-derived scripts Since the evolution of the Aramaic alphabet out of the Phoenician one was a gradual process, the division of the world's alphabets into those derived from the Phoenician one directly and those derived from Phoenician via Aramaic is somewhat artificial. In general, the alphabets of the Mediterranean region (Anatolia, Greece, Italy) are classified as Phoenician-derived, adapted from around the 8th century BC, and those of the East (the Levant, Persia, Central Asia and India) are considered Aramaic-derived, adapted from around the 6th century BC from the Imperial Aramaic script of the Achaemenid Empire. After the fall of the Achaemenid Empire, the unity of the Imperial Aramaic script was lost, diversifying into a number of descendant cursives. The Hebrew and Nabataean alphabets, as they stood by the Roman era, were little changed in style from the Imperial Aramaic alphabet. Ibn Khaldun (1332–1406) alleges that not only was the old Nabataean writing influenced by the "Syrian script" (i.e. Aramaic), but so was the old Chaldean script. A cursive Hebrew variant developed from the early centuries AD, but it remained restricted to the status of a variant used alongside the noncursive. By contrast, the cursive developed out of the Nabataean alphabet in the same period soon became the standard for writing Arabic, evolving into the Arabic alphabet as it stood by the time of the early spread of Islam. 
The development of cursive versions of Aramaic also led to the creation of the Syriac, Palmyrene and Mandaic alphabets, which formed the basis of the historical scripts of Central Asia, such as the Sogdian and Mongolian alphabets. The Old Turkic script is generally considered to have its ultimate origins in Aramaic, in particular via the Pahlavi or Sogdian alphabets, as suggested by V. Thomsen, or possibly via Kharosthi (cf. Issyk inscription). The Brahmi script was also possibly derived from or inspired by Aramaic; the Brahmic family of scripts includes Devanagari. Languages using the alphabet Today, Biblical Aramaic, Jewish Neo-Aramaic dialects and the Aramaic language of the Talmud are written in the modern Hebrew alphabet (distinguished from the Old Hebrew script). In classical Jewish literature, the name given to the modern Hebrew script was "Ashurit" (the ancient Assyrian script), a script now known widely as the Aramaic script. It is believed that during the period of Assyrian dominion the Aramaic script and language received official status. Syriac and Christian Neo-Aramaic dialects are today written in the Syriac alphabet, a script which has superseded the more ancient Assyrian script and now bears its name. Mandaic is written in the Mandaic alphabet. The near-identical nature of the Aramaic and the classical Hebrew alphabets has caused Aramaic text to be typeset mostly in the standard Hebrew script in scholarly literature. Maaloula In Maaloula, one of the few surviving communities in which a Western Aramaic dialect is still spoken, an Aramaic institute was established in 2007 by Damascus University that teaches courses to keep the language alive. The institute's activities were suspended in 2010 amidst fears that the square Aramaic alphabet used in the program too closely resembled the square script of the Hebrew alphabet, and all the signs with the square Aramaic script were taken down. 
The program stated that it would instead use the more distinct Syriac alphabet, although use of the Aramaic alphabet has continued to some degree. Al Jazeera Arabic also broadcast a program about Western Neo-Aramaic and the villages in which it is spoken, with the square script still in use. Letters Matres lectionis In Aramaic writing, Waw and Yodh serve a double function. Originally, they represented only the consonants w and y, but they were later adopted to indicate the long vowels ū and ī respectively as well (often also ō and ē respectively). In the latter role, they are known as matres lectionis, or "mothers of reading". Ālap, likewise, has some of the characteristics of a mater lectionis, because in initial positions it indicates a glottal stop (followed by a vowel), but otherwise it often also stands for the long vowels ā or ē. Among Jews, the influence of Hebrew often led to the use of Hē instead, at the end of a word. The practice of using certain letters to hold vowel values spread to Aramaic-derived writing systems, such as Arabic and Hebrew, which still follow the practice. Unicode The Imperial Aramaic alphabet was added to the Unicode Standard in October 2009, with the release of version 5.2. The Unicode block for Imperial Aramaic is U+10840–U+1085F. The Syriac Aramaic alphabet was added to the Unicode Standard in September 1999, with the release of version 3.0. The Syriac Abbreviation (a type of overline) can be represented with a special control character called the Syriac Abbreviation Mark (U+070F). The Unicode block for Syriac Aramaic is U+0700–U+074F. See also Syriac alphabet References Sources Byrne, Ryan. "Middle Aramaic Scripts". Encyclopaedia of Language and Linguistics. Elsevier. (2006) Daniels, Peter T., et al., eds. The World's Writing Systems. Oxford. (1996) Coulmas, Florian. The Writing Systems of the World. Blackwell Publishers Ltd, Oxford. (1989) Rudder, Joshua. Learn to Write Aramaic: A Step-by-Step Approach to the Historical & Modern Scripts. 
n.p.: CreateSpace Independent Publishing Platform, 2011. 220 pp. . Includes a wide variety of Aramaic scripts. Ancient Hebrew and Aramaic on Coins, reading and transliterating Proto-Hebrew, online edition (Judaea Coin Archive). External links Comparison of Aramaic to related alphabets Omniglot entry 8th-century BC establishments Obsolete writing systems Persian scripts Right-to-left writing systems
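The Unicode ranges given in the article above (Imperial Aramaic at U+10840–U+1085F, and the Syriac Abbreviation Mark at U+070F) can be checked directly with Python's standard unicodedata module; the sketch below assumes a Python build with Unicode 5.2 or later character data:

```python
import unicodedata

# First code point of the Imperial Aramaic block (U+10840–U+1085F).
aleph = chr(0x10840)
print(unicodedata.name(aleph))  # IMPERIAL ARAMAIC LETTER ALEPH

# The Syriac Abbreviation Mark is a formatting control character.
sam = "\u070F"
print(f"U+{ord(sam):04X}", unicodedata.name(sam))

# Enumerate every assigned character in the Imperial Aramaic block.
for cp in range(0x10840, 0x10860):
    try:
        print(f"U+{cp:04X} {chr(cp)} {unicodedata.name(chr(cp))}")
    except ValueError:
        pass  # unassigned code point within the block
```

The try/except is needed because not every code point inside a Unicode block is assigned a character.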
966
https://en.wikipedia.org/wiki/American%20shot
American shot
"American shot" or "cowboy shot" is a translation of a phrase from French film criticism, plan américain, and refers to a medium-long ("knee") film shot of a group of characters, who are arranged so that all are visible to the camera. The usual arrangement is for the actors to stand in an irregular line from one side of the screen to the other, with the actors at the end coming forward a little and standing more in profile than the others. The purpose of the composition is to allow complex dialogue scenes to be played out without changes in camera position. In some literature, this is simply referred to as a 3/4 shot. One of the other main reasons why French critics called it "American shot" was its frequent use in western genre. This was because a shot that started at knee level would reveal the weapon of a cowboy, usually holstered at his waist. It is actually the closest the camera can get to an actor while keeping both his face and his holstered gun in frame. The French critics thought it was characteristic of American films of the 1930s or 1940s; however, it was mostly characteristic of cheaper American movies, such as Charlie Chan mysteries where people collected in front of a fireplace or at the foot of the stairs in order to explain what happened a few minutes ago. Howard Hawks legitimized this style in his films, allowing characters to act, even when not talking, when most of the audience would not be paying attention. It became his trademark style. References Cinematography
967
https://en.wikipedia.org/wiki/Acute%20disseminated%20encephalomyelitis
Acute disseminated encephalomyelitis
Acute disseminated encephalomyelitis (ADEM), or acute demyelinating encephalomyelitis, is a rare autoimmune disease marked by a sudden, widespread attack of inflammation in the brain and spinal cord. In addition to inflaming the brain and spinal cord, ADEM attacks the nerves of the central nervous system and damages their myelin insulation, thereby destroying white matter. It is often triggered by a viral infection or (very rarely) by vaccination. ADEM's symptoms resemble those of multiple sclerosis (MS), so the disease is classified among the multiple sclerosis borderline diseases. However, ADEM has several features that distinguish it from MS. Unlike MS, ADEM usually occurs in children and is marked by rapid onset of fever, although adolescents and adults can get the disease too. ADEM consists of a single flare-up, whereas MS is marked by several flare-ups (relapses) over a long period of time. Relapses following ADEM are reported in up to a quarter of patients, but the majority of these 'multiphasic' presentations following ADEM likely represent MS. ADEM is also distinguished by loss of consciousness, coma and death, which are very rare in MS except in severe cases. It affects about 8 per 1,000,000 people per year. Although it occurs in all ages, most reported cases are in children and adolescents, with the average age around 5 to 8 years old. The disease affects males and females almost equally. ADEM shows seasonal variation, with higher incidence in the winter and spring months, which may coincide with higher rates of viral infection during these months. The mortality rate may be as high as 5%; however, full recovery is seen in 50 to 75% of cases, with survival rates rising to 70 to 90% when cases with minor residual disability are included. The average time to recover from ADEM flare-ups is one to six months.
ADEM produces multiple inflammatory lesions in the brain and spinal cord, particularly in the white matter. Usually these are found in the subcortical and central white matter and the cortical gray-white junction of both cerebral hemispheres, cerebellum, brainstem, and spinal cord, but the periventricular white matter and the gray matter of the cortex, thalami and basal ganglia may also be involved. When a person has more than one demyelinating episode of ADEM, the disease is then called recurrent disseminated encephalomyelitis or multiphasic disseminated encephalomyelitis (MDEM). A fulminant course in adults has also been described. Signs and symptoms ADEM has an abrupt onset and a monophasic course. Symptoms usually begin 1–3 weeks after infection. Major symptoms include fever, headache, nausea and vomiting, confusion, vision impairment, drowsiness, seizures and coma. Although the symptoms are usually mild initially, they worsen rapidly over the course of hours to days, with the average time to maximum severity being about four and a half days. Additional symptoms include hemiparesis, paraparesis, and cranial nerve palsies. ADEM in COVID-19 In reported cases of ADEM associated with COVID-19, neurological symptoms were the main presentation and did not correlate with the severity of respiratory symptoms. The high incidence of ADEM with hemorrhage is striking. Brain inflammation is likely caused by an immune response to the disease rather than by neurotropism. CSF analysis was not indicative of an infectious process, neurological impairment was not present in the acute phase of the infection, and neuroimaging findings were not typical of classical toxic and metabolic disorders. The finding of bilateral, relatively asymmetrical periventricular lesions combined with deep white matter involvement, which may also extend to the cortical gray-white matter junction, thalami, basal ganglia, cerebellum, and brainstem, suggests an acute demyelination process.
Additionally, hemorrhagic white matter lesions, clusters of macrophages related to axonal injury and an ADEM-like appearance were also found in subcortical white matter. Causes Since the discovery that anti-MOG specificity argues against a multiple sclerosis diagnosis, ADEM has been considered one of the possible clinical presentations of anti-MOG-associated encephalomyelitis. There are several theories about how the anti-MOG antibodies appear in a patient's serum: A preceding antigenic challenge can be identified in approximately two-thirds of people. Some viral infections thought to induce ADEM include influenza virus, dengue, enterovirus, measles, mumps, rubella, varicella zoster, Epstein–Barr virus, cytomegalovirus, herpes simplex virus, hepatitis A, coxsackievirus and COVID-19. Bacterial infections include Mycoplasma pneumoniae, Borrelia burgdorferi, Leptospira, and beta-hemolytic streptococci. Exposure to vaccines: The only vaccine proven to be related to ADEM is the Semple form of the rabies vaccine, but hepatitis B, pertussis, diphtheria, measles, mumps, rubella, pneumococcus, varicella, influenza, Japanese encephalitis, and polio vaccines have all been implicated. The majority of the studies that correlate vaccination with ADEM onset use small samples or case studies. Large-scale epidemiological studies (e.g., of the MMR vaccine or the smallpox vaccine) do not show increased risk of ADEM following vaccination. An upper bound for the risk of ADEM from measles vaccination, if it exists, can be estimated at 10 per million, which is far lower than the risk of developing ADEM from an actual measles infection, about 1 per 1,000 cases. For a rubella infection, the risk is 1 per 5,000 cases. Some early vaccines, later shown to have been contaminated with host animal CNS tissue, had ADEM incidence rates as high as 1 in 600. In rare cases, ADEM seems to follow from organ transplantation. Diagnosis The term ADEM has been used inconsistently at different times.
Currently, the commonly accepted international standard for the clinical case definition is the one published by the International Pediatric MS Study Group, revision 2007. Given that the definition is clinical, it is currently unknown whether all cases of ADEM are positive for anti-MOG autoantibody, but in any case, the antibody seems strongly related to ADEM diagnosis. Differential diagnosis Multiple sclerosis While ADEM and MS both involve autoimmune demyelination, they differ in many clinical, genetic, imaging, and histopathological aspects. Some authors consider MS and its borderline forms to constitute a spectrum, differing only in chronicity, severity, and clinical course, while others consider them discretely different diseases. Typically, ADEM appears in children following an antigenic challenge and remains monophasic. Nevertheless, ADEM does occur in adults, and can also be clinically multiphasic. Problems for differential diagnosis increase due to the lack of agreement on a definition of multiple sclerosis. If MS were defined just by the separation in time and space of the demyelinating lesions, as McDonald did, it would not be enough to make a distinction, as some cases of ADEM satisfy these conditions. Therefore, some authors propose to draw the separation line based on the shape of the lesions around the veins: "perivenous vs. confluent demyelination". The pathology of ADEM is very similar to that of MS, with some differences. The pathological hallmark of ADEM is perivenular inflammation with limited "sleeves of demyelination". Nevertheless, MS-like plaques (confluent demyelination) can appear. Plaques in the white matter in MS are sharply delineated, while the glial scar in ADEM is smooth. Axons are better preserved in ADEM lesions. Inflammation in ADEM is widely disseminated and ill-defined, and, finally, lesions in ADEM are strictly perivenous, while in MS they are arranged around veins, but not so sharply.
Nevertheless, the co-occurrence of perivenous and confluent demyelination in some individuals suggests pathogenic overlap between acute disseminated encephalomyelitis and multiple sclerosis, and the possibility of misclassification even with biopsy or postmortem examination. ADEM in adults can progress to MS. Multiphasic disseminated encephalomyelitis When the person has more than one demyelinating episode of ADEM, the disease is then called recurrent disseminated encephalomyelitis or multiphasic disseminated encephalomyelitis (MDEM). Anti-MOG auto-antibodies have been found to be related to this kind of ADEM. Another variant of ADEM in adults, also related to anti-MOG auto-antibodies, has been named fulminant disseminated encephalomyelitis; it has been reported to be clinically ADEM but to show MS-like lesions on autopsy. It has been classified within the anti-MOG-associated inflammatory demyelinating diseases. Acute hemorrhagic leukoencephalitis Acute hemorrhagic leukoencephalitis (AHL, or AHLE), acute hemorrhagic encephalomyelitis (AHEM), acute necrotizing hemorrhagic leukoencephalitis (ANHLE), Weston-Hurst syndrome, or Hurst's disease, is a hyperacute and frequently fatal form of ADEM. AHL is relatively rare (fewer than 100 cases have been reported in the medical literature); it is seen in about 2% of ADEM cases, and is characterized by necrotizing vasculitis of venules, hemorrhage, and edema. Death is common in the first week and overall mortality is about 70%, but increasing evidence points to favorable outcomes after aggressive treatment with corticosteroids, immunoglobulins, cyclophosphamide, and plasma exchange. About 70% of survivors show residual neurological deficits, but some survivors have shown surprisingly little deficit considering the magnitude of the white matter affected.
This disease has been occasionally associated with ulcerative colitis and Crohn's disease, malaria, sepsis associated with immune complex deposition, methanol poisoning, and other underlying conditions. An anecdotal association with MS has also been reported. Laboratory studies that support a diagnosis of AHL are: peripheral leukocytosis and cerebrospinal fluid (CSF) pleocytosis associated with normal glucose and increased protein. On magnetic resonance imaging (MRI), lesions of AHL typically show extensive T2-weighted and fluid-attenuated inversion recovery (FLAIR) white matter hyperintensities with areas of hemorrhage, significant edema, and mass effect. Treatment No controlled clinical trials have been conducted on ADEM treatment, but aggressive treatment aimed at rapidly reducing inflammation of the CNS is standard. The widely accepted first-line treatment is high doses of intravenous corticosteroids, such as methylprednisolone or dexamethasone, followed by 3–6 weeks of gradually tapering oral doses of prednisolone. Patients treated with methylprednisolone have shown better outcomes than those treated with dexamethasone. Oral tapers of less than three weeks' duration show a higher chance of relapse and tend to show poorer outcomes. Other anti-inflammatory and immunosuppressive therapies have been reported to show beneficial effect, such as plasmapheresis, high doses of intravenous immunoglobulin (IVIg), mitoxantrone and cyclophosphamide. These are considered alternative therapies, used when corticosteroids cannot be used or fail to show an effect.
There is some evidence to suggest that patients may respond to a combination of methylprednisolone and immunoglobulins if they fail to respond to either separately. In a study of 16 children with ADEM, 10 recovered completely after high-dose methylprednisolone; one severe case that failed to respond to steroids recovered completely after IVIg; the five most severe cases, with ADEM and severe peripheral neuropathy, were treated with combined high-dose methylprednisolone and immunoglobulin: two remained paraplegic, one had motor and cognitive handicaps, and two recovered. A recent review of IVIg treatment of ADEM (of which the previous study formed the bulk of the cases) found that 70% of children showed complete recovery after treatment with IVIg, or IVIg plus corticosteroids. A study of IVIg treatment in adults with ADEM showed that IVIg seems more effective in treating sensory and motor disturbances, while steroids seem more effective in treating impairments of cognition, consciousness and rigor. This same study found one subject, a 71-year-old man who had not responded to steroids, who responded to IVIg treatment 58 days after disease onset. Prognosis Full recovery is seen in 50 to 70% of cases, rising to 70 to 90% when cases with some minor residual disability (typically assessed using measures such as mRS or EDSS) are included; the average time to recover is one to six months. The mortality rate may be as high as 5–10%. Poorer outcomes are associated with unresponsiveness to steroid therapy, unusually severe neurological symptoms, or sudden onset. Children tend to have more favorable outcomes than adults, and cases presenting without fever tend to have poorer outcomes. The latter effect may be due either to protective effects of fever, or to diagnosis and treatment being sought more rapidly when fever is present. ADEM can progress to MS.
It will be considered MS if new lesions appear at different times and in different brain areas. Motor deficits Residual motor deficits are estimated to remain in about 8 to 30% of cases, ranging in severity from mild clumsiness to ataxia and hemiparesis. Neurocognitive Patients with demyelinating illnesses, such as MS, have shown cognitive deficits even when there is minimal physical disability. Research suggests that similar effects are seen after ADEM, but that the deficits are less severe than those seen in MS. In one study, six children with ADEM (mean age at presentation 7.7 years) were given a range of neurocognitive tests after an average of 3.5 years of recovery. All six children performed in the normal range on most tests, including verbal IQ and performance IQ, but performed at least one standard deviation below age norms in at least one cognitive domain, such as complex attention (one child), short-term memory (one child) and internalizing behaviour/affect (two children). Group means for each cognitive domain were all within one standard deviation of age norms, demonstrating that, as a group, they were normal. These deficits were less severe than those seen in similarly aged children with a diagnosis of MS. Another study compared nineteen children with a history of ADEM, of which 10 were five years of age or younger at the time (average age 3.8 years, tested an average of 3.9 years later) and nine were older (mean age 7.7 years at time of ADEM, tested an average of 2.2 years later), to nineteen matched controls. Scores on IQ tests and educational achievement were lower for the young-onset ADEM group (average IQ 90) compared to the late-onset (average IQ 100) and control groups (average IQ 106), while the late-onset ADEM children scored lower on verbal processing speed. Again, all group means were within one standard deviation of the controls, meaning that while effects were statistically reliable, the children were, as a whole, still within the normal range.
There were also more behavioural problems in the early-onset group, although there is some suggestion that this may be due, at least in part, to the stress of hospitalization at a young age. Research The relationship between ADEM and anti-MOG-associated encephalomyelitis is currently under research. A new entity called MOGDEM has been proposed. As for animal models, the main animal model for MS, experimental autoimmune encephalomyelitis (EAE), is also an animal model for ADEM. Being an acute monophasic illness, EAE is far more similar to ADEM than to MS. See also Optic neuritis Transverse myelitis Victoria Arlen References External links Information for parents about Acute disseminated encephalomyelitis Multiple sclerosis Autoimmune diseases Central nervous system disorders Enterovirus-associated diseases Measles
969
https://en.wikipedia.org/wiki/Ataxia
Ataxia
Ataxia is a neurological sign consisting of lack of voluntary coordination of muscle movements that can include gait abnormality, speech changes, and abnormalities in eye movements. Ataxia is a clinical manifestation indicating dysfunction of the parts of the nervous system that coordinate movement, such as the cerebellum. Ataxia can be limited to one side of the body, which is referred to as hemiataxia. Several possible causes exist for these patterns of neurological dysfunction. Dystaxia is a mild degree of ataxia. Friedreich's ataxia has gait abnormality as the most commonly presented symptom. The word is from Greek α- [a negative prefix] + -τάξις [order] = "lack of order". Types Cerebellar The term cerebellar ataxia is used to indicate ataxia due to dysfunction of the cerebellum. The cerebellum is responsible for integrating a significant amount of neural information that is used to coordinate smoothly ongoing movements and to participate in motor planning. Although ataxia is not present with all cerebellar lesions, many conditions affecting the cerebellum do produce ataxia. People with cerebellar ataxia may have trouble regulating the force, range, direction, velocity, and rhythm of muscle contractions. This results in a characteristic type of irregular, uncoordinated movement that can manifest itself in many possible ways, such as asthenia, asynergy, delayed reaction time, and dyschronometria. Individuals with cerebellar ataxia could also display instability of gait, difficulty with eye movements, dysarthria, dysphagia, hypotonia, dysmetria, and dysdiadochokinesia. These deficits can vary depending on which cerebellar structures have been damaged, and whether the lesion is bi- or unilateral. People with cerebellar ataxia may initially present with poor balance, which could be demonstrated as an inability to stand on one leg or perform tandem gait. 
As the condition progresses, walking is characterized by a widened base and high stepping, as well as staggering and lurching from side to side. Turning is also problematic and could result in falls. As cerebellar ataxia becomes severe, great assistance and effort are needed to stand and walk. Dysarthria, an impairment with articulation, may also be present and is characterized by "scanning" speech that consists of a slower rate, irregular rhythm, and variable volume. Also, slurring of speech, tremor of the voice, and ataxic respiration may occur. Cerebellar ataxia can result in incoordination of movement, particularly in the extremities. Overshooting (or hypermetria) occurs with finger-to-nose testing and heel-to-shin testing; thus, dysmetria is evident. Impairments with alternating movements (dysdiadochokinesia), as well as dysrhythmia, may also be displayed. Tremor of the head and trunk (titubation) may be seen in individuals with cerebellar ataxia. Dysmetria is thought to be caused by a deficit in the control of interaction torques in multijoint motion. Interaction torques are created at an associated joint when the primary joint is moved. For example, if a movement requires reaching to touch a target in front of the body, flexion at the shoulder would create a torque at the elbow, while extension of the elbow would create a torque at the wrist. These torques increase as the speed of movement increases and must be compensated and adjusted for to create coordinated movement. This may, therefore, explain decreased coordination at higher movement velocities and accelerations. Dysfunction of the vestibulocerebellum (flocculonodular lobe) impairs balance and the control of eye movements. This presents itself with postural instability, in which the person tends to separate his/her feet upon standing, to gain a wider base and to avoid titubation (bodily oscillations tending to be forward-backward ones).
The instability is, therefore, worsened when standing with the feet together, regardless of whether the eyes are open or closed. This is a negative Romberg's test, or, more accurately, it denotes the individual's inability to carry out the test, because the individual feels unstable even with open eyes. Dysfunction of the spinocerebellum (vermis and associated areas near the midline) presents itself with a wide-based "drunken sailor" gait (called truncal ataxia), characterised by uncertain starts and stops, lateral deviations, and unequal steps. As a result of this gait impairment, falling is a concern in patients with ataxia. Studies examining falls in this population show that 74–93% of patients have fallen at least once in the past year and up to 60% admit to fear of falling. Dysfunction of the cerebrocerebellum (lateral hemispheres) presents as disturbances in carrying out voluntary, planned movements by the extremities (called appendicular ataxia). These include: Intention tremor (coarse trembling, accentuated over the execution of voluntary movements, possibly involving the head and eyes, as well as the limbs and torso) Peculiar writing abnormalities (large, unequal letters, irregular underlining) A peculiar pattern of dysarthria (slurred speech, sometimes characterised by explosive variations in voice intensity despite a regular rhythm) Inability to perform rapidly alternating movements, known as dysdiadochokinesia, which could involve rapidly switching from pronation to supination of the forearm; movements become more irregular with increases of speed. Inability to judge distances or ranges of movement; this dysmetria is often seen as undershooting (hypometria) or overshooting (hypermetria) of the required distance or range to reach a target, and is sometimes seen when a patient is asked to reach out and touch someone's finger or touch his or her own nose.
The rebound phenomenon, also known as the loss of the check reflex, is also sometimes seen in patients with cerebellar ataxia, for example, when patients are flexing their elbows isometrically against a resistance. When the resistance is suddenly removed without warning, the patients' arms may swing up and even strike themselves. With an intact check reflex, the patients check and activate the opposing triceps to slow and stop the movement. Patients may exhibit a constellation of subtle to overt cognitive symptoms, which are grouped under the term Schmahmann's syndrome. Sensory The term sensory ataxia is used to indicate ataxia due to loss of proprioception, the loss of sensitivity to the positions of joints and body parts. This is generally caused by dysfunction of the dorsal columns of the spinal cord, because they carry proprioceptive information up to the brain. In some cases, the cause of sensory ataxia may instead be dysfunction of the various parts of the brain that receive positional information, including the cerebellum, thalamus, and parietal lobes. Sensory ataxia presents itself with an unsteady "stomping" gait with heavy heel strikes, as well as a postural instability that is usually worsened when the lack of proprioceptive input cannot be compensated for by visual input, such as in poorly lit environments. Physicians can find evidence of sensory ataxia during physical examination by having patients stand with their feet together and eyes shut. In affected patients, this will cause the instability to worsen markedly, producing wide oscillations and possibly a fall; this is called a positive Romberg's test. Worsening of the finger-pointing test with the eyes closed is another feature of sensory ataxia.
Also, when patients are standing with arms and hands extended toward the physician, if the eyes are closed, the patients' fingers tend to "fall down" and then be restored to the horizontal extended position by sudden muscular contractions (the "ataxic hand"). Vestibular The term vestibular ataxia is used to indicate ataxia due to dysfunction of the vestibular system, which in acute and unilateral cases is associated with prominent vertigo, nausea, and vomiting. In slow-onset, chronic bilateral cases of vestibular dysfunction, these characteristic manifestations may be absent, and dysequilibrium may be the sole presentation. Causes The three types of ataxia have overlapping causes, and so can either coexist or occur in isolation. Cerebellar ataxia can have many causes despite normal neuroimaging. Focal lesions Any type of focal lesion of the central nervous system (such as stroke, brain tumor, multiple sclerosis, inflammatory disease [such as sarcoidosis], and "chronic lymphocytic inflammation with pontine perivascular enhancement responsive to steroids syndrome" [CLIPPERS]) will cause the type of ataxia corresponding to the site of the lesion: cerebellar if in the cerebellum; sensory if in the dorsal spinal cord, including cord compression by a thickened ligamentum flavum or stenosis of the bony spinal canal (and rarely in the thalamus or parietal lobe); or vestibular if in the vestibular system (including the vestibular areas of the cerebral cortex). Exogenous substances (metabolic ataxia) Exogenous substances that cause ataxia mainly do so because they have a depressant effect on central nervous system function. The most common example is ethanol (alcohol), which is capable of causing reversible cerebellar and vestibular ataxia. Chronic intake of ethanol causes atrophy of the cerebellum via oxidative and endoplasmic reticulum stresses induced by thiamine deficiency. Other examples include various prescription drugs (e.g.
most antiepileptic drugs have cerebellar ataxia as a possible adverse effect), lithium levels over 1.5 mEq/L, ingestion of the synthetic cannabinoid HU-211, and various other medical and recreational drugs (e.g. ketamine, PCP or dextromethorphan, all of which are NMDA receptor antagonists that produce a dissociative state at high doses). A further class of pharmaceuticals which can cause short-term ataxia, especially in high doses, is the benzodiazepines. Exposure to high levels of methylmercury, through consumption of fish with high mercury concentrations, is also a known cause of ataxia and other neurological disorders. Radiation poisoning Ataxia can be induced as a result of severe acute radiation poisoning with an absorbed dose of more than 30 grays. Vitamin B12 deficiency Vitamin B12 deficiency may cause, among several neurological abnormalities, overlapping cerebellar and sensory ataxia. Hypothyroidism Symptoms of neurological dysfunction may be the presenting feature in some patients with hypothyroidism. These include reversible cerebellar ataxia, dementia, peripheral neuropathy, psychosis and coma. Most of the neurological complications improve completely after thyroid hormone replacement therapy. Causes of isolated sensory ataxia Peripheral neuropathies may cause generalised or localised sensory ataxia (e.g. of a limb only) depending on the extent of the neuropathic involvement. Spinal disorders of various types may cause sensory ataxia below the level of the lesion when they involve the dorsal columns. Non-hereditary cerebellar degeneration Non-hereditary causes of cerebellar degeneration include chronic alcohol use disorder, head injury, paraneoplastic and non-paraneoplastic autoimmune ataxia, high altitude cerebral oedema, coeliac disease, normal pressure hydrocephalus and infectious or post-infectious cerebellitis.
Hereditary ataxias Ataxia may result from hereditary disorders involving degeneration of the cerebellum or of the spine; most cases feature both to some extent, and therefore present with overlapping cerebellar and sensory ataxia, even though one is often more evident than the other. Hereditary disorders causing ataxia include autosomal dominant ones such as spinocerebellar ataxia, episodic ataxia, and dentatorubropallidoluysian atrophy, as well as autosomal recessive disorders such as Friedreich's ataxia (sensory and cerebellar, with the former predominating), Niemann–Pick disease, ataxia-telangiectasia (sensory and cerebellar, with the latter predominating), and abetalipoproteinaemia. An example of an X-linked ataxic condition is the rare fragile X-associated tremor/ataxia syndrome, or FXTAS. Arnold–Chiari malformation (congenital ataxia) Arnold–Chiari malformation is a malformation of the brain. It consists of a downward displacement of the cerebellar tonsils and the medulla through the foramen magnum, sometimes causing hydrocephalus as a result of obstruction of cerebrospinal fluid outflow. Succinic semialdehyde dehydrogenase deficiency Succinic semialdehyde dehydrogenase deficiency is an autosomal recessive disorder in which mutations in the ALDH5A1 gene result in the accumulation of gamma-hydroxybutyric acid (GHB) in the body. GHB accumulates in the nervous system and can cause ataxia as well as other neurological dysfunction. Wilson's disease Wilson's disease is an autosomal recessive disorder whereby an alteration of the ATP7B gene results in an inability to properly excrete copper from the body. Copper accumulates in the nervous system and liver and can cause ataxia as well as other neurological and organ impairments. Gluten ataxia Gluten ataxia is an autoimmune disease triggered by the ingestion of gluten. Early diagnosis and treatment with a gluten-free diet can improve ataxia and prevent its progression.
The effectiveness of the treatment depends on the elapsed time from the onset of the ataxia until diagnosis, because the death of neurons in the cerebellum as a result of gluten exposure is irreversible. It accounts for 40% of ataxias of unknown origin and 15% of all ataxias. Less than 10% of people with gluten ataxia present any gastrointestinal symptom and only about 40% have intestinal damage. This entity is classified among the primary autoimmune cerebellar ataxias (PACA). Potassium pump Malfunction of the sodium-potassium pump may be a factor in some ataxias. The Na+–K+ pump has been shown to control and set the intrinsic activity mode of cerebellar Purkinje neurons. This suggests that the pump might not simply be a homeostatic, "housekeeping" molecule for ionic gradients, but could be a computational element in the cerebellum and the brain. Indeed, an ouabain block of Na+–K+ pumps in the cerebellum of a live mouse results in it displaying ataxia and dystonia. Ataxia is observed at lower ouabain concentrations, while dystonia is observed at higher ouabain concentrations. Cerebellar ataxia associated with anti-GAD antibodies Antibodies against the enzyme glutamic acid decarboxylase (GAD, the enzyme converting glutamate into GABA) cause cerebellar deficits. The antibodies impair motor learning and cause behavioral deficits. GAD-antibody-related ataxia is part of the group called immune-mediated cerebellar ataxias. The antibodies induce a synaptopathy. The cerebellum is particularly vulnerable to autoimmune disorders. Cerebellar circuitry has the capacity to compensate and restore function thanks to the cerebellar reserve, which gathers multiple forms of plasticity. The term LTDpathies covers immune disorders targeting long-term depression (LTD), a form of plasticity. Diagnosis Imaging studies - A CT scan or MRI of the brain might help determine potential causes. An MRI can sometimes show shrinkage of the cerebellum and other brain structures in people with ataxia.
It may also show other treatable findings, such as a blood clot or benign tumour, that could be pressing on the cerebellum. Lumbar puncture (spinal tap) - A needle is inserted into the lower back (lumbar region) between two lumbar vertebrae to obtain a sample of cerebrospinal fluid for testing. Genetic testing - Determines whether the mutation that causes one of the hereditary ataxic conditions is present. Tests are available for many but not all of the hereditary ataxias. Treatment The treatment of ataxia and its effectiveness depend on the underlying cause. Treatment may limit or reduce the effects of ataxia, but it is unlikely to eliminate them entirely. Recovery tends to be better in individuals with a single focal injury (such as stroke or a benign tumour) than in those who have a neurological degenerative condition. A review of the management of degenerative ataxia was published in 2009. A small number of rare conditions presenting with prominent cerebellar ataxia are amenable to specific treatment, and recognition of these disorders is critical. These include vitamin E deficiency, abetalipoproteinemia, cerebrotendinous xanthomatosis, Niemann–Pick type C disease, Refsum's disease, glucose transporter type 1 deficiency, episodic ataxia type 2, gluten ataxia, and glutamic acid decarboxylase ataxia. Novel therapies target the RNA defects associated with cerebellar disorders, using, in particular, antisense oligonucleotides. The movement disorders associated with ataxia can be managed by pharmacological treatments and through physical therapy and occupational therapy to reduce disability. Some drug treatments that have been used to control ataxia include: 5-hydroxytryptophan (5-HTP), idebenone, amantadine, physostigmine, L-carnitine or derivatives, trimethoprim/sulfamethoxazole, vigabatrin, phosphatidylcholine, acetazolamide, 4-aminopyridine, buspirone, and a combination of coenzyme Q10 and vitamin E.
Physical therapy requires a focus on adapting activity and facilitating motor learning for retraining specific functional motor patterns. A recent systematic review suggested that physical therapy is effective, but there is only moderate evidence to support this conclusion. The most commonly used physical therapy interventions for cerebellar ataxia are vestibular habituation, Frenkel exercises, proprioceptive neuromuscular facilitation (PNF), and balance training; however, therapy is often highly individualized, and gait and coordination training are large components of therapy. Current research suggests that, if a person is able to walk with or without a mobility aid, physical therapy should include an exercise program addressing five components: static balance, dynamic balance, trunk-limb coordination, stairs, and contracture prevention. Once the physical therapist determines that the individual is able to safely perform parts of the program independently, it is important that the individual be prescribed and regularly engage in a supplementary home exercise program that incorporates these components to further improve long-term outcomes. These outcomes include balance tasks, gait, and individual activities of daily living. While the improvements are attributed primarily to changes in the brain and not just the hip or ankle joints, it is still unknown whether the improvements are due to adaptations in the cerebellum or compensation by other areas of the brain. Decomposition, simplification, or slowing of multijoint movement may be an effective strategy that therapists use to improve function in patients with ataxia. Training likely needs to be intense and focused, as indicated by one study of stroke patients with limb ataxia who underwent intensive upper limb retraining. Their therapy consisted of constraint-induced movement therapy, which resulted in improvements in their arm function.
Treatment should likely include strategies to manage difficulties with everyday activities such as walking. Gait aids (such as a cane or walker) can be provided to decrease the risk of falls associated with impairment of balance or poor coordination. Severe ataxia may eventually lead to the need for a wheelchair. To obtain better results, possible coexisting motor deficits need to be addressed in addition to those induced by ataxia. For example, muscle weakness and decreased endurance could lead to increasing fatigue and poorer movement patterns. There are several assessment tools available to therapists and health care professionals working with patients with ataxia. The International Cooperative Ataxia Rating Scale (ICARS) is one of the most widely used and has been proven to have very high reliability and validity. Other tools that assess motor function, balance and coordination are also highly valuable to help the therapist track the progress of their patient, as well as to quantify the patient's functionality. These tests include, but are not limited to: the Berg Balance Scale; tandem walking (to test tandem gait ability); the Scale for the Assessment and Rating of Ataxia (SARA); tapping tests, in which the person must quickly and repeatedly tap their arm or leg while the therapist monitors the amount of dysdiadochokinesia; and finger-nose testing, which has several variations including finger-to-therapist's finger, finger-to-finger, and alternate nose-to-finger. Industry Insights According to a report published by Facts and Factors, global demand for the ataxia market was estimated at approximately USD 29,401.1 million in 2020 and is expected to generate revenue of around USD 46,000.8 million by the end of 2026, growing at a CAGR of around 10.2% between 2021 and 2026. Other uses The term "ataxia" is sometimes used in a broader sense to indicate lack of coordination in some physiological process.
Examples include optic ataxia (lack of coordination between visual inputs and hand movements, resulting in inability to reach and grab objects) and ataxic respiration (lack of coordination in respiratory movements, usually due to dysfunction of the respiratory centres in the medulla oblongata). Optic ataxia may be caused by lesions to the posterior parietal cortex, which is responsible for combining and expressing positional information and relating it to movement. Outputs of the posterior parietal cortex include the spinal cord, brain stem motor pathways, pre-motor and pre-frontal cortex, basal ganglia and the cerebellum. Some neurons in the posterior parietal cortex are modulated by intention. Optic ataxia is usually part of Balint's syndrome, but can be seen in isolation with injuries to the superior parietal lobule, as it represents a disconnection between visual-association cortex and the frontal premotor and motor cortex. See also Ataxic cerebral palsy Spinocerebellar ataxia Bruns apraxia
https://en.wikipedia.org/wiki/Ada%20Lovelace
Ada Lovelace
Augusta Ada King, Countess of Lovelace (née Byron; 10 December 1815 – 27 November 1852) was an English mathematician and writer, chiefly known for her work on Charles Babbage's proposed mechanical general-purpose computer, the Analytical Engine. She was the first to recognise that the machine had applications beyond pure calculation and to publish the first algorithm intended to be carried out by such a machine. As a result, she is often regarded as the first computer programmer. Ada Byron was the only child of poet Lord Byron and mathematician Lady Byron. All of Byron's other children were born out of wedlock to other women. Byron separated from his wife a month after Ada was born and left England forever. Four months later, he commemorated the parting in a poem that begins, "Is thy face like thy mother's my fair child! ADA! sole daughter of my house and heart?". He died in Greece when Ada was eight years old. Her mother remained bitter and promoted Ada's interest in mathematics and logic in an effort to prevent her from developing her father's perceived insanity. Despite this, Ada remained interested in him, naming her two sons Byron and Gordon. Upon her death, she was buried next to him at her request. Although often ill in her childhood, Ada pursued her studies assiduously. She married William King in 1835. King was made Earl of Lovelace in 1838, Ada thereby becoming Countess of Lovelace. Her educational and social exploits brought her into contact with scientists such as Andrew Crosse, Charles Babbage, Sir David Brewster, Charles Wheatstone, Michael Faraday and the author Charles Dickens, contacts which she used to further her education. Ada described her approach as "poetical science" and herself as an "Analyst (& Metaphysician)". When she was eighteen, her mathematical talents led her to a long working relationship and friendship with fellow British mathematician Charles Babbage, who is known as "the father of computers".
She was particularly interested in Babbage's work on the Analytical Engine. Lovelace first met him in June 1833, through their mutual friend, and her private tutor, Mary Somerville. Between 1842 and 1843, Ada translated an article by the Italian military engineer Luigi Menabrea about the Analytical Engine, supplementing it with an elaborate set of notes, simply called "Notes". Lovelace's notes are important in the early history of computers, containing what many consider to be the first computer program—that is, an algorithm designed to be carried out by a machine. Other historians reject this perspective and point out that Babbage's personal notes from the years 1836/1837 contain the first programs for the engine. She also developed a vision of the capability of computers to go beyond mere calculating or number-crunching, while many others, including Babbage himself, focused only on those capabilities. Her mindset of "poetical science" led her to ask questions about the Analytical Engine (as shown in her notes), examining how individuals and society relate to technology as a collaborative tool. She died of uterine cancer in 1852 at the age of 36, the same age at which her father died. Biography Childhood Lord Byron expected his child to be a "glorious boy" and was disappointed when Lady Byron gave birth to a girl. The child was named after Byron's half-sister, Augusta Leigh, and was called "Ada" by Byron himself. On 16 January 1816, at Lord Byron's command, Lady Byron left for her parents' home at Kirkby Mallory, taking their five-week-old daughter with her. Although English law at the time granted full custody of children to the father in cases of separation, Lord Byron made no attempt to claim his parental rights, but did request that his sister keep him informed of Ada's welfare. On 21 April, Lord Byron signed the deed of separation, although very reluctantly, and left England for good a few days later.
Even after the acrimonious separation, Lady Byron continued throughout her life to make allegations about her husband's immoral behaviour. This set of events made Lovelace infamous in Victorian society. Ada did not have a relationship with her father. He died in 1824 when she was eight years old. Her mother was the only significant parental figure in her life. Lovelace was not shown the family portrait of her father until her 20th birthday. Lovelace did not have a close relationship with her mother. She was often left in the care of her maternal grandmother Judith, Hon. Lady Milbanke, who doted on her. However, because of societal attitudes of the time—which favoured the husband in any separation, with the welfare of any child acting as mitigation—Lady Byron had to present herself as a loving mother to the rest of society. This included writing anxious letters to Lady Milbanke about her daughter's welfare, with a cover note saying to retain the letters in case she had to use them to show maternal concern. In one letter to Lady Milbanke, she referred to her daughter as "it": "I talk to it for your satisfaction, not my own, and shall be very glad when you have it under your own." Lady Byron had her teenage daughter watched by close friends for any sign of moral deviation. Lovelace dubbed these observers the "Furies" and later complained that they exaggerated and invented stories about her. Lovelace was often ill, beginning in early childhood. At the age of eight, she experienced headaches that obscured her vision. In June 1829, she was paralysed after a bout of measles. She was subjected to continuous bed rest for nearly a year, something which may have extended her period of disability. By 1831, she was able to walk with crutches. Despite the illnesses, she developed her mathematical and technological skills. Ada Byron had an affair with a tutor in early 1833.
She tried to elope with him after she was caught, but the tutor's relatives recognised her and contacted her mother. Lady Byron and her friends covered the incident up to prevent a public scandal. Lovelace never met her younger half-sister, Allegra, the daughter of Lord Byron and Claire Clairmont. Allegra died in 1822 at the age of five. Lovelace did have some contact with Elizabeth Medora Leigh, the daughter of Byron's half-sister Augusta Leigh, who purposely avoided Lovelace as much as possible when introduced at court. Adult years Lovelace became close friends with her tutor Mary Somerville, who introduced her to Charles Babbage in 1833. She had a strong respect and affection for Somerville, and they corresponded for many years. Other acquaintances included the scientists Andrew Crosse, Sir David Brewster, Charles Wheatstone, Michael Faraday and the author Charles Dickens. She was presented at Court at the age of seventeen "and became a popular belle of the season" in part because of her "brilliant mind." By 1834 Ada was a regular at Court and started attending various events. She danced often and was able to charm many people, and was described by most people as being dainty, although John Hobhouse, Byron's friend, described her as "a large, coarse-skinned young woman but with something of my friend's features, particularly the mouth". This description followed their meeting on 24 February 1834 in which Ada made it clear to Hobhouse that she did not like him, probably due to her mother's influence, which led her to dislike all of her father's friends. This first impression was not to last, and they later became friends. On 8 July 1835, she married William, 8th Baron King, becoming Lady King. They had three homes: Ockham Park, Surrey; a Scottish estate on Loch Torridon in Ross-shire; and a house in London. They spent their honeymoon at Worthy Manor in Ashley Combe near Porlock Weir, Somerset. 
The Manor had been built as a hunting lodge in 1799 and was improved by King in preparation for their honeymoon. It later became their summer retreat and was further improved during this time. From 1845, the family's main house was Horsley Towers, built in the Tudorbethan fashion by the architect of the Houses of Parliament, Charles Barry, and later greatly enlarged to Lovelace's own designs. They had three children: Byron (born 1836); Anne Isabella (called Annabella, born 1837); and Ralph Gordon (born 1839). Immediately after the birth of Annabella, Lady King experienced "a tedious and suffering illness, which took months to cure." Ada was a descendant of the extinct Barons Lovelace and in 1838, her husband was made Earl of Lovelace and Viscount Ockham, meaning Ada became the Countess of Lovelace. In 1843–44, Ada's mother assigned William Benjamin Carpenter to teach Ada's children and to act as a "moral" instructor for Ada. He quickly fell for her and encouraged her to express any frustrated affections, claiming that his marriage meant he would never act in an "unbecoming" manner. When it became clear that Carpenter was trying to start an affair, Ada cut it off. In 1841, Lovelace and Medora Leigh (the daughter of Lord Byron's half-sister Augusta Leigh) were told by Ada's mother that Ada's father was also Medora's father. On 27 February 1841, Ada wrote to her mother: "I am not in the least astonished. In fact, you merely confirm what I have for years and years felt scarcely a doubt about, but should have considered it most improper in me to hint to you that I in any way suspected." She did not blame the incestuous relationship on Byron, but instead blamed Augusta Leigh: "I fear she is more inherently wicked than he ever was." In the 1840s, Ada flirted with scandals: firstly, from a relaxed approach to extra-marital relationships with men, leading to rumours of affairs; and secondly, from her love of gambling. 
She apparently lost more than £3,000 on the horses during the later 1840s. The gambling led to her forming a syndicate with male friends, and an ambitious attempt in 1851 to create a mathematical model for successful large bets. This went disastrously wrong, leaving her thousands of pounds in debt to the syndicate, forcing her to admit it all to her husband. She had a shadowy relationship with Andrew Crosse's son John from 1844 onwards. John Crosse destroyed most of their correspondence after her death as part of a legal agreement. She bequeathed him the only heirlooms her father had personally left to her. During her final illness, she would panic at the idea of the younger Crosse being kept from visiting her. Education From 1832, when she was seventeen, her mathematical abilities began to emerge, and her interest in mathematics dominated the majority of her adult life. Her mother's obsession with rooting out any of the insanity of which she accused Byron was one of the reasons that Ada was taught mathematics from an early age. She was privately educated in mathematics and science by William Frend, William King, and Mary Somerville, the noted 19th-century researcher and scientific author. In the 1840s, the mathematician Augustus De Morgan extended her "much help in her mathematical studies" including study of advanced calculus topics including the "numbers of Bernoulli" (that formed her celebrated algorithm for Babbage's Analytical Engine). In a letter to Lady Byron, De Morgan suggested that Ada's skill in mathematics might lead her to become "an original mathematical investigator, perhaps of first-rate eminence." Lovelace often questioned basic assumptions through integrating poetry and science. 
Whilst studying differential calculus, she wrote to De Morgan: I may remark that the curious transformations many formulae can undergo, the unsuspected and to a beginner apparently impossible identity of forms exceedingly dissimilar at first sight, is I think one of the chief difficulties in the early part of mathematical studies. I am often reminded of certain sprites and fairies one reads of, who are at one's elbows in one shape now, and the next minute in a form most dissimilar. Lovelace believed that intuition and imagination were critical to effectively applying mathematical and scientific concepts. She valued metaphysics as much as mathematics, viewing both as tools for exploring "the unseen worlds around us." Death Lovelace died at the age of 36 on 27 November 1852, from uterine cancer. The illness lasted several months, in which time Annabella took command over whom Ada saw, and excluded all of her friends and confidants. Under her mother's influence, Ada had a religious transformation and was coaxed into repenting of her previous conduct and making Annabella her executor. She lost contact with her husband after confessing something to him on 30 August which caused him to abandon her bedside. It is not known what she told him. She was buried, at her request, next to her father at the Church of St. Mary Magdalene in Hucknall, Nottinghamshire. A memorial plaque, written in Latin, to her and her father is in the chapel attached to Horsley Towers. Work Throughout her life, Lovelace was strongly interested in scientific developments and fads of the day, including phrenology and mesmerism. After her work with Babbage, Lovelace continued to work on other projects. In 1844, she commented to a friend Woronzow Greig about her desire to create a mathematical model for how the brain gives rise to thoughts and nerves to feelings ("a calculus of the nervous system"). She never achieved this, however. 
In part, her interest in the brain came from a long-running pre-occupation, inherited from her mother, about her "potential" madness. As part of her research into this project, she visited the electrical engineer Andrew Crosse in 1844 to learn how to carry out electrical experiments. In the same year, she wrote a review of a paper by Baron Karl von Reichenbach, Researches on Magnetism, but this was not published and does not appear to have progressed past the first draft. In 1851, the year before her cancer struck, she wrote to her mother mentioning "certain productions" she was working on regarding the relation of maths and music. Lovelace first met Charles Babbage in June 1833, through their mutual friend Mary Somerville. Later that month, Babbage invited Lovelace to see the prototype for his difference engine. She became fascinated with the machine and used her relationship with Somerville to visit Babbage as often as she could. Babbage was impressed by Lovelace's intellect and analytic skills; in an 1843 letter he called her "The Enchantress of Number". During a nine-month period in 1842–43, Lovelace translated the Italian mathematician Luigi Menabrea's article on Babbage's newest proposed machine, the Analytical Engine. With the article, she appended a set of notes. Explaining the Analytical Engine's function was a difficult task, as many other scientists did not really grasp the concept and the British establishment had shown little interest in it. Lovelace's notes even had to explain how the Analytical Engine differed from the original Difference Engine. Her work was well received at the time; the scientist Michael Faraday described himself as a supporter of her writing.
The notes are around three times longer than the article itself and include (in Note G), in complete detail, a method for calculating a sequence of Bernoulli numbers using the Analytical Engine, which might have run correctly had it ever been built (only Babbage's Difference Engine has been built, completed in London in 2002). Based on this work, Lovelace is now considered by many to be the first computer programmer and her method has been called the world's first computer program. Others dispute this because some of Charles Babbage's earlier writings could be considered computer programs. Note G also contains Lovelace's dismissal of artificial intelligence. She wrote that "The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform. It can follow analysis; but it has no power of anticipating any analytical relations or truths." This objection has been the subject of much debate and rebuttal, for example by Alan Turing in his paper "Computing Machinery and Intelligence". Lovelace and Babbage had a minor falling out when the papers were published, when he tried to leave his own statement (criticising the government's treatment of his Engine) as an unsigned preface, which could have been mistakenly interpreted as a joint declaration. When Taylor's Scientific Memoirs ruled that the statement should be signed, Babbage wrote to Lovelace asking her to withdraw the paper. This was the first that she knew he was leaving it unsigned, and she wrote back refusing to withdraw the paper. The historian Benjamin Woolley theorised that "His actions suggested he had so enthusiastically sought Ada's involvement, and so happily indulged her ... because of her 'celebrated name'." Their friendship recovered, and they continued to correspond. On 12 August 1851, when she was dying of cancer, Lovelace wrote to him asking him to be her executor, though this letter did not give him the necessary legal authority. 
Part of the terrace at Worthy Manor was known as Philosopher's Walk, as it was there that Lovelace and Babbage were reputed to have walked while discussing mathematical principles. First computer program In 1840, Babbage was invited to give a seminar at the University of Turin about his Analytical Engine. Luigi Menabrea, a young Italian engineer and the future Prime Minister of Italy, transcribed Babbage's lecture into French, and this transcript was subsequently published in the Bibliothèque universelle de Genève in October 1842. Babbage's friend Charles Wheatstone commissioned Ada Lovelace to translate Menabrea's paper into English. She then augmented the paper with notes, which were added to the translation. Ada Lovelace spent the better part of a year doing this, assisted with input from Babbage. These notes, which are more extensive than Menabrea's paper, were then published in the September 1843 edition of Taylor's Scientific Memoirs under the initialism AAL. Ada Lovelace's notes were labelled alphabetically from A to G. In note G, she describes an algorithm for the Analytical Engine to compute Bernoulli numbers. It is considered to be the first published algorithm ever specifically tailored for implementation on a computer, and Ada Lovelace has often been cited as the first computer programmer for this reason. The engine was never completed so her program was never tested. In 1953, more than a century after her death, Ada Lovelace's notes on Babbage's Analytical Engine were republished as an appendix to B. V. Bowden's Faster than Thought: A Symposium on Digital Computing Machines. The engine has now been recognised as an early model for a computer and her notes as a description of a computer and software. Insight into potential of computing devices In her notes, Ada Lovelace emphasised the difference between the Analytical Engine and previous calculating machines, particularly its ability to be programmed to solve problems of any complexity. 
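The Note G algorithm tabulated Bernoulli numbers via a recurrence the Engine could evaluate step by step. As a rough modern illustration only (not Lovelace's actual table of Engine operations — her note expressed the computation as a sequence of operations on Engine variables and used a different numbering convention for the Bernoulli numbers), the same numbers can be generated from the standard recurrence over binomial coefficients:

```python
from fractions import Fraction
from math import comb

def bernoulli_numbers(n):
    """Return [B_0, ..., B_n] as exact fractions (B_1 = -1/2 convention).

    Uses the recurrence sum_{j=0}^{m} C(m+1, j) * B_j = 0 for m >= 1,
    solved for B_m at each step.
    """
    B = [Fraction(1)]  # B_0 = 1
    for m in range(1, n + 1):
        acc = sum(comb(m + 1, j) * B[j] for j in range(m))
        B.append(-acc / (m + 1))  # exact rational arithmetic throughout
    return B

# Beyond B_1, the odd-indexed numbers vanish and the even ones alternate in sign:
# B_2 = 1/6, B_4 = -1/30, B_6 = 1/42, B_8 = -1/30
print(bernoulli_numbers(8))
```

The exact-fraction arithmetic matters: Bernoulli numbers grow irregularly, and floating point would quickly lose the pattern that made them a convincing showcase for the Engine's chained operations.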
She realised the potential of the device extended far beyond mere number crunching. This analysis was an important development from previous ideas about the capabilities of computing devices and anticipated the implications of modern computing one hundred years before they were realised. Walter Isaacson ascribes Ada's insight regarding the application of computing to any process based on logical symbols to an observation about textiles: "When she saw some mechanical looms that used punchcards to direct the weaving of beautiful patterns, it reminded her of how Babbage's engine used punched cards to make calculations." This insight is seen as significant by writers such as Betty Toole and Benjamin Woolley, as well as the programmer John Graham-Cumming, whose project Plan 28 has the aim of constructing the first complete Analytical Engine. According to the historian of computing and Babbage specialist Doron Swade: Ada saw something that Babbage in some sense failed to see. In Babbage's world his engines were bound by number...What Lovelace saw...was that number could represent entities other than quantity. So once you had a machine for manipulating numbers, if those numbers represented other things, letters, musical notes, then the machine could manipulate symbols of which number was one instance, according to rules. It is this fundamental transition from a machine which is a number cruncher to a machine for manipulating symbols according to rules that is the fundamental transition from calculation to computation—to general-purpose computation—and looking back from the present high ground of modern computing, if we are looking and sifting history for that transition, then that transition was made explicitly by Ada in that 1843 paper. Controversy over contribution Though Lovelace is often referred to as the first computer programmer, some biographers, computer scientists and historians of computing claim otherwise. Allan G.
Bromley made this argument in the 1990 article "Difference and Analytical Engines". Bruce Collier, who later wrote a biography of Babbage, wrote in his 1970 Harvard University PhD thesis that Lovelace "made a considerable contribution to publicizing the Analytical Engine, but there is no evidence that she advanced the design or theory of it in any way". Eugene Eric Kim and Betty Alexandra Toole consider it "incorrect" to regard Lovelace as the first computer programmer, as Babbage wrote the initial programs for his Analytical Engine, although the majority were never published. Bromley notes several dozen sample programs prepared by Babbage between 1837 and 1840, all substantially predating Lovelace's notes. Dorothy K. Stein regards Lovelace's notes as "more a reflection of the mathematical uncertainty of the author, the political purposes of the inventor, and, above all, of the social and cultural context in which it was written, than a blueprint for a scientific development." Doron Swade, a specialist in the history of computing known for his work on Babbage, discussed Lovelace during a lecture on Babbage's analytical engine. He explained that Ada was only a "promising beginner" rather than a genius in mathematics, that she began studying basic concepts of mathematics five years after Babbage conceived the analytical engine so she could not have made important contributions to it, and that she only published the first computer program instead of actually writing it. But he agrees that Ada was the only person to see the potential of the analytical engine as a machine capable of expressing entities other than quantities. In his self-published book, Idea Makers, Stephen Wolfram defends Lovelace's contributions. While acknowledging that Babbage wrote several unpublished algorithms for the Analytical Engine prior to Lovelace's notes, Wolfram argues that "there's nothing as sophisticated—or as clean—as Ada's computation of the Bernoulli numbers.
Babbage certainly helped and commented on Ada's work, but she was definitely the driver of it." Wolfram then suggests that Lovelace's main achievement was to distill from Babbage's correspondence "a clear exposition of the abstract operation of the machine—something which Babbage never did." In popular culture 1810s Lord Byron wrote the poem "Fare Thee Well" to his wife Lady Byron in 1816, following their separation after the birth of Ada Lovelace. In the poem he writes:

And when thou would'st solace gather—
When our child's first accents flow—
Wilt thou teach her to say "Father!"
Though his care she must forego?
When her little hands shall press thee—
When her lip to thine is pressed—
Think of him whose prayer shall bless thee—
Think of him thy love had blessed!
Should her lineaments resemble
Those thou never more may'st see,
Then thy heart will softly tremble
With a pulse yet true to me.

1970s Lovelace is portrayed in Romulus Linney's 1977 play Childe Byron. 1990s In the 1990 steampunk novel The Difference Engine by William Gibson and Bruce Sterling, Lovelace delivers a lecture on the "punched cards" programme which proves Gödel's incompleteness theorems decades before their actual discovery. In the 1997 film Conceiving Ada, a computer scientist obsessed with Ada finds a way of communicating with her in the past by means of "undying information waves". In Tom Stoppard's 1993 play Arcadia, the precocious teenage genius Thomasina Coverly—a character "apparently based" on Ada Lovelace (the play also involves Lord Byron)—comes to understand chaos theory, and theorises the second law of thermodynamics, before either is officially recognised. 2000s Lovelace features in John Crowley's 2005 novel, Lord Byron's Novel: The Evening Land, as an unseen character whose personality is forcefully depicted in her annotations and anti-heroic efforts to archive her father's lost novel.
2010s The 2015 play Ada and the Engine by Lauren Gunderson portrays Lovelace and Charles Babbage in unrequited love, and it imagines a post-death meeting between Lovelace and her father. Lovelace and Babbage are the main characters in Sydney Padua's webcomic and graphic novel The Thrilling Adventures of Lovelace and Babbage. The comic features extensive footnotes on the history of Ada Lovelace, and many lines of dialogue are drawn from actual correspondence. Lovelace and Mary Shelley as teenagers are the central characters in Jordan Stratford's steampunk series, The Wollstonecraft Detective Agency. Lovelace, identified as Ada Augusta Byron, is portrayed by Lily Lesser in the second season of The Frankenstein Chronicles. She is employed as an "analyst" to provide the workings of a life-sized humanoid automaton. The brass workings of the machine are reminiscent of Babbage's analytical engine. Her employment is described as keeping her occupied until she returns to her studies in advanced mathematics. Lovelace and Babbage appear as characters in the second season of the ITV series Victoria (2017). Emerald Fennell portrays Lovelace in the episode, "The Green-Eyed Monster." The Cardano cryptocurrency platform, which was launched in 2017, uses Ada as the name for their cryptocurrency and Lovelace as the smallest sub-unit of an Ada. "Lovelace" is the name given to the operating system designed by the character Cameron Howe in Halt and Catch Fire. Lovelace is a primary character in the 2019 Big Finish Doctor Who audio play The Enchantress of Numbers, starring Tom Baker as the Fourth Doctor and Jane Slavin as his current companion, WPC Ann Kelso. Lovelace is played by Finty Williams. In 2019, Lovelace is a featured character in the play STEM FEMMES by Philadelphia theater company Applied Mechanics. 2020s Lovelace features as a character in "Spyfall, Part 2", the second episode of Doctor Who, series 12, which first aired on BBC One on 5 January 2020. 
The character was portrayed by Sylvie Briggs, alongside characterisations of Charles Babbage and Noor Inayat Khan. In 2021, Nvidia named their upcoming GPU architecture (to be released in 2022), "Ada Lovelace", after her. Commemoration The computer language Ada, created on behalf of the United States Department of Defense, was named after Lovelace. The reference manual for the language was approved on 10 December 1980 and the Department of Defense Military Standard for the language, MIL-STD-1815, was given the number of the year of her birth. In 1981, the Association for Women in Computing inaugurated its Ada Lovelace Award. Since 1998, the British Computer Society (BCS) has awarded the Lovelace Medal, and in 2008 initiated an annual competition for women students. BCSWomen sponsors the Lovelace Colloquium, an annual conference for women undergraduates. Ada College is a further-education college in Tottenham Hale, London, focused on digital skills. Ada Lovelace Day is an annual event celebrated on the second Tuesday of October, which began in 2009. Its goal is to "... raise the profile of women in science, technology, engineering, and maths," and to "create new role models for girls and women" in these fields. Events have included Wikipedia edit-a-thons with the aim of improving the representation of women on Wikipedia in terms of articles and editors to reduce unintended gender bias on Wikipedia. The Ada Initiative was a non-profit organisation dedicated to increasing the involvement of women in the free culture and open source movements. The Engineering in Computer Science and Telecommunications College building in Zaragoza University is called the Ada Byron Building. The computer centre in the village of Porlock, near where Lovelace lived, is named after her. Ada Lovelace House is a council-owned building in Kirkby-in-Ashfield, Nottinghamshire, near where Lovelace spent her infancy. In 2012, a Google Doodle and blog post honoured her on her birthday. 
In 2013, Ada Developers Academy was founded and named after her. The mission of Ada Developers Academy is to diversify tech by providing women and gender-diverse people with the skills, experience, and community support to become professional software developers and change the face of tech. On 17 September 2013, an episode of Great Lives about Ada Lovelace aired. As of November 2015, all new British passports have included an illustration of Lovelace and Babbage. In 2017, a Google Doodle honoured her with other women on International Women's Day. On 2 February 2018, Satellogic, a high-resolution Earth observation imaging and analytics company, launched a ÑuSat-type micro-satellite named in honour of Ada Lovelace. In March 2018, The New York Times published a belated obituary for Ada Lovelace. On 27 July 2018, Senator Ron Wyden submitted, in the United States Senate, the designation of 9 October 2018 as National Ada Lovelace Day: "To honor the life and contributions of Ada Lovelace as a leading woman in science and mathematics". The resolution (S.Res.592) was considered and agreed to without amendment, with a preamble, by unanimous consent. In November 2020 it was announced that Trinity College Dublin, whose library had previously held forty busts, all of them of men, was commissioning four new busts of women, one of whom was to be Lovelace. Bicentenary The bicentenary of Ada Lovelace's birth was celebrated with a number of events, including: The Ada Lovelace Bicentenary Lectures on Computability, Israel Institute for Advanced Studies, 20 December 2015 – 31 January 2016. Ada Lovelace Symposium, University of Oxford, 13–14 October 2015. Ada.Ada.Ada, a one-woman show about the life and work of Ada Lovelace (using an LED dress), premiered at the Edinburgh International Science Festival on 11 April 2015, and continues to tour internationally, promoting diversity in STEM at technology conferences, businesses, and government and educational organisations. 
Special exhibitions were displayed by the Science Museum in London, England, and the Weston Library (part of the Bodleian Library) in Oxford, England. Publications Lovelace, Ada King. Ada, the Enchantress of Numbers: A Selection from the Letters of Lord Byron's Daughter and her Description of the First Computer. Mill Valley, CA: Strawberry Press, 1992. Publication history Six copies of the 1843 first edition of Sketch of the Analytical Engine with Ada Lovelace's "Notes" have been located. Three are held at Harvard University, one at the University of Oklahoma, and one at the United States Air Force Academy. On 20 July 2018, the sixth copy was sold at auction to an anonymous buyer for £95,000. A digital facsimile of one of the copies in the Harvard University Library is available online. In December 2016, a letter written by Ada Lovelace was forfeited by Martin Shkreli to the New York State Department of Taxation and Finance for unpaid taxes owed by Shkreli. See also Ai-Da (robot) Code: Debugging the Gender Gap List of pioneers in computer science Timeline of women in science Women in computing Women in STEM fields Explanatory notes References General sources Miller, Clair Cain. "Ada Lovelace, 1815–1852," New York Times, 8 March 2018. Further reading Miranda Seymour, In Byron's Wake: The Turbulent Lives of Byron's Wife and Daughter: Annabella Milbanke and Ada Lovelace, Pegasus, 2018, 547 pp. Christopher Hollings, Ursula Martin, and Adrian Rice, Ada Lovelace: The Making of a Computer Scientist, Bodleian Library, 2018, 114 pp. Jenny Uglow, "Stepping Out of Byron's Shadow", The New York Review of Books, vol. LXV, no. 18 (22 November 2018), pp. 30–32. Jennifer Chiaverini, Enchantress of Numbers, Dutton, 2017, 426 pp. 
External links "Ada's Army gets set to rewrite history at Inspirefest 2018" by Luke Maxwell, 4 August 2018 "Untangling the Tale of Ada Lovelace" by Stephen Wolfram, December 2015 1815 births 1852 deaths 19th-century British women scientists 19th-century British writers 19th-century English mathematicians 19th-century English women writers 19th-century British inventors 19th-century English nobility Ada (programming language) British countesses British women computer scientists British women mathematicians Burials in Nottinghamshire Ada Women computer scientists Computer designers Daughters of barons Deaths from cancer in England Deaths from uterine cancer English computer programmers English people of Scottish descent English women poets Lord Byron Mathematicians from London Women of the Victorian era Burials at the Church of St Mary Magdalene, Hucknall
980
https://en.wikipedia.org/wiki/August%20Derleth
August Derleth
August William Derleth (February 24, 1909 – July 4, 1971) was an American writer and anthologist. Though best remembered as the first book publisher of the writings of H. P. Lovecraft, for his own contributions to the Cthulhu Mythos and the cosmic horror genre, and for his founding of the publisher Arkham House (which did much to bring into hardcover print in the US supernatural fiction that had previously been readily available only in the UK), Derleth was a leading American regional writer of his day, as well as prolific in several other genres, including historical fiction, poetry, detective fiction, science fiction, and biography. A 1938 Guggenheim Fellow, Derleth considered his most serious work to be the ambitious Sac Prairie Saga, a series of fiction, historical fiction, poetry, and non-fiction naturalist works designed to memorialize life in the Wisconsin he knew. Derleth can also be considered a pioneering naturalist and conservationist in his writing. Life The son of William Julius Derleth and Rose Louise Volk, Derleth grew up in Sauk City, Wisconsin. He was educated in local parochial and public high schools. Derleth wrote his first fiction at age 13. He was most interested in reading, making three trips to the library a week. He would save his money to buy books (his personal library exceeded 12,000 volumes later in life). Some of his biggest influences were Ralph Waldo Emerson's essays, Walt Whitman, H. L. Mencken's The American Mercury, Samuel Johnson's The History of Rasselas, Prince of Abissinia, Alexandre Dumas, Edgar Allan Poe, Walter Scott, and Henry David Thoreau's Walden. Three years and forty rejected stories later, according to anthologist Jim Stephens, he sold his first story, "Bat's Belfry", to Weird Tales magazine. Derleth wrote throughout his four years at the University of Wisconsin, where he received a B.A. in 1930. During this time he also served briefly as associate editor of Fawcett Publications' Minneapolis-based Mystic Magazine. 
Returning to Sauk City in the summer of 1931, Derleth worked in a local canning factory and collaborated with childhood friend Mark Schorer (later chairman of the University of California, Berkeley English Department). They rented a cabin, writing Gothic and other horror stories and selling them to Weird Tales magazine. Derleth won a place on the O'Brien Roll of Honor for Five Alone, collected in Place of Hawks but first published in Pagany magazine. As a result of his early work on the Sac Prairie Saga, Derleth was awarded the prestigious Guggenheim Fellowship; his sponsors were Helen C. White, Nobel Prize-winning novelist Sinclair Lewis, and poet Edgar Lee Masters of Spoon River Anthology fame. In the mid-1930s, Derleth organized a Ranger's Club for young people, served as clerk and president of the local school board, served as a parole officer, and organized a local men's club and a parent-teacher association. He also lectured in American regional literature at the University of Wisconsin and was a contributing editor of Outdoors Magazine. With longtime friend Donald Wandrei, Derleth founded Arkham House in 1939. Its initial objective was to publish the works of H. P. Lovecraft, with whom Derleth had corresponded since his teenage years. At the same time, he began teaching a course in American regional literature at the University of Wisconsin. In 1941, he became literary editor of The Capital Times newspaper in Madison, a post he held until his resignation in 1960. His hobbies included fencing, swimming, chess, philately, and comic strips (Derleth reportedly used the funding from his Guggenheim Fellowship to bind his comic book collection, most recently valued in the millions of dollars, rather than to travel abroad as the award intended). Derleth's true avocation, however, was hiking the terrain of his native Wisconsin, observing and recording nature with an expert eye. 
Derleth once wrote of his writing methods, "I write very swiftly, from 750,000 to a million words yearly, very little of it pulp material." In 1948, he was elected president of the Associated Fantasy Publishers at the 6th World Science Fiction Convention in Toronto. He was married April 6, 1953, to Sandra Evelyn Winters. They divorced six years later. Derleth retained custody of the couple's two children, April Rose and Walden William. April earned a Bachelor of Arts degree in English from the University of Wisconsin-Madison in 1977. She became majority stockholder, President, and CEO of Arkham House in 1994. She remained in that capacity until her death. She was known in the community as a naturalist and humanitarian. April died on March 21, 2011. In 1960, Derleth began editing and publishing a magazine called Hawk and Whippoorwill, dedicated to poems of man and nature. Derleth died of a heart attack on July 4, 1971, and is buried in St. Aloysius Cemetery in Sauk City. The U.S. 12 bridge over the Wisconsin River is named in his honor. Derleth was Roman Catholic. Career Derleth wrote more than 150 short stories and more than 100 books during his lifetime. The Sac Prairie Saga Derleth wrote an expansive series of novels, short stories, journals, poems, and other works about Sac Prairie (whose prototype is Sauk City). Derleth intended this series to comprise up to 50 novels telling the projected life-story of the region from the 19th century onwards, with analogies to Balzac's Human Comedy and Proust's Remembrance of Things Past. This, and other early work by Derleth, made him a well-known figure among the regional literary figures of his time: early Pulitzer Prize winners Hamlin Garland and Zona Gale, as well as Sinclair Lewis, the last both an admirer and critic of Derleth. As Edward Wagenknecht wrote in Cavalcade of the American Novel, "What Mr. Derleth has that is lacking...in modern novelists generally, is a country. He belongs. 
He writes of a land and a people that are bone of his bone and flesh of his flesh. In his fictional world, there is a unity much deeper and more fundamental than anything that can be conferred by an ideology. It is clear, too, that he did not get the best, and most fictionally useful, part of his background material from research in the library; like Scott, in his Border novels, he gives, rather, the impression of having drunk it in with his mother's milk." Jim Stephens, editor of An August Derleth Reader (1992), argues: "what Derleth accomplished...was to gather a Wisconsin mythos which gave respect to the ancient fundament of our contemporary life." The author inaugurated the Sac Prairie Saga with four novellas comprising Place of Hawks, published by Loring & Mussey in 1935. At publication, The Detroit News wrote: "Certainly with this book Mr. Derleth may be added to the American writers of distinction." Derleth's first novel, Still is the Summer Night, was published two years later by the famous Charles Scribner's Sons editor Maxwell Perkins, and was the second in his Sac Prairie Saga. Village Year, the first in a series of journals – meditations on nature, Midwestern village American life, and more – was published in 1941 to praise from The New York Times Book Review: "A book of instant sensitive responsiveness...recreates its scene with acuteness and beauty, and makes an unusual contribution to the Americana of the present day." The New York Herald Tribune observed that "Derleth...deepens the value of his village setting by presenting in full the enduring natural background; with the people projected against this, the writing comes to have the quality of an old Flemish picture, humanity lively and amusing and loveable in the foreground and nature magnificent beyond." James Grey, writing in the St. Louis Dispatch concluded, "Derleth has achieved a kind of prose equivalent of the Spoon River Anthology." 
In the same year, Evening in Spring was published by Charles Scribner's Sons. This work Derleth considered among his finest. What The Milwaukee Journal called "this beautiful little love story" is an autobiographical novel of first love beset by small-town religious bigotry. The work received critical praise: The New Yorker considered it a story told "with tenderness and charm", while the Chicago Tribune concluded: "It's as though he turned back the pages of an old diary and told, with rekindled emotion, of the pangs of pain and the sharp, clear sweetness of a boy's first love." Helen Constance White wrote in The Capital Times that it was "...the best articulated, the most fully disciplined of his stories." These were followed in 1943 by Shadow of Night, a Scribner's novel of which The Chicago Sun wrote: "Structurally it has the perfection of a carved jewel...A psychological novel of the first order, and an adventure tale that is unique and inspiriting." In November 1945, however, Derleth's work was attacked by his one-time admirer and mentor, Sinclair Lewis. Writing in Esquire, Lewis observed, "It is a proof of Mr. Derleth's merit that he makes one want to make the journey and see his particular Avalon: The Wisconsin River shining among its islands, and the castles of Baron Pierneau and Hercules Dousman. He is a champion and a justification of regionalism. Yet he is also a burly, bounding, bustling, self-confident, opinionated, and highly-sweatered young man with faults so grievous that a melancholy perusal of them may be of more value to apprentices than a study of his serious virtues. If he could ever be persuaded that he isn't half as good as he thinks he is, if he would learn the art of sitting still and using a blue pencil, he might become twice as good as he thinks he is – which would about rank him with Homer." 
Derleth good-humoredly reprinted the criticism, along with a photograph of himself sans sweater, on the back cover of his 1948 country journal, Village Daybook. A lighter side to the Sac Prairie Saga is a series of quasi-autobiographical short stories known as the "Gus Elker Stories", amusing tales of country life that Peter Ruber, Derleth's last editor, said were "...models of construction and...fused with some of the most memorable characters in American literature." Most were written between 1934 and the late 1940s, though the last, "Tail of the Dog", was published in 1959 and won the Scholastic Magazine short story award for the year. The series was collected and republished in Country Matters in 1996. Walden West, published in 1961, is considered by many Derleth's finest work. This prose meditation is built out of the same fundamental material as the series of Sac Prairie journals, but is organized around three themes: "the persistence of memory...the sounds and odors of the country...and Thoreau's observation that the 'mass of men lead lives of quiet desperation.'" The book is a blend of nature writing, philosophic musings, and careful observation of the people and place of Sac Prairie. Of this work, George Vukelich, author of "North Country Notebook", writes: "Derleth's Walden West is...the equal of Sherwood Anderson's Winesburg, Ohio, Thornton Wilder's Our Town, and Edgar Lee Masters' Spoon River Anthology." This was followed eight years later by Return to Walden West, a work of similar quality, but with a more noticeable environmentalist edge to the writing, notes critic Norbert Blei. A close literary relative of the Sac Prairie Saga was Derleth's Wisconsin Saga, which comprises several historical novels. Detective and mystery fiction Detective fiction represented another substantial body of Derleth's work. Most notable among this work was a series of 70 stories in affectionate pastiche of Sherlock Holmes, whose creator, Sir Arthur Conan Doyle, he admired greatly. 
These included one published novel as well (Mr. Fairlie's Final Journey). The series features a (Sherlock Holmes-styled) British detective named Solar Pons, of 7B Praed Street in London. The series was greatly admired by such notable writers and critics of mystery and detective fiction as Ellery Queen (Frederic Dannay), Anthony Boucher, Vincent Starrett and Howard Haycraft. In his 1944 volume The Misadventures of Sherlock Holmes, Ellery Queen wrote of Derleth's The Norcross Riddle, an early Pons story: "How many budding authors, not even old enough to vote, could have captured the spirit and atmosphere with as much fidelity?" Queen adds, "...and his choice of the euphonic Solar Pons is an appealing addition to the fascinating lore of Sherlockian nomenclature." Vincent Starrett, in his foreword to the 1964 edition of The Casebook of Solar Pons, wrote that the series is "...as sparkling a galaxy of Sherlockian pastiches as we have had since the canonical entertainments came to an end." Despite close similarities to Doyle's creation, Pons lived in the post-World War I era, in the decades of the 1920s and 1930s. Though Derleth never wrote a Pons novel to equal The Hound of the Baskervilles, editor Peter Ruber wrote: "...Derleth produced more than a few Solar Pons stories almost as good as Sir Arthur's, and many that had better plot construction." Although these stories were a form of diversion for Derleth, Ruber, who edited The Original Text Solar Pons Omnibus Edition (2000), argued: "Because the stories were generally of such high quality, they ought to be assessed on their own merits as a unique contribution in the annals of mystery fiction, rather than suffering comparison as one of the endless imitators of Sherlock Holmes." Some of the stories were self-published, through a new imprint called "Mycroft & Moran", an appellation of humorous significance to Holmesian scholars. 
For approximately a decade, an active supporting group was the Praed Street Irregulars, patterned after the Baker Street Irregulars. In 1946, Conan Doyle's two sons made some attempts to force Derleth to cease publishing the Solar Pons series, but the efforts were unsuccessful and eventually withdrawn. Derleth's mystery and detective fiction also included a series of works set in Sac Prairie and featuring Judge Peck as the central character. Youth and children's fiction Derleth wrote many and varied children's works, including biographies meant to introduce younger readers to explorer Jacques Marquette, as well as Ralph Waldo Emerson and Henry David Thoreau. Arguably most important among his works for younger readers, however, is the Steve and Sim Mystery Series, also known as the Mill Creek Irregulars series. The ten-volume series, published between 1958 and 1970, is set in Sac Prairie of the 1920s and can thus be considered in its own right a part of the Sac Prairie Saga, as well as an extension of Derleth's body of mystery fiction. Robert Hood, writing in the New York Times said: "Steve and Sim, the major characters, are twentieth-century cousins of Huck Finn and Tom Sawyer; Derleth's minor characters, little gems of comic drawing." The first novel in the series, The Moon Tenders, does, in fact, involve a rafting adventure down the Wisconsin River, which led regional writer Jesse Stuart to suggest the novel was one that "older people might read to recapture the spirit and dream of youth." The connection to the Sac Prairie Saga was noted by the Chicago Tribune: "Once again a small midwest community in 1920s is depicted with perception, skill, and dry humor." Arkham House and the "Cthulhu Mythos" Derleth was a correspondent and friend of H. P. Lovecraft – when Lovecraft wrote about "le Comte d'Erlette" in his fiction, it was in homage to Derleth. 
Derleth invented the term "Cthulhu Mythos" to describe the fictional universe depicted in the series of stories shared by Lovecraft and other writers in his circle. When Lovecraft died in 1937, Derleth and Donald Wandrei assembled a collection of Lovecraft's stories and tried to get them published. Existing publishers showed little interest, so Derleth and Wandrei founded Arkham House in 1939 for that purpose. The name of the company derived from Lovecraft's fictional town of Arkham, Massachusetts, which features in many of his stories. In 1939, Arkham House published The Outsider and Others, a huge collection that contained most of Lovecraft's known short stories. Derleth and Wandrei soon expanded Arkham House and began a regular publishing schedule after its second book, Someone in the Dark, a collection of some of Derleth's own horror stories, was published in 1941. Following Lovecraft's death, Derleth wrote a number of stories based on fragments and notes left by Lovecraft. These were published in Weird Tales and later in book form, under the byline "H. P. Lovecraft and August Derleth", with Derleth calling himself a "posthumous collaborator." This practice has raised objections in some quarters that Derleth simply used Lovecraft's name to market what was essentially his own fiction; S. T. Joshi refers to the "posthumous collaborations" as marking the beginning of "perhaps the most disreputable phase of Derleth's activities". Dirk W. Mosig, S. T. Joshi, and Richard L. Tierney were dissatisfied with Derleth's invention of the term Cthulhu Mythos (Lovecraft himself used Yog-Sothothery) and his presentation of Lovecraft's fiction as having an overall pattern reflecting Derleth's own Christian world view, which they contrast with Lovecraft's depiction of an amoral universe. However, Robert M. 
Price points out that while Derleth's tales are distinct from Lovecraft's in their use of hope and their depiction of a struggle between good and evil, nevertheless the basis of Derleth's systematization is found in Lovecraft. He also suggests that the differences can be overstated: Derleth was more optimistic than Lovecraft in his conception of the Mythos, but we are dealing with a difference more of degree than kind. There are indeed tales wherein Derleth's protagonists get off scot-free (like "The Shadow in the Attic", "Witches' Hollow", or "The Shuttered Room"), but often the hero is doomed (e.g., "The House in the Valley", "The Peabody Heritage", "Something in Wood"), as in Lovecraft. And it must be remembered that an occasional Lovecraftian hero does manage to overcome the odds, e.g., in "The Horror in the Museum", "The Shunned House", and "The Case of Charles Dexter Ward". Derleth also treated Lovecraft's Great Old Ones as representatives of elemental forces, creating new fictional entities to flesh out this framework. Such debates aside, Derleth's founding of Arkham House and his successful effort to rescue Lovecraft from literary oblivion are widely acknowledged by practitioners in the horror field as seminal events. For instance, Ramsey Campbell has acknowledged Derleth's encouragement and guidance during the early part of his own writing career, and Kirby McCauley has cited Derleth and Arkham House as an inspiration for his own anthology Dark Forces. Arkham House and Derleth published Dark Carnival, the first book by Ray Bradbury, as well. Brian Lumley cites the importance of Derleth to his own Lovecraftian work, and contends in a 2009 introduction to Derleth's work that he was "...one of the first, finest, and most discerning editors and publishers of macabre fiction." Important as was Derleth's work to rescue H.P. 
Lovecraft from literary obscurity at the time of Lovecraft's death, Derleth also built a body of horror and spectral fiction of his own, which is still frequently anthologized. The best of this work, recently reprinted in four volumes of short stories – most of which were originally published in Weird Tales – illustrates Derleth's original abilities in the genre. While Derleth considered his work in this genre less important than his most serious literary efforts, the compilers of these four anthologies, including Ramsey Campbell, note that the stories still resonate after more than 50 years. In 2009, The Library of America selected Derleth's story "The Panelled Room" for inclusion in its two-century retrospective of American fantastic tales. Other works Derleth also wrote many historical novels, as part of both the Sac Prairie Saga and the Wisconsin Saga. He also wrote history; arguably the most notable of these works was The Wisconsin: River of a Thousand Isles, published in 1942. The work was one in a series entitled "The Rivers of America", conceived by writer Constance Lindsay Skinner in the Great Depression as a series that would connect Americans to their heritage through the history of the great rivers of the nation. Skinner wanted the series to be written by artists, not academicians. Derleth, while not a trained historian, was, according to former Wisconsin state historian William F. Thompson, "...a very competent regional historian who based his historical writing upon research in the primary documents and who regularly sought the help of professionals... ." In the foreword to the 1985 reissue of the work by The University of Wisconsin Press, Thompson concluded: "No other writer, of whatever background or training, knew and understood his particular 'corner of the earth' better than August Derleth." Additionally, Derleth wrote a number of volumes of poetry. 
Three of his collections – Rind of Earth (1942), Selected Poems (1944), and The Edge of Night (1945) – were published by the Decker Press, which also printed the work of other Midwestern poets such as Edgar Lee Masters. Derleth was also the author of several biographies of other writers, including Zona Gale, Ralph Waldo Emerson and Henry David Thoreau. He also wrote introductions to several collections of classic early 20th century comics, such as Buster Brown, Little Nemo in Slumberland, and Katzenjammer Kids, as well as a book of children's poetry entitled A Boy's Way, and the foreword to Tales from an Indian Lodge by Phebe Jewell Nichols. Derleth also wrote under the noms de plume Stephen Grendon, Kenyon Holmes and Tally Mason. Derleth's papers were donated to the Wisconsin Historical Society in Madison. Bibliography Awards O'Brien Roll of Honour for short story, 1933 Guggenheim fellow, 1938 See also August Derleth Award List of authors of new Sherlock Holmes stories List of horror fiction authors List of people from Wisconsin Mark Schorer Sherlock Holmes pastiches Notes References Meudt, Edna. 'August Derleth: "A simple, honorable man",' Wisconsin Academy Review, 19:2 (Summer, 1972) 8–11. Schorer, Mark. "An Appraisal of the Work of August Derleth", The Capital Times, July 9, 1971. Further reading Robert Bloch. "Two Great Editors". Is No 4 (Oct 1971). Reprint in Bloch's Out of My Head. Cambridge MA: NESFA Press, 1986, 71–79. Lin Carter. "A Day in Derleth Country". Is No 4 (Oct 1971). Reprint in Crypt of Cthulhu 1, No 6. John Howard. "The Ghosts of Sauk County". All Hallows 18 (1998); in Howard's Touchstones: Essays on the Fantastic. Staffordshire UK: Alchemy Press, 2014. David E. Schultz and S.T. Joshi (eds). Eccentric, Impractical Devils: The Letters of August Derleth and Clark Ashton Smith. NY: Hippocampus Press, 2020. 
External links The August Derleth Society A biography August Derleth Bibliography Works Online catalog of Derleth's collection at the Wisconsin Historical Society 1909 births 1971 deaths University of Wisconsin–Madison alumni American Catholics American short story writers American mystery writers American speculative fiction editors 20th-century American novelists Cthulhu Mythos writers American horror writers People from Sauk City, Wisconsin Novelists from Wisconsin Science fiction editors Solar Pons Anthologists American male novelists American male short story writers Catholics from Wisconsin 20th-century Roman Catholics Writers from Wisconsin Weird fiction writers 20th-century American male writers
981
https://en.wikipedia.org/wiki/Alps
Alps
The Alps are the highest and most extensive mountain range system that lies entirely in Europe, stretching approximately across eight Alpine countries (from west to east): France, Switzerland, Monaco, Italy, Liechtenstein, Austria, Germany, and Slovenia. The Alpine arch generally extends from Nice on the western Mediterranean to Trieste on the Adriatic and Vienna at the beginning of the Pannonian Basin. The mountains were formed over tens of millions of years as the African and Eurasian tectonic plates collided. Extreme shortening caused by the event resulted in marine sedimentary rocks rising by thrusting and folding into high mountain peaks such as Mont Blanc and the Matterhorn. Mont Blanc spans the French–Italian border, and at is the highest mountain in the Alps. The Alpine region area contains 128 peaks higher than . The altitude and size of the range affect the climate in Europe; in the mountains, precipitation levels vary greatly and climatic conditions consist of distinct zones. Wildlife such as ibex live in the higher peaks to elevations of , and plants such as Edelweiss grow in rocky areas in lower elevations as well as in higher elevations. Evidence of human habitation in the Alps goes back to the Palaeolithic era. A mummified man, determined to be 5,000 years old, was discovered on a glacier at the Austrian–Italian border in 1991. By the 6th century BC, the Celtic La Tène culture was well established. Hannibal famously crossed the Alps with a herd of elephants, and the Romans had settlements in the region. In 1800, Napoleon crossed one of the mountain passes with an army of 40,000. The 18th and 19th centuries saw an influx of naturalists, writers, and artists, in particular, the Romantics, followed by the golden age of alpinism as mountaineers began to ascend the peaks. The Alpine region has a strong cultural identity. 
The traditional culture of farming, cheesemaking, and woodworking still exists in Alpine villages, although the tourist industry began to grow early in the 20th century and expanded greatly after World War II to become the dominant industry by the end of the century. The Winter Olympic Games have been hosted in the Swiss, French, Italian, Austrian and German Alps. At present, the region is home to 14 million people and has 120 million annual visitors. Etymology and toponymy The English word Alps comes from the Latin Alpes. The Latin word Alpes may derive from the adjective albus ("white"), or from the Greek goddess Alphito, whose name is related to alphita, the "white flour"; alphos, a dull white leprosy; and ultimately the Proto-Indo-European word *albʰós. Similarly, the river god Alpheus is also supposed to derive from the Greek alphos and means whitish. In his commentary on the Aeneid of Vergil, the late fourth-century grammarian Maurus Servius Honoratus says that all high mountains are called Alpes by Celts. According to the Oxford English Dictionary, the Latin Alpes might possibly derive from a pre-Indo-European word *alb "hill"; "Albania" is a related derivation. Albania, a name not native to the region known as the country of Albania, has been used as a name for a number of mountainous areas across Europe. In Roman times, "Albania" was a name for the eastern Caucasus, while in the English language "Albania" (or "Albany") was occasionally used as a name for Scotland, although it is more likely derived from the Latin word albus, the color white. In modern languages the term alp, alm, albe or alpe refers to the grazing pastures in the alpine regions below the glaciers, not the peaks. An alp refers to a high mountain pasture, typically near or above the tree line, where cows and other livestock are taken to be grazed during the summer months and where huts and hay barns can be found, sometimes constituting tiny hamlets. 
The term "the Alps", as a reference to the mountain peaks rather than the pastures, is therefore a misnomer. The term for the mountain peaks varies by nation and language: words such as Horn, Kogel, Kopf, Gipfel, Spitze, Stock, and Berg are used in German-speaking regions; Mont, Pic, Tête, Pointe, Dent, Roche, and Aiguille in French-speaking regions; and Monte, Picco, Corno, Punta, Pizzo, or Cima in Italian-speaking regions.

Geography

The Alps are a crescent-shaped geographic feature of central Europe, ranging in an arc from east to west. The mean height of the mountain peaks is about 2.5 km. The range stretches from the Mediterranean Sea north above the Po basin, extending through France from Grenoble, and stretching eastward through mid and southern Switzerland. The range continues onward toward Vienna, Austria, and east to the Adriatic Sea and Slovenia. To the south it dips into northern Italy and to the north extends to the southern border of Bavaria in Germany. In areas like Chiasso, Switzerland, and the Allgäu in Bavaria, the demarcation between the mountain range and the flatlands is clear; in other places, such as Geneva, the demarcation is less clear. The countries with the greatest Alpine territory are Austria (28.7% of the total area), Italy (27.2%), France (21.4%) and Switzerland (13.2%). The highest portion of the range is divided by the glacial trough of the Rhône valley, from Mont Blanc to the Matterhorn and Monte Rosa on the southern side, and the Bernese Alps on the northern. The peaks in the easterly portion of the range, in Austria and Slovenia, are smaller than those in the central and western portions. The variances in nomenclature in the region spanned by the Alps make classification of the mountains and subregions difficult, but a general classification is that of the Eastern Alps and Western Alps, with the divide between the two occurring in eastern Switzerland, near the Splügen Pass, according to geologist Stefan Schmid.
The highest peaks of the Western Alps and Eastern Alps are Mont Blanc and Piz Bernina, respectively; the second-highest major peaks are Monte Rosa and the Ortler. A series of lower mountain ranges run parallel to the main chain of the Alps, including the French Prealps in France and the Jura Mountains in Switzerland and France. The main chain of the Alps follows the watershed from the Mediterranean Sea to the Wienerwald, passing over many of the highest and most well-known peaks in the Alps. From the Colle di Cadibona to the Col de Tende it runs westwards, before turning to the northwest and then, near the Colle della Maddalena, to the north. Upon reaching the Swiss border, the line of the main chain heads approximately east-northeast, a heading it follows until its end near Vienna. The northeast end of the Alpine arc, directly on the Danube, which flows into the Black Sea, is the Leopoldsberg near Vienna. In contrast, the southeastern part of the Alps ends on the Adriatic Sea in the area around Trieste, towards Duino and Barcola.

Passes

The Alps have been crossed for war and commerce, and by pilgrims, students and tourists. Crossing routes by road, train or foot are known as passes, and usually consist of depressions in the mountains in which a valley leads up from the plains and hilly pre-mountainous zones. In the medieval period hospices were established by religious orders at the summits of many of the main passes. The most important passes are the Col de l'Iseran (the highest), the Col Agnel, the Brenner Pass, the Mont-Cenis, the Great St. Bernard Pass, the Col de Tende, the Gotthard Pass, the Semmering Pass, the Simplon Pass, and the Stelvio Pass. Crossing the Italian–Austrian border, the Brenner Pass separates the Ötztal Alps and Zillertal Alps and has been in use as a trading route since the 14th century.
The lowest of the Alpine passes, the Semmering crosses from Lower Austria to Styria; since the 12th century, when a hospice was built there, it has seen continuous use. A railroad with a tunnel was built along the route of the pass in the mid-19th century. One of the highest passes in the Alps, the Great St. Bernard Pass crosses the Italian–Swiss border east of the Pennine Alps along the flanks of Mont Blanc. The pass was used by Napoleon Bonaparte to cross 40,000 troops in 1800. The Mont Cenis pass has been a major commercial and military route between Western Europe and Italy, crossed by many armies on their way to the Italian peninsula: from Constantine I, Pepin the Short and Charlemagne to Henry IV, Napoléon and, more recently, the German Gebirgsjäger during World War II. The pass has now been supplanted by the Fréjus road tunnel (opened 1980) and rail tunnel (opened 1871). The Saint Gotthard Pass crosses from Central Switzerland to Ticino; in 1882 the Saint Gotthard Railway Tunnel was opened, connecting Lucerne in Switzerland with Milan in Italy. Ninety-eight years later the Gotthard Road Tunnel followed, connecting the A2 motorway from Göschenen on the north side with Airolo on the south side, just like the railway tunnel. On 1 June 2016 the world's longest railway tunnel, the Gotthard Base Tunnel, was opened; it connects Erstfeld in the canton of Uri with Bodio in the canton of Ticino by two single-track tubes. It is the first tunnel that traverses the Alps on a flat route. Since 11 December 2016 it has been part of the regular railway timetable, with hourly service between Basel/Lucerne/Zurich and Bellinzona/Lugano/Milan. The highest pass in the Alps is the Col de l'Iseran in Savoy (France), followed by the Stelvio Pass in northern Italy, whose road was built in the 1820s.
Highest mountains

The Union Internationale des Associations d'Alpinisme (UIAA) has defined a list of 82 "official" Alpine summits that reach at least 4,000 metres. The list includes not only mountains, but also subpeaks with little prominence that are considered important mountaineering objectives. Of these, 29 are "four-thousanders" with significant prominence. While Mont Blanc was first climbed in 1786 and the Jungfrau in 1811, most of the Alpine four-thousanders were climbed during the second half of the 19th century, notably Piz Bernina (1850), the Dom (1858), the Grand Combin (1859), the Weisshorn (1861) and the Barre des Écrins (1864); the ascent of the Matterhorn in 1865 marked the end of the golden age of alpinism. Karl Blodig (1859–1956) was among the first to successfully climb all the major 4,000 m peaks; he completed his series of ascents in 1911. Many of the big Alpine three-thousanders were climbed in the early 19th century, notably the Grossglockner (1800) and the Ortler (1804), although some were climbed only much later, such as Mont Pelvoux (1848), Monte Viso (1861) and La Meije (1877). The first British ascent of Mont Blanc was in 1788; the first female ascent in 1819. By the mid-1850s Swiss mountaineers had ascended most of the peaks and were eagerly sought as mountain guides. Edward Whymper reached the top of the Matterhorn in 1865 (after seven attempts), and in 1938 the last of the six great north faces of the Alps was climbed with the first ascent of the Eiger Nordwand (north face of the Eiger).

Geology and orogeny

Important geological concepts were established as naturalists began studying the rock formations of the Alps in the 18th century. In the mid-19th century the now-defunct theory of geosynclines was used to explain the presence of "folded" mountain chains, but by the mid-20th century the theory of plate tectonics had become widely accepted.
The formation of the Alps (the Alpine orogeny) was an episodic process that began about 300 million years ago. In the Paleozoic Era the Pangaean supercontinent consisted of a single tectonic plate; it broke into separate plates during the Mesozoic Era, and the Tethys Sea developed between Laurasia and Gondwana during the Jurassic Period. The Tethys was later squeezed between colliding plates, causing the formation of mountain ranges called the Alpide belt, stretching from Gibraltar through the Himalayas to Indonesia, a process that began at the end of the Mesozoic and continues into the present. The formation of the Alps was a segment of this orogenic process, caused by the collision between the African and the Eurasian plates that began in the late Cretaceous Period. Under extreme compressive stresses and pressure, marine sedimentary rocks were uplifted, creating characteristic recumbent folds, or nappes, and thrust faults. As the rising peaks underwent erosion, a layer of marine flysch sediments was deposited in the foreland basin, and these sediments became involved in younger nappes as the orogeny progressed. Coarse sediments from the continual uplift and erosion were later deposited in foreland areas as molasse. The molasse regions in Switzerland and Bavaria were well developed and saw further upthrusting of flysch. The Alpine orogeny occurred in ongoing cycles through to the Paleogene, producing differences in nappe structures; a late-stage orogeny caused the development of the Jura Mountains. A series of tectonic events in the Triassic, Jurassic and Cretaceous periods created distinct paleogeographic regions. The Alps are subdivided by different lithology (rock composition) and nappe structure according to the orogenic events that affected them.
The geological subdivision differentiates the Western, Eastern and Southern Alps: the Helveticum in the north, the Penninicum and Austroalpine system in the centre and, south of the Periadriatic Seam, the Southern Alpine system. According to geologist Stefan Schmid, because the Western Alps underwent a metamorphic event in the Cenozoic Era while the Austroalpine peaks underwent an event in the Cretaceous Period, the two areas show distinct differences in nappe formations. Flysch deposits in the Southern Alps of Lombardy probably occurred in the Cretaceous or later. Peaks in France, Italy and Switzerland lie in the "Houillière zone", which consists of basement rock with sediments from the Mesozoic Era. High "massifs" with external sedimentary cover are more common in the Western Alps and were affected by Neogene Period thin-skinned thrusting, whereas the Eastern Alps have comparatively few high-peaked massifs. Similarly, the peaks in eastern Switzerland extending to western Austria (the Helvetic nappes) consist of thin-skinned sedimentary folding that detached from former basement rock. In simple terms, the structure of the Alps consists of layers of rock of European, African and oceanic (Tethyan) origin. The bottom nappe structure is of continental European origin, above which are stacked marine sediment nappes, topped off by nappes derived from the African plate. The Matterhorn is an example of the ongoing orogeny and shows evidence of great folding. The tip of the mountain consists of gneisses from the African plate; the base of the peak, below the glaciated area, consists of European basement rock. The sequence of Tethyan marine sediments and their oceanic basement is sandwiched between the rock derived from the African and European plates. The core regions of the Alpine orogenic belt have been folded and fractured in such a manner that erosion created the characteristic steep vertical peaks of the Swiss Alps that rise seemingly straight out of the foreland areas.
Peaks such as Mont Blanc, the Matterhorn, and high peaks in the Pennine Alps, the Briançonnais, and Hohe Tauern consist of layers of rock from the various orogenies, including exposures of basement rock. Due to the ever-present geologic instability, earthquakes continue in the Alps to this day. Typically, the largest earthquakes in the Alps have been between magnitude 6 and 7 on the Richter scale.

Minerals

The Alps are a source of minerals that have been mined for thousands of years. In the 8th to 6th centuries BC, during the Hallstatt culture, Celtic tribes mined copper; later the Romans mined gold for coins in the Bad Gastein area. Erzberg in Styria furnishes high-quality iron ore for the steel industry. Crystals, such as cinnabar, amethyst, and quartz, are found throughout much of the Alpine region. The cinnabar deposits in Slovenia are a notable source of cinnabar pigments. Alpine crystals have been studied and collected for hundreds of years and began to be classified in the 18th century. Leonhard Euler studied the shapes of crystals, and by the 19th century crystal hunting was common in Alpine regions. David Friedrich Wiser amassed a collection of 8000 crystals that he studied and documented. In the 20th century Robert Parker wrote a well-known work about the rock crystals of the Swiss Alps; in the same period a commission was established to control and standardize the naming of Alpine minerals.

Glaciers

In the Miocene Epoch the mountains underwent severe erosion because of glaciation, which was noted in the mid-19th century by naturalist Louis Agassiz, who presented a paper proclaiming that the Alps were covered in ice at various intervals, a theory he formed when studying rocks near his Neuchâtel home which he believed originated to the west in the Bernese Oberland. Because of his work he came to be known as the "father of the ice-age concept", although other naturalists before him had put forth similar ideas.
Agassiz studied glacier movement in the 1840s at the Unteraar Glacier, where he found that the glacier moved more rapidly in the middle than at the edges. His work was continued by other scientists, and now a permanent laboratory exists inside a glacier under the Jungfraujoch, devoted exclusively to the study of Alpine glaciers. Glaciers pick up rocks and sediment as they flow, causing erosion and the formation of valleys over time. The Inn valley is an example of a valley carved by glaciers during the ice ages, with a typical terraced structure caused by erosion: eroded rocks from the most recent ice age lie at the bottom of the valley, while the top of the valley consists of erosion from earlier ice ages. Glacial valleys have characteristically steep walls (reliefs); valleys with lower reliefs and talus slopes are remnants of glacial troughs or previously infilled valleys. Moraines, piles of rock picked up during the movement of the glacier, accumulate at the edges, centre and terminus of glaciers. Alpine glaciers can be straight rivers of ice, long sweeping rivers, spread in a fan-like shape (Piedmont glaciers), or curtains of ice that hang from the vertical slopes of the mountain peaks. The stress of the movement causes the ice to break and crack loudly, perhaps explaining why the mountains were believed to be home to dragons in the medieval period. The cracking creates unpredictable and dangerous crevasses, often invisible under new snowfall, which pose the greatest danger to mountaineers. Glaciers end in ice caves (the Rhône Glacier), by trailing into a lake or river, or by shedding snowmelt on a meadow. Sometimes a piece of glacier will detach or break, resulting in flooding, property damage and loss of life. High levels of precipitation cause the glaciers to descend to permafrost levels in some areas, whereas in other, more arid regions, glaciers remain at higher elevations.
The area of the Alps covered by glaciers shrank considerably between 1876 and 1973, resulting in decreased river run-off levels. Forty percent of the glaciation in Austria has disappeared since 1850, and 30% of that in Switzerland.

Rivers and lakes

The Alps provide lowland Europe with drinking water, irrigation, and hydroelectric power. Although the area is only about 11% of the surface area of Europe, the Alps provide up to 90% of the water to lowland Europe, particularly to arid areas and during the summer months. Cities such as Milan depend on Alpine runoff for 80% of their water. Water from the rivers is used in at least 550 hydroelectric power plants, counting only those producing at least 10 MW of electricity. Major European rivers flow from the Alps, such as the Rhine, the Rhône, the Inn, and the Po, all of which have headwaters in the Alps and flow into neighbouring countries, finally emptying into the North Sea, the Mediterranean Sea, the Adriatic Sea and the Black Sea. Other rivers, such as the Danube, have major tributaries that originate in the Alps. The Rhône is second only to the Nile as a freshwater source for the Mediterranean Sea; the river begins as glacial meltwater, flows into Lake Geneva, and from there to France, where one of its uses is to cool nuclear power plants. The Rhine originates in Switzerland and represents almost 60% of the water exported from the country. Tributary valleys, some of which are complicated, channel water to the main valleys, which can experience flooding during the snowmelt season when rapid runoff causes debris torrents and swollen rivers. The rivers form lakes, such as Lake Geneva, a crescent-shaped lake crossing the Swiss border, with Lausanne on the Swiss side and the town of Evian-les-Bains on the French side. In Germany, the medieval St. Bartholomew's chapel was built on the south side of the Königssee, accessible only by boat or by climbing over the abutting peaks.
Additionally, the Alps have led to the creation of large lakes in Italy. For instance, the Sarca, the primary inflow of Lake Garda, originates in the Italian Alps. The Italian Lakes have been a popular tourist destination since the Roman era for their mild climate. Scientists have been studying the impact of climate change and water use. For example, each year more water is diverted from rivers for snowmaking in the ski resorts, the effect of which is as yet unknown. Furthermore, the decrease of glaciated areas, combined with a succession of winters with lower-than-expected precipitation, may have a future impact on the rivers in the Alps as well as an effect on the water availability to the lowlands.

Climate

The Alps are a classic example of what happens when a temperate area at lower altitude gives way to higher-elevation terrain. Elevations around the world that have cold climates similar to those of the polar regions have been called Alpine. A rise from sea level into the upper regions of the atmosphere causes the temperature to decrease (see adiabatic lapse rate). The effect of mountain chains on prevailing winds is to carry warm air belonging to the lower region into an upper zone, where it expands in volume at the cost of a proportionate loss of temperature, often accompanied by precipitation in the form of snow or rain. The height of the Alps is sufficient to divide the weather patterns in Europe into a wet north and a dry south, because moisture is sucked from the air as it flows over the high peaks. The severe weather in the Alps has been studied since the 18th century, particularly the weather patterns such as the seasonal foehn wind. Numerous weather stations were placed in the mountains early in the 20th century, providing continuous data for climatologists. Some of the valleys are quite arid, such as the Aosta valley in Italy, the Maurienne in France, the Valais in Switzerland, and northern Tyrol.
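The cooling with altitude described above can be sketched numerically. The following is a minimal illustration, not from this article: it assumes a constant environmental lapse rate of roughly 6.5 °C per kilometre (the standard-atmosphere figure; real Alpine conditions vary with season and weather) and a Mont Blanc summit height of about 4,800 m.

```python
# Estimate air temperature at altitude using a constant lapse rate.
# The 6.5 C/km value is the standard environmental lapse rate,
# assumed here for illustration only; actual lapse rates vary.

LAPSE_RATE_C_PER_KM = 6.5

def temperature_at_altitude(sea_level_temp_c: float, altitude_m: float) -> float:
    """Return the estimated air temperature (deg C) at the given altitude."""
    return sea_level_temp_c - LAPSE_RATE_C_PER_KM * altitude_m / 1000.0

# A 20 C day at sea level corresponds to roughly -11 C
# near the summit of Mont Blanc (about 4,800 m).
print(round(temperature_at_altitude(20.0, 4800), 1))  # -11.2
```

This linear estimate is what makes a single mountain range span several climatic zones: a few kilometres of elevation reproduce the temperature difference of thousands of kilometres of latitude.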
The areas that are not arid and receive high precipitation experience periodic flooding from rapid snowmelt and runoff. Mean precipitation in the Alps varies greatly from place to place, with the higher levels occurring at high altitudes. At high altitudes, snowfall begins in November and accumulates through to April or May, when the melt begins. Snow lines vary by location; above them the snow is permanent and the temperatures hover around the freezing point even during July and August. High-water levels in streams and rivers peak in June and July, when the snow is still melting at the higher altitudes. The Alps are split into five climatic zones, each with different vegetation; the climate, plant life and animal life vary among the different sections or zones of the mountains. The lowest zone is the colline zone, whose extent depends on the location. The montane zone lies above it, followed by the sub-Alpine zone. The Alpine zone, extending from the tree line to the snow line, is followed by the glacial zone, which covers the glaciated areas of the mountain. Climatic conditions show variances within the same zones; for example, weather conditions at the head of a mountain valley, extending directly from the peaks, are colder and more severe than those at the mouth of a valley, which tend to be less severe and receive less snowfall. Various models of climate change have been projected into the 22nd century for the Alps, with an expectation that a trend toward increased temperatures will have an effect on snowfall, snowpack, glaciation, and river runoff. Significant changes, of both natural and anthropogenic origin, have already been diagnosed from observations.

Ecology

Flora

Thirteen thousand species of plants have been identified in the Alpine regions. Alpine plants are grouped by habitat and soil type, which can be limestone or non-calcareous.
The habitats range from meadows, bogs, and woodland (deciduous and coniferous) areas to soil-less scree and moraines, and rock faces and ridges. A natural vegetation limit with altitude is given by the presence of the chief deciduous trees: oak, beech, ash and sycamore maple. These do not reach exactly the same elevation, nor are they often found growing together, but their upper limit corresponds accurately enough to the change from a temperate to a colder climate that is further proved by a change in the presence of wild herbaceous vegetation. This limit lies lower on the north side of the Alps than on the southern slopes, where it rises considerably higher. Above the forest line there is often a band of short pine trees (Pinus mugo), which is in turn superseded by Alpenrosen, dwarf shrubs, typically Rhododendron ferrugineum (on acid soils) or Rhododendron hirsutum (on alkaline soils). Although the Alpenrose prefers acidic soil, the plants are found throughout the region. Above the tree line is the area defined as "alpine", where the alpine meadow plants are found that have adapted well to harsh conditions of cold temperatures, aridity, and high altitudes. The alpine area fluctuates greatly because of regional variations in tree lines. Alpine plants such as the Alpine gentian grow in abundance in areas such as the meadows above the Lauterbrunnental. Gentians are named after the Illyrian king Gentius, and 40 species of the early-spring-blooming flower grow in the Alps. Writing about the gentians in Switzerland, D. H. Lawrence described them as "darkening the day-time, torch-like with the smoking blueness of Pluto's gloom." Gentians tend to "appear" repeatedly as the spring blooming takes place at progressively later dates, moving from the lower-altitude meadows to the higher-altitude meadows where the snow melts much later than in the valleys. On the highest rocky ledges the spring flowers bloom in the summer.
At these higher altitudes, the plants tend to form isolated cushions. Several species of flowering plants have been recorded at the highest elevations in the Alps, including Ranunculus glacialis, Androsace alpina and Saxifraga biflora. Eritrichium nanum, commonly known as the King of the Alps, is the most elusive of the alpine flowers, growing on high rocky ridges. Perhaps the best known of the alpine plants is Edelweiss, which grows in rocky areas and can be found at low as well as very high altitudes. The plants that grow at the highest altitudes have adapted to conditions by specialization, such as growing in rock screes that give protection from winds. The extreme and stressful climatic conditions give way to the growth of plant species with secondary metabolites important for medicinal purposes. Origanum vulgare, Prunella vulgaris, Solanum nigrum and Urtica dioica are some of the more useful medicinal species found in the Alps. Human interference has nearly exterminated the trees in many areas, and, except for the beech forests of the Austrian Alps, forests of deciduous trees are rarely found after the extreme deforestation between the 17th and 19th centuries. The vegetation has changed since the second half of the 20th century, as the high alpine meadows cease to be harvested for hay or used for grazing, which eventually might result in a regrowth of forest. In some areas, the modern practice of building ski runs by mechanical means has destroyed the underlying tundra, from which the plant life cannot recover during the non-skiing months, whereas areas that still practise a natural piste style of ski-slope building preserve the fragile underlayers.

Fauna

The Alps are a habitat for 30,000 species of wildlife, ranging from the tiniest snow fleas to brown bears, many of which have adapted to the harsh cold conditions and high altitudes to the point that some survive only in specific micro-climates either directly above or below the snow line.
The largest mammal to live at the highest altitudes is the alpine ibex, which has been sighted near the highest peaks. The ibex live in caves and descend to eat the succulent alpine grasses. Classified as antelopes, chamois are smaller than ibex and are found throughout the Alps, living above the tree line; they are common in the entire alpine range. Areas of the eastern Alps are still home to brown bears. In Switzerland the canton of Bern was named for the bears, but the last bear there is recorded as having been killed in 1792 above Kleine Scheidegg by three hunters from Grindelwald. Many rodents, such as voles, live underground. Marmots live almost exclusively above the tree line. They hibernate in large groups to provide warmth and can be found in all areas of the Alps, in large colonies that they build beneath the alpine pastures. Golden eagles and bearded vultures are the largest birds to be found in the Alps; they nest high on rocky ledges. The most common bird is the alpine chough, which can be found scavenging at climbers' huts or at the Jungfraujoch, a high-altitude tourist destination. Reptiles such as adders and vipers live up to the snow line; because they cannot bear the cold temperatures they hibernate underground and soak up the warmth on rocky ledges. The high-altitude Alpine salamanders have adapted to living above the snow line by giving birth to fully developed young rather than laying eggs. Brown trout can be found in the streams up to the snow line. Molluscs such as the wood snail live up to the snow line; popularly gathered as food, the snails are now protected. A number of species of moths live in the Alps, some of which are believed to have evolved in the same habitat up to 120 million years ago, long before the Alps were created. Blue butterflies can commonly be seen drinking from the snowmelt; some species of blues fly at high altitudes.
The butterflies tend to be large, such as those from the swallowtail genus Parnassius, with a habitat that extends to high altitudes. Twelve species of beetles have habitats up to the snow line; the most beautiful, formerly collected for its colours but now protected, is Rosalia alpina. Spiders, such as the large wolf spider, live above the snow line and can be seen at high elevations. Scorpions can be found in the Italian Alps. Some of the species of moths and insects show evidence of having been indigenous to the area from as long ago as the Alpine orogeny. In Emosson in Valais, Switzerland, dinosaur tracks were found in the 1970s, dating probably from the Triassic Period.

History

Prehistory to Christianity

About 10,000 years ago, when the ice melted after the Würm glaciation, late Palaeolithic communities were established along the lake shores and in cave systems. Evidence of human habitation has been found in caves near Vercors, close to Grenoble; in Austria the Mondsee culture shows evidence of houses built on piles to keep them dry. Standing stones have been found in Alpine areas of France and Italy. The Rock Drawings in Valcamonica are more than 5,000 years old; more than 200,000 drawings and etchings have been identified at the site. In 1991 a neolithic mummy, known as Ötzi the Iceman, was discovered by hikers on the Similaun glacier. His clothing and gear indicate that he lived in an alpine farming community, while the location and manner of his death – an arrowhead was discovered in his shoulder – suggest that he was travelling from one place to another. Analysis of Ötzi's mitochondrial DNA has shown that he belongs to the K1 subclade, which cannot be categorized into any of the three modern branches of that subclade; the new subclade has provisionally been named K1ö for Ötzi. Celtic tribes settled in Switzerland between 1500 and 1000 BC.
The Raetians lived in the eastern regions, while the west was occupied by the Helvetii, and the Allobrogi settled in the Rhône valley and in Savoy. The Ligurians and the Adriatic Veneti lived in north-west Italy and the Triveneto respectively. Among the many substances Celtic tribes mined was salt, in areas such as Salzburg in Austria, where evidence of the Hallstatt culture was found by a mine manager in the 19th century. By the 6th century BC the La Tène culture was well established in the region, and became known for high-quality decorated weapons and jewellery. The Celts were the most widespread of the mountain tribes; their warriors were strong, tall and fair-skinned, and skilled with iron weapons, which gave them an advantage in warfare. During the Second Punic War in 218 BC, the Carthaginian general Hannibal probably crossed the Alps with an army numbering 38,000 infantry, 8,000 cavalry, and 37 war elephants. This was one of the most celebrated achievements of any military force in ancient warfare, although no evidence exists of the actual crossing or the place of crossing. The Romans had built roads along the mountain passes, which continued to be used through the medieval period to cross the mountains, and Roman road markers can still be found on the mountain passes. The Roman expansion brought the defeat of the Allobrogi in 121 BC, and during the Gallic Wars in 58 BC Julius Caesar overcame the Helvetii. The Raetians continued to resist but were eventually conquered when the Romans turned northward to the Danube valley in Austria and defeated the Brigantes. The Romans built settlements in the Alps; towns such as Aosta (named for Augustus) in Italy, Martigny and Lausanne in Switzerland, and Partenkirchen in Bavaria show remains of Roman baths, villas, arenas and temples. Much of the Alpine region was gradually settled by Germanic tribes (Lombards, Alemanni, Bavarii, and Franks) from the 6th to the 13th centuries, mixing with the local Celtic tribes.
Christianity, feudalism, and Napoleonic wars

Christianity was established in the region by the Romans, and monasteries and churches were built in the high regions. The Frankish expansion of the Carolingian Empire and the Bavarian expansion in the eastern Alps introduced feudalism and the building of castles to support the growing number of dukedoms and kingdoms. Castello del Buonconsiglio in Trento, Italy, still has intricate frescoes, excellent examples of Gothic art, in a tower room. In Switzerland, Château de Chillon is preserved as an example of medieval architecture. Much of the medieval period was a time of power struggles between competing dynasties such as the House of Savoy, the Visconti in northern Italy and the House of Habsburg in Austria and Slovenia. In 1291, to protect themselves from incursions by the Habsburgs, three cantons in the middle of Switzerland drew up a charter that is considered to be a declaration of independence from neighbouring kingdoms. After a series of battles fought in the 13th, 14th and 15th centuries, more cantons joined the confederacy, and by the 16th century Switzerland was well established as a separate state. During the Napoleonic Wars in the late 18th and early 19th centuries, Napoleon annexed territory formerly controlled by the Habsburgs and the Savoys. In 1798 he established the Helvetic Republic in Switzerland; two years later he led an army across the St. Bernard pass and conquered almost all of the Alpine regions. After the fall of Napoléon, many Alpine countries built heavy fortifications to prevent any new invasion. Thus, Savoy built a series of fortifications in the Maurienne valley to protect the major Alpine passes, such as the Col du Mont-Cenis, which had been crossed by Charlemagne and his father to defeat the Lombards. The pass later became very popular after the construction of a paved road ordered by Napoléon Bonaparte.
The Barrière de l'Esseillon is a series of forts with heavy batteries, built on a cliff with a commanding view of the valley, a gorge on one side and steep mountains on the other. In the 19th century, the monasteries built in the high Alps during the medieval period to shelter travellers and serve as places of pilgrimage became tourist destinations. The Benedictines had built monasteries in Lucerne, Switzerland, and Oberammergau; the Cistercians in the Tyrol and at Lake Constance; and the Augustinians had abbeys in the Savoy and one in the centre of Interlaken, Switzerland. The Great St Bernard Hospice, built in the 9th or 10th century at the summit of the Great Saint Bernard Pass, has been a shelter for travellers and a place for pilgrims since its inception; by the 19th century it had become a tourist attraction, with notable visitors such as the author Charles Dickens and the mountaineer Edward Whymper. Exploration Radiocarbon-dated charcoal placed around 50,000 years ago was found in the Drachenloch (Dragon's Hole) cave above the village of Vättis in the canton of St. Gallen, showing that the high peaks were visited by prehistoric people. Seven bear skulls from the cave may have been buried by the same prehistoric people. The peaks, however, were mostly ignored except for a few notable examples, and long left to the exclusive attention of the people of the adjoining valleys. The mountain peaks were seen as terrifying, the abode of dragons and demons, to the point that people blindfolded themselves to cross the Alpine passes. The glaciers remained a mystery and many still believed the highest areas to be inhabited by dragons. In 1358 the knight Bonifacio Rotario d'Asti reached the summit of Rocciamelone, where he left a bronze triptych of three crosses, a feat he accomplished with the use of ladders to traverse the ice.
In 1492, Antoine de Ville, chamberlain to Charles VIII of France, climbed Mont Aiguille on the king's orders, an experience he described as "horrifying and terrifying." Leonardo da Vinci was fascinated by variations of light at higher altitudes, and climbed a mountain—scholars are uncertain which one; some believe it may have been Monte Rosa. From his description of a sky "blue like that of a gentian", it is thought that he reached a significantly high altitude. In the 16th century Conrad Gessner became the first naturalist to ascend the mountains in order to study them, writing that in the mountains he found the "theatre of the Lord". In the 18th century four Chamonix men almost made the summit of Mont Blanc but were overcome by altitude sickness and snowblindness. By the 19th century more naturalists began to arrive to explore, study and conquer the high peaks. Two men who first explored the regions of ice and snow were Horace-Bénédict de Saussure (1740–1799) in the Pennine Alps, and the Benedictine monk of Disentis, Placidus a Spescha (1752–1833). Born in Geneva, Saussure was enamoured of the mountains from an early age; he left a law career to become a naturalist and spent many years trekking through the Bernese Oberland, the Savoy, the Piedmont and the Valais, studying the glaciers and the geology, and he became an early proponent of the theory of rock upheaval. In 1787, Saussure was a member of the third ascent of Mont Blanc—today the summits of all the peaks have been climbed. The Romantics and Alpinists Albrecht von Haller's poem Die Alpen (1732) described the mountains as an area of mythical purity. Jean-Jacques Rousseau was another writer who presented the Alps as a place of allure and beauty, in his novel Julie, or the New Heloise (1761). Later, the first wave of Romantics such as Goethe and Turner came to admire the scenery, and Wordsworth visited the area in 1790, writing of his experiences in The Prelude (1799).
Schiller later wrote the play William Tell (1804), which tells the story of the legendary Swiss marksman William Tell as part of the greater Swiss struggle for independence from the Habsburg Empire in the early 14th century. At the end of the Napoleonic Wars, the Alpine countries began to see an influx of poets, artists, and musicians, as visitors came to experience the sublime effects of monumental nature. In 1816, Byron, Percy Bysshe Shelley and his wife Mary Shelley visited Geneva, and all three were inspired by the scenery in their writings. During these visits Shelley wrote the poem "Mont Blanc", Byron wrote "The Prisoner of Chillon" and the dramatic poem Manfred, and Mary Shelley, who found the scenery overwhelming, conceived the idea for the novel Frankenstein in her villa on the shores of Lake Geneva in the midst of a thunderstorm. When Coleridge travelled to Chamonix, he declaimed, in defiance of Shelley, who had signed himself "Atheos" in the guestbook of the Hotel de Londres near Montenvers: "Who would be, who could be an atheist in this valley of wonders?" By the mid-19th century scientists began to arrive en masse to study the geology and ecology of the region. From the beginning of the 19th century, the tourism and mountaineering development of the Alps began. In the early years of the "golden age of alpinism", scientific activities were mixed with sport, for example by the physicist John Tyndall, with the first ascent of the Matterhorn by Edward Whymper being the highlight. In the later years, the "silver age of alpinism", the focus was on mountain sports and climbing. The first president of the Alpine Club, John Ball, is considered the discoverer of the Dolomites, which for decades were the focus of climbers like Paul Grohmann, Michael Innerkofler and Angelo Dibona.
The Nazis Austrian-born Adolf Hitler had a lifelong romantic fascination with the Alps and by the 1930s established a home at the Berghof, in the Obersalzberg region outside Berchtesgaden. His first visit to the area was in 1923, and he maintained a strong tie there until the end of his life. At the end of World War II, the US Army occupied Obersalzberg to prevent Hitler from retreating with the Wehrmacht into the mountains. By 1940 many of the Alpine countries were under the control of the Axis powers. Austria underwent a political coup that made it part of the Third Reich; France had been invaded and Italy was a fascist regime. Switzerland and Liechtenstein were the only countries to avoid an Axis takeover. The Swiss Confederation mobilized its troops; the country follows a doctrine of "armed neutrality", with all males required to have military training, and General Eisenhower estimated the mobilized force at about 850,000. The Swiss commanders wired the infrastructure leading into the country with explosives and threatened to destroy bridges, railway tunnels and roads across the passes in the event of a Nazi invasion; had an invasion come, the Swiss army would have retreated to the heart of the mountain peaks, where conditions were harsher and a military campaign would have involved difficult and protracted battles. German ski troops were trained for the war, and battles were waged in mountainous areas, such as the battle at Riva Ridge in Italy, where the American 10th Mountain Division encountered heavy resistance in February 1945. At the end of the war, a substantial amount of Nazi plunder was found stored in Austria, where Hitler had hoped to retreat as the war drew to a close. The salt mines surrounding the Altaussee area, where American troops found a hoard of gold coins stored in a single mine, were used to store looted art, jewels, and currency; vast quantities of looted art were found and returned to their owners.
Largest cities The largest city within the Alps is Grenoble in France. Other large and important cities within the Alps with over 100,000 inhabitants are in the historical Tyrol region, with Bolzano (Italy), Trento (Italy) and Innsbruck (Austria). Larger cities outside the Alps are Milan, Verona, Turin (Italy), Munich (Germany), Graz, Vienna, Salzburg (Austria), Ljubljana, Maribor, Kranj (Slovenia), Zurich, Geneva (Switzerland), Nice and Lyon (France). Alpine people and culture The population of the region is 14 million, spread across eight countries. On the rim of the mountains, on the plateaus and the plains, the economy consists of manufacturing and service jobs, whereas in the higher altitudes and in the mountains farming is still essential to the economy. Farming and forestry continue to be mainstays of Alpine culture, industries that provide for export to the cities and maintain the mountain ecology. The Alpine regions are multicultural and linguistically diverse. Dialects are common, and vary from valley to valley and region to region. In the Slavic Alps alone 19 dialects have been identified. Some of the Romance dialects spoken in the French, Swiss and Italian Alps, such as in the Aosta Valley, derive from Arpitan, while the southern part of the western range is related to Occitan; the German dialects derive from Germanic tribal languages. Romansh, spoken by two percent of the population in southeast Switzerland, is an ancient Rhaeto-Romanic language derived from Latin, remnants of ancient Celtic languages and perhaps Etruscan. Much of the Alpine culture is unchanged since the medieval period, when skills that guaranteed survival in the mountain valleys and in the highest villages became mainstays, leading to strong traditions of carpentry, woodcarving, baking and pastry-making, and cheesemaking. Farming has been a traditional occupation for centuries, although it became less dominant in the 20th century with the advent of tourism.
Grazing and pasture land are limited because of the steep and rocky topography of the Alps. In mid-June cows are moved to the highest pastures close to the snowline, where they are watched by herdsmen who stay at high altitude, often living in stone huts or wooden barns, during the summer. Villagers celebrate the day the cows are herded up to the pastures and again when they return in mid-September. The Almabtrieb, Alpabzug, Alpabfahrt or Désalpes ("coming down from the alps") is celebrated by decorating the cows with garlands and enormous cowbells, while the farmers dress in traditional costumes. Cheesemaking is an ancient tradition in most Alpine countries; wheels of Emmental from Switzerland and Beaufort from Savoy are among the largest and heaviest cheeses made. The owners of the cows traditionally receive from the cheesemakers a portion of the cheese in proportion to the milk their cows gave during the summer months in the high alps. Haymaking is an important farming activity in mountain villages that has become somewhat mechanized in recent years, although the slopes are so steep that scythes are usually necessary to cut the grass. Hay is normally brought in twice a year, often also on festival days. In the high villages, people live in homes built according to medieval designs that withstand cold winters. The kitchen is separated from the living area (called the stube, the area of the home heated by a stove), and second-floor bedrooms benefit from rising heat. The typical Swiss chalet originated in the Bernese Oberland. Chalets often face south or downhill, and are built of solid wood, with a steeply gabled roof to allow accumulated snow to slide off easily. Stairs leading to upper levels are sometimes built on the outside, and balconies are sometimes enclosed. Food is passed from the kitchen to the stube, where the dining room table is placed. Some meals are communal, such as fondue, where a pot is set in the middle of the table for each person to dip into.
Other meals are still served in a traditional manner on carved wooden plates. Furniture has traditionally been elaborately carved, and in many Alpine countries carpentry skills are passed from generation to generation. Roofs are traditionally constructed from Alpine rocks such as pieces of schist, gneiss or slate. Such chalets are typically found in the higher parts of the valleys, as in the Maurienne valley in Savoy, where snowfall during the cold months is heavy. The inclination of the roof cannot exceed 40%, allowing the snow to stay on top and function as insulation from the cold. In the lower areas, where forests are widespread, wooden tiles are traditionally used. Commonly made of Norway spruce, they are called "tavaillon". In the German-speaking parts of the Alps (Austria, Bavaria, South Tyrol, Liechtenstein and Switzerland), there is a strong tradition of Alpine folk culture. Old traditions are carefully maintained among inhabitants of Alpine areas, even though this is seldom obvious to the visitor: many people are members of cultural associations where the Alpine folk culture is cultivated. At cultural events, traditional folk costume (in German, Tracht) is expected: typically lederhosen for men and dirndls for women. Visitors can get a glimpse of the rich customs of the Alps at public Volksfeste. Even when large events feature only a little folk culture, all participants take part with gusto. Good opportunities to see local people celebrating the traditional culture occur at the many fairs, wine festivals and firefighting festivals which fill weekends in the countryside from spring to autumn. Alpine festivals vary from country to country. Frequently they include music (e.g. the playing of alpenhorns), dance (e.g. the Schuhplattler), sports (e.g. wrestling matches and archery), as well as traditions with pagan roots such as the lighting of fires on Walpurgis Night and Saint John's Eve.
Many areas celebrate Fastnacht in the weeks before Lent. Folk costume also continues to be worn for most weddings and festivals. Tourism The Alps are one of the more popular tourist destinations in the world, with many resorts, such as Oberstdorf in Bavaria, Saalbach in Austria, Davos in Switzerland, Chamonix in France, and Cortina d'Ampezzo in Italy, recording more than a million annual visitors. With over 120 million visitors a year, tourism is integral to the Alpine economy, with much of it coming from winter sports, although summer visitors are also an important component. The tourism industry began in the early 19th century, when foreigners visited the Alps, travelled to the bases of the mountains to enjoy the scenery, and stayed at the spa-resorts. Large hotels were built during the Belle Époque; cog-railways, built early in the 20th century, brought tourists to ever-higher elevations, with the Jungfraubahn terminating at the Jungfraujoch, well above the eternal snow-line, after passing through a tunnel in the Eiger. During this period winter sports were slowly introduced: in 1882 the first figure skating championship was held in St. Moritz, and downhill skiing became a popular sport with English visitors early in the 20th century, after the first ski-lift was installed above Grindelwald in 1908. In the first half of the 20th century the Olympic Winter Games were held three times in Alpine venues: the 1924 Winter Olympics in Chamonix, France; the 1928 Winter Olympics in St. Moritz, Switzerland; and the 1936 Winter Olympics in Garmisch-Partenkirchen, Germany. During World War II the Winter Games were cancelled, but since then they have been held in St. Moritz (1948), Cortina d'Ampezzo (1956), Innsbruck, Austria (1964 and 1976), Grenoble, France (1968), Albertville, France (1992), and Torino (2006).
In 1930 the Lauberhorn Rennen (Lauberhorn Race) was run for the first time on the Lauberhorn above Wengen; the equally demanding Hahnenkamm was first run in the same year in Kitzbühel, Austria. Both races continue to be held each January on successive weekends. The Lauberhorn is the longer and more strenuous downhill race, and poses danger to racers, who reach very high speeds within seconds of leaving the start gate. During the post-World War I period, ski-lifts were built in Swiss and Austrian towns to accommodate winter visitors, but summer tourism continued to be important; by the mid-20th century the popularity of downhill skiing increased greatly as it became more accessible, and in the 1970s several new villages were built in France devoted almost exclusively to skiing, such as Les Menuires. Until this point, Austria and Switzerland had been the traditional and more popular destinations for winter sports, but by the end of the 20th century and into the early 21st century, France, Italy and the Tyrol began to see increases in winter visitors. From 1980 to the present, ski-lifts have been modernized and snow-making machines installed at many resorts, leading to concerns regarding the loss of traditional Alpine culture and questions regarding sustainable development. Probably due to climate change, the number of ski resorts and piste kilometres has declined since 2015.
Avalanches and snow-slides
17th century: about 2,500 people were killed by an avalanche in a village on the French-Italian border.
19th century: 120 homes in a village near Zermatt were destroyed by an avalanche.
December 13, 1916: Marmolada mountain avalanche
1950–1951: "Winter of Terror" avalanches
February 10, 1970: Val d'Isère avalanche
February 9, 1999: Montroc avalanche
February 21, 1999: Evolène avalanche
February 23, 1999: Galtür avalanche, the deadliest avalanche in the Alps in 40 years
July 2014: Mont Blanc avalanche
January 13, 2016: Les Deux Alpes avalanche
January 18, 2016: Valfréjus avalanche
Transportation The region is serviced by an extensive network of roads used by six million vehicles per year. Train travel is well established in the Alps, with a dense network of track in a country such as Switzerland. Most of Europe's highest railways are located there. In 2007, the new Lötschberg Base Tunnel was opened, which bypasses the Lötschberg Tunnel built 100 years earlier. The Gotthard Base Tunnel, opened on June 1, 2016, bypasses the Gotthard Tunnel built in the 19th century and provides the first flat route through the Alps. Some high mountain villages are car-free, either because of inaccessibility or by choice. Wengen and Zermatt (in Switzerland) are accessible only by cable car or cog-rail trains. Avoriaz (in France) is car-free, and other Alpine villages are considering becoming car-free zones or limiting the number of cars for reasons of sustainability of the fragile Alpine terrain. The lower regions and larger towns of the Alps are well served by motorways and main roads, but higher mountain passes and byroads, which are amongst the highest in Europe, can be treacherous even in summer due to steep slopes. Many passes are closed in winter. A number of airports around the Alps (and some within), as well as long-distance rail links from all neighbouring countries, afford large numbers of travellers easy access.
External links
Satellite photo of the Alps, taken on August 31, 2005, by MODIS aboard Terra
Official website of the Alpine Space Programme, an EU co-funded programme that co-finances transnational projects in the Alpine region