https://en.wikipedia.org/wiki/Charles%20Baxter%20%28author%29
Charles Baxter (author)
Charles Morley Baxter (born May 13, 1947) is an American novelist, essayist, and poet.

Biography

Baxter was born in Minneapolis, Minnesota, to John and Mary Barber (Eaton) Baxter. He graduated from Macalester College in Saint Paul in 1969. In 1974 he received his PhD in English from the University at Buffalo with a thesis on Djuna Barnes, Malcolm Lowry, and Nathanael West. Baxter taught high school in Pinconning, Michigan, for a year before beginning his university teaching career at Wayne State University in Detroit, Michigan. He then moved to the University of Michigan, where for many years he directed the Creative Writing MFA program. He was a visiting professor of creative writing at the University of Iowa and at Stanford. He taught at the University of Minnesota and in the Warren Wilson College MFA Program for Writers. He retired in 2020.

He was awarded a Guggenheim Fellowship in 1985. He received the PEN/Malamud Award for Excellence in the Short Story in 2021.

He married teacher Martha Ann Hauser in 1976, and has a son, Daniel. Baxter and Hauser eventually separated.

Works

Novels
First Light (1987). An eminent astrophysicist and her brother, a small-town Buick salesman, discover how they grew so far apart and the bonds of love that still keep them together.
Shadow Play (1993). As his wife does gymnastics and magic tricks, his crazy mother invents her own vocabulary, and his aunt writes her own version of the Bible, Five Oaks Assistant City Manager Wyatt Palmer tries to live a normal life and nearly succeeds, but...
The Feast of Love (2000, Pantheon Books). A reimagined Midsummer Night's Dream, a story told through the eyes of several different people. Nominated for the National Book Award. A film version of the book, starring Morgan Freeman, Fred Ward and Greg Kinnear and directed by Robert Benton, was released in 2007.
Saul and Patsy (2003). A teacher's marriage and identity are threatened by a dangerously obsessed teenage boy at his school.
The Soul Thief (2008). A graduate student's complicated relationships lead to a disturbing case of identity theft, which ultimately leads the man to wonder if he really is who he thinks he is.
The Sun Collective (2020, Pantheon Books). The lives of two very different couples—one retired, one in their twenties—intersect in Minneapolis around an anti-capitalist collective arguing for revolution, as an underground group of extremists wages war on the homeless.

Short story collections
Harmony of the World (1984). Winner of the Associated Writing Programs Award.
Through the Safety Net (1985)
A Relative Stranger (1990)
Believers (1997)
Gryphon: New and Selected Stories (2011)
There's Something I Want You to Do: Stories (February 2015)

Non-fiction
Burning Down the House: Essays on Fiction (1997)
The Art of Subtext: Beyond Plot (2007). Winner of the 2008 Minnesota Book Award for General Non-fiction.
Wonderlands: Essays on the Life of Literature (2022)

Poetry collections
Chameleon (1970)
The South Dakota Guidebook (1974)
Imaginary Paintings (1989)

Edited works
The Business of Memory (1999)
Best New American Voices 2001 (2001)
Bringing the Devil to His Knees: The Craft of Fiction and the Writing Life (2001)
A William Maxwell Portrait: Memories and Appreciations (2004)
External links

Charles Baxter's official website
Interview with the author at Powells.com
Interview with Charles Baxter at the website of the Department of English at the University of Minnesota
Interview with the author at Pif Magazine
https://en.wikipedia.org/wiki/Ceres
Ceres
Ceres most commonly refers to:
Ceres (dwarf planet), the largest asteroid
Ceres (mythology), the Roman goddess of agriculture

Ceres may also refer to:

Places

Brazil
Ceres, Goiás, Brazil
Ceres Microregion, in north-central Goiás state, Brazil

United States
Ceres, California
Ceres, Georgia
Ceres, Iowa
Ceres, New York, a community that also extends into Pennsylvania
Ceres, Oklahoma, a community in Noble County
Ceres, Virginia
Ceres, Washington
Ceres, West Virginia
Ceres Township, McKean County, Pennsylvania

Other countries
Ceres, Santa Fe, Argentina
Ceres, Victoria, Australia
Ceres, Piedmont, Italy
Ceres, Fife, Scotland
Ceres, Western Cape, South Africa
Ceres, Limpopo, South Africa
Ceres Nunataks, Antarctica
Ceres Koekedouw Dam, a dam on the Koekedouw River, near Ceres, Western Cape, South Africa

Acronyms
CERES (satellite), a planned French spy satellite program
California Environmental Resources Evaluation System
Center for Eurasian, Russian, and East European Studies at Georgetown University
Centre for Research on Energy Security (CeRES), an Indian research center on geopolitics and energy
CERES Community Environment Park (Centre for Education and Research in Environmental Strategies), a community environmental park in Melbourne, Australia
Clouds and the Earth's Radiant Energy System, an ongoing NASA meteorological experiment
Coalition for Environmentally Responsible Economies
CERES (French: Centre of Socialist Studies, Research and Education), a left-wing political organization founded by Jean-Pierre Chevènement

Aircraft, rocket, transport, and vessels

Aircraft, locomotive, car
CAC Ceres, a crop-duster aircraft manufactured in Australia
Ceres, a West Cornwall Railway steam locomotive
Toyota Corolla Ceres, a compact four-door hardtop sold in Japan
Kia Ceres, a version of the Kia Bongo, a two-door pick-up truck

Ships and submarines
Ceres (East Indiaman), three vessels of the British East India Company
, several ships
HMS Ceres, three ships and three shore establishments of the British Royal Navy
, several ships of the French Navy
USS Ceres (1856), a Union Navy steamship during the American Civil War

Rocket
Ceres-1, a Chinese four-stage rocket

Arts, entertainment, and media
Ceres (band), a band from Australia
Ceres (sculpture), a c. 1770 statuette by Augustin Pajou
Ceres (2005), an orchestral work by Mark-Anthony Turnage
Sailor Ceres, a character in Sailor Moon media
The titular character of Ceres, Celestial Legend, a manga and mini anime series
Ceres Space Colony, from the video game Super Metroid

Brands and enterprises
Ceres (organization), a coalition of investors and environmentalists (formerly the Coalition for Environmentally Responsible Economies)
Ceres Brewery, a brewery in Aarhus, Denmark
Ceres Fruit Juices, a South African juice company
Ceres Hellenic Shipping Enterprises, a Greek shipping company
Ceres, Inc., a US energy crop seeds developer
Ceres Liner, a bus company in the Philippines

Education
Ceres Connection, a cooperative program between MIT's Lincoln Laboratory and the Society for Science and the Public dedicated to promoting science education
Ceres School, a historic school building located at Ceres in Allegany County, New York
Ceres (women's fraternity), a women's fraternity focused on agriculture

Sport
Ceres Futebol Clube, a Brazilian football team from the city of Rio de Janeiro
Sportsklubben Ceres, a Norwegian sports team from Skedsmo, Akershus
United City F.C., a Philippine football team formerly known as Ceres–Negros F.C.

People
Dragoș Cereș (born 2004), a Moldovan chess master

Other uses
Ceres (workstation), a computer workstation built at ETH Zürich
Ceres series (disambiguation), several series of postage stamps representing the goddess Ceres
Ceres Chess Engine, an experimental chess engine that uses Leela Chess Zero networks
Plural of cere

See also
Colonization of Ceres
Keres (mythology), death spirits unconnected with Ceres
Seres (disambiguation)
https://en.wikipedia.org/wiki/Cultural%20imperialism
Cultural imperialism
Cultural imperialism (also cultural colonialism) comprises the cultural dimensions of imperialism. The word "imperialism" describes practices in which a country deploys culture (language, tradition, ritual, politics, economics) to create and maintain unequal social and economic relationships among social groups. Cultural imperialism often uses violence to implement the system of cultural hegemony that legitimizes imperialism.

Cultural imperialism may take various forms, such as an attitude, a formal policy, or military action, insofar as each of these reinforces the empire's cultural hegemony. Research on the topic occurs in scholarly disciplines, and is especially prevalent in communication and media studies, education, foreign policy, history, international relations, linguistics, literature, post-colonialism, science, sociology, social theory, environmentalism, and sports.

Background and definitions

Although the Oxford English Dictionary has a 1921 reference to the "cultural imperialism of the Russians", John Tomlinson, in his book on the subject, writes that the term emerged in the 1960s and has been a focus of research since at least the 1970s. Terms such as "media imperialism", "structural imperialism", "cultural dependency and domination", "cultural synchronization", "electronic colonialism", "ideological imperialism", and "economic imperialism" have all been used to describe the same basic notion of cultural imperialism.

The term refers largely to the exercise of power in a cultural relationship in which the principles, ideas, practices, and values of a powerful, invading society are imposed upon indigenous cultures in the occupied areas. The process is often used to describe examples in which the compulsory practices of the cultural traditions of the imperial social group are implemented upon a conquered social group. Cultural imperialism has been called a process that intends to transition the "cultural symbols of the invading communities from 'foreign' to 'natural,' 'domestic,'" comments Jeffrey Herlihy-Mera. The process of cultural conquest often involves three discrete and sequential phases (Herlihy-Mera, Jeffrey. 2018. After American Studies: Rethinking the Legacies of Transnational Exceptionalism. Routledge. p. 24). While the third phase continues "in perpetuity," cultural imperialism tends to be "gradual, contested (and continues to be contested), and is by nature incomplete. The partial and imperfect configuration of this ontology takes an implicit conceptualization of reality and attempts—and often fails—to elide other forms of collective existence." In order to achieve that end, cultural engineering projects strive to "isolate residents within constructed spheres of symbols" such that they (eventually, in some cases after several generations) abandon other cultures and identify with the new symbols. "The broader intended outcome of these interventions might be described as a common recognition of possession of the land itself (on behalf of the organizations publishing and financing the images)."

For Herbert Schiller, cultural imperialism refers to the American Empire's "coercive and persuasive agencies, and their capacity to promote and universalize an American 'way of life' in other countries without any reciprocation of influence."
According to Schiller, cultural imperialism "pressured, forced and bribed" societies to integrate with the U.S.'s expansive capitalist model, but also incorporated them with attraction and persuasion by winning "the mutual consent, even solicitation of the indigenous rulers." He continues, remarking that it is:

the sum of processes by which a society is brought into the modern [U.S.-centered] world system and how its dominating stratum is attracted, pressured, forced, and sometimes bribed into shaping social institutions to correspond to, or even promote, the values and structures of the dominating centres of the system. The public media are the foremost example of operating enterprises that are used in the penetrative process. For penetration on a significant scale the media themselves must be captured by the dominating/penetrating power. This occurs largely through the commercialization of broadcasting.

The historical contexts, iterations, complexities, and politics of Schiller's foundational and substantive theorization of cultural imperialism in international communication and media studies are discussed in detail by political economy of communication researchers Richard Maxwell, Vincent Mosco, Graham Murdock, and Tanner Mirrlees.

Downing and Sreberny-Mohammadi state: "Cultural imperialism signifies the dimensions of the process that go beyond economic exploitation or military force. In the history of colonialism, (i.e., the form of imperialism in which the government of the colony is run directly by foreigners), the educational and media systems of many Third World countries have been set up as replicas of those in Britain, France, or the United States and carry their values. Western advertising has made further inroads, as have architectural and fashion styles. Subtly but powerfully, the message has often been insinuated that Western cultures are superior to the cultures of the Third World."

Poststructuralism

Within the realm of poststructuralist and postcolonial theory, cultural imperialism can be seen as the cultural legacy of Western colonialism, or as forms of social action contributing to the continuation of Western hegemony. To some outside this discourse, the term is critiqued as being unclear, unfocused, and/or contradictory in nature.

The work of French philosopher and social theorist Michel Foucault has heavily influenced use of the term cultural imperialism, particularly his philosophical interpretation of power and his concept of governmentality. Following an interpretation of power similar to that of Machiavelli, Foucault defines power as immaterial, as a "certain type of relation between individuals" that has to do with complex strategic social positions that relate to the subject's ability to control its environment and influence those around itself. According to Foucault, power is intimately tied to his conception of truth. "Truth", as he defines it, is a "system of ordered procedures for the production, regulation, distribution, circulation, and operation of statements" which has a "circular relation" with systems of power. Therefore, inherent in systems of power is always "truth", which is culturally specific and inseparable from ideology, which often coincides with various forms of hegemony. Cultural imperialism may be an example of this. Foucault's interpretation of governance is also very important in constructing theories of transnational power structure.
In his lectures at the Collège de France, Foucault often defines governmentality as the broad art of "governing", which goes beyond the traditional conception of governance in terms of state mandates and into other realms such as governing "a household, souls, children, a province, a convent, a religious order, a family". This relates directly back to Machiavelli's treatise on how to retain political power at any cost, The Prince, and to Foucault's aforementioned conceptions of truth and power: various subjectivities are created through power relations that are culturally specific, which lead to various forms of culturally specific governmentality, such as neoliberal governmentality.

Post-colonialism

Edward Saïd is a founding figure of postcolonialism, established with the book Orientalism (1978), a humanist critique of the Enlightenment, which criticises Western knowledge of "the East"—specifically the English and the French constructions of what is and what is not "Oriental". This purported "knowledge" led to cultural tendencies towards a binary opposition of the Orient vs. the Occident, wherein one concept is defined in opposition to the other, and from which they emerge as of unequal value. In Culture and Imperialism (1993), the sequel to Orientalism, Saïd proposes that, despite the formal end of the "age of empire" after the Second World War (1939–45), colonial imperialism left a cultural legacy to the (previously) colonised peoples, which remains in their contemporary civilisations, and that this American cultural imperialism is very influential in the international systems of power.

In "Can the Subaltern Speak?" Gayatri Chakravorty Spivak critiques common Western representations of sati as being controlled by authors other than the participants (specifically English colonizers and Hindu leaders). Because of this, Spivak argues that the subaltern, referring to the communities that participate in sati, are not able to represent themselves through their own voice. Spivak says that cultural imperialism has the power to disqualify or erase the knowledge and mode of education of certain populations that are low on the social hierarchy. In A Critique of Postcolonial Reason, Spivak argues that Western philosophy has a history not only of excluding the subaltern from discourse, but also of not allowing them to occupy the space of a fully human subject.

Contemporary ideas and debate

Cultural imperialism can refer either to the forced acculturation of a subject population, or to the voluntary embracing of a foreign culture by individuals who do so of their own free will. Since these are two very different referents, the validity of the term has been called into question. Cultural influence can be seen by the "receiving" culture as either a threat to or an enrichment of its cultural identity. It therefore seems useful to distinguish between cultural imperialism as an (active or passive) attitude of superiority, and the position of a culture or group that seeks to complement its own cultural production, considered partly deficient, with imported products. The imported products or services can themselves represent, or be associated with, certain values (such as consumerism). According to one argument, the "receiving" culture does not necessarily perceive this link, but instead absorbs the foreign culture passively through the use of the foreign goods and services.
Due to its somewhat concealed but very potent nature, this hypothetical idea is described by some experts as "banal imperialism". For example, it is argued that while "American companies are accused of wanting to control 95 percent of the world's consumers", "cultural imperialism involves much more than simple consumer goods; it involved the dissemination of American principles such as freedom and democracy", a process which "may sound appealing" but which "masks a frightening truth: many cultures around the world are disappearing due to the overwhelming influence of corporate and cultural America". Some believe that the newly globalised economy of the late 20th and early 21st century has facilitated this process through the use of new information technology. This kind of cultural imperialism is derived from what is called "soft power". The theory of electronic colonialism extends the issue to global cultural issues and the impact of major multimedia conglomerates, ranging from Paramount, WarnerMedia, AT&T, Disney, and News Corp to Google and Microsoft, with the focus on the hegemonic power of these mainly United States-based communication giants.

Cultural diversity

One of the reasons often given for opposing any form of cultural imperialism, voluntary or otherwise, is the preservation of cultural diversity, a goal seen by some as analogous to the preservation of ecological diversity. Proponents of this idea argue either that such diversity is valuable in itself, to preserve human historical heritage and knowledge, or instrumentally valuable because it makes available more ways of solving problems and responding to catastrophes, natural or otherwise.

Ideas relating to African colonisation

Of all the areas of the world that scholars have claimed to be adversely affected by imperialism, Africa is probably the most notable. In the expansive "age of imperialism" of the nineteenth century, scholars have argued, European colonisation in Africa led to the elimination of many cultures, worldviews, and epistemologies, particularly through the neocolonisation of public education. This, arguably, has led to uneven development and to further informal forms of social control having to do with culture and imperialism. A variety of factors, scholars argue, led to the elimination of cultures, worldviews, and epistemologies, such as "de-linguicization" (replacing native African languages with European ones), devaluing ontologies that are not explicitly individualistic, and, at times, going as far as to define not only Western culture itself as science, but non-Western approaches to science, the arts, indigenous culture, etc. as not even knowledge. One scholar, Ali A. Abdi, claims that imperialism inherently "involve[s] extensively interactive regimes and heavy contexts of identity deformation, misrecognition, loss of self-esteem, and individual and social doubt in self-efficacy." Therefore, all imperialism would always, already be cultural.

Ties to neoliberalism

Neoliberalism is often critiqued by sociologists, anthropologists, and cultural studies scholars as being culturally imperialistic. Critics of neoliberalism, at times, claim that it is the newly predominant form of imperialism. Other scholars, such as Elizabeth Dunn and Julia Elyachar, have claimed that neoliberalism requires and creates its own form of governmentality.
In Dunn's work, Privatizing Poland, she argues that the expansion of the multinational corporation Gerber into Poland in the 1990s imposed Western, neoliberal governmentality, ideologies, and epistemologies upon the post-Soviet persons hired. Cultural conflicts occurred most notably over the company's inherently individualistic policies, such as promoting competition among workers rather than cooperation, and over its strong opposition to what the company owners claimed was bribery.

In Elyachar's work, Markets of Dispossession, she focuses on the ways in which NGOs in Cairo, along with INGOs and the state, promoted neoliberal governmentality through schemas of economic development that relied upon "youth microentrepreneurs". Youth microentrepreneurs would receive small loans to build their own businesses, similar to the way that microfinance supposedly operates. Elyachar argues, though, that these programs were not only a failure, but that they shifted cultural opinions of value (personal and cultural) in a way that favoured Western ways of thinking and being.

Ties to development studies

Often, methods of promoting development and social justice are critiqued as being imperialistic in a cultural sense. For example, Chandra Mohanty has critiqued Western feminism, claiming that it has created a misrepresentation of the "third world woman" as being completely powerless, unable to resist male dominance. Thus, this leads to the often critiqued narrative of the "white man" saving the "brown woman" from the "brown man". Other, more radical critiques of development studies have to do with the field of study itself. Some scholars even question the intentions of those developing the field of study, claiming that efforts to "develop" the Global South were never about the South itself. Instead, these efforts, it is argued, were made in order to advance Western development and reinforce Western hegemony.

Ties to media effects studies

The core of the cultural imperialism thesis is integrated with the traditional political-economy approach in media effects research. Critics of cultural imperialism commonly claim that non-Western cultures, particularly from the Third World, will forsake their traditional values and lose their cultural identities when they are solely exposed to Western media. Nonetheless, Michael B. Salwen, in his book Critical Studies in Mass Communication (1991), claims that cross-consideration and integration of empirical findings on cultural imperialist influences is very critical for understanding mass media in the international sphere. He recognises two contradictory contexts of cultural imperialist impacts. The first context is where cultural imperialism imposes socio-political disruptions on developing nations. Western media can distort images of foreign cultures and provoke personal and social conflicts in developing nations in some cases. In the other context, peoples in developing nations resist foreign media and preserve their cultural attitudes. Although he admits that outward manifestations of Western culture may be adopted, the fundamental values and behaviours still remain. Furthermore, positive effects might occur when male-dominated cultures adopt the "liberation" of women with exposure to Western media, and this stimulates ample cultural exchange.

Criticisms of "cultural imperialism theory"

Critics of scholars who discuss cultural imperialism have a number of critiques.
Cultural imperialism is a term that is only used in discussions where cultural relativism and constructivism are generally taken as true. (One cannot critique promoting Western values if one believes that said values are absolutely correct. Similarly, one cannot argue that Western epistemology is unjustly promoted in non-Western societies if one believes that those epistemologies are absolutely correct.) Therefore, those who disagree with cultural relativism and/or constructivism may critique the employment of the term "cultural imperialism" on those grounds.

John Tomlinson provides a critique of cultural imperialism theory and reveals major problems in the way in which the idea of cultural, as opposed to economic or political, imperialism is formulated. In his book Cultural Imperialism: A Critical Introduction, he delves into the much-debated "media imperialism" theory. Summarizing research on the Third World's reception of American television shows, he challenges the cultural imperialism argument, conveying his doubts about the degree to which US shows in developing nations actually carry US values and improve the profits of US companies. Tomlinson suggests that cultural imperialism is growing in some respects, but that local transformations and interpretations of imported media products suggest that cultural diversification is not at an end in global society. He explains that one of the fundamental conceptual mistakes of cultural imperialism is to take for granted that the distribution of cultural goods can be considered as cultural dominance. He thus supports his argument by strongly criticising the concept that Americanization is occurring through the global overflow of American television products. He points to a myriad of examples of television networks that have managed to dominate their domestic markets, and notes that domestic programs generally top the ratings. He also doubts the concept that cultural agents are passive receivers of information. He states that movement between cultural/geographical areas always involves translation, mutation, adaptation, and the creation of hybridity.

Other key critiques are that the term is not defined well, and employs further terms that are not defined well, and therefore lacks explanatory power; that cultural imperialism is hard to measure; and that the theory of a legacy of colonialism is not always true.

Rothkopf on dealing with cultural dominance

David Rothkopf, managing director of Kissinger Associates and an adjunct professor of international affairs at Columbia University (who also served as a senior U.S. Commerce Department official in the Clinton Administration), wrote about cultural imperialism in his provocatively titled In Praise of Cultural Imperialism? in the summer 1997 issue of Foreign Policy magazine. Rothkopf says that the United States should embrace "cultural imperialism" as in its self-interest. But his definition of cultural imperialism stresses spreading the values of tolerance and openness to cultural change in order to avoid war and conflict between cultures, as well as expanding accepted technological and legal standards to provide free traders with enough security to do business with more countries. Rothkopf's definition almost exclusively involves allowing individuals in other nations to accept or reject foreign cultural influences. He also mentions, but only in passing, the use of the English language and the consumption of news and popular music and film as cultural dominance that he supports.
Rothkopf additionally makes the point that globalisation and the Internet are accelerating the process of cultural influence. Culture is sometimes used by the organisers of society—politicians, theologians, academics, and families—to impose and ensure order, the rudiments of which change over time as need dictates. One need only look at the 20th century's genocides. In each one, leaders used culture as a political front to fuel the passions of their armies and other minions and to justify their actions among their people. Rothkopf then cites genocides and massacres in Armenia, Russia, the Holocaust, Cambodia, Bosnia and Herzegovina, Rwanda, and East Timor as examples of culture (in some cases expressed in the ideology of "political culture" or religion) being misused to justify violence. He also acknowledges that cultural imperialism in the past has been guilty of forcefully eliminating the cultures of natives in the Americas and in Africa, or through use of the Inquisition, "and during the expansion of virtually every empire."

The most important way to deal with cultural influence in any nation, according to Rothkopf, is to promote tolerance and allow, or even promote, cultural diversities that are compatible with tolerance, and to eliminate those cultural differences that cause violent conflict:

Successful multicultural societies, be they nations, federations, or other conglomerations of closely interrelated states, discern those aspects of culture that do not threaten union, stability, or prosperity (such as food, holidays, rituals, and music) and allow them to flourish. But they counteract or eradicate the more subversive elements of culture (exclusionary aspects of religion, language, and political/ideological beliefs). History shows that bridging cultural gaps successfully and serving as a home to diverse peoples requires certain social structures, laws, and institutions that transcend culture. Furthermore, the history of a number of ongoing experiments in multiculturalism, such as in the European Union, India, South Africa, Canada and the United States, suggests that workable, if not perfected, integrative models exist. Each is built on the idea that tolerance is crucial to social well-being, and each at times has been threatened by both intolerance and a heightened emphasis on cultural distinctions. The greater public good warrants eliminating those cultural characteristics that promote conflict or prevent harmony, even as less-divisive, more personally observed cultural distinctions are celebrated and preserved.

Cultural dominance can also be seen in the 1930s in Australia, where the Aboriginal Assimilation Policy acted as an attempt to wipe out the Native Australian people. The British settlers tried to biologically alter the skin colour of the Australian Aboriginal people through mixed breeding with white people. The policy also made attempts to forcefully conform the Aborigines to Western ideas of dress and education.

In history

Although the term was popularised in the 1960s, and was used by its original proponents to refer to cultural hegemonies in a post-colonial world, cultural imperialism has also been used to refer to times further in the past.

Ancient Greece

The Ancient Greeks are known for spreading their culture around the Mediterranean and Near East through trade and conquest.
During the Archaic Period, the burgeoning Greek city-states established settlements and colonies across the Mediterranean Sea, especially in Sicily and southern Italy, influencing the Etruscan and Roman peoples of the region. In the late fourth century BC, Alexander the Great conquered Persian and Indian territories all the way to the Indus River Valley and Punjab, spreading Greek pagan religion, art, and science along the way. This resulted in the rise of Hellenistic kingdoms and cities across Egypt, the Near East, Central Asia, and Northwest India, where Greek culture fused with the cultures of the indigenous peoples. The Greek influence prevailed even longer in science and literature, where medieval Muslim scholars in the Middle East studied the writings of Aristotle for scientific learning.

Ancient Rome

The Roman Empire was also an early example of cultural imperialism. Early Rome, in its conquest of Italy, assimilated the people of Etruria by replacing the Etruscan language with Latin, which led to the demise of that language and many aspects of Etruscan civilisation. Cultural Romanization was imposed on many parts of Rome's empire, with "many regions receiving Roman culture unwillingly, as a form of cultural imperialism." For example, when Greece was conquered by the Roman armies, Rome set about altering the culture of Greece to conform with Roman ideals. For instance, the Greek habit of stripping naked, in public, for exercise, was looked on askance by Roman writers, who considered the practice to be a cause of the Greeks' effeminacy and enslavement. The Roman example has been linked to modern instances of European imperialism in African countries, bridging the two instances with Slavoj Zizek's discussions of 'empty signifiers'. The Pax Romana was secured in the empire, in part, by the "forced acculturation of the culturally diverse populations that Rome had conquered."

British Empire

British worldwide expansion in the 18th and 19th centuries was an economic and political phenomenon. However, "there was also a strong social and cultural dimension to it, which Rudyard Kipling termed the 'white man's burden'." One of the ways this was carried out was by religious proselytising, by, amongst others, the London Missionary Society, which was "an agent of British cultural imperialism." Another way was by the imposition of educational material on the colonies for an "imperial curriculum". Robin A. Butlin writes, "The promotion of empire through books, illustrative materials, and educational syllabuses was widespread, part of an education policy geared to cultural imperialism". This was also true of science and technology in the empire. Douglas M. Peers and Nandini Gooptu note that "Most scholars of colonial science in India now prefer to stress the ways in which science and technology worked in the service of colonialism, as both a 'tool of empire' in the practical sense and as a vehicle for cultural imperialism. In other words, science developed in India in ways that reflected colonial priorities, tending to benefit Europeans at the expense of Indians, while remaining dependent on and subservient to scientific authorities in the colonial metropolis."

The analysis of cultural imperialism carried out by Edward Said drew principally from a study of the British Empire. According to Danilo Raponi, the cultural imperialism of the British in the 19th century had a much wider effect than only in the British Empire.
He writes, "To paraphrase Said, I see cultural imperialism as a complex cultural hegemony of a country, Great Britain, that in the 19th century had no rivals in terms of its ability to project its power across the world and to influence the cultural, political and commercial affairs of most countries. It is the 'cultural hegemony' of a country whose power to export the most fundamental ideas and concepts at the basis of its understanding of 'civilisation' knew practically no bounds." In this, for example, Raponi includes Italy. Other pre-Second World War examples The New Cambridge Modern History writes about the cultural imperialism of Napoleonic France. Napoleon used the Institut de France "as an instrument for transmuting French universalism into cultural imperialism." Members of the Institute (who included Napoleon), descended upon Egypt in 1798. "Upon arrival they organised themselves into an Institute of Cairo. The Rosetta Stone is their most famous find. The science of Egyptology is their legacy." After the First World War, Germans were worried about the extent of French influence in the annexed Rhineland, with the French occupation of the Ruhr Valley in 1923. An early use of the term appeared in an essay by Paul Ruhlmann (as "Peter Hartmann") at that date, entitled French Cultural Imperialism on the Rhine. Ties to North American Colonisation Keeping in line with the trends of international imperialistic endeavours, the expansion of Canadian and American territory in the 19th century saw cultural imperialism employed as a means of control over indigenous populations. This, when used in conjunction of more traditional forms of ethnic cleansing and genocide in the United States, saw devastating, lasting effects on indigenous communities. In 2017 Canada celebrated its 150-year anniversary of the confederating of three British colonies. As Catherine Murton Stoehr points out in Origins, a publication organised by the history departments of Ohio State University and Miami University, the occasion came with remembrance of Canada's treatment of First Nations people. Numerous policies focused on indigenous persons came into effect shortly thereafter. Most notable is the use of residential schools across Canada as a means to remove indigenous persons from their culture and instill in them the beliefs and values of the majorised colonial hegemony. The policies of these schools, as described by Ward Churchill in his book Kill the Indian, Save the Man, were to forcefully assimilate students who were often removed with force from their families. These schools forbid students from using their native languages and participating in their own cultural practices. Residential schools were largely run by Christian churches, operating in conjunction with Christian missions with minimal government oversight. The book, Stolen Lives: The Indigenous peoples of Canada and the Indian Residentials Schools, describes this form of operation: In a The New York Times op-ed, Gabrielle Scrimshaw describes her grandparents being forced to send her mother to one of these schools or risk imprisonment. After hiding her mother on "school pick up day" so as to avoid sending their daughter to institutions whose abuse was well known at the time (mid-20th century). Scrimshaw's mother was left with limited options for further education she says and is today illiterate as a result. Scrimshaw explains "Seven generations of my ancestors went through these schools. 
Each new family member enrolled meant a compounding of abuse and a steady loss of identity, culture and hope. My mother was the last generation. The experience left her broken, and like so many, she turned to substances to numb these pains." A report republished by CBC News estimates that nearly 6,000 children died in the care of these schools.

The colonisation of native peoples in North America remains active today despite the closing of the majority of residential schools. This form of cultural imperialism continues in the use of Native Americans as mascots for schools and athletic teams. Jason Edward Black, a professor and chair in the Department of Communication Studies at the University of North Carolina at Charlotte, describes how the use of Native Americans as mascots furthers the colonial attitudes of the 18th and 19th centuries. In Deciphering Pocahontas, Kent Ono and Derek Buescher wrote: "Euro-American culture has made a habit of appropriating, and redefining what is 'distinctive' and constitutive of Native Americans."

Nazi colonialism

Cultural imperialism has also been used in connection with the expansion of German influence under the Nazis in the middle of the twentieth century. Alan Steinweis and Daniel Rogers note that even before the Nazis came to power, "Already in the Weimar Republic, German academic specialists on eastern Europe had contributed through their publications and teaching to the legitimization of German territorial revanchism and cultural imperialism. These scholars operated primarily in the disciplines of history, economics, geography, and literature."

In the area of music, Michael Kater writes that during the WWII German occupation of France, Hans Rosbaud, a German conductor based in Strasbourg by the Nazi regime, became "at least nominally, a servant of Nazi cultural imperialism directed against the French."

In Italy during the war, Germany pursued "a European cultural front that gravitates around German culture". The Nazi propaganda minister Joseph Goebbels set up the European Union of Writers, "one of Goebbels's most ambitious projects for Nazi cultural hegemony. Presumably a means of gathering authors from Germany, Italy, and the occupied countries to plan the literary life of the new Europe, the union soon emerged as a vehicle of German cultural imperialism."

For other parts of Europe, Robert Gerwarth, writing about cultural imperialism and Reinhard Heydrich, states that the "Nazis' Germanization project was based on a historically unprecedented programme of racial stock-taking, theft, expulsion and murder." Also, "The full integration of the [Czech] Protectorate into this New Order required the complete Germanization of the Protectorate's cultural life and the eradication of indigenous Czech and Jewish culture."

The actions of Nazi Germany reflect the significant role that notions of race and culture play in imperialism. The idea that there is a distinction between the Germans and the Jews created the illusion among Germans that they were superior to the Jewish "inferiors", the notion of us/them and self/others.

Western imperialism

Cultural imperialism manifests in the Western world in the form of the legal system, including the commodification and marketing of indigenous resources (for example medicinal, spiritual or artistic) and genetic resources (for example human DNA).

Americanization

The terms "McDonaldization", "Disneyization" and "Cocacolonization" have been coined to describe the spread of Western cultural influence.
There are many countries affected by the US and its pop culture. For example, Nigeria's film industry, referred to as "Nollywood", is the world's second largest, producing more films annually than the United States; its films are shown across Africa. Another term that describes the spread of Western cultural influence is "Hollywoodization": the promotion of American culture through Hollywood films, which can culturally affect the viewers of those films.

See also

Related negative concepts:
Civilizing mission
Colonial mentality
Cultural appropriation
Cultural assimilation
Cultural genocide
Ethnocide
Green imperialism
Linguistic imperialism
Scientific imperialism

Impact:
Cross-culturalism
Cultural cringe
Cultural revolution
Globalization
Revanchism
Right to exist
Transculturation

Related examples:
Albanisation
Americanization
Arabization
Bulgarization
Chilenization
Dutchification
Europeanisation
Francization
Hawaiianization
Hispanicization
Persianization
Russification
Serbianisation
Sinicization
Sovietization
Thaification
Westernization

Theocultural:
Christianization
Islamization

External links

"In Praise of Cultural Imperialism?", by David Rothkopf, Foreign Policy no. 107, Summer 1997, pp. 38–53, which argues that cultural imperialism is a positive thing.
"Reconsidering cultural imperialism theory" by Livingston A. White, Transnational Broadcasting Studies no. 6, Spring/Summer 2001, which argues that the idea of media imperialism is outdated.
Academic Web page from 24 February 2000, discussing the idea of cultural imperialism.
"Cultural Imperialism", BBC Radio 4 discussion with Linda Colley, Phillip Dodd and Mary Beard (In Our Time, 27 June 2002).
https://en.wikipedia.org/wiki/Charleston
Charleston
Charleston most commonly refers to:
Charleston, South Carolina
Charleston, West Virginia, the state capital
Charleston (dance)

Charleston may also refer to:

Places

Australia
Charleston, South Australia

Canada
Charleston, Newfoundland and Labrador
Charleston, Nova Scotia

New Zealand
Charleston, New Zealand

United Kingdom
Charleston Farmhouse, Sussex, an artists' house open to the public
Charleston, Angus, near Dundee, Scotland
Charleston, Dundee, Scotland
Charleston, Paisley, Scotland

United States
Charleston, Arizona
Charleston, Arkansas
Charleston, Illinois
Charleston, Iowa
Charleston, Kansas
Charleston, Kentucky
Charleston, Maine
Charleston, Mississippi
Charleston, Missouri
Charleston, Nevada
Charleston, New Jersey
Charleston, New York
Charleston, Staten Island, in New York City, New York
Charleston, North Carolina
Charleston, Oklahoma
Charleston, Oregon
Charleston, Tennessee
Charleston, Utah
Charleston, Vermont
Charleston County, South Carolina
Charleston Township, Coles County, Illinois
Charleston Township, Kalamazoo County, Michigan
Charleston Township, Tioga County, Pennsylvania
Mount Charleston, Nevada, Clark County, a town
Mount Charleston, Nevada, a mountain
North Charleston, South Carolina
South Charleston, Ohio
South Charleston, West Virginia
West Charleston, Ohio

Naval history
USS Charleston, several US Navy ships
Charleston, later the Texan schooner Zavala

Railway stations
Charleston station (West Virginia), US
North Charleston station, South Carolina, US

Education
Charleston Collegiate School, South Carolina
Charleston High School (disambiguation)
College of Charleston, in South Carolina
College of Charleston Cougars, athletic teams nicknamed "Charleston"
University of Charleston, West Virginia
Charleston Golden Eagles, athletic team
Charleston Academy, Inverness, Scotland

Music
"Charleston" (1923 song)
"Charleston", a song by Brendan James
Charleston (Den Harrow song)
"Charleston", a song by Sons of Bill
"Charleston", a track on the 1979 Mike Oldfield album Platinum

Other uses
Charleston (name)
Charleston (novel), by John Jakes, 2002
Charleston, a 1981 novel by Alexandra Ripley
Charleston (1974 film), Italy
Charleston (1977 film), Italy
Charleston Open, a tennis tournament in Charleston, South Carolina
Charleston, a procedure in mahjong
Charleston, a model of the Citroën 2CV car

See also
Charleston metropolitan area (disambiguation)
Charlestown (disambiguation)
Charlton (disambiguation)
Charlottetown (disambiguation)
https://en.wikipedia.org/wiki/Cuboctahedron
Cuboctahedron
A cuboctahedron is a polyhedron with 8 triangular faces and 6 square faces. A cuboctahedron has 12 identical vertices, with 2 triangles and 2 squares meeting at each, and 24 identical edges, each separating a triangle from a square. As such, it is a quasiregular polyhedron, i.e. an Archimedean solid that is not only vertex-transitive but also edge-transitive. It is radially equilateral. Its dual polyhedron is the rhombic dodecahedron.

The cuboctahedron was probably known to Plato: Heron's Definitiones quotes Archimedes as saying that Plato knew of a solid made of 8 triangles and 6 squares.

Synonyms

Vector Equilibrium (Buckminster Fuller), because its center-to-vertex radius equals its edge length (it has radial equilateral symmetry). Fuller also called a cuboctahedron built of rigid struts and flexible vertices a jitterbug; this object can be progressively transformed into an icosahedron, octahedron, and tetrahedron by folding along the diagonals of its square sides.
With Oh symmetry, order 48, it is a rectified cube or rectified octahedron (Norman Johnson).
With Td symmetry, order 24, it is a cantellated tetrahedron or rhombitetratetrahedron.
With D3d symmetry, order 12, it is a triangular gyrobicupola.

Orthogonal projections

The cuboctahedron has four special orthogonal projections, centered on a vertex, an edge, and the two types of faces, triangular and square. The last two correspond to the B2 and A2 Coxeter planes. The skew projections show a square and hexagon passing through the center of the cuboctahedron.

Spherical tiling

The cuboctahedron can also be represented as a spherical tiling, and projected onto the plane via a stereographic projection. This projection is conformal, preserving angles but not areas or lengths. Straight lines on the sphere are projected as circular arcs on the plane.

Structure

Coordinates

The Cartesian coordinates for the vertices of a cuboctahedron (of edge length √2) centered at the origin are:
(±1, ±1, 0)
(±1, 0, ±1)
(0, ±1, ±1)

An alternate set of coordinates can be made in 4-space, as 12 permutations of:
(0, 1, 1, 2)

This construction exists as one of 16 orthant facets of the cantellated 16-cell.

Root vectors

The cuboctahedron's 12 vertices can represent the root vectors of the simple Lie group A3. With the addition of 6 vertices of the octahedron, these vertices represent the 18 root vectors of the simple Lie group B3.

Metric properties

The area A and the volume V of the cuboctahedron of edge length a are:

A = (6 + 2√3) a² ≈ 9.4641 a²
V = (5/3)√2 a³ ≈ 2.3570 a³

Dissection

Tetrahedra and octahedra

The cuboctahedron can be dissected into 6 square pyramids and 8 tetrahedra meeting at a central point. This dissection is expressed in the tetrahedral-octahedral honeycomb, where pairs of square pyramids are combined into octahedra.

Irregular polyhedra

The cuboctahedron can be dissected into two triangular cupolas by a common hexagon passing through its center. If these two triangular cupolas are twisted so that triangles and squares line up, Johnson solid J27, the triangular orthobicupola, is created.

Geometric relations

Radial equilateral symmetry

In a cuboctahedron, the long radius (center to vertex) is the same as the edge length; thus its long diameter (vertex to opposite vertex) is 2 edge lengths. Its center is like the apical vertex of a pyramid: one edge length away from all the other vertices. (In the case of the cuboctahedron, the center is in fact the apex of 6 square and 8 triangular pyramids.)
This radial equilateral symmetry is a property of only a few uniform polytopes, including the two-dimensional hexagon, the three-dimensional cuboctahedron, and the four-dimensional 24-cell and 8-cell (tesseract). Radially equilateral polytopes are those which can be constructed, with their long radii, from equilateral triangles which meet at the center of the polytope, each contributing two radii and an edge. Therefore, all the interior elements which meet at the center of these polytopes have equilateral-triangle inward faces, as in the dissection of the cuboctahedron into 6 square pyramids and 8 tetrahedra.

Each of these radially equilateral polytopes also occurs as cells of a characteristic space-filling tessellation: the tiling of regular hexagons, the rectified cubic honeycomb (of alternating cuboctahedra and octahedra), the 24-cell honeycomb, and the tesseractic honeycomb, respectively. Each tessellation has a dual tessellation; the cell centers in a tessellation are cell vertices in its dual tessellation. The densest known regular sphere-packing in two, three and four dimensions uses the cell centers of one of these tessellations as sphere centers.

A cuboctahedron has octahedral symmetry. Its first stellation is the compound of a cube and its dual octahedron, with the vertices of the cuboctahedron located at the midpoints of the edges of either.

Constructions

A cuboctahedron can be obtained by taking an equatorial cross section of a four-dimensional 24-cell or 16-cell. A hexagon or a square can be obtained by taking an equatorial cross section of a cuboctahedron.

The cuboctahedron is a rectified cube and also a rectified octahedron. It is also a cantellated tetrahedron; with this construction it is given the Wythoff symbol 3 3 | 2. A skew cantellation of the tetrahedron produces a solid with faces parallel to those of the cuboctahedron, namely eight triangles of two sizes and six rectangles. While its edges are unequal, this solid remains vertex-uniform: the solid has the full tetrahedral symmetry group and its vertices are equivalent under that group.

The edges of a cuboctahedron form four regular hexagons. If the cuboctahedron is cut in the plane of one of these hexagons, each half is a triangular cupola, one of the Johnson solids; the cuboctahedron itself thus can also be called a triangular gyrobicupola, the simplest of a series (other than the gyrobifastigium or "digonal gyrobicupola"). If the halves are put back together with a twist, so that triangles meet triangles and squares meet squares, the result is another Johnson solid, the triangular orthobicupola, also called an anticuboctahedron.

Both triangular bicupolae are important in sphere packing. The distance from the solid's center to its vertices is equal to its edge length. Each central sphere can have up to twelve neighbors, and in a face-centered cubic lattice these take the positions of a cuboctahedron's vertices. In a hexagonal close-packed lattice they correspond to the corners of the triangular orthobicupola. In both cases the central sphere takes the position of the solid's center.

Cuboctahedra appear as cells in three of the convex uniform honeycombs and in nine of the convex uniform 4-polytopes.

The volume of the cuboctahedron is 5/6 of that of the enclosing cube and 5/8 of that of the enclosing octahedron.
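These metric claims are easy to verify numerically from the coordinates given above. The following sketch is ours, not part of the article, and assumes NumPy and SciPy are available; it enumerates the 12 vertices, confirms radial equilaterality and the edge count, and checks the area, volume, and volume-ratio formulas.

```python
# Numerical sanity check of the cuboctahedron's metric properties.
# A minimal sketch (not from the article), assuming NumPy and SciPy.
import math
from itertools import combinations

import numpy as np
from scipy.spatial import ConvexHull

# The 12 vertices: permutations of (+-1, +-1, 0); edge length a = sqrt(2).
verts = set()
for s1 in (1, -1):
    for s2 in (1, -1):
        verts.update({(s1, s2, 0), (s1, 0, s2), (0, s1, s2)})
V = np.array(sorted(verts), dtype=float)
assert len(V) == 12

a = math.sqrt(2)

# Radial equilaterality: every vertex lies at distance a from the center.
assert np.allclose(np.linalg.norm(V, axis=1), a)

# The 24 edges are exactly the vertex pairs at the minimum distance a.
dists = [np.linalg.norm(V[i] - V[j]) for i, j in combinations(range(12), 2)]
assert sum(math.isclose(d, a) for d in dists) == 24

# Surface area and volume: A = (6 + 2*sqrt(3)) a^2, V = (5/3) sqrt(2) a^3.
hull = ConvexHull(V)
assert math.isclose(hull.area, (6 + 2 * math.sqrt(3)) * a**2)
assert math.isclose(hull.volume, (5 / 3) * math.sqrt(2) * a**3)

# Volume ratios: 5/6 of the enclosing cube (edge 2, volume 8) and 5/8 of
# the enclosing octahedron (vertices (+-2, 0, 0), (0, +-2, 0), (0, 0, +-2)).
octahedron = ConvexHull(np.array(
    [(2, 0, 0), (-2, 0, 0), (0, 2, 0), (0, -2, 0), (0, 0, 2), (0, 0, -2)],
    dtype=float))
assert math.isclose(hull.volume / 8.0, 5 / 6)
assert math.isclose(hull.volume / octahedron.volume, 5 / 8)

print("all metric checks pass")
```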
Vertex arrangement

Because it is radially equilateral, the cuboctahedron's center can be treated as a 13th canonical apical vertex, one edge length distant from the 12 ordinary vertices, just as the apex of a canonical pyramid is one edge length equidistant from its other vertices.

The cuboctahedron shares its edges and vertex arrangement with two nonconvex uniform polyhedra: the cubohemioctahedron (having the square faces in common) and the octahemioctahedron (having the triangular faces in common), both of which have four hexagons. It also serves as a cantellated tetrahedron, being a rectified tetratetrahedron.

The cuboctahedron 2-covers the tetrahemihexahedron, which accordingly has the same abstract vertex figure (two triangles and two squares: 3.4.3.4) and half the vertices, edges, and faces. (The actual vertex figure of the tetrahemihexahedron is 3.4.3/2.4, with the 3/2 factor due to the cross.)

Kinematics

When interpreted as a framework of rigid flat faces connected along the edges by hinges, the cuboctahedron is a rigid structure, as are all convex polyhedra, by Cauchy's theorem. However, when the faces are removed, leaving only rigid edges connected by flexible joints at the vertices, the result is not a rigid system (unlike polyhedra whose faces are all triangles, to which Cauchy's theorem applies despite the missing faces). Adding a central vertex, connected by rigid edges to all the other vertices, subdivides the cuboctahedron into square pyramids and tetrahedra meeting at the central vertex. Unlike the cuboctahedron itself, the resulting system of edges and joints is rigid, and forms part of the infinite octet truss structure.

Related polytopes

Regular polyhedra

The cuboctahedron is one of a family of uniform polyhedra related to the cube and regular octahedron. The cuboctahedron also has tetrahedral symmetry with two colors of triangles.

Quasiregular polyhedra and tilings

The cuboctahedron exists in a sequence of symmetries of quasiregular polyhedra and tilings with vertex configurations (3.n)2, progressing from tilings of the sphere to the Euclidean plane and into the hyperbolic plane. With orbifold notation symmetry of *n32, all of these tilings are Wythoff constructions within a fundamental domain of symmetry, with generator points at the right-angle corner of the domain.

This polyhedron is topologically related as part of a sequence of cantellated polyhedra with vertex figure (3.4.n.4), which continues as tilings of the hyperbolic plane. These vertex-transitive figures have (*n32) reflectional symmetry.

4-dimensional polytopes

The cuboctahedron can be decomposed into a regular octahedron and eight irregular but equal octahedra in the shape of the convex hull of a cube with two opposite vertices removed. This decomposition of the cuboctahedron corresponds with the cell-first parallel projection of the 24-cell into three dimensions. Under this projection, the cuboctahedron forms the projection envelope, which can be decomposed into six square faces, a regular octahedron, and eight irregular octahedra. These elements correspond to the images of six of the octahedral cells in the 24-cell, the nearest and farthest cells from the 4D viewpoint, and the remaining eight pairs of cells, respectively.

Cuboctahedral graph

In the mathematical field of graph theory, the cuboctahedral graph is the graph of vertices and edges of the cuboctahedron, one of the Archimedean solids. It can also be constructed as the line graph of the cube, as sketched below.
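The line-graph construction is easy to check directly. The sketch below is ours, not part of the article, and assumes the NetworkX library; it builds the line graph of the cube graph and verifies the vertex, edge, regularity, and triangle counts stated here.

```python
# Build the cuboctahedral graph as the line graph of the cube graph.
# A minimal sketch (not from the article), assuming the NetworkX library.
import networkx as nx

cube = nx.hypercube_graph(3)   # the cube graph Q3: 8 vertices, 12 edges
cubocta = nx.line_graph(cube)  # one node per cube edge; adjacency = shared endpoint

assert cubocta.number_of_nodes() == 12
assert cubocta.number_of_edges() == 24
assert all(deg == 4 for _, deg in cubocta.degree())  # quartic (4-regular)

# Locally linear: each vertex lies in exactly 2 triangles, so each of its
# 4 incident edges lies in exactly one triangle.
assert all(t == 2 for t in nx.triangles(cubocta).values())
```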
It has 12 vertices and 24 edges, is locally linear, and is a quartic Archimedean graph. See also Icosidodecahedron Pseudocuboctahedron Rhombicuboctahedron Truncated cuboctahedron Tetradecahedron Snub cube Notes References Bibliography Cromwell, P. Polyhedra, CUP hbk (1997), pbk. (1999), Ch. 2, pp. 79–86: Archimedean solids. External links The Uniform Polyhedra Virtual Reality Polyhedra The Encyclopedia of Polyhedra The Cuboctahedron on Hexnet a website devoted to hexagon mathematics. Editable printable net of a Cuboctahedron with interactive 3D view Archimedean solids Quasiregular polyhedra
2,843
6,291
https://en.wikipedia.org/wiki/Roman%20censor
Roman censor
The censor (at any time, there were two) was a magistrate in ancient Rome who was responsible for maintaining the census, supervising public morality, and overseeing certain aspects of the government's finances. The power of the censor was absolute: no magistrate could oppose his decisions, and only another censor who succeeded him could cancel those decisions. The censor's regulation of public morality is the origin of the modern meaning of the words censor and censorship. Early history of the magistracy The census was first instituted by Servius Tullius, sixth king of Rome, in the 6th century BC. After the abolition of the monarchy and the founding of the Republic in 509 BC, the consuls had responsibility for the census until 443 BC. In 442 BC, no consuls were elected, but tribunes with consular power were appointed instead. This was a move by the plebeians to try to attain higher magistracies: only patricians could be elected consuls, while some military tribunes were plebeians. To prevent the possibility of plebeians obtaining control of the census, the patricians removed the right to take the census from the consuls and tribunes, and appointed for this duty two magistrates, called censores (censors), elected exclusively from the patricians in Rome. The magistracy continued to be controlled by patricians until 351 BC, when Gaius Marcius Rutilus was appointed the first plebeian censor. Twelve years later, in 339 BC, one of the Publilian laws required that one censor had to be a plebeian. Despite this, no plebeian censor performed the solemn purification of the people (the lustrum; Livy Periochae 13) until 280 BC. In 131 BC, for the first time, both censors were plebeians. The reason for having two censors was that the two consuls had previously taken the census together. If one of the censors died during his term of office, another was chosen to replace him, just as with consuls. This happened only once, in 393 BC. However, the Gauls captured Rome in that lustrum (five-year period), and the Romans thereafter regarded such replacement as "an offense against religion". From then on, if one of the censors died, his colleague resigned, and two new censors were chosen to replace them. The office of censor was soon limited to eighteen months by a law of the dictator Mamercus Aemilius Mamercinus, and it was therefore of less importance in the 5th and 4th centuries BC. However, during the censorship of Appius Claudius Caecus (312–308 BC) the prestige of the censorship increased massively. Caecus built the first-ever Roman road (the Via Appia) and the first Roman aqueduct (the Aqua Appia), both named after him. He changed the organisation of the Roman tribes and was the first censor to draw up the list of senators. He also advocated the founding of Roman coloniae throughout Latium and Campania to support the Roman war effort in the Second Samnite War. With these efforts and reforms, Appius Claudius Caecus was able to hold the censorship for a whole lustrum (five-year period), and the office of censor, subsequently entrusted with various important duties, eventually became one of the highest political offices of the Roman Republic, second only to the consulship. Election The censors were elected in the Centuriate Assembly, which met under the presidency of a consul.
Barthold Niebuhr suggests that the censors were at first elected by the Curiate Assembly, and that the Assembly's selections were confirmed by the Centuriate, but William Smith believes that "there is no authority for this supposition, and the truth of it depends entirely upon the correctness of [Niebuhr's] views respecting the election of the consuls". Both censors had to be elected on the same day, and accordingly, if the voting for the second was not finished on the same day, the election of the first was invalidated and a new assembly had to be held. The assembly for the election of the censors was held under different auspices from those at the election of the consuls and praetors, so the censors were not regarded as their colleagues, although they likewise possessed the maxima auspicia. The assembly was held by the new consuls shortly after they began their term of office; and the censors, as soon as they were elected and the censorial power had been granted to them by a decree of the Centuriate Assembly (lex centuriata), were fully installed in their office. As a general principle, the only ones eligible for the office of censor were those who had previously been consuls, but there were a few exceptions. At first, there was no law to prevent a person from being censor twice, but the only person who was elected to the office twice was Gaius Marcius Rutilus in 265 BC. In that year, he originated a law stating that no one could be elected censor twice. In consequence of this, he received the cognomen of Censorinus. Attributes The censorship differed from all other Roman magistracies in the length of office. The censors were originally chosen for a whole lustrum (a period of five years), but as early as ten years after its institution (433 BC) their office was limited to eighteen months by a law of the dictator Mamercus Aemilius Mamercinus. The censors were also unique with respect to rank and dignity. They had no imperium, and accordingly no lictors. Their rank was granted to them by the Centuriate Assembly, and not by the curiae, and in that respect they were inferior in power to the consuls and praetors. Notwithstanding this, the censorship was regarded as the highest dignity in the state, with the exception of the dictatorship; it was a "sacred magistracy" (sanctus magistratus), to which the deepest reverence was due. The high rank and dignity which the censorship obtained was due to the various important duties gradually entrusted to it, and especially to its possessing the regimen morum, or general control over the conduct and the morals of the citizens. In the exercise of this power, they were regulated solely by their own views of duty, and were not responsible to any other power in the state. The censors possessed the official stool called a "curule chair" (sella curulis), but some doubt exists with respect to their official dress. A well-known passage of Polybius describes the use of the imagines at funerals; we may conclude that a consul or praetor wore the purple-bordered toga praetexta, one who triumphed the embroidered toga picta, and the censor a purple toga peculiar to him, but other writers speak of their official dress as being the same as that of the other higher magistrates. The funeral of a censor was always conducted with great pomp and splendour, and hence a "censorial funeral" (funus censorium) was voted even to the emperors.
Abolition The censorship continued in existence for 421 years, from 443 BC to 22 BC, but during this period, many lustra passed by without any censor being chosen at all. According to one statement, the office was abolished by Lucius Cornelius Sulla. Although the authority on which this statement rests is not of much weight, the fact itself is probable, since there was no census during the two lustra which elapsed from Sulla's dictatorship to Gnaeus Pompeius Magnus (Pompey)'s first consulship (82–70 BC), and any strict "imposition of morals" would have been found inconvenient to the aristocracy that supported Sulla. If the censorship had been done away with by Sulla, it was at any rate restored in the consulship of Pompey and Marcus Licinius Crassus. Its power was limited by one of the laws of the tribune Publius Clodius Pulcher (58 BC), which prescribed certain regular forms of proceeding before the censors in expelling a person from the Roman Senate, and required that the censors be in agreement to exact this punishment. This law, however, was repealed in the third consulship of Pompey in 52 BC, on the urging of his colleague Q. Caecilius Metellus Scipio, but the office of the censorship never recovered its former power and influence. During the civil wars which followed soon afterwards, no censors were elected; it was only after a long interval that they were again appointed, namely in 22 BC, when Augustus caused Lucius Munatius Plancus and Aemilius Lepidus Paullus to fill the office. This was the last time that such magistrates were appointed; the emperors in future discharged the duties of their office under the name of Praefectura Morum ("prefecture of morals"). Some of the emperors took the name of censor when they held a census of the Roman people; this was the case with Claudius, who appointed the elder Lucius Vitellius as his colleague, and with Vespasian, who likewise had a colleague in his son Titus. Domitian assumed the title of "perpetual censor" (censor perpetuus), but this example was not imitated by succeeding emperors. In the reign of Decius, we find the elder Valerian nominated to the censorship, but Valerian was never actually elected censor. Duties The duties of the censors may be divided into three classes, all of which were closely connected with one another: The Census, or register of the citizens and of their property, in which were included the reading of the Senate's lists (lectio senatus) and the recognition of who qualified for equestrian rank (recognitio equitum); The Regimen Morum, or keeping of the public morals; and The administration of the finances of the state, under which were classed the superintendence of the public buildings and the erection of all new public works. The original business of the censorship was at first of a much more limited kind, and was restricted almost entirely to taking the census, but the possession of this power gradually brought with it fresh power and new duties, as is shown below. A general view of these duties is briefly expressed in the following passage of Cicero: "Censores populi aevitates, soboles, familias pecuniasque censento: urbis templa, vias, aquas, aerarium, vectigalia tuento: populique partes in tribus distribunto: exin pecunias, aevitates, ordines patiunto: equitum, peditumque prolem describunto: caelibes esse prohibento: mores populi regunto: probrum in senatu ne relinquunto."
This can be translated as: "The Censors are to determine the generations, origins, families, and properties of the people; they are to (watch over/protect) the city's temples, roads, waters, treasury, and taxes; they are to distribute the people into their tribes; next, they are to (allow/approve) the properties, generations, and ranks [of the people]; they are to register the offspring of knights and footsoldiers; they are to forbid being unmarried; they are to guide the behavior of the people; they are not to overlook abuse in the Senate." Census The Census, the first and principal duty of the censors, was always held in the Campus Martius, and from the year 435 BC onwards, in a special building called Villa publica, which was erected for that purpose by the second pair of censors, Gaius Furius Pacilus Fusus and Marcus Geganius Macerinus. An account of the formalities with which the census was opened is given in a fragment of the Tabulae Censoriae, preserved by Varro. After the auspices had been taken, the citizens were summoned by a public crier to appear before the censors. Each tribe was called up separately, and the names in each tribe were probably taken according to the lists previously made out by the tribunes of the tribes. Every pater familias had to appear in person before the censors, who were seated in their curule chairs, and those names were taken first which were considered to be of good omen, such as Valerius, Salvius, Statorius, etc. The Census was conducted according to the judgement of the censor (ad arbitrium censoris), but the censors laid down certain rules, sometimes called leges censui censendo, in which mention was made of the different kinds of property subject to the census, and in what way their value was to be estimated. According to these laws, each citizen had to give an account of himself, of his family, and of his property upon oath, "declared from the heart". First he had to give his full name (praenomen, nomen, and cognomen) and that of his father, or if he were a libertus ("freedman") that of his patron, and he was likewise obliged to state his age. He was then asked, "You, declaring from your heart, do you have a wife?" and if married he had to give the name of his wife, and likewise the number, names, and ages of his children, if any. Single women and orphans were represented by their guardians; their names were entered in separate lists, and they were not included in the sum total of heads. After a citizen had stated his name, age, family, etc., he then had to give an account of all his property, so far as it was subject to the census. Only such things were liable to the census (censui censendo) as were property according to the Quiritarian law. At first, each citizen appears to have merely given the value of his whole property in general without entering into details; but it soon became the practice to give a minute specification of each article, as well as the general value of the whole. Land formed the most important article of the census, but public land, of which a citizen had only possession (not ownership), was excluded as not being Quiritarian property. Judging from the practice of the imperial period, it was the custom to give a most minute specification of all such land as a citizen held according to the Quiritarian law. He had to state the name and location of the land, and to specify what portion of it was arable, what meadow, what vineyard, and what olive-ground: and of the land thus described, he had to give his assessment of its value.
Slaves and cattle formed the next most important item. The censors also possessed the right of calling for a return of such objects as had not usually been given in, such as clothing, jewels, and carriages. Some modern writers have doubted whether the censors possessed the power of setting a higher valuation on the property than the citizens themselves gave; but given the discretionary nature of the censors' powers, and the near necessity, in order to prevent fraud, that the right of making a surcharge be vested in somebody's hands, it is likely that the censors had this power. It is moreover expressly stated that on one occasion they made an extravagant surcharge on articles of luxury; and even if they did not enter in their books the property of a person at a higher value than he returned it, they accomplished the same end by compelling him to pay a tax upon the property at a higher rate than others. The tax was usually one per thousand upon the property entered in the books of the censors, but on one occasion the censors compelled a person to pay eight per thousand as a punishment. A person who voluntarily absented himself from the census was considered incensus and subject to the severest punishment. Servius Tullius is said to have threatened such individuals with imprisonment and death, and in the Republican period such a person might be sold by the state as a slave. In the later period of the Republic, a person who was absent from the census might be represented by another, and be thus registered by the censors. Whether the soldiers who were absent on service had to appoint a representative is uncertain. In ancient times, the sudden outbreaks of war prevented the census from being taken, because a large number of the citizens would necessarily be absent. It is supposed from a passage in Livy that in later times the censors sent commissioners into the provinces with full powers to take the census of the Roman soldiers there, but this seems to have been a special case. It is, on the contrary, probable from the way in which Cicero pleads the absence of Archias from Rome with the army under Lucullus, as a sufficient reason for his not having been enrolled in the census, that service in the army was a valid excuse for absence. After the censors had received the names of all the citizens with the amount of their property, they then had to make out the lists of the tribes, and also of the classes and centuries; for by the legislation of Servius Tullius the position of each citizen in the state was determined by the amount of his property (Comitia Centuriata). These lists formed a most important part of the Tabulae Censoriae, under which name were included all the documents connected in any way with the discharge of the censors' duties. These lists, insofar as they were connected with the finances of the state, were deposited in the aerarium, located in the Temple of Saturn; but the regular depository for all the archives of the censors was in earlier times the Atrium Libertatis, near the Villa publica, and in later times the temple of the Nymphs.
Besides the division of the citizens into tribes, centuries, and classes, the censors had also to make out the lists of the senators for the ensuing five years, or until new censors were appointed, striking out the names of such as they considered unworthy, and making additions to the body from those who were qualified. In the same manner they held a review of the equites who received a horse from public funds (equites equo publico), and added and removed names as they judged proper. They also confirmed the princeps senatus, or appointed a new one. The princeps himself had to be a former censor. After the lists had been completed, the number of citizens was counted up, and the sum total announced. Accordingly, we find that in the account of a census, the number of citizens is likewise usually given. They are in such cases spoken of as capita ("heads"), sometimes with the addition of the word civium ("of the citizens"), and sometimes not. Hence, to be registered in the census was the same thing as "having a head" (caput habere). Census beyond Rome A census was sometimes taken in the provinces, even under the Republic. The emperor sent into the provinces special officers called censitores to take the census; but the duty was sometimes discharged by the Imperial legati. The censitores were assisted by subordinate officers, called censuales, who made out the lists, etc. In Rome, the census was still taken under the Empire, but the old ceremonies connected with it were no longer performed, and the ceremony of the lustratio was not performed after the time of Vespasian. The jurists Paulus and Ulpian each wrote works on the census in the imperial period; and several extracts from these works are given in a chapter in the Digest (50.15). Other uses of census The word census, besides the conventional meaning of "valuation" of a person's estate, had other meanings in Rome; it could refer to:
the amount of a person's property (hence we read of census senatorius, the estate of a senator; census equestris, the estate of an eques);
the lists of the censors;
the tax which depended upon the valuation in the census.
The Lexicons will supply examples of these meanings. Regimen morum Keeping the public morals (regimen morum, or in the Empire cura morum or praefectura morum) was the second most important branch of the censors' duties, and the one which caused their office to be one of the most revered and the most dreaded; hence they were also known as castigatores ("chastisers"). It naturally grew out of the right which they possessed of excluding persons from the lists of citizens; for, as has been well remarked, "they would, in the first place, be the sole judges of many questions of fact, such as whether a citizen had the qualifications required by law or custom for the rank which he claimed, or whether he had ever incurred any judicial sentence, which rendered him infamous: but from thence the transition was easy, according to Roman notions, to the decisions of questions of right; such as whether a citizen was really worthy of retaining his rank, whether he had not committed some act as justly degrading as those which incurred the sentence of the law." In this manner, the censors gradually assumed at least nominally complete superintendence over the whole public and private life of every citizen.
They were constituted as the conservators of public morality; they were not simply to prevent crime or particular acts of immorality, but rather to maintain the traditional Roman character, ethics, and habits (mos majorum)—regimen morum also encompassed this protection of traditional ways, which was called in the times of the Empire cura ("supervision") or praefectura ("command"). The punishment inflicted by the censors in the exercise of this branch of their duties was called nota ("mark, letter") or notatio, or animadversio censoria ("censorial reproach"). In inflicting it, they were guided only by their conscientious convictions of duty; they had to take an oath that they would act uninfluenced by partiality or favour; and, in addition to this, they were bound in every case to state in their lists, opposite the name of the guilty citizen, the cause of the punishment inflicted on him (the subscriptio censoria). This part of the censors' office invested them with a peculiar kind of jurisdiction, which in many respects resembled the exercise of public opinion in modern times; for there are innumerable actions which, though acknowledged by everyone to be prejudicial and immoral, still do not come within the reach of the positive laws of a country; as often said, "immorality does not equal illegality". Even in cases of real crimes, the positive laws frequently punish only the particular offence, while in public opinion the offender, even after he has undergone punishment, is still incapacitated for certain honours and distinctions which are granted only to persons of unblemished character. Hence, the Roman censors might brand a man with their "censorial mark" (nota censoria) in case he had been convicted of a crime in an ordinary court of justice, and had already suffered punishment for it. The consequence of such a nota was only ignominia and not infamia. The censorial verdict was not a judicium or res judicata, for its effects were not lasting, but might be removed by the following censors, or by a lex (roughly "law"). A censorial mark was moreover not valid unless both censors agreed. The ignominia was thus only a transitory reduction of status, which does not even appear to have deprived a magistrate of his office, and certainly did not disqualify persons labouring under it for obtaining a magistracy, for being appointed as judices by the praetor, or for serving in the Roman army. Mamercus Aemilius Mamercinus was thus, notwithstanding the reproach of the censors (animadversio censoria), made dictator. A person might be branded with a censorial mark in a variety of cases, which it would be impossible to specify, as in a great many instances it depended upon the discretion of the censors and the view they took of a case; and sometimes even one set of censors would overlook an offence which was severely chastised by their successors. But the offences which are recorded to have been punished by the censors are of a threefold nature. A person who had been branded with a nota censoria might, if he considered himself wronged, endeavour to prove his innocence to the censors, and if he did not succeed, he might try to gain the protection of one of the censors, so that the censor might intercede on his behalf. Punishments The punishments inflicted by the censors generally differed according to the station which a man occupied, though sometimes a person of the highest rank might suffer all the punishments at once, by being degraded to the lowest class of citizens.
The punishments are generally divided into four classes:
1. Motio ("removal") or ejectio e senatu ("ejection from the Senate"), the exclusion of a man from the ranks of senators. This punishment might either be a simple exclusion from the list of senators, or the person might at the same time be excluded from the tribes and degraded to the rank of an aerarian. The latter course seems to have been seldom adopted; the ordinary mode of inflicting the punishment was simply this: the censors in their new lists omitted the names of such senators as they wished to exclude, and in reading these new lists in public, quietly omitted the names of those who were no longer to be senators. Hence the expression praeteriti senatores ("senators passed over") is equivalent to e senatu ejecti (those removed from the Senate). In some cases, however, the censors did not acquiesce in this simple mode of proceeding, but addressed the senator whom they had noted, and publicly reprimanded him for his conduct. As in ordinary cases an ex-senator was not disqualified by his ignominia for holding any of the magistracies which opened the way to the Senate, he might at the next census again become a senator.
2. The ademptio equi, or the taking away of the publicly funded horse from an equestrian. This punishment might likewise be simple, or combined with the exclusion from the tribes and the degradation to the rank of an aerarian.
3. The motio e tribu, or the exclusion of a person from his tribe. This punishment and the degradation to the rank of an aerarian were originally the same, but when in the course of time a distinction was made between the rural or rustic tribes and the urban tribes, the motio e tribu transferred a person from the rustic tribes to the less respectable city tribes, and if the further degradation to the rank of an aerarian was combined with the motio e tribu, it was always expressly stated.
4. The referre in aerarios or facere aliquem aerarium, which might be inflicted on any person who was thought by the censors to deserve it. This degradation, properly speaking, included all the other punishments, for an equestrian could not be made an aerarius unless he was previously deprived of his horse, nor could a member of a rustic tribe be made an aerarius unless he was previously excluded from it.
It was this authority of the Roman censors which eventually developed into the modern meaning of "censor" and "censorship"—i.e., officials who review published material and forbid the publication of material judged to be contrary to "public morality" as the term is interpreted in a given political and social environment. Administration of the finances of the state The administration of the state's finances was another part of the censors' office. In the first place the tributum, or property-tax, had to be paid by each citizen according to the amount of his property registered in the census, and, accordingly, the regulation of this tax naturally fell under the jurisdiction of the censors. They also had the superintendence of all the other revenues of the state, the vectigalia, such as the tithes paid for the public lands, the salt works, the mines, the customs, etc. The censors typically auctioned off to the highest bidder for the space of a lustrum the collection of the tithes and taxes (tax farming).
This auctioning was called venditio or locatio, and seems to have taken place in the month of March, in a public place in Rome. The terms on which they were let, together with the rights and duties of the purchasers, were all specified in the leges censoriae, which the censors published in every case before the bidding commenced. For further particulars see Publicani. The censors also possessed the right, though probably not without the assent of the Senate, of imposing new vectigalia, and even of selling the land belonging to the state. It would thus appear that it was the duty of the censors to bring forward a budget for a five-year period, and to take care that the income of the state was sufficient for its expenditure during that time. In part, their duties resembled those of a modern minister of finance. The censors, however, did not receive the revenues of the state. All the public money was paid into the aerarium, which was entirely under the jurisdiction of the Senate; and all disbursements were made by order of this body, which employed the quaestors as its officers. Overseeing public works In one important department, the public works, the censors were entrusted with the expenditure of the public money (though the actual payments were no doubt made by the quaestors). The censors had the general superintendence of all the public buildings and works (opera publica), and to meet the expenses connected with this part of their duties, the Senate voted them a certain sum of money or certain revenues, to which they were restricted, but which they might at the same time employ according to their discretion. They had to see that the temples and all other public buildings were in a good state of repair, that no public places were encroached upon by the occupation of private persons, and that the aqueducts, roads, drains, etc. were properly attended to. The repairs of the public works and the keeping of them in proper condition were let out by the censors by public auction to the lowest bidder, just as the vectigalia were let out to the highest bidder. These expenses were called ultrotributa, and hence we frequently find vectigalia and ultrotributa contrasted with one another. The persons who undertook the contract were called conductores, mancipes, redemptores, susceptores, etc., and the duties they had to discharge were specified in the Leges Censoriae. The censors had also to superintend the expenses connected with the worship of the gods, even for instance the feeding of the sacred geese in the Capitol; these various tasks were also let out on contract. It was ordinary for censors to expend large amounts of public money in their public works, "by far the largest and most extensive" expenditures of the state. Besides keeping existing public buildings and facilities in a proper state of repair, the censors were also in charge of constructing new ones, either for ornament or utility, both in Rome and in other parts of Italy, such as temples, basilicae, theatres, porticoes, fora, aqueducts, town walls, harbours, bridges, cloacae, roads, etc. These works were either performed by them jointly, or they divided between them the money, which had been granted to them by the Senate. They were let out to contractors, like the other works mentioned above, and when they were completed, the censors had to see that the work was performed in accordance with the contract: this was called opus probare or in acceptum referre.
The first-ever Roman road, the Via Appia, and the first Roman aqueduct, the Aqua Appia, were both constructed under the censorship of Appius Claudius Caecus, one of the most influential censors. The aediles had likewise a superintendence over the public buildings, and it is not easy to define with accuracy the respective duties of the censors and aediles, but it may be remarked in general that the superintendence of the aediles had more of a police character, while that of the censors was more financial in character. Lustrum After the censors had performed their various duties and taken the five-yearly census, the lustrum, a solemn purification of the people, followed. When the censors entered upon their office, they drew lots to see which of them should perform this purification; but both censors were of course obliged to be present at the ceremony. Long after the Roman census was no longer taken, the Latin word lustrum has survived, and been adopted in some modern languages, in the derived sense of a period of five years, i.e., half a decennium. See also Birth registration in Ancient Rome Cursus honorum Lex Caecilia de censoria List of censors Outline of ancient Rome Political institutions of Rome Pauly–Wissowa References Citations Sources Brunt, P. A. Italian Manpower 225 BC – AD 14. Oxford, 1971. Virlouvet, C. Famines et émeutes à Rome, des origines de la République à la mort de Néron. Rome, 1985. Suder, W., and Góralczyk, E. "Sezonowość epidemii w Republice Rzymskiej." In Vitae historicae: Księga jubileuszowa dedykowana profesorowi Lechowi A. Tyszkiewiczowi w siedemdziesiątą rocznicę urodzin. Wrocław, 2001. Suolahti, J. The Roman Censors: A Study on Social Structure. Helsinki, 1963. Melnichuk, Y. Birth of the Roman Censorship: Exploring the Ancient Tradition of the Civil Control of Ancient Rome. Moscow, 2010. Ancient Roman titles Censor Cursus honorum Governmental auctions
2,851
6,293
https://en.wikipedia.org/wiki/Cairo
Cairo
Cairo ( ; , ) is the capital of Egypt and the city-state Cairo Governorate, and is the country's largest city, home to 10 million people. It is also part of the largest urban agglomeration in Africa, the Arab world and the Middle East: the Greater Cairo metropolitan area, with a population of 21.9 million, is the 12th-largest in the world by population. Cairo is associated with ancient Egypt, as the Giza pyramid complex and the ancient cities of Memphis and Heliopolis are located in its geographical area. Located near the Nile Delta, the city first developed as Fustat, a settlement founded after the Muslim conquest of Egypt in 640 next to an existing ancient Roman fortress, Babylon. Under the Fatimid dynasty a new city, al-Qāhirah, was founded nearby in 969. It later superseded Fustat as the main urban centre during the Ayyubid and Mamluk periods (12th–16th centuries). Cairo has long been a centre of the region's political and cultural life, and is titled "the city of a thousand minarets" for its preponderance of Islamic architecture. Cairo's historic center was awarded World Heritage Site status in 1979. Cairo is considered a World City with a "Beta +" classification according to GaWC. Today, Cairo has the oldest and largest cinema and music industry in the Arab world, as well as the world's second-oldest institution of higher learning, Al-Azhar University. Many international media, businesses, and organizations have regional headquarters in the city; the Arab League has had its headquarters in Cairo for most of its existence. With a population of over 10 million spread over , Cairo is by far the largest city in Egypt. An additional 9.5 million inhabitants live close to the city. Cairo, like many other megacities, suffers from high levels of pollution and traffic. The Cairo Metro, opened in 1987, is the oldest metro system in Africa, and ranks amongst the fifteen busiest in the world, with over 1 billion annual passenger rides. The economy of Cairo was ranked first in the Middle East in 2005, and 43rd globally on Foreign Policy's 2010 Global Cities Index. Etymology Egyptians often refer to Cairo as (; ), the Egyptian Arabic name for Egypt itself, emphasizing the city's importance for the country. Its official name () means 'the Vanquisher' or 'the Conqueror', supposedly because the planet Mars (, 'the Conquering Star') was rising at the time when the city was founded, possibly also in reference to the much-awaited arrival of the Fatimid Caliph Al-Mu'izz, who reached Cairo in 973 from Mahdia, the old Fatimid capital. The site of the ancient city of Heliopolis is now the suburb of Ain Shams (, 'Eye of the Sun'). There are a few Coptic names of the city. Tikešrōmi ( Late Coptic: ) is attested in the 1211 text The Martyrdom of John of Phanijoit and is either a calque meaning 'man breaker' (, 'the', , 'to break', and , 'man'), akin to Arabic , or a derivation from Arabic (qaṣr ar-rūm, "the Roman castle"), another name of Babylon Fortress in Old Cairo. The form Khairon () is attested in the modern Coptic text Ⲡⲓⲫⲓⲣⲓ ⲛ̀ⲧⲉ ϯⲁⲅⲓⲁ ⲙ̀ⲙⲏⲓ Ⲃⲉⲣⲏⲛⲁ (The Tale of Saint Verina). ( Late Coptic: ) or ( Late Coptic: ) is another name which is descended from the Greek name of Heliopolis (). Some argue that ( Late Coptic: ) or ( Late Coptic: ) is another Coptic name for Cairo, although others think that it is rather the name of the Abbasid capital al-Askar. () is a popular modern rendering of an Arabic name (others being [Kairon] and [Kahira]), a modern folk etymology meaning 'land of sun'.
Some argue that it was a name of an Egyptian settlement upon which Cairo was built, but this is rather doubtful, as the name is not attested in any Hieroglyphic or Demotic source, although some researchers, like Paul Casanova, view it as a legitimate theory. Cairo is also referred to as (Late Coptic: ) or (Late Coptic: ), which means Egypt in Coptic, the same way it is referred to in Egyptian Arabic. Sometimes the city is informally referred to as by people from Alexandria (; ). History Ancient settlements The area around present-day Cairo had long been a focal point of Ancient Egypt due to its strategic location at the junction of the Nile Valley and the Nile Delta regions (roughly Upper Egypt and Lower Egypt), which also placed it at the crossing of major routes between North Africa and the Levant. Memphis, the capital of Egypt during the Old Kingdom and a major city up until the Ptolemaic period, was located a short distance south of present-day Cairo. Heliopolis, another important city and major religious center, was located in what are now the northeastern suburbs of Cairo. It was largely destroyed by the Persian invasions in 525 BC and 343 BC and partly abandoned by the late first century BC. However, the origins of modern Cairo are generally traced back to a series of settlements in the first millennium AD. Around the turn of the fourth century, as Memphis was continuing to decline in importance, the Romans established a large fortress along the east bank of the Nile. The fortress, called Babylon, was built by the Roman emperor Diocletian (r. 285–305) at the entrance of a canal connecting the Nile to the Red Sea that was created earlier by emperor Trajan (r. 98–117). Further north of the fortress, near the present-day district of al-Azbakiya, was a port and fortified outpost known as Tendunyas () or Umm Dunayn. While no structures older than the 7th century have been preserved in the area aside from the Roman fortifications, historical evidence suggests that a sizeable city existed. The city was important enough that its bishop, Cyrus, participated in the Second Council of Ephesus in 449. However, the Byzantine–Sassanian War between 602 and 628 caused great hardship and likely caused much of the urban population to leave for the countryside, leaving the settlement partly deserted. The site today remains at the nucleus of the Coptic Orthodox community, which separated from the Roman and Byzantine churches in the 5th century. Cairo's oldest extant churches, such as the Church of Saint Barbara and the Church of Saints Sergius and Bacchus (from the late 7th or early 8th century), are located inside the fortress walls in what is now known as Old Cairo or Coptic Cairo. Fustat and other early Islamic settlements The Muslim conquest of Byzantine Egypt was led by Amr ibn al-As from 639 to 642. Babylon Fortress was besieged in September 640 and fell in April 641. In 641 or early 642, after the surrender of Alexandria (the Egyptian capital at the time), he founded a new settlement next to Babylon Fortress. The city, known as Fustat (), served as a garrison town and as the new administrative capital of Egypt. Historians such as Janet Abu-Lughod and André Raymond trace the genesis of present-day Cairo to the foundation of Fustat. The choice of founding a new settlement at this inland location, instead of using the existing capital of Alexandria on the Mediterranean coast, may have been due to the new conquerors' strategic priorities.
One of the first projects of the new Muslim administration was to clear and re-open Trajan's ancient canal in order to ship grain more directly from Egypt to Medina, the capital of the caliphate in Arabia. Ibn al-As also founded a mosque for the city at the same time, now known as the Mosque of Amr Ibn al-As, the oldest mosque in Egypt and Africa (although the current structure dates from later expansions). In 750, following the overthrow of the Umayyad caliphate by the Abbasids, the new rulers created their own settlement to the northeast of Fustat which became the new provincial capital. This was known as al-Askar () as it was laid out like a military camp. A governor's residence and a new mosque were also added, with the latter completed in 786. In 861, on the orders of the Abbasid caliph al-Mutawakkil, a Nilometer was built on Roda Island near Fustat. Although it was repaired and given a new roof in later centuries, its basic structure is still preserved today, making it the oldest preserved Islamic-era structure in Cairo. In 868 a commander of Turkic origin named Bakbak was sent to Egypt by the Abbasid caliph al-Mu'tazz to restore order after a rebellion in the country. He was accompanied by his stepson, Ahmad ibn Tulun, who became effective governor of Egypt. Over time, Ibn Tulun gained an army and accumulated influence and wealth, allowing him to become the de facto independent ruler of both Egypt and Syria by 878. In 870, he used his growing wealth to found a new administrative capital, al-Qata'i (), to the northeast of Fustat and of al-Askar. The new city included a palace known as the Dar al-Imara, a parade ground known as al-Maydan, a bimaristan (hospital), and an aqueduct to supply water. Between 876 and 879 Ibn Tulun built a great mosque, now known as the Mosque of Ibn Tulun, at the center of the city, next to the palace. After his death in 884, Ibn Tulun was succeeded by his son and his descendants who continued a short-lived dynasty, the Tulunids. In 905, the Abbasids sent general Muhammad Sulayman al-Katib to re-assert direct control over the country. Tulunid rule was ended and al-Qata'i was razed to the ground, except for the mosque, which remains standing today. Foundation and expansion of Cairo In 969, the Shi'a Isma'ili Fatimid empire conquered Egypt after ruling from Ifriqiya. The Fatimid general Jawhar al-Siqilli founded a new fortified city northeast of Fustat and of former al-Qata'i. It took four years to build the city, initially known as al-Manṣūriyyah, which was to serve as the new capital of the caliphate. During that time, the construction of the al-Azhar Mosque was commissioned by order of the caliph, which developed into the third-oldest university in the world. Cairo would eventually become a centre of learning, with the library of Cairo containing hundreds of thousands of books. When Caliph al-Mu'izz li Din Allah arrived from the old Fatimid capital of Mahdia in Tunisia in 973, he gave the city its present name, Qāhirat al-Mu'izz ("The Vanquisher of al-Mu'izz"), from which the name "Cairo" (al-Qāhira) originates. The caliphs lived in a vast and lavish palace complex that occupied the heart of the city. Cairo remained a relatively exclusive royal city for most of this era, but during the tenure of Badr al-Gamali as vizier (1073–1094) the restrictions were loosened for the first time and richer families from Fustat were allowed to move into the city.
Between 1087 and 1092 Badr al-Gamali also rebuilt the city walls in stone and constructed the city gates of Bab al-Futuh, Bab al-Nasr, and Bab Zuweila that still stand today. During the Fatimid period Fustat reached its apogee in size and prosperity, acting as a center of craftsmanship and international trade and as the area's main port on the Nile. Historical sources report that multi-story communal residences existed in the city, particularly in its center, which were typically inhabited by middle and lower-class residents. Some of these were as high as seven stories and could house some 200 to 350 people. They may have been similar to Roman insulae and may have been the prototypes for the rental apartment complexes which became common in the later Mamluk and Ottoman periods. However, in 1168 the Fatimid vizier Shawar set fire to unfortified Fustat to prevent its potential capture by Amalric, the Crusader king of Jerusalem. While the fire did not destroy the city and it continued to exist afterward, it did mark the beginning of its decline. Over the following centuries it was Cairo, the former palace-city, that became the new economic center and attracted migration from Fustat. While the Crusaders did not capture the city in 1168, a continuing power struggle between Shawar, King Amalric, and the Zengid general Shirkuh led to the downfall of the Fatimid establishment. In 1169, Shirkuh's nephew Saladin was appointed as the new vizier of Egypt by the Fatimids and two years later he seized power from the family of the last Fatimid caliph, al-'Āḍid. As the first Sultan of Egypt, Saladin established the Ayyubid dynasty, based in Cairo, and aligned Egypt with the Sunni Abbasids, who were based in Baghdad. In 1176, Saladin began construction on the Cairo Citadel, which was to serve as the seat of the Egyptian government until the mid-19th century. The construction of the Citadel definitively ended Fatimid-built Cairo's status as an exclusive palace-city and opened it up to common Egyptians and to foreign merchants, spurring its commercial development. Along with the Citadel, Saladin also began the construction of a new 20-kilometre-long wall that would protect both Cairo and Fustat on their eastern side and connect them with the new Citadel. These construction projects continued beyond Saladin's lifetime and were completed under his Ayyubid successors. Apogee and decline under the Mamluks In 1250, during the Seventh Crusade, the Ayyubid dynasty had a crisis with the death of al-Salih and power transitioned instead to the Mamluks, partly with the help of al-Salih's wife, Shajar ad-Durr, who ruled for a brief period around this time. Mamluks were soldiers who were purchased as young slaves and raised to serve in the sultan's army. Between 1250 and 1517 the throne of the Mamluk Sultanate passed from one mamluk to another in a system of succession that was generally non-hereditary, but also frequently violent and chaotic. The Mamluk Empire nonetheless became a major power in the region and was responsible for repelling the advance of the Mongols (most famously at the Battle of Ain Jalut in 1260) and for eliminating the last Crusader states in the Levant. Despite their military character, the Mamluks were also prolific builders and left a rich architectural legacy throughout Cairo. 
Continuing a practice started by the Ayyubids, much of the land occupied by former Fatimid palaces was sold and replaced by newer buildings, becoming a prestigious site for the construction of Mamluk religious and funerary complexes. Construction projects initiated by the Mamluks pushed the city outward while also bringing new infrastructure to the centre of the city. Meanwhile, Cairo flourished as a centre of Islamic scholarship and a crossroads on the spice trade route among the civilisations in Afro-Eurasia. Under the reign of the Mamluk sultan al-Nasir Muhammad (1293–1341, with interregnums), Cairo reached its apogee in terms of population and wealth. By 1340, Cairo had a population of close to half a million, making it the largest city west of China. Multi-story buildings occupied by rental apartments, known as a rab' (plural ribā or urbu), became common in the Mamluk period and continued to be a feature of the city's housing during the later Ottoman period. These apartments were often laid out as multi-story duplexes or triplexes. They were sometimes attached to caravanserais, where the two lower floors were for commercial and storage purposes and the multiple stories above them were rented out to tenants. The oldest partially-preserved example of this type of structure is the Wikala of Amir Qawsun, built before 1341. Residential buildings were in turn organized into close-knit neighbourhoods called harat, which in many cases had gates that could be closed off at night or during disturbances. When the traveller Ibn Battuta first came to Cairo in 1326, he described it as the principal district of Egypt. When he passed through the area again on his return journey in 1348 the Black Death was ravaging most major cities. He cited reports of thousands of deaths per day in Cairo. Although Cairo avoided Europe's stagnation during the Late Middle Ages, it could not escape the Black Death, which struck the city more than fifty times between 1348 and 1517. During its initial and most deadly waves, approximately 200,000 people were killed by the plague, and, by the 15th century, Cairo's population had been reduced to between 150,000 and 300,000. The population decline was accompanied by a period of political instability between 1348 and 1412. It was nonetheless in this period that the largest Mamluk-era religious monument, the Madrasa-Mosque of Sultan Hasan, was built. In the late 14th century the Burji Mamluks replaced the Bahri Mamluks as rulers of the Mamluk state, but the Mamluk system continued to decline. Though the plagues returned frequently throughout the 15th century, Cairo remained a major metropolis and its population recovered in part through rural migration. More conscious efforts were conducted by rulers and city officials to redress the city's infrastructure and cleanliness. Its economy and politics also became more deeply connected with the wider Mediterranean. Some Mamluk sultans in this period, such as Barsbay (r. 1422–1438) and Qaytbay (r. 1468–1496), had relatively long and successful reigns. After al-Nasir Muhammad, Qaytbay was one of the most prolific patrons of art and architecture of the Mamluk era. He built or restored numerous monuments in Cairo, in addition to commissioning projects beyond Egypt. The crisis of Mamluk power and of Cairo's economic role deepened after Qaytbay. The city's status was diminished after Vasco da Gama discovered a sea route around the Cape of Good Hope between 1497 and 1499, thereby allowing spice traders to avoid Cairo.
Ottoman rule Cairo's political influence diminished significantly after the Ottomans defeated Sultan al-Ghuri in the Battle of Marj Dabiq in 1516 and conquered Egypt in 1517. Ruling from Constantinople, Sultan Selim I relegated Egypt to a province, with Cairo as its capital. For this reason, the history of Cairo during Ottoman times is often described as inconsequential, especially in comparison to other time periods. However, during the 16th and 17th centuries, Cairo remained an important economic and cultural centre. Although no longer on the spice route, the city facilitated the transportation of Yemeni coffee and Indian textiles, primarily to Anatolia, North Africa, and the Balkans. Cairene merchants were instrumental in bringing goods to the barren Hejaz, especially during the annual hajj to Mecca. It was during this same period that al-Azhar University reached the predominance among Islamic schools that it continues to hold today; pilgrims on their way to hajj often attested to the superiority of the institution, which had become associated with Egypt's body of Islamic scholars. The first printing press of the Middle East, printing in Hebrew, was established in Cairo by a scion of the Soncino family of printers, Italian Jews of Ashkenazi origin who operated a press in Constantinople. The existence of the press is known solely from two fragments discovered in the Cairo Genizah. Under the Ottomans, Cairo expanded south and west from its nucleus around the Citadel. The city was the second-largest in the empire, behind Constantinople, and, although migration was not the primary source of Cairo's growth, twenty percent of its population at the end of the 18th century consisted of religious minorities and foreigners from around the Mediterranean. Still, when Napoleon arrived in Cairo in 1798, the city's population was less than 300,000, forty percent lower than it was at the height of Mamluk—and Cairene—influence in the mid-14th century. The French occupation was short-lived as British and Ottoman forces, including a sizeable Albanian contingent, recaptured the country in 1801. Cairo itself was besieged by a British and Ottoman force culminating with the French surrender on 22 June 1801. The British vacated Egypt two years later, leaving the Ottomans, the Albanians, and the long-weakened Mamluks jostling for control of the country. Continued civil war allowed an Albanian named Muhammad Ali Pasha to ascend to the role of commander and eventually, with the approval of the religious establishment, viceroy of Egypt in 1805. Modern era Until his death in 1848, Muhammad Ali Pasha instituted a number of social and economic reforms that earned him the title of founder of modern Egypt. However, while Muhammad Ali initiated the construction of public buildings in the city, those reforms had minimal effect on Cairo's landscape. Bigger changes came to Cairo under Isma'il Pasha (r. 1863–1879), who continued the modernisation processes started by his grandfather. Drawing inspiration from Paris, Isma'il envisioned a city of maidans and wide avenues; due to financial constraints, only some of them, in the area now composing Downtown Cairo, came to fruition. Isma'il also sought to modernize the city, which was merging with neighbouring settlements, by establishing a public works ministry, bringing gas and lighting to the city, and opening a theatre and opera house. 
The immense debt resulting from Isma'il's projects provided a pretext for increasing European control, which culminated with the British invasion in 1882. The city's economic centre quickly moved west toward the Nile, away from the historic Islamic Cairo section and toward the contemporary, European-style areas built by Isma'il. Europeans accounted for five percent of Cairo's population at the end of the 19th century, by which point they held most top governmental positions. In 1906 the Heliopolis Oasis Company, headed by the Belgian industrialist Édouard Empain and his Egyptian counterpart Boghos Nubar, built a suburb called Heliopolis ("city of the sun" in Greek) ten kilometers from the center of Cairo. In 1905–1907 the northern part of the Gezira island was developed by the Baehler Company into Zamalek, which would later become Cairo's upscale "chic" neighbourhood. In 1906 construction began on Garden City, a neighbourhood of urban villas with gardens and curved streets. The British occupation was intended to be temporary, but it lasted well into the 20th century. Nationalists staged large-scale demonstrations in Cairo in 1919, five years after Egypt had been declared a British protectorate. These demonstrations led to Egypt's independence in 1922. 1924 Cairo Quran The King Fuad I Edition of the Qur'an was first published on 10 July 1924 in Cairo under the patronage of King Fuad (Peter G. Riddell, Early Malay Qur'anic Exegetical Activity, p. 164, in Islam and the Malay-Indonesian World: Transmission and Responses, London: C. Hurst & Co., 2001). The goal of the government of the newly formed Kingdom of Egypt was not to delegitimize the other variant Quranic texts ("qira'at"), but to eliminate errors found in Qur'anic texts used in state schools. A committee of teachers chose to preserve a single one of the canonical qira'at "readings", namely that of the "Ḥafṣ" version, an 8th-century Kufic recitation. This edition has become the standard for modern printings of the Quran for much of the Islamic world. The publication has been called a "terrific success", and the edition has been described as one "now widely seen as the official text of the Qur'an", so popular among both Sunni and Shi'a that the common belief among less well-informed Muslims is "that the Qur'an has a single, unambiguous reading". Minor amendments were made later in 1924 and in 1936, in the "Faruq edition", in honour of the then ruler, King Faruq. British occupation until 1956 British troops remained in the country until 1956. During this time, urban Cairo, spurred by new bridges and transport links, continued to expand to include the upscale neighbourhoods of Garden City, Zamalek, and Heliopolis. Between 1882 and 1937, the population of Cairo more than tripled—from 347,000 to 1.3 million—and its area increased from . The city was devastated during the 1952 riots known as the Cairo Fire or Black Saturday, which saw the destruction of nearly 700 shops, movie theatres, casinos and hotels in downtown Cairo. The British departed Cairo following the Egyptian Revolution of 1952, but the city's rapid growth showed no signs of abating. Seeking to accommodate the increasing population, President Gamal Abdel Nasser redeveloped Tahrir Square and the Nile Corniche, and improved the city's network of bridges and highways. Meanwhile, additional controls of the Nile fostered development within Gezira Island and along the city's waterfront.
The metropolis began to encroach on the fertile Nile Delta, prompting the government to build desert satellite towns and devise incentives for city-dwellers to move to them. After 1956 In the second half of the 20th century Cairo continued to grow enormously in both population and area. Between 1947 and 2006 the population of Greater Cairo went from 2,986,280 to 16,292,269. The population explosion also drove the rise of "informal" housing ('ashwa'iyyat), meaning housing built without any official planning or control. The exact form of this type of housing varies considerably but usually has a much higher population density than formal housing. By 2009, over 63% of the population of Greater Cairo lived in informal neighbourhoods, even though these occupied only 17% of the total area of Greater Cairo. According to economist David Sims, informal housing has the benefits of providing affordable accommodation and vibrant communities to huge numbers of Cairo's working classes, but it also suffers from government neglect, a relative lack of services, and overcrowding. The "formal" city was also expanded. The most notable example was the creation of Madinat Nasr, a huge government-sponsored expansion of the city to the east which officially began in 1959 but was primarily developed in the mid-1970s. Starting in 1977 the Egyptian government established the New Urban Communities Authority to initiate and direct the development of new planned cities on the outskirts of Cairo, generally established on desert land. These new satellite cities were intended to provide housing, investment, and employment opportunities for the region's growing population as well as to pre-empt the further growth of informal neighbourhoods. As of 2014, about 10% of the population of Greater Cairo lived in the new cities. Concurrently, Cairo established itself as a political and economic hub for North Africa and the Arab world, with many multinational businesses and organisations, including the Arab League, operating out of the city. In 1979 the historic districts of Cairo were listed as a UNESCO World Heritage Site. In 1992, Cairo was hit by an earthquake that caused 545 deaths, injured 6,512 people, and left around 50,000 homeless. 2011 Egyptian revolution Cairo's Tahrir Square was the focal point of the 2011 Egyptian Revolution against former president Hosni Mubarak. More than 50,000 protesters first occupied the square on 25 January 2011, during which the area's wireless services were reported to be impaired; at the height of the uprising, over 2 million protesters gathered there. In the following days, and throughout the period of unrest that continued until June 2013, Tahrir Square remained the primary destination for protests in Cairo. The uprising was mainly a campaign of non-violent civil resistance, featuring demonstrations, marches, acts of civil disobedience, and labour strikes, in which millions of protesters from a variety of socio-economic and religious backgrounds demanded the overthrow of the regime of Egyptian President Hosni Mubarak. Despite being predominantly peaceful in nature, the revolution was not without violent clashes between security forces and protesters, with at least 846 people killed and 6,000 injured. The uprising took place in Cairo, Alexandria, and other Egyptian cities, following the Tunisian revolution that resulted in the overthrow of long-time Tunisian president Zine El Abidine Ben Ali.
On 11 February, following weeks of determined popular protest and pressure, Hosni Mubarak resigned from office. Post-revolutionary Cairo Under the rule of President el-Sisi, in March 2015 plans were announced for another yet-unnamed planned city to be built further east of the existing satellite city of New Cairo, intended to serve as the new capital of Egypt. Geography Cairo is located in northern Egypt, known as Lower Egypt, south of the Mediterranean Sea and west of the Gulf of Suez and Suez Canal. The city lies along the Nile River, immediately south of the point where the river leaves its desert-bound valley and branches into the low-lying Nile Delta region. Although the Cairo metropolis extends away from the Nile in all directions, the city of Cairo proper occupies only the east bank of the river and two islands within it. Geologically, Cairo lies on alluvium and sand dunes dating from the Quaternary period (El-Sohby M.A., Mazen S.O. (1985), Proceedings, Eleventh International Conference on Soil Mechanics & Foundation Engineering (San Francisco), "Geological Aspects in Cairo Subsoil Development", 4, pp. 2401–2415, retrieved 20 September 2020). Until the mid-19th century, when the river was tamed by dams, levees, and other controls, the Nile in the vicinity of Cairo was highly susceptible to changes in course and surface level. Over the years, the Nile gradually shifted westward, providing the site between the eastern edge of the river and the Mokattam highlands on which the city now stands. The land on which Cairo was established in 969 (present-day Islamic Cairo) was located underwater just over three hundred years earlier, when Fustat was first built. Low periods of the Nile during the 11th century continued to add to the landscape of Cairo; a new island, known as Geziret al-Fil, first appeared in 1174, but eventually became connected to the mainland. Today, the site of Geziret al-Fil is occupied by the Shubra district. The low periods created another island at the turn of the 14th century that now composes Zamalek and Gezira. Land reclamation efforts by the Mamluks and Ottomans further contributed to expansion on the east bank of the river. Because of the Nile's movement, the newer parts of the city—Garden City, Downtown Cairo, and Zamalek—are located closest to the riverbank. These areas, which are home to most of Cairo's embassies, are surrounded on the north, east, and south by the older parts of the city. Old Cairo, located south of the centre, holds the remnants of Fustat and the heart of Egypt's Coptic Christian community, Coptic Cairo. The Boulaq district, which lies in the northern part of the city, was born out of a major 16th-century port and is now a major industrial centre. The Citadel is located east of the city centre around Islamic Cairo, which dates back to the Fatimid era and the foundation of Cairo. While western Cairo is dominated by wide boulevards, open spaces, and modern architecture of European influence, the eastern half, having grown haphazardly over the centuries, is dominated by small lanes, crowded tenements, and Islamic architecture. Northern and extreme eastern parts of Cairo, which include satellite towns, are among the most recent additions to the city, as they developed in the late-20th and early-21st centuries to accommodate the city's rapid growth. The western bank of the Nile is commonly included within the urban area of Cairo, but it composes the city of Giza and the Giza Governorate.
Giza city has also undergone significant expansion over recent years, and today has a population of 2.7 million. From 2008, when some of Cairo's southern districts, including Maadi and New Cairo, were split off and annexed into the newly created Helwan Governorate, until 2011, when the Helwan Governorate was reincorporated into the Cairo Governorate, the Cairo Governorate lay just north of the Helwan Governorate. According to the World Health Organization, the level of air pollution in Cairo is nearly 12 times higher than the recommended safety level. Climate In Cairo, and along the Nile River Valley, the climate is a hot desert climate (BWh according to the Köppen climate classification system). Wind storms can be frequent from March to May, bringing Saharan dust into the city, and the air often becomes uncomfortably dry. Winter daytime highs are mild, while night-time lows can drop sharply; summer days are very hot, and summer nights remain warm. Rainfall is sparse and only happens in the colder months, but sudden showers can cause severe flooding. The summer months have high humidity owing to the city's proximity to the Nile Delta and the Mediterranean coast. Snowfall is extremely rare; a small amount of graupel, widely believed to be snow, fell on Cairo's easternmost suburbs on 13 December 2013, the first time Cairo's area received this kind of precipitation in many decades. Dew points rise through the hottest months, from June to a peak in August. Metropolitan area and districts The city of Cairo forms part of Greater Cairo, the largest metropolitan area in Africa. While it has no administrative body, the Ministry of Planning considers it an economic region consisting of Cairo Governorate, Giza Governorate, and Qalyubia Governorate. As a contiguous metropolitan area, various studies have considered Greater Cairo to be composed of the administrative cities of Cairo, Giza and Shubra al-Kheima, in addition to the satellite cities/new towns surrounding them. Cairo is a city-state where the governor is also the head of the city. Cairo itself differs from other Egyptian cities in having an extra administrative division between the city and district levels: areas, which are headed by deputy governors. Cairo consists of 4 areas (manatiq, singl. mantiqa) divided into 38 districts (ahya', singl. hayy) and 46 qisms (police wards, 1–2 per district):
The Northern Area is divided into 8 districts: Shubra; Al-Zawiya al-Hamra; Hadayek al-Qubba; Rod al-Farg; Al-Sharabia; Al-Sahel; Al-Zeitoun; Al-Amiriyya.
The Eastern Area is divided into 9 districts and three new cities: Misr al-Gadidah and Al-Nozha (Heliopolis); Nasr City East and Nasr City West; Al-Salam 1 (Awwal) and Al-Salam 2 (Thani); Ain Shams; Al-Matariya; Al-Marg; Shorouk (under jurisdiction of NUCA); Badr (under jurisdiction of NUCA); Al-Qahira al-Gadida (New Cairo, three qisms, under jurisdiction of NUCA).
The Western Area is divided into 9 districts: Manshiyat Nasser; Al-Wayli (incl. qism al-Daher); Wasat al-Qahira (Central Cairo, incl. Al-Darb al-Ahmar and al-Gamaliyya qisms); Bulaq; Gharb al-Qahira (West Cairo, incl. Zamalek qism and Qasr al-Nil qism, the latter including Garden City and part of Downtown); Abdeen; Al-Azbakiya; Al-Muski; Bab al-Sha'aria.
The Southern Area is divided into 12 districts: Masr El-Qadima (Old Cairo, incl.
Al-Manial); Al-Khalifa; Al-Moqattam; Al-Basatin; Dar al-Salam; Al-Sayeda Zeinab; Al-Tebin; Helwan; Al-Ma'sara; Al-Maadi; Tora; 15th of May (under jurisdiction of NUCA).
Satellite cities Since 1977 a number of new towns have been planned and built by the New Urban Communities Authority (NUCA) in the Eastern Desert around Cairo, ostensibly to accommodate additional population growth and development of the city and to stem the growth of self-built informal areas, especially over agricultural land. As of 2022 four new towns have been built and have residential populations: 15th of May City, Badr City, Shorouk City, and New Cairo. In addition, two more are under construction: the New Administrative Capital, and Capital Gardens, for which land was allocated in 2021 and which will house most of the civil servants employed in the new capital. Planned new capital In March 2015, plans were announced for a new city to be built east of Cairo, in an undeveloped area of the Cairo Governorate, which would serve as the New Administrative Capital of Egypt. Demographics According to the 2017 census, Cairo had a population of 9,539,673 people, distributed across 46 qisms (police wards). Infrastructure Health Cairo, as well as neighbouring Giza, has been established as Egypt's main centre for medical treatment, and despite some exceptions, has the most advanced level of medical care in the country. Cairo's hospitals include the JCI-accredited As-Salaam International Hospital (Corniche El Nile, Maadi; Egypt's largest private hospital with 350 beds), Ain Shams University Hospital, Dar Al Fouad, Nile Badrawi Hospital, 57357 Hospital, and Qasr El Eyni Hospital. Education Greater Cairo has long been the hub of education and educational services for Egypt and the region. Today, Greater Cairo is the centre for many government offices governing the Egyptian educational system, and has the largest number of schools and higher-education institutes of any city or governorate in Egypt. Transport Cairo has an extensive road network, rail system, subway system and maritime services. Road transport is facilitated by personal vehicles, taxi cabs, privately owned public buses and Cairo microbuses. Cairo, specifically Ramses Station, is the centre of almost the entire Egyptian transportation network. The subway system, officially called the Metro (مترو), is a fast and efficient way of getting around Cairo; its network covers Helwan and other suburbs. It can get very crowded during rush hour. Two train cars (the fourth and fifth ones) are reserved for women only, although women may ride in any car they want. Trams in Greater Cairo and the Cairo trolleybus were once used as modes of transportation, but were closed in the 1970s everywhere except Heliopolis and Helwan; those remaining lines were shut down in 2014, after the Egyptian Revolution. An extensive road network connects Cairo with other Egyptian cities and villages. There is a new Ring Road that surrounds the outskirts of the city, with exits that reach outer Cairo districts. There are flyovers and bridges, such as the 6th October Bridge, that, when traffic is not heavy, allow fast transport from one side of the city to the other. Cairo traffic is known to be overwhelming and congested, though it generally moves at a relatively fluid pace. Drivers tend to be aggressive but are more courteous at junctions, taking turns to go, with police aiding in traffic control in some congested areas.
In 2017, plans to construct two monorail systems were announced, one linking 6th of October City to suburban Giza and the other linking Nasr City to New Cairo. Other forms of transport include:
Cairo International Airport
Ramses Railway Station
Cairo Transportation Authority (CTA)
Cairo Taxi/Yellow Cab
Cairo Metro
Cairo Nile Ferry
Careem
Uber
DiDi
Sports Football is the most popular sport in Egypt, and Cairo has a number of sporting teams that compete in national and regional leagues, most notably Al Ahly and Zamalek SC, ranked by CAF as the first and second African clubs of the 20th century. The annual match between Al Ahly and Zamalek is one of the most watched sports events in Egypt as well as the African-Arab region. The teams form the major rivalry of Egyptian football, and are the first and second champions in Africa and the Arab world. They play their home games at Cairo International Stadium or Naser Stadium; the former is the second-largest stadium in Egypt, the largest in Cairo, and one of the largest stadiums in the world. The Cairo International Stadium was built in 1960; its multi-purpose sports complex houses the main football stadium, an indoor stadium, and several satellite fields, and has held several regional, continental and global competitions, including the African Games and the U-17 Football World Championship. It was also one of the stadiums that hosted the 2006 Africa Cup of Nations, played in January 2006. Egypt won that competition and went on to win the next edition in Ghana (2008), making the Egyptian and Ghanaian national teams the only teams to have won the African Nations Cup back to back; the 2008 victory gave Egypt the title for a then-record sixth time in the history of the continental competition. This was followed by a third consecutive win in Angola in 2010, making Egypt the only country with three consecutive continental titles and a record seven in total. This achievement also placed the Egyptian football team ninth in the FIFA world rankings at the time. As of 2021, Egypt's national team is ranked 46th in the world by FIFA. Cairo failed at the applicant stage when bidding for the 2008 Summer Olympics, which were hosted in Beijing, China. However, Cairo did host the 2007 Pan Arab Games. There are several other sports teams in the city that participate in several sports, including Gezira Sporting Club, el Shams Club, el Seid Club and Heliopolis Sporting Club, as well as several smaller clubs, but the biggest clubs in Egypt (in sporting terms, not in area) are Al Ahly and Zamalek, which have the two biggest football teams in Egypt. There are new sports clubs in the area of New Cairo (about an hour from downtown Cairo): Al Zohour Sporting Club, Wadi Degla Sporting Club and Platinum Club. Most of the sports federations of the country are also located in the city suburbs, including the Egyptian Football Association. The headquarters of the Confederation of African Football (CAF) was previously located in Cairo, before relocating to its new headquarters in 6 October City, a small city away from Cairo's crowded districts. In October 2008, the Egyptian Rugby Federation was officially formed and granted membership in the International Rugby Board. Egypt is internationally known for the excellence of its squash players, who excel in both professional and junior divisions. Egypt has seven players in the top ten of the PSA men's world rankings, and three in the women's top ten.
Mohamed El Shorbagy held the world number one position for more than a year before being overtaken by compatriot Karim Abdel Gawad, who is number two behind Gregory Gaultier of France. Ramy Ashour and Amr Shabana are regarded as two of the most talented squash players in history. Shabana won the World Open title four times and Ashour twice, although Ashour's recent form has been hampered by injury. Egypt's Nour El Sherbini has won the Women's World Championship twice and has been women's world number one for 16 consecutive months. On 30 April 2016, she became the youngest woman to win the Women's World Championship, held that year in Malaysia. In April 2017 she retained her title by winning the Women's World Championship held in the Egyptian resort of El Gouna. Cairo is the official end point of the Cross Egypt Challenge, whose route ends yearly at the Great Pyramids of Giza with a huge trophy-giving ceremony. Culture Cairo Opera House President Mubarak inaugurated the new Cairo Opera House of the Egyptian National Cultural Centres on 10 October 1988, 17 years after the Royal Opera House had been destroyed by fire. The National Cultural Centre was built with the help of the Japan International Co-operation Agency (JICA) and stands as a prominent feature of Japanese-Egyptian co-operation and the friendship between the two nations. Khedivial Opera House The Khedivial Opera House, or Royal Opera House, was the original opera house in Cairo. It was dedicated on 1 November 1869 and burned down on 28 October 1971. After the original opera house was destroyed, Cairo was without an opera house for nearly two decades until the opening of the new Cairo Opera House in 1988. Cairo International Film Festival Cairo held its first international film festival on 16 August 1976, when the first Cairo International Film Festival was launched by the Egyptian Association of Film Writers and Critics, headed by Kamal El-Mallakh. The Association ran the festival for seven years, until 1983. This achievement led the President of the Festival to again contact the FIAPF with the request that a competition be included at the 1991 Festival; the request was granted. In 1998, the Festival took place under the presidency of one of Egypt's leading actors, Hussein Fahmy, who was appointed by the Minister of Culture, Farouk Hosni, after the death of Saad El-Din Wahba. Four years later, the journalist and writer Cherif El-Shoubashy became president. Cairo Geniza The Cairo Geniza is an accumulation of almost 200,000 Jewish manuscripts found in the genizah of the Ben Ezra synagogue (built 882) of Fustat, Egypt (now Old Cairo), in the Basatin cemetery east of Old Cairo, and among a number of old documents bought in Cairo in the later 19th century. These documents were written from about 870 to 1880 AD and have been archived in various American and European libraries. The Taylor-Schechter collection at the University of Cambridge runs to 140,000 manuscripts; a further 40,000 manuscripts are housed at the Jewish Theological Seminary of America. Food The majority of Cairenes prepare their own food and make use of local produce markets. The restaurant scene includes Arab cuisine and Middle Eastern cuisine, including local staples such as koshary. The city's most exclusive restaurants are typically concentrated in Zamalek and around the luxury hotels lining the shore of the Nile near the Garden City district.
Influence from modern Western society is also evident, with American chains such as McDonald's, Arby's, Pizza Hut, Subway, and Kentucky Fried Chicken being easy to find in central areas. Places of worship The city's places of worship are predominantly Muslim mosques, but there are also Christian churches and places of worship, including the Coptic Orthodox Church, the Coptic Catholic Church (Catholic Church), and the Evangelical Church of Egypt (Synod of the Nile) (World Communion of Reformed Churches). Economy Cairo's economy has traditionally been based on governmental institutions and services, with the modern productive sector expanding in the 20th century to include developments in textiles and food processing, specifically the production of sugar cane. As of 2005, Egypt had the largest non-oil-based GDP in the Arab world. Cairo accounts for 11% of Egypt's population and 22% of its economy (PPP). The majority of the nation's commerce is generated there, or passes through the city. The great majority of publishing houses and media outlets and nearly all film studios are there, as are half of the nation's hospital beds and universities. This has fuelled rapid construction in the city, with one building in five being less than 15 years old. This growth until recently surged well ahead of city services. Homes, roads, electricity, telephone and sewer services were all in short supply. Analysts trying to grasp the magnitude of the change coined terms like "hyper-urbanization". Automobile manufacturers based in Cairo include:
Arab American Vehicles Company
Egyptian Light Transport Manufacturing Company (Egyptian counterpart of NSU)
Ghabbour Group (Fuso, Hyundai and Volvo)
MCV Corporate Group (a part of Daimler AG)
Mod Car
Seoudi Group (Modern Motors: Nissan, formerly BMW; El-Mashreq: Alfa Romeo and Fiat)
Speranza (former Daewoo Motors Egypt; Chery, Daewoo)
General Motors Egypt
Cityscape and landmarks Tahrir Square Tahrir Square was founded during the mid-19th century with the establishment of modern downtown Cairo. It was first named Ismailia Square, after the 19th-century ruler Khedive Ismail, who commissioned the new downtown district's 'Paris on the Nile' design. After the Egyptian Revolution of 1919 the square became widely known as Tahrir (Liberation) Square, though it was not officially renamed as such until after the 1952 Revolution, which eliminated the monarchy. Several notable buildings surround the square, including the American University in Cairo's downtown campus, the Mogamma governmental administrative building, the headquarters of the Arab League, the Nile Ritz-Carlton Hotel, and the Egyptian Museum. Being at the heart of Cairo, the square has witnessed several major protests over the years; most notably, it was the focal point of the 2011 Egyptian Revolution against former president Hosni Mubarak. In 2020 the government completed the erection of a new monument in the centre of the square featuring an ancient obelisk from the reign of Ramses II, originally unearthed at Tanis (San al-Hagar) in 2019, and four ram-headed sphinx statues moved from Karnak. Egyptian Museum The Museum of Egyptian Antiquities, known commonly as the Egyptian Museum, is home to the most extensive collection of ancient Egyptian antiquities in the world. It has 136,000 items on display, with many more hundreds of thousands in its basement storerooms. Among the collections on display are the finds from the tomb of Tutankhamun.
Grand Egyptian Museum Much of the collection of the Museum of Egyptian Antiquities, including the Tutankhamun collection, is slated to be moved to the new Grand Egyptian Museum, under construction in Giza, which was due to open by the end of 2020. Cairo Tower The Cairo Tower is a free-standing tower with a revolving restaurant at the top. It provides a bird's-eye view of Cairo to the restaurant patrons. It stands in the Zamalek district on Gezira Island in the Nile River, in the city centre, and rises higher than the Great Pyramid of Giza, which stands to the southwest. Old Cairo This area of Cairo is so named as it contains the remains of the ancient Roman fortress of Babylon and also overlaps the original site of Fustat, the first Arab settlement in Egypt (7th century AD) and the predecessor of later Cairo. The area includes Coptic Cairo, which holds a high concentration of old Christian churches, such as the Hanging Church, the Greek Orthodox Church of St. George, and other Christian or Coptic buildings, most of which are located over the site of the ancient Roman fortress. It is also the location of the Coptic Museum, which showcases the history of Coptic art from Greco-Roman to Islamic times, and of the Ben Ezra Synagogue, the oldest and best-known synagogue in Cairo, where the important collection of Geniza documents was discovered in the 19th century. To the north of this Coptic enclave is the Amr ibn al-'As Mosque, the first mosque in Egypt and the most important religious centre of what was formerly Fustat, founded in 642 AD right after the Arab conquest but rebuilt many times since. Islamic Cairo Cairo holds one of the greatest concentrations of historical monuments of Islamic architecture in the world. The areas around the old walled city and around the Citadel are characterized by hundreds of mosques, tombs, madrasas, mansions, caravanserais, and fortifications dating from the Islamic era and are often referred to as "Islamic Cairo", especially in English travel literature. It is also the location of several important religious shrines, such as the al-Hussein Mosque (whose shrine is believed to hold the head of Husayn ibn Ali), the Mausoleum of Imam al-Shafi'i (founder of the Shafi'i madhhab, one of the primary schools of thought in Sunni Islamic jurisprudence), the Tomb of Sayyida Ruqayya, the Mosque of Sayyida Nafisa, and others. The first mosque in Egypt was the Mosque of Amr ibn al-As in what was formerly Fustat, the first Arab-Muslim settlement in the area. However, the Mosque of Ibn Tulun is the oldest mosque that still retains its original form and is a rare example of Abbasid architecture from the classical period of Islamic civilization. It was built in 876–879 AD in a style inspired by the Abbasid capital of Samarra in Iraq. It is one of the largest mosques in Cairo and is often cited as one of the most beautiful. Another Abbasid construction, the Nilometer on Roda Island, is the oldest original structure in Cairo, built in 862 AD. It was designed to measure the level of the Nile, which was important for agricultural and administrative purposes. The settlement that was formally named Cairo (Arabic: al-Qahira) was founded to the northeast of Fustat in 969 AD by the victorious Fatimid army. The Fatimids built it as a separate palatial city which contained their palaces and institutions of government.
It was enclosed by a circuit of walls, which were rebuilt in stone in the late 11th century AD by the vizier Badr al-Gamali, parts of which survive today at Bab Zuwayla in the south and Bab al-Futuh and Bab al-Nasr in the north. Among the extant monuments from the Fatimid era are the large Mosque of al-Hakim, the Aqmar Mosque, the Juyushi Mosque, the Lulua Mosque, and the Mosque of Al-Salih Tala'i. One of the most important and lasting institutions founded in the Fatimid period was the Mosque of al-Azhar, founded in 970 AD, which competes with the Qarawiyyin in Fes for the title of oldest university in the world. Today, al-Azhar University is the foremost centre of Islamic learning in the world and one of Egypt's largest universities, with campuses across the country. The mosque itself retains significant Fatimid elements but has been added to and expanded in subsequent centuries, notably by the Mamluk sultans Qaytbay and al-Ghuri and by Abd al-Rahman Katkhuda in the 18th century. The most prominent architectural heritage of medieval Cairo, however, dates from the Mamluk period, from 1250 to 1517 AD. The Mamluk sultans and elites were eager patrons of religious and scholarly life, commonly building religious or funerary complexes whose functions could include a mosque, madrasa, khanqah (for Sufis), a sabil (water dispensary), and a mausoleum for themselves and their families. Among the best-known examples of Mamluk monuments in Cairo are the huge Mosque-Madrasa of Sultan Hasan, the Mosque of Amir al-Maridani, the Mosque of Sultan al-Mu'ayyad (whose twin minarets were built above the gate of Bab Zuwayla), the Sultan Al-Ghuri complex, the funerary complex of Sultan Qaytbay in the Northern Cemetery, and the trio of monuments in the Bayn al-Qasrayn area comprising the complex of Sultan al-Mansur Qalawun, the Madrasa of al-Nasir Muhammad, and the Madrasa of Sultan Barquq. Some mosques include spolia (often columns or capitals) from earlier buildings built by the Romans, Byzantines, or Copts. The Mamluks, and the later Ottomans, also built wikalas or caravanserais to house merchants and goods, owing to the important role of trade and commerce in Cairo's economy. Still intact today is the Wikala al-Ghuri, which now hosts regular performances by the Al-Tannoura Egyptian Heritage Dance Troupe. The Khan al-Khalili is a commercial hub which also integrated caravanserais (also known as khans). Citadel of Cairo The Citadel is a fortified enclosure begun by Salah al-Din in 1176 AD on an outcrop of the Muqattam Hills as part of a large defensive system to protect both Cairo to the north and Fustat to the southwest. It was the centre of Egyptian government and the residence of its rulers until 1874, when Khedive Isma'il moved to 'Abdin Palace. It is still occupied by the military today, but is now open as a tourist attraction comprising, notably, the National Military Museum, the 14th-century Mosque of al-Nasir Muhammad, and the 19th-century Mosque of Muhammad Ali, which commands a dominant position on Cairo's skyline. Khan el-Khalili Khan el-Khalili is an ancient bazaar, or marketplace, adjacent to the Al-Hussein Mosque. It dates back to 1385, when Amir Jarkas el-Khalili built a large caravanserai, or khan. (A caravanserai is a hotel for traders, and usually the focal point for any surrounding area.) This original caravanserai building was demolished by Sultan al-Ghuri, who rebuilt it as a new commercial complex in the early 16th century, forming the basis for the network of souqs existing today.
Many medieval elements remain today, including the ornate Mamluk-style gateways. Today, the Khan el-Khalili is a major tourist attraction and popular stop for tour groups. Society In the present day, Cairo is heavily urbanized and most Cairenes live in apartment buildings. Because of the influx of people into the city, free-standing houses are rare, and apartment buildings make the most of the limited space and abundance of people. Single detached houses are usually owned by the wealthy. Formal education is also seen as important; the standard course comprises twelve years of schooling. Cairenes can take a standardized test similar to the SAT to be accepted to an institution of higher learning, but most children do not finish school and opt to pick up a trade to enter the work force. Egypt still struggles with poverty, with almost half the population living on $2 or less a day. Women's rights The movement for women's civil rights in Cairo, and by extension Egypt, has been a struggle for years. Women are reported to face constant discrimination, sexual harassment, and abuse throughout Cairo. A 2013 UN study found that over 99% of Egyptian women reported experiencing sexual harassment at some point in their lives. The problem has persisted in spite of new national laws since 2014 defining and criminalizing sexual harassment. The situation is so severe that in 2017, Cairo was named by one poll as the most dangerous megacity for women in the world. In 2020, the social media account "Assault Police" began to name and shame perpetrators of violence against women, in an effort to dissuade potential offenders. The account was founded by student Nadeen Ashraf, who is credited with instigating an iteration of the #MeToo movement in Egypt. Pollution The air pollution in Cairo is a matter of serious concern. Greater Cairo's volatile aromatic hydrocarbon levels are higher than those of many other similar cities. Air quality measurements in Cairo have also been recording dangerous levels of lead, carbon dioxide, sulphur dioxide, and suspended particulate matter concentrations, due to decades of unregulated vehicle emissions, urban industrial operations, and chaff and trash burning. There are over 4,500,000 cars on the streets of Cairo, 60% of which are over 10 years old and therefore lack modern emission-cutting features. Cairo has a very poor dispersion factor because of its lack of rain and its layout of tall buildings and narrow streets, which create a bowl effect. In recent years, a black cloud (as Egyptians refer to it) of smog has appeared over Cairo every autumn due to temperature inversion. Smog causes serious respiratory diseases and eye irritation for the city's citizens. Tourists who are not familiar with such high levels of pollution must take extra care. Cairo also has many unregistered lead and copper smelters which heavily pollute the city. The result has been a permanent haze over the city, with particulate matter in the air reaching over three times normal levels. It is estimated that 10,000 to 25,000 people a year in Cairo die of air-pollution-related diseases. Lead has been shown to cause harm to the central nervous system and neurotoxicity, particularly in children. In 1995, the first environmental acts were introduced and the situation has seen some improvement, with 36 air monitoring stations and emissions tests on cars. Twenty thousand buses have also been commissioned to the city to improve congestion levels, which are very high. The city also suffers from a high level of land pollution.
Cairo produces 10,000 tons of waste material each day, 4,000 tons of which is not collected or managed. This is a huge health hazard, and the Egyptian government is looking for ways to combat it. The Cairo Cleaning and Beautification Agency was founded to collect and recycle the waste; it works with the Zabbaleen community, which has been collecting and recycling Cairo's waste since the turn of the 20th century and lives in an area known locally as Manshiyat Naser. Both are working together to pick up as much waste as possible within the city limits, though it remains a pressing problem. Water pollution is also a serious problem in the city, as the sewer system tends to fail and overflow. On occasion, sewage has escaped onto the streets to create a health hazard. Officials hope this problem will be solved by a new sewer system funded by the European Union, which could cope with the demand of the city. The dangerously high levels of mercury in the city's water system have global health officials concerned over related health risks. International relations The headquarters of the Arab League is located in Tahrir Square, near the downtown business district of Cairo. Twin towns – sister cities Cairo is twinned with:
Abu Dhabi, United Arab Emirates
Amman, Jordan
Baghdad, Iraq
Beijing, China
Damascus, Syria
East Jerusalem, Palestine
Istanbul, Turkey
Kairouan, Tunisia
Khartoum, Sudan
Muscat, Oman
Palermo Province, Italy
Rabat, Morocco
Sanaa, Yemen
Seoul, South Korea
Stuttgart, Germany
Tashkent, Uzbekistan
Tbilisi, Georgia
Tokyo, Japan
Tripoli, Libya
Notable people
Rabab Al-Kadhimi (1918–1998), dentist and poet
Gamal Aziz, also known as Gamal Mohammed Abdelaziz, former president and chief operating officer of Wynn Resorts and former CEO of MGM Resorts International, indicted as part of the 2019 college admissions bribery scandal
Yasser Arafat (4 or 24 August 1929 – 11 November 2004), born Mohammed Abdel Rahman Abdel Raouf al-Qudwa al-Husseini, third chairman of the PLO and first president of the Palestinian Authority
Abu Sa'id al-Afif, 15th-century Samaritan
Boutros Boutros-Ghali (1922–2016), former Secretary-General of the United Nations
Avi Cohen (1956–2010), Israeli international footballer
Dalida (1933–1987), Italian-Egyptian singer who lived most of her life in France, received 55 golden records and was the first singer to receive a diamond disc
Farouk El-Baz (born 1938), Egyptian-American space scientist who worked with NASA to assist in the planning of scientific exploration of the Moon, including the selection of landing sites for the Apollo missions and the training of astronauts in lunar observations and photography
Ahmed Mourad Bey Zulfikar (1888–1945), Egyptian chief of police
Mohamed ElBaradei (born 1942), former Director General of the International Atomic Energy Agency, 2005 Nobel Peace Prize laureate
Nourane Foster (born 1987), Cameroonian entrepreneur, politician, and member of the National Assembly
Mauro Hamza, fencing coach
Taco Hemingway (born 1990), Polish hip-hop artist
Dorothy Hodgkin (1910–1994), British chemist, credited with the development of protein crystallography, Nobel Prize in Chemistry in 1964
Yakub Kadri Karaosmanoğlu (1889–1974), Turkish novelist
Naguib Mahfouz (1911–2006), novelist, Nobel Prize in Literature in 1988
Roland Moreno (1945–2012), French inventor, engineer, humorist and author who invented the smart card
Gamal Abdel Nasser (15 January 1918 – 28 September 1970), Egyptian politician who served as the second President of Egypt from 1954 until his death in 1970
Gaafar Nimeiry (1930–2009), President of Sudan
Ahmed Sabri (1889–1955), painter
Naguib Sawiris (born 1954), Egyptian businessman, ranked the 62nd-richest person in the world on the 2007 list of billionaires, with US$10.0 billion through his company Orascom Telecom Holding
Dina Zulfikar (born 1962), Egyptian film distributor and animal welfare activist
Mohamed Sobhi (born 1948), Egyptian film, television and stage actor and director
Blessed Maria Caterina Troiani (1813–1887), charitable activist
Magdi Yacoub (born 1935), Egyptian-British cardiothoracic surgeon
Hesham Youssef, Egyptian diplomat
Ahmed Zulfikar (15 August 1952 – 1 May 2010), Egyptian mechanical engineer and entrepreneur
Ezz El-Dine Zulficar (28 October 1919 – 1 July 1963), Egyptian film director, screenwriter, actor and producer, known for his distinctive style blending romance and action; one of the most influential filmmakers of Egyptian cinema's golden age
Mona Zulficar (born 1950), Egyptian lawyer and human rights activist, included in the Forbes 2021 list of the "100 most powerful businesswomen in the Arab region"
See also
Charles Ayrout
Cultural tourism in Egypt
List of buildings in Cairo
List of cities and towns in Egypt
Outline of Cairo
Outline of Egypt
Architecture of Egypt
Explanatory notes
References
Citations
Works cited
Further reading
Artemis Cooper, Cairo in the War, 1939–1945, Hamish Hamilton, 1989 / Penguin Books, 1995 (pbk).
Max Rodenbeck, Cairo: the City Victorious, Picador, 1998 (hbk, pbk).
Wahba, Magdi (1990). "Cairo Memories", in Studies in Arab History: The Antonius Lectures, 1978–87. Edited by Derek Hopwood. London: Macmillan Press.
Peter Theroux, "Cairo: Clamorous heart of Egypt", National Geographic Magazine, April 1993.
Cynthia Myntti, Paris Along the Nile: Architecture in Cairo from the Belle Epoque, American University in Cairo Press, 2003.
Samir Raafat, Cairo's belle époque architects 1900–1950.
Antonine Selim Nahas, one of the city's major belle époque (1900–1950) architects.
Naguib Mahfouz's novels, which tell great stories about Cairo's deep conflicts.
Jörg Armbruster, Suleman Taufiq (eds.), مدينتي القاهرة (MYCAI – My Cairo Mein Kairo), text by different authors, photos by Barbara Armbruster and Hala Elkoussy, edition esefeld & traub, Stuttgart 2014.
External links
Cairo City Government
Coptic Churches of Cairo
Map of Cairo, 1914. Eran Laor Cartographic Collection, The National Library of Israel.
Maps of Cairo. Historic Cities Research Project.
Photos and videos
Cairo 360-degree full-screen images
Cairo Travel Photos – pictures of Cairo published under a Creative Commons licence
Call to Cairo – time-lapse film of Cairo cityscapes
Cairo, Egypt – video by Global Post
Photos of Cairo / Travel
https://en.wikipedia.org/wiki/Chaos%20theory
Chaos theory
Chaos theory is an interdisciplinary area of scientific study and branch of mathematics focused on underlying patterns and deterministic laws of dynamical systems that are highly sensitive to initial conditions; such systems were once thought to have completely random states of disorder and irregularities. Chaos theory states that within the apparent randomness of chaotic complex systems, there are underlying patterns, interconnection, constant feedback loops, repetition, self-similarity, fractals, and self-organization. The butterfly effect, an underlying principle of chaos, describes how a small change in one state of a deterministic nonlinear system can result in large differences in a later state (meaning that there is sensitive dependence on initial conditions). A metaphor for this behavior is that a butterfly flapping its wings in Brazil can cause a tornado in Texas. Small differences in initial conditions, such as those due to errors in measurements or due to rounding errors in numerical computation, can yield widely diverging outcomes for such dynamical systems, rendering long-term prediction of their behavior impossible in general. This can happen even though these systems are deterministic, meaning that their future behavior follows a unique evolution and is fully determined by their initial conditions, with no random elements involved. In other words, the deterministic nature of these systems does not make them predictable. This behavior is known as deterministic chaos, or simply chaos. The theory was summarized by Edward Lorenz as: "Chaos: When the present determines the future, but the approximate present does not approximately determine the future." Chaotic behavior exists in many natural systems, including fluid flow, heartbeat irregularities, weather, and climate. It also occurs spontaneously in some systems with artificial components, such as road traffic. This behavior can be studied through the analysis of a chaotic mathematical model, or through analytical techniques such as recurrence plots and Poincaré maps. Chaos theory has applications in a variety of disciplines, including meteorology, anthropology, sociology, environmental science, computer science, engineering, economics, ecology, and pandemic crisis management. The theory formed the basis for such fields of study as complex dynamical systems, edge of chaos theory, and self-assembly processes. Introduction Chaos theory concerns deterministic systems whose behavior can, in principle, be predicted. Chaotic systems are predictable for a while and then 'appear' to become random. The amount of time for which the behavior of a chaotic system can be effectively predicted depends on three things: how much uncertainty can be tolerated in the forecast, how accurately its current state can be measured, and a time scale depending on the dynamics of the system, called the Lyapunov time. Some examples of Lyapunov times are: chaotic electrical circuits, about 1 millisecond; weather systems, a few days (unproven); the inner solar system, 4 to 5 million years. In chaotic systems, the uncertainty in a forecast increases exponentially with elapsed time. Hence, mathematically, doubling the forecast time more than squares the proportional uncertainty in the forecast. This means, in practice, a meaningful prediction cannot be made over an interval of more than two or three times the Lyapunov time. When meaningful predictions cannot be made, the system appears random.
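This exponential growth of forecast uncertainty can be made concrete with a few lines of code. The following minimal sketch (an illustrative addition, not drawn from the article's sources) iterates the logistic map x → 4x(1 − x), a standard chaotic toy system discussed later in this article, whose Lyapunov exponent is ln 2 per step, so a small measurement error roughly doubles with each iteration; the initial-error and tolerance values are arbitrary assumptions.

```python
# Sketch: forecast uncertainty growing exponentially in a chaotic system.
# The logistic map x -> 4x(1-x) has Lyapunov exponent ln(2) per step,
# so a tiny initial measurement error roughly doubles every iteration.
import math

def step(x):
    return 4.0 * x * (1.0 - x)

x_true, x_meas = 0.4, 0.4 + 1e-12   # true state vs. slightly mis-measured state
for n in range(1, 41):
    x_true, x_meas = step(x_true), step(x_meas)
    if n % 10 == 0:
        print(f"step {n:2d}: separation = {abs(x_true - x_meas):.3e}")

# Predictability horizon: the number of steps until the error exceeds a
# tolerance, approximately ln(tolerance / initial_error) / lambda.
lam = math.log(2.0)                  # Lyapunov exponent of this map
horizon = math.log(0.01 / 1e-12) / lam
print(f"horizon ~ {horizon:.0f} steps")   # ~33 steps for these numbers
```

After roughly this many steps the two trajectories are as far apart as any two random states of the system, which is exactly the sense in which a chaotic system "appears random" beyond a few Lyapunov times.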
Chaos theory is a method of qualitative and quantitative analysis to investigate the behavior of dynamic systems that cannot be explained and predicted by single data relationships, but must be explained and predicted by whole, continuous data relationships. Chaotic dynamics In common usage, "chaos" means "a state of disorder". However, in chaos theory, the term is defined more precisely. Although no universally accepted mathematical definition of chaos exists, a commonly used definition, originally formulated by Robert L. Devaney, says that to classify a dynamical system as chaotic, it must have these properties: it must be sensitive to initial conditions, it must be topologically transitive, it must have dense periodic orbits. In some cases, the last two properties above have been shown to actually imply sensitivity to initial conditions. In the discrete-time case, this is true for all continuous maps on metric spaces. In these cases, while it is often the most practically significant property, "sensitivity to initial conditions" need not be stated in the definition. If attention is restricted to intervals, the second property implies the other two. An alternative and a generally weaker definition of chaos uses only the first two properties in the above list. Sensitivity to initial conditions Sensitivity to initial conditions means that each point in a chaotic system is arbitrarily closely approximated by other points that have significantly different future paths or trajectories. Thus, an arbitrarily small change or perturbation of the current trajectory may lead to significantly different future behavior. Sensitivity to initial conditions is popularly known as the "butterfly effect", so-called because of the title of a paper given by Edward Lorenz in 1972 to the American Association for the Advancement of Science in Washington, D.C., entitled Predictability: Does the Flap of a Butterfly's Wings in Brazil set off a Tornado in Texas?. The flapping wing represents a small change in the initial condition of the system, which causes a chain of events that prevents the predictability of large-scale phenomena. Had the butterfly not flapped its wings, the trajectory of the overall system could have been vastly different. As suggested in Lorenz's book entitled "The Essence of Chaos", published in 1993, "sensitive dependence can serve as an acceptable definition of chaos". In the same book, Lorenz defined the butterfly effect as: "The phenomenon that a small alteration in the state of a dynamical system will cause subsequent states to differ greatly from the states that would have followed without the alteration." The above definition is consistent with the sensitive dependence of solutions on initial conditions (SDIC). An idealized skiing model was developed to illustrate the sensitivity of time-varying paths to initial positions. A predictability horizon can be determined before the onset of SDIC (i.e., prior to significant separations of initial nearby trajectories). A consequence of sensitivity to initial conditions is that if we start with a limited amount of information about the system (as is usually the case in practice), then beyond a certain time, the system would no longer be predictable. This is most prevalent in the case of weather, which is generally predictable only about a week ahead. This does not mean that one cannot assert anything about events far in the future—only that some restrictions on the system are present. 
For example, we know that the temperature of the surface of the earth will stay within certain natural bounds (during the current geologic era), but we cannot predict exactly which day will have the hottest temperature of the year. In more mathematical terms, the Lyapunov exponent measures the sensitivity to initial conditions, in the form of the rate of exponential divergence from the perturbed initial conditions. More specifically, given two starting trajectories in the phase space that are infinitesimally close, with initial separation $\delta \mathbf{Z}_0$, the two trajectories end up diverging at a rate given by $|\delta \mathbf{Z}(t)| \approx e^{\lambda t}\, |\delta \mathbf{Z}_0|$, where $t$ is the time and $\lambda$ is the Lyapunov exponent. The rate of separation depends on the orientation of the initial separation vector, so a whole spectrum of Lyapunov exponents can exist. The number of Lyapunov exponents is equal to the number of dimensions of the phase space, though it is common to just refer to the largest one. For example, the maximal Lyapunov exponent (MLE) is most often used, because it determines the overall predictability of the system. A positive MLE is usually taken as an indication that the system is chaotic. In addition to the above property, other properties related to sensitivity of initial conditions also exist. These include, for example, measure-theoretical mixing (as discussed in ergodic theory) and properties of a K-system. Non-periodicity A chaotic system may have sequences of values for the evolving variable that exactly repeat themselves, giving periodic behavior starting from any point in that sequence. However, such periodic sequences are repelling rather than attracting, meaning that if the evolving variable is outside the sequence, however close, it will not enter the sequence and in fact will diverge from it. Thus for almost all initial conditions, the variable evolves chaotically with non-periodic behavior. Topological mixing Topological mixing (or the weaker condition of topological transitivity) means that the system evolves over time so that any given region or open set of its phase space eventually overlaps with any other given region. This mathematical concept of "mixing" corresponds to the standard intuition, and the mixing of colored dyes or fluids is an example of a chaotic system. Topological mixing is often omitted from popular accounts of chaos, which equate chaos with only sensitivity to initial conditions. However, sensitive dependence on initial conditions alone does not give chaos. For example, consider the simple dynamical system produced by repeatedly doubling an initial value. This system has sensitive dependence on initial conditions everywhere, since any pair of nearby points eventually becomes widely separated. However, this example has no topological mixing, and therefore has no chaos. Indeed, it has extremely simple behavior: all points except 0 tend to positive or negative infinity. Topological transitivity A map $f$ is said to be topologically transitive if for any pair of non-empty open sets $U, V \subset X$, there exists $k > 0$ such that $f^{k}(U) \cap V \neq \emptyset$. Topological transitivity is a weaker version of topological mixing. Intuitively, if a map is topologically transitive then given a point x and a region V, there exists a point y near x whose orbit passes through V. This implies that it is impossible to decompose the system into two open sets. An important related theorem is the Birkhoff Transitivity Theorem. It is easy to see that the existence of a dense orbit implies topological transitivity.
The Birkhoff Transitivity Theorem states that if X is a second countable, complete metric space, then topological transitivity implies the existence of a dense set of points in X that have dense orbits. Density of periodic orbits For a chaotic system to have dense periodic orbits means that every point in the space is approached arbitrarily closely by periodic orbits. The one-dimensional logistic map defined by x → 4 x (1 – x) is one of the simplest systems with density of periodic orbits. For example, (5 − √5)/8 → (5 + √5)/8 → (5 − √5)/8 (or approximately 0.3454915 → 0.9045085 → 0.3454915) is an (unstable) orbit of period 2, and similar orbits exist for periods 4, 8, 16, etc. (indeed, for all the periods specified by Sharkovskii's theorem). Sharkovskii's theorem is the basis of the Li and Yorke (1975) proof that any continuous one-dimensional system that exhibits a regular cycle of period three will also display regular cycles of every other length, as well as completely chaotic orbits. Strange attractors Some dynamical systems, like the one-dimensional logistic map defined by x → 4 x (1 – x), are chaotic everywhere, but in many cases chaotic behavior is found only in a subset of phase space. The cases of most interest arise when the chaotic behavior takes place on an attractor, since then a large set of initial conditions leads to orbits that converge to this chaotic region. An easy way to visualize a chaotic attractor is to start with a point in the basin of attraction of the attractor, and then simply plot its subsequent orbit. Because of the topological transitivity condition, this is likely to produce a picture of the entire final attractor; indeed, orbits started from quite different initial conditions trace out the same general shape of the Lorenz attractor. This attractor results from a simple three-dimensional model of the Lorenz weather system. The Lorenz attractor is perhaps one of the best-known chaotic system diagrams, probably because it is not only one of the first, but it is also one of the most complex, and as such gives rise to a very interesting pattern that, with a little imagination, looks like the wings of a butterfly. Unlike fixed-point attractors and limit cycles, the attractors that arise from chaotic systems, known as strange attractors, have great detail and complexity. Strange attractors occur in both continuous dynamical systems (such as the Lorenz system) and in some discrete systems (such as the Hénon map). Other discrete dynamical systems have a repelling structure called a Julia set, which forms at the boundary between basins of attraction of fixed points. Julia sets can be thought of as strange repellers. Both strange attractors and Julia sets typically have a fractal structure, and the fractal dimension can be calculated for them. Coexisting attractors In contrast to single-type chaotic solutions, recent studies using Lorenz models have emphasized the importance of considering various types of solutions. For example, coexisting chaotic and non-chaotic attractors may appear within the same model (e.g., the double pendulum system) using the same modeling configurations but different initial conditions. The findings of attractor coexistence, obtained from classical and generalized Lorenz models, suggested a revised view that "the entirety of weather possesses a dual nature of chaos and order with distinct predictability", in contrast to the conventional view of "weather is chaotic".
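The orbit-plotting recipe described above is easy to try in code. The sketch below (an illustrative addition, not from the article's sources) integrates the three Lorenz equations given in the next section with a fixed-step fourth-order Runge-Kutta scheme; the classic parameter values σ = 10, ρ = 28, β = 8/3 are the standard textbook choice, and the initial state and step size are arbitrary assumptions. Plotting x against z traces out the butterfly-shaped attractor.

```python
# Sketch: tracing an orbit of the Lorenz system with classic parameters
# sigma=10, rho=28, beta=8/3, using a fixed-step RK4 integrator.
import matplotlib.pyplot as plt

SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0

def lorenz(s):
    x, y, z = s
    return (SIGMA * (y - x), x * (RHO - z) - y, x * y - BETA * z)

def rk4_step(s, dt):
    k1 = lorenz(s)
    k2 = lorenz(tuple(si + 0.5 * dt * ki for si, ki in zip(s, k1)))
    k3 = lorenz(tuple(si + 0.5 * dt * ki for si, ki in zip(s, k2)))
    k4 = lorenz(tuple(si + dt * ki for si, ki in zip(s, k3)))
    return tuple(si + dt / 6.0 * (a + 2*b + 2*c + d)
                 for si, a, b, c, d in zip(s, k1, k2, k3, k4))

state, dt = (1.0, 1.0, 1.0), 0.01    # arbitrary starting point in the basin
xs, zs = [], []
for _ in range(10_000):
    state = rk4_step(state, dt)
    xs.append(state[0])
    zs.append(state[2])

plt.plot(xs, zs, linewidth=0.3)
plt.xlabel("x"); plt.ylabel("z")
plt.title("Lorenz attractor (x-z projection)")
plt.show()
```

Starting from a different point in the basin of attraction produces a picture of the same overall shape, which is the topological-transitivity argument made above in action.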
Minimum complexity of a chaotic system Discrete chaotic systems, such as the logistic map, can exhibit strange attractors whatever their dimensionality. Universality of one-dimensional maps with parabolic maxima and the Feigenbaum constants $\delta = 4.669201...$ and $\alpha = 2.502907...$ is well visible with the map proposed as a toy model for discrete laser dynamics: $x_{n+1} = G x_n (1 - \tanh x_n)$, where $x_n$ stands for the electric field amplitude and $G$ is the laser gain as bifurcation parameter. The gradual increase of $G$ changes the dynamics from regular to chaotic, with qualitatively the same bifurcation diagram as that of the logistic map. In contrast, for continuous dynamical systems, the Poincaré–Bendixson theorem shows that a strange attractor can only arise in three or more dimensions. Finite-dimensional linear systems are never chaotic; for a dynamical system to display chaotic behavior, it must be either nonlinear or infinite-dimensional. The Poincaré–Bendixson theorem states that a two-dimensional differential equation has very regular behavior. The Lorenz attractor discussed below is generated by a system of three differential equations such as: $\dot{x} = \sigma (y - x)$, $\dot{y} = x (\rho - z) - y$, $\dot{z} = x y - \beta z$, where $x$, $y$, and $z$ make up the system state, $t$ is time, and $\sigma$, $\rho$, $\beta$ are the system parameters. Five of the terms on the right hand side are linear, while two are quadratic; a total of seven terms. Another well-known chaotic attractor is generated by the Rössler equations, which have only one nonlinear term out of seven. Sprott found a three-dimensional system with just five terms and only one nonlinear term, which exhibits chaos for certain parameter values. Zhang and Heidel showed that, at least for dissipative and conservative quadratic systems, three-dimensional quadratic systems with only three or four terms on the right-hand side cannot exhibit chaotic behavior. The reason is, simply put, that solutions to such systems are asymptotic to a two-dimensional surface and therefore solutions are well behaved. While the Poincaré–Bendixson theorem shows that a continuous dynamical system on the Euclidean plane cannot be chaotic, two-dimensional continuous systems with non-Euclidean geometry can exhibit chaotic behavior. Perhaps surprisingly, chaos may occur also in linear systems, provided they are infinite-dimensional. A theory of linear chaos is being developed in a branch of mathematical analysis known as functional analysis. The above elegant set of three ordinary differential equations has been referred to as the three-dimensional Lorenz model. Since 1963, higher-dimensional Lorenz models have been developed in numerous studies for examining the impact of an increased degree of nonlinearity, as well as its collective effect with heating and dissipations, on solution stability. Infinite dimensional maps The straightforward generalization of coupled discrete maps is based upon a convolution integral which mediates interaction between spatially distributed maps: $x_{n+1}(\vec{r}, t) = \int K(\vec{r} - \vec{r}', t)\, f[x_n(\vec{r}', t)]\, d\vec{r}'$, where the kernel $K(\vec{r} - \vec{r}', t)$ is a propagator derived as the Green function of a relevant physical system, and $f$ might be a logistic-map-like or a complex map. For examples of complex maps, the Julia set or the Ikeda map may serve. When wave propagation problems at distance $L$ with wavelength $\lambda$ are considered, the kernel $K$ may have the form of the Green function for the Schrödinger equation (the Fresnel propagator): $K(\vec{r} - \vec{r}', L) = \frac{i k \exp(i k L)}{2 \pi L} \exp\!\left(\frac{i k |\vec{r} - \vec{r}'|^{2}}{2 L}\right)$, with wavenumber $k = 2\pi/\lambda$. Jerk systems In physics, jerk is the third derivative of position with respect to time. As such, differential equations of the form $J(\dddot{x}, \ddot{x}, \dot{x}, x) = 0$ are sometimes called jerk equations.
Infinite dimensional maps
The straightforward generalization of coupled discrete maps is based upon a convolution integral which mediates interaction between spatially distributed maps:

ψ_{n+1}(r, t) = ∫ K(r − r′, t) f[ψ_n(r′, t)] dr′,

where the kernel K(r − r′, t) is a propagator derived as the Green function of a relevant physical system, and f[ψ] might be a logistic-map-like function, e.g. ψ → G ψ (1 − tanh ψ), or a complex map. For examples of complex maps the Julia set f[ψ] = ψ² or the Ikeda map ψ_{n+1} = A + B ψ_n e^{i(|ψ_n|² + C)} may serve. When wave propagation problems at distance L = ct with wavelength λ = 2π/k are considered, the kernel K may have the form of the Green function for the Schrödinger equation:

K(r − r′, L) = (ik / 2πL) exp[ik(r − r′)² / 2L].

Jerk systems
In physics, jerk is the third derivative of position with respect to time. As such, differential equations of the form

d³x/dt³ = J(d²x/dt², dx/dt, x)

are sometimes called jerk equations. It has been shown that a jerk equation, which is equivalent to a system of three first-order, ordinary, nonlinear differential equations, is in a certain sense the minimal setting for solutions showing chaotic behavior. This motivates mathematical interest in jerk systems. Systems involving a fourth or higher derivative are accordingly called hyperjerk systems.

A jerk system's behavior is described by a jerk equation, and for certain jerk equations, simple electronic circuits can model solutions. These circuits are known as jerk circuits. One of the most interesting properties of jerk circuits is the possibility of chaotic behavior. In fact, certain well-known chaotic systems, such as the Lorenz attractor and the Rössler system, are conventionally described as a system of three first-order differential equations that can combine into a single (although rather complicated) jerk equation. Another example of a jerk equation with nonlinearity in the magnitude of x is:

d³x/dt³ + A d²x/dt² + dx/dt − |x| + 1 = 0.

Here, A is an adjustable parameter. This equation has a chaotic solution for A = 3/5 and can be implemented with a jerk circuit in which the required nonlinearity is brought about by two diodes. In such a circuit, all resistors are of equal value except R_A = R/A = 5R/3, and all capacitors are of equal size; the dominant frequency is 1/(2πRC). The output of op amp 0 corresponds to the x variable, the output of op amp 1 to the first derivative of x, and the output of op amp 2 to the second derivative. Similar circuits only require one diode or no diodes at all. See also the well-known Chua's circuit, one basis for chaotic true random number generators. The ease of construction of the circuit has made it a ubiquitous real-world example of a chaotic system.
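The same jerk equation can be explored in software rather than on a breadboard. The sketch below (the initial condition and integration settings are arbitrary illustrative choices, not taken from the circuit literature; if an orbit escapes to infinity, another starting point should be tried) rewrites it as three first-order equations:

```python
import numpy as np
from scipy.integrate import solve_ivp

A = 0.6  # the adjustable parameter; chaotic behavior is reported for A = 3/5

def jerk(t, state):
    # Rewrite x''' + A x'' + x' - |x| + 1 = 0 as a first-order system:
    # with v = x' and a = x'', we get a' = -A a - v + |x| - 1.
    x, v, a = state
    return [v, a, -A * a - v + abs(x) - 1.0]

# An arbitrary starting point near the x = 1 equilibrium.
sol = solve_ivp(jerk, (0.0, 500.0), [0.1, 0.0, 0.0], max_step=0.01)

# Discard the transient and inspect the late-time behavior; a bounded
# orbit that neither settles down nor repeats suggests chaos.
x_late = sol.y[0][sol.t > 400.0]
print(x_late.min(), x_late.max())
```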
Spontaneous order
Under the right conditions, chaos spontaneously evolves into a lockstep pattern. In the Kuramoto model, four conditions suffice to produce synchronization in a chaotic system. Examples include the coupled oscillation of Christiaan Huygens' pendulums, fireflies, neurons, the London Millennium Bridge resonance, and large arrays of Josephson junctions.

History
An early proponent of chaos theory was Henri Poincaré. In the 1880s, while studying the three-body problem, he found that there can be orbits that are nonperiodic, and yet not forever increasing nor approaching a fixed point. In 1898, Jacques Hadamard published an influential study of the chaotic motion of a free particle gliding frictionlessly on a surface of constant negative curvature, called "Hadamard's billiards". Hadamard was able to show that all trajectories are unstable, in that all particle trajectories diverge exponentially from one another, with a positive Lyapunov exponent.

Chaos theory began in the field of ergodic theory. Later studies, also on the topic of nonlinear differential equations, were carried out by George David Birkhoff, Andrey Nikolaevich Kolmogorov, Mary Lucy Cartwright and John Edensor Littlewood, and Stephen Smale. Except for Smale, these studies were all directly inspired by physics: the three-body problem in the case of Birkhoff, turbulence and astronomical problems in the case of Kolmogorov, and radio engineering in the case of Cartwright and Littlewood. Although chaotic planetary motion had not been observed, experimentalists had encountered turbulence in fluid motion and nonperiodic oscillation in radio circuits without the benefit of a theory to explain what they were seeing.

Despite initial insights in the first half of the twentieth century, chaos theory became formalized as such only after mid-century, when it first became evident to some scientists that linear theory, the prevailing system theory at that time, simply could not explain the observed behavior of certain experiments, like that of the logistic map. What had been attributed to measurement imprecision and simple "noise" was considered by chaos theorists as a full component of the studied systems.

The main catalyst for the development of chaos theory was the electronic computer. Much of the mathematics of chaos theory involves the repeated iteration of simple mathematical formulas, which would be impractical to do by hand. Electronic computers made these repeated calculations practical, while figures and images made it possible to visualize these systems. As a graduate student in Chihiro Hayashi's laboratory at Kyoto University, Yoshisuke Ueda was experimenting with analog computers and noticed, on November 27, 1961, what he called "randomly transitional phenomena". Yet his advisor did not agree with his conclusions at the time, and did not allow him to report his findings until 1970.

Edward Lorenz was an early pioneer of the theory. His interest in chaos came about accidentally through his work on weather prediction in 1961. Lorenz and his collaborator Ellen Fetter were using a simple digital computer, a Royal McBee LGP-30, to run weather simulations. They wanted to see a sequence of data again, and to save time they started the simulation in the middle of its course. They did this by entering a printout of the data that corresponded to conditions in the middle of the original simulation. To their surprise, the weather that the machine began to predict was completely different from the previous calculation. They tracked this down to the computer printout. The computer worked with 6-digit precision, but the printout rounded variables off to 3 digits, so a value like 0.506127 printed as 0.506. This difference is tiny, and the consensus at the time would have been that it should have no practical effect. However, Lorenz discovered that small changes in initial conditions produced large changes in long-term outcome. Lorenz's discovery, which gave its name to Lorenz attractors, showed that even detailed atmospheric modeling cannot, in general, make precise long-term weather predictions.
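Lorenz's rounding accident is easy to reproduce in miniature with the three-variable Lorenz model given earlier (an analogy rather than a re-creation: his 1961 runs used a 12-variable weather model). Two runs whose initial conditions differ only in the fourth decimal place, mimicking 0.506127 versus 0.506 on the printout, decorrelate within a couple dozen time units:

```python
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

t_eval = np.linspace(0.0, 30.0, 3001)
run_a = solve_ivp(lorenz, (0, 30), [0.506127, 1.0, 1.0],
                  t_eval=t_eval, rtol=1e-10, atol=1e-10)
run_b = solve_ivp(lorenz, (0, 30), [0.506, 1.0, 1.0],
                  t_eval=t_eval, rtol=1e-10, atol=1e-10)

# The separation grows roughly exponentially until it saturates at the
# diameter of the attractor: divergence, but bounded divergence.
sep = np.linalg.norm(run_a.y - run_b.y, axis=0)
for t in (0, 5, 10, 15, 20):
    print(f"t={t:2d}  separation ~ {sep[int(t * 100)]:.3g}")
```

The saturation of the separation at the attractor's size is worth noting; it anticipates the point about divergence versus boundedness made in the discussion of the horseshoe-nail verse below.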
In 1963, Benoit Mandelbrot, studying information theory, discovered that noise in many phenomena (including stock prices and telephone circuits) was patterned like a Cantor set, a set of points with infinite roughness and detail. Mandelbrot described both the "Noah effect" (in which sudden discontinuous changes can occur) and the "Joseph effect" (in which persistence of a value can occur for a while, yet suddenly change afterwards). In 1967, he published "How long is the coast of Britain? Statistical self-similarity and fractional dimension", showing that a coastline's length varies with the scale of the measuring instrument, resembles itself at all scales, and is infinite in length for an infinitesimally small measuring device. Arguing that a ball of twine appears as a point when viewed from far away (0-dimensional), a ball when viewed from fairly near (3-dimensional), or a curved strand (1-dimensional), he argued that the dimensions of an object are relative to the observer and may be fractional.

An object whose irregularity is constant over different scales ("self-similarity") is a fractal (examples include the Menger sponge, the Sierpiński gasket, and the Koch curve or snowflake, which is infinitely long yet encloses a finite space and has a fractal dimension of about 1.2619). In 1982, Mandelbrot published The Fractal Geometry of Nature, which became a classic of chaos theory.

In December 1977, the New York Academy of Sciences organized the first symposium on chaos, attended by David Ruelle, Robert May, James A. Yorke (coiner of the term "chaos" as used in mathematics), Robert Shaw, and the meteorologist Edward Lorenz. The following year Pierre Coullet and Charles Tresser published "Itérations d'endomorphismes et groupe de renormalisation", and Mitchell Feigenbaum's article "Quantitative Universality for a Class of Nonlinear Transformations" finally appeared in a journal, after three years of referee rejections. Thus Feigenbaum (1975) and Coullet & Tresser (1978) discovered universality in chaos, permitting the application of chaos theory to many different phenomena.

In 1979, Albert J. Libchaber, during a symposium organized in Aspen by Pierre Hohenberg, presented his experimental observation of the bifurcation cascade that leads to chaos and turbulence in Rayleigh–Bénard convection systems. He was awarded the Wolf Prize in Physics in 1986, along with Mitchell J. Feigenbaum, for their inspiring achievements.

In 1986, the New York Academy of Sciences co-organized, with the National Institute of Mental Health and the Office of Naval Research, the first important conference on chaos in biology and medicine. There, Bernardo Huberman presented a mathematical model of the eye-tracking dysfunction among people with schizophrenia. This led to a renewal of physiology in the 1980s through the application of chaos theory, for example in the study of pathological cardiac cycles.

In 1987, Per Bak, Chao Tang and Kurt Wiesenfeld published a paper in Physical Review Letters describing for the first time self-organized criticality (SOC), considered one of the mechanisms by which complexity arises in nature. Alongside largely lab-based approaches such as the Bak–Tang–Wiesenfeld sandpile, many other investigations have focused on large-scale natural or social systems that are known (or suspected) to display scale-invariant behavior. Although these approaches were not always welcomed (at least initially) by specialists in the subjects examined, SOC has nevertheless become established as a strong candidate for explaining a number of natural phenomena, including earthquakes (which, long before SOC was discovered, were known as a source of scale-invariant behavior such as the Gutenberg–Richter law describing the statistical distribution of earthquake sizes, and the Omori law describing the frequency of aftershocks), solar flares, fluctuations in economic systems such as financial markets (references to SOC are common in econophysics), landscape formation, forest fires, landslides, epidemics, and biological evolution (where SOC has been invoked, for example, as the dynamical mechanism behind the theory of "punctuated equilibria" put forward by Niles Eldredge and Stephen Jay Gould). Given the implications of a scale-free distribution of event sizes, some researchers have suggested that another phenomenon that should be considered an example of SOC is the occurrence of wars.
These investigations of SOC have included both attempts at modelling (either developing new models or adapting existing ones to the specifics of a given natural system) and extensive data analysis to determine the existence and/or characteristics of natural scaling laws.

Also in 1987, James Gleick published Chaos: Making a New Science, which became a best-seller and introduced the general principles of chaos theory as well as its history to the broad public. Initially the domain of a few isolated individuals, chaos theory progressively emerged as a transdisciplinary and institutional discipline, mainly under the name of nonlinear systems analysis. Alluding to Thomas Kuhn's concept of a paradigm shift exposed in The Structure of Scientific Revolutions (1962), many "chaologists" (as some described themselves) claimed that this new theory was an example of such a shift, a thesis upheld by Gleick. The availability of cheaper, more powerful computers broadened the applicability of chaos theory. Currently, chaos theory remains an active area of research involving many different disciplines, such as mathematics, topology, physics, social systems, population modeling, biology, meteorology, astrophysics, information theory, computational neuroscience, and pandemic crisis management.

A popular but inaccurate analogy for chaos
Sensitive dependence on initial conditions (i.e., the butterfly effect) has been illustrated using the following folklore: “For want of a nail, the shoe was lost. For want of a shoe, the horse was lost. For want of a horse, the rider was lost. For want of a rider, the battle was lost. For want of a battle, the kingdom was lost. And all for the want of a horseshoe nail.” Based on the above, many people mistakenly believe that the impact of a tiny initial perturbation monotonically increases with time and that any tiny perturbation can eventually produce a large impact on numerical integrations. However, in 2008, Lorenz stated that he did not feel that this verse described true chaos, but that it better illustrated the simpler phenomenon of instability, and that the verse implicitly suggests that subsequent small events will not reverse the outcome (Lorenz, 2008). That is, the verse only indicates divergence, not boundedness; boundedness is important for the finite size of a butterfly pattern.

Applications
Although chaos theory was born from observing weather patterns, it has become applicable to a variety of other situations. Some areas benefiting from chaos theory today are geology, mathematics, biology, computer science, economics, engineering, finance, meteorology, philosophy, anthropology, physics, politics, population dynamics, and robotics. A few categories are listed below with examples, but this is by no means a comprehensive list, as new applications are appearing.

Cryptography
Chaos theory has been used for many years in cryptography. In the past few decades, chaos and nonlinear dynamics have been used in the design of hundreds of cryptographic primitives. These algorithms include image encryption algorithms, hash functions, secure pseudo-random number generators, stream ciphers, watermarking, and steganography. The majority of these algorithms are based on uni-modal chaotic maps, and a large portion of these algorithms use the control parameters and the initial condition of the chaotic maps as their keys.
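As a toy sketch of that key-from-chaos idea (emphatically not a secure construction; real designs are far more involved and, as noted in the next paragraph, many have been broken), the logistic map's control parameter and initial condition can serve as a shared key for a keystream generator:

```python
def chaotic_keystream(x0, r, nbytes):
    """Toy keystream from the logistic map; x0 and r act as the key.

    Purely illustrative -- floating-point chaos-based stream ciphers
    like this one are known to be cryptographically weak.
    """
    x, out = x0, bytearray()
    for _ in range(100):          # discard a transient
        x = r * x * (1.0 - x)
    for _ in range(nbytes):
        x = r * x * (1.0 - x)
        out.append(int(x * 256) % 256)
    return bytes(out)

def xor_cipher(data, key_x0=0.345, key_r=3.99):
    # XOR with the keystream; applying the function twice decrypts.
    ks = chaotic_keystream(key_x0, key_r, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

msg = b"attack at dawn"
ct = xor_cipher(msg)              # encrypt
print(xor_cipher(ct) == msg)      # True: XOR again to decrypt
```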
From a wider perspective, without loss of generality, the similarities between chaotic maps and cryptographic systems are the main motivation for the design of chaos-based cryptographic algorithms. One type of encryption, secret key or symmetric key, relies on diffusion and confusion, which is modeled well by chaos theory. Another type of computing, DNA computing, when paired with chaos theory, offers a way to encrypt images and other information. However, many DNA–chaos cryptographic algorithms have been shown to be insecure, or the technique applied has been suggested to be inefficient.

Robotics
Robotics is another area that has recently benefited from chaos theory. Instead of robots acting in a trial-and-error type of refinement to interact with their environment, chaos theory has been used to build a predictive model. Chaotic dynamics have been exhibited by passive-walking biped robots.

Biology
For over a hundred years, biologists have been keeping track of populations of different species with population models. Most models are continuous, but recently scientists have been able to implement chaotic models in certain populations. For example, a study on models of Canadian lynx showed there was chaotic behavior in the population growth. Chaos can also be found in ecological systems, such as hydrology. While a chaotic model for hydrology has its shortcomings, there is still much to learn from looking at the data through the lens of chaos theory. Another biological application is found in cardiotocography. Fetal surveillance is a delicate balance of obtaining accurate information while being as noninvasive as possible. Better models of warning signs of fetal hypoxia can be obtained through chaotic modeling.

As Perry points out, modeling of chaotic time series in ecology is helped by constraint. There is always potential difficulty in distinguishing real chaos from chaos that is only in the model; hence both constraint in the model and/or duplicate time series data for comparison will be helpful in constraining the model to something close to reality, as in Perry & Wall (1984). Gene-for-gene co-evolution sometimes shows chaotic dynamics in allele frequencies, and adding variables exaggerates this: chaos is more common in models incorporating additional variables to reflect additional facets of real populations. Robert M. May himself did some of these foundational crop co-evolution studies, and this in turn helped shape the entire field. Even in a steady environment, merely combining one crop and one pathogen may result in quasi-periodic or chaotic oscillations in the pathogen population.

Economics
It is possible that economic models can also be improved through an application of chaos theory, but predicting the health of an economic system and what factors influence it most is an extremely complex task. Economic and financial systems are fundamentally different from those in the classical natural sciences, since the former are inherently stochastic in nature, as they result from the interactions of people; thus pure deterministic models are unlikely to provide accurate representations of the data. The empirical literature that tests for chaos in economics and finance presents very mixed results, in part due to confusion between specific tests for chaos and more general tests for non-linear relationships. Chaos can be detected in economics by means of recurrence quantification analysis.
In fact, Orlando et al., by means of the so-called recurrence quantification correlation index, were able to detect hidden changes in time series. The same technique was then employed to detect transitions from laminar (regular) to turbulent (chaotic) phases, as well as differences between macroeconomic variables, and to highlight hidden features of economic dynamics. Finally, chaos theory could help in modeling how an economy operates, as well as in embedding shocks due to external events such as COVID-19.
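A minimal sketch of the recurrence idea follows (illustrative only: published recurrence quantification analyses embed the series in a reconstructed phase space and compute several further measures beyond the recurrence rate used here):

```python
import numpy as np

def recurrence_matrix(series, eps):
    """Thresholded distance matrix R[i, j] = 1 when states i and j are
    closer than eps -- the basic object of recurrence analysis."""
    s = np.asarray(series, dtype=float)
    dist = np.abs(s[:, None] - s[None, :])
    return (dist < eps).astype(int)

# Compare a periodic signal with a chaotic logistic-map series.
t = np.arange(500)
periodic = np.sin(2 * np.pi * t / 50)

x, chaotic = 0.3, []
for _ in range(500):
    x = 4.0 * x * (1.0 - x)
    chaotic.append(x)

for name, s in [("periodic", periodic), ("chaotic", chaotic)]:
    R = recurrence_matrix(s, eps=0.1)
    rr = R.mean()  # recurrence rate, one simple RQA-style measure
    print(f"{name}: recurrence rate = {rr:.3f}")
```

The periodic signal produces long, regular diagonal structures in its recurrence matrix, while the chaotic series yields shorter, more scattered ones; quantifying that difference is the essence of the approach.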
Other areas
In chemistry, predicting gas solubility is essential to manufacturing polymers, but models using particle swarm optimization (PSO) tend to converge to the wrong points. An improved version of PSO has been created by introducing chaos, which keeps the simulations from getting stuck. In celestial mechanics, especially when observing asteroids, applying chaos theory leads to better predictions about when these objects will approach Earth and other planets. Four of the five moons of Pluto rotate chaotically. In quantum physics and electrical engineering, the study of large arrays of Josephson junctions benefited greatly from chaos theory. Closer to home, coal mines have always been dangerous places where frequent natural gas leaks cause many deaths. Until recently, there was no reliable way to predict when they would occur, but these gas leaks have chaotic tendencies that, when properly modeled, can be predicted fairly accurately.

Chaos theory can be applied outside of the natural sciences, but historically nearly all such studies have suffered from lack of reproducibility, poor external validity, and/or inattention to cross-validation, resulting in poor predictive accuracy (if out-of-sample prediction has even been attempted). Glass, and Mandell and Selz, have found that no EEG study has as yet indicated the presence of strange attractors or other signs of chaotic behavior.

Researchers have continued to apply chaos theory to psychology. For example, in modeling group behavior in which heterogeneous members may behave as if sharing to different degrees what in Wilfred Bion's theory is a basic assumption, researchers have found that the group dynamic is the result of the individual dynamics of the members: each individual reproduces the group dynamics at a different scale, and the chaotic behavior of the group is reflected in each member.

Redington and Reidbord (1992) attempted to demonstrate that the human heart could display chaotic traits. They monitored the changes in between-heartbeat intervals for a single psychotherapy patient as she moved through periods of varying emotional intensity during a therapy session. Results were admittedly inconclusive. Not only were there ambiguities in the various plots the authors produced to purportedly show evidence of chaotic dynamics (spectral analysis, phase trajectory, and autocorrelation plots), but also, when they attempted to compute a Lyapunov exponent as more definitive confirmation of chaotic behavior, the authors found they could not reliably do so.

In their 1995 paper, Metcalf and Allen maintained that they uncovered in animal behavior a pattern of period doubling leading to chaos. The authors examined a well-known response called schedule-induced polydipsia, by which an animal deprived of food for certain lengths of time will drink unusual amounts of water when the food is at last presented. The control parameter (r) operating here was the length of the interval between feedings, once resumed. The authors were careful to test a large number of animals and to include many replications, and they designed their experiment so as to rule out the likelihood that changes in response patterns were caused by different starting places for r.

Time series and first-delay plots provide the best support for the claims made, showing a fairly clear march from periodicity to irregularity as the feeding times were increased. The various phase-trajectory plots and spectral analyses, on the other hand, do not match up well enough with the other graphs or with the overall theory to lead inexorably to a chaotic diagnosis. For example, the phase trajectories do not show a definite progression towards greater and greater complexity (and away from periodicity); the process seems quite muddied. Also, where Metcalf and Allen saw periods of two and six in their spectral plots, there is room for alternative interpretations. All of this ambiguity necessitates some serpentine, post-hoc explanation to show that the results fit a chaotic model.

By adapting a model of career counseling to include a chaotic interpretation of the relationship between employees and the job market, Amundson and Bright found that better suggestions can be made to people struggling with career decisions. Modern organizations are increasingly seen as open complex adaptive systems with fundamental natural nonlinear structures, subject to internal and external forces that may contribute chaos. For instance, team building and group development are increasingly being researched as inherently unpredictable systems, as the uncertainty of different individuals meeting for the first time makes the trajectory of the team unknowable. Some say the chaos metaphor, used in verbal theories and grounded on mathematical models and psychological aspects of human behavior, provides helpful insights for describing the complexity of small work groups that go beyond the metaphor itself.

Traffic forecasting may benefit from applications of chaos theory. Better predictions of when congestion will occur would allow measures to be taken to disperse it before it would otherwise have occurred. Combining chaos theory principles with a few other methods has led to a more accurate short-term prediction model (see, for example, the Biham–Middleton–Levine traffic model). Chaos theory has been applied to environmental water-cycle data (also hydrological data), such as rainfall and streamflow. These studies have yielded controversial results, because the methods for detecting a chaotic signature are often relatively subjective. Early studies tended to "succeed" in finding chaos, whereas subsequent studies and meta-analyses called those studies into question and provided explanations for why these datasets are not likely to have low-dimension chaotic dynamics.

In art (predominantly art theory), a possible post-postmodern era has been outlined, with emphasis on multiple narratives and the notion that every fictional angle is a possibility. This outlook has been characterized as a bisociative (or trisociative) discourse, explained with emphasis on an institutional interchange of subjectivistic agents.
See also
Examples of chaotic systems
Advected contours
Arnold's cat map
Bifurcation theory
Bouncing ball dynamics
Chua's circuit
Cliodynamics
Coupled map lattice
Double pendulum
Duffing equation
Dynamical billiards
Economic bubble
Gaspard-Rice system
Hénon map
Horseshoe map
List of chaotic maps
Rössler attractor
Standard map
Swinging Atwood's machine
Tilt-A-Whirl

Other related topics
Amplitude death
Anosov diffeomorphism
Catastrophe theory
Causality
Chaos as topological supersymmetry breaking
Chaos machine
Chaotic mixing
Chaotic scattering
Control of chaos
Determinism
Edge of chaos
Emergence
Mandelbrot set
Kolmogorov–Arnold–Moser theorem
Ill-conditioning
Ill-posedness
Nonlinear system
Patterns in nature
Predictability
Quantum chaos
Santa Fe Institute
Shadowing lemma
Synchronization of chaos
Unintended consequence

People
Ralph Abraham
Michael Berry
Leon O. Chua
Ivar Ekeland
Doyne Farmer
Martin Gutzwiller
Brosl Hasslacher
Michel Hénon
Aleksandr Lyapunov
Norman Packard
Otto Rössler
David Ruelle
Oleksandr Mikolaiovich Sharkovsky
Robert Shaw
Floris Takens
James A. Yorke
George M. Zaslavsky

References

Further reading
Semitechnical and popular works
Christophe Letellier, Chaos in Nature, World Scientific Publishing Company, 2012.
John Briggs and David Peat, Turbulent Mirror: An Illustrated Guide to Chaos Theory and the Science of Wholeness, Harper Perennial, 1990, 224 pp.
John Briggs and David Peat, Seven Life Lessons of Chaos: Spiritual Wisdom from the Science of Change, Harper Perennial, 2000, 224 pp.
Predrag Cvitanović, Universality in Chaos, Adam Hilger, 1989, 648 pp.
Leon Glass and Michael C. Mackey, From Clocks to Chaos: The Rhythms of Life, Princeton University Press, 1988, 272 pp.
James Gleick, Chaos: Making a New Science, New York: Penguin, 1988, 368 pp.
L. Douglas Kiel and Euel W. Elliott (eds.), Chaos Theory in the Social Sciences: Foundations and Applications, University of Michigan Press, 1997, 360 pp.
Arvind Kumar, Chaos, Fractals and Self-Organisation: New Perspectives on Complexity in Nature, National Book Trust, 2003.
Hans Lauwerier, Fractals, Princeton University Press, 1991.
Edward Lorenz, The Essence of Chaos, University of Washington Press, 1996.
David Peak and Michael Frame, Chaos Under Control: The Art and Science of Complexity, Freeman, 1994.
Heinz-Otto Peitgen and Dietmar Saupe (eds.), The Science of Fractal Images, Springer, 1988, 312 pp.
Nuria Perpinya, Caos, virus, calma. La Teoría del Caos aplicada al desórden artístico, social y político, Páginas de Espuma, 2021.
Clifford A. Pickover, Computers, Pattern, Chaos, and Beauty: Graphics from an Unseen World, St. Martin's Press, 1991.
Clifford A. Pickover, Chaos in Wonderland: Visual Adventures in a Fractal World, St. Martin's Press, 1994.
Ilya Prigogine and Isabelle Stengers, Order Out of Chaos, Bantam, 1984.
David Ruelle, Chance and Chaos, Princeton University Press, 1993.
Ivars Peterson, Newton's Clock: Chaos in the Solar System, Freeman, 1993.
Manfred Schroeder, Fractals, Chaos, and Power Laws, Freeman, 1991.
Ian Stewart, Does God Play Dice?: The Mathematics of Chaos, Blackwell Publishers, 1990.
Steven Strogatz, Sync: The Emerging Science of Spontaneous Order, Hyperion, 2003.
Yoshisuke Ueda, The Road to Chaos, Aerial Press, 1993.
M. Mitchell Waldrop, Complexity: The Emerging Science at the Edge of Order and Chaos, Simon & Schuster, 1992.
Antonio Sawaya, Financial Time Series Analysis: Chaos and Neurodynamics Approach, Lambert, 2012.

External links
Nonlinear Dynamics Research Group with Animations in Flash
The Chaos group at the University of Maryland
The Chaos Hypertextbook. An introductory primer on chaos and fractals
ChaosBook.org. An advanced graduate textbook on chaos (no fractals)
Society for Chaos Theory in Psychology & Life Sciences
Nonlinear Dynamics Research Group at CSDC, Florence, Italy
Nonlinear dynamics: how science comprehends chaos, talk presented by Sunny Auyang, 1998
Nonlinear Dynamics. Models of bifurcation and chaos by Elmer G. Wiens
Gleick's Chaos (excerpt)
Systems Analysis, Modelling and Prediction Group at the University of Oxford
A page about the Mackey–Glass equation
High Anxieties — The Mathematics of Chaos (2008), BBC documentary directed by David Malone
The chaos theory of evolution, article published in New Scientist on the similarities between evolution and non-linear systems, including the fractal nature of life and chaos
Jos Leys, Étienne Ghys and Aurélien Alvarez, Chaos, A Mathematical Adventure. Nine films about dynamical systems, the butterfly effect and chaos theory, intended for a wide audience
"Chaos Theory", BBC Radio 4 discussion with Susan Greenfield, David Papineau and Neil Johnson (In Our Time, May 16, 2002)
Chaos: The Science of the Butterfly Effect (2019), an explanation presented by Derek Muller

Complex systems theory
Computational fields of study
2,854
6,298
https://en.wikipedia.org/wiki/Cupola
Cupola
In architecture, a cupola is a relatively small, most often dome-like, tall structure on top of a building. Often used to provide a lookout or to admit light and air, it usually crowns a larger roof or dome. The word derives, via Italian, from lower Latin cupula (classical Latin cupella; cf. Latin cupa, 'cask'), indicating a vault resembling an upside-down cup.

Background
The cupola evolved during the Renaissance from the older oculus. Being weatherproof, the cupola was better suited to the wetter climates of northern Europe. The chhatri, seen in Indian architecture, fits the definition of a cupola when it is used atop a larger structure. Cupolas often serve as a belfry, belvedere, vault, or roof lantern above a main roof. In other cases they may crown a spire, tower, or turret. Barns often have cupolas for ventilation. Cupolas can also appear as small buildings in their own right. The square, dome-like segment of a North American railroad train caboose that contains the second-level or "angel" seats is also called a cupola.

On armoured vehicles
The term cupola can also refer to the protrusions atop an armoured fighting vehicle, so called for their distinctive dome-like appearance. They allow crew or personnel to observe, offering very good all-round vision, or even to field weaponry, without being exposed to incoming fire. Later designs, however, became progressively flatter and less prominent as technology evolved to allow designers to reduce the profile of their vehicles.

See also
Daylighting
Windcatcher
Cupola (ISS module)

Notes
References

Architectural elements
Cupola
2,855
6,314
https://en.wikipedia.org/wiki/Fire%20%28classical%20element%29
Fire (classical element)
Fire is one of the four classical elements, along with earth, water and air, in ancient Greek philosophy and science. Fire is considered to be both hot and dry and, according to Plato, is associated with the tetrahedron.

Greek and Roman tradition
Fire is one of the four classical elements in ancient Greek philosophy and science. It was commonly associated with the qualities of energy, assertiveness, and passion. In one Greek myth, Prometheus stole fire from the gods to protect the otherwise helpless humans, but was punished for this charity.

Fire was one of many archai proposed by the pre-Socratics, most of whom sought to reduce the cosmos, or its creation, to a single substance. Heraclitus considered fire to be the most fundamental of all elements. He believed fire gave rise to the other three elements: "All things are an interchange for fire, and fire for all things, just like goods for gold and gold for goods." He had a reputation for obscure philosophical principles and for speaking in riddles. He described how fire gave rise to the other elements as the "upward-downward path", a "hidden harmony" or series of transformations he called the "turnings of fire": first into sea, then half that sea into earth, and half that earth into rarefied air. This is a concept that anticipates both the four classical elements of Empedocles and Aristotle's transmutation of the four elements into one another.

This world, which is the same for all, no one of gods or men has made. But it always was and will be: an ever-living fire, with measures of it kindling, and measures going out.

Heraclitus regarded the soul as being a mixture of fire and water, with fire being the more noble part and water the ignoble aspect. He believed the goal of the soul is to be rid of water and become pure fire: the dry soul is the best, and it is worldly pleasures that make the soul "moist". He was known as the "weeping philosopher" and died of hydropsy, a swelling due to abnormal accumulation of fluid beneath the skin.

However, Empedocles of Akragas (c. 495 – c. 435 BC) is best known for having selected all the elements as his archai, and by the time of Plato (427–347 BC) the four Empedoclean elements were well established. In the Timaeus, Plato's major cosmological dialogue, the Platonic solid he associated with fire was the tetrahedron, which is formed from four triangles and contains the least volume with the greatest surface area. This also makes fire the element with the smallest number of sides, and Plato regarded it as appropriate for the heat of fire, which he felt is sharp and stabbing (like one of the points of a tetrahedron).

Plato's student Aristotle did not maintain his former teacher's geometric view of the elements, but rather preferred a somewhat more naturalistic explanation for the elements based on their traditional qualities. Fire, the hot and dry element, like the other elements, was an abstract principle and not identical with the normal solids, liquids and combustion phenomena we experience:

What we commonly call fire. It is not really fire, for fire is an excess of heat and a sort of ebullition; but in reality, of what we call air, the part surrounding the earth is moist and warm, because it contains both vapour and a dry exhalation from the earth.

According to Aristotle, the four elements rise or fall toward their natural place in concentric layers surrounding the center of the earth and form the terrestrial or sublunary spheres.

In ancient Greek medicine, each of the four humours became associated with an element.
Yellow bile was the humor identified with fire, since both were hot and dry. Other things associated with fire and yellow bile in ancient and medieval medicine included the season of summer, since it increased the qualities of heat and aridity; the choleric temperament (of a person dominated by the yellow bile humour); the masculine; and the eastern point of the compass.

In alchemy, the chemical element of sulfur was often associated with fire, whose alchemical symbol was an upward-pointing triangle. In alchemic tradition, metals are incubated by fire in the womb of the Earth, and alchemists only accelerate their development.

Indian tradition
Agni is a Hindu and Vedic deity. The word agni is Sanskrit for fire (noun), cognate with Latin ignis (the root of English ignite) and Russian огонь (fire), pronounced agon. Agni has three forms: fire, lightning and the sun.

Agni is one of the most important of the Vedic gods. He is the god of fire and the accepter of sacrifices. The sacrifices made to Agni go to the deities because Agni is a messenger from and to the other gods. He is ever-young, because the fire is re-lit every day, yet he is also immortal. In Indian tradition fire is also linked to Surya or the Sun and Mangala or Mars, and with the south-east direction.

Ceremonial magic
Fire and the other Greek classical elements were incorporated into the Golden Dawn system. Philosophus (4=7) is the elemental grade attributed to fire; this grade is also attributed to the Qabalistic Sephirah Netzach and the planet Venus. The elemental weapon of fire is the Wand. Each of the elements has several associated spiritual beings. The archangel of fire is Michael, the angel is Aral, the ruler is Seraph, the king is Djin, and the fire elementals (following Paracelsus) are called salamanders. Fire is considered to be active; it is represented by the symbol for Leo, and it is referred to the lower right point of the pentacle in the Supreme Invoking Ritual of the Pentacle. Many of these associations have since spread throughout the occult community.

Tarot
Fire in tarot symbolizes conversion or passion. Many references to fire in tarot are related to the usage of fire in the practice of alchemy, in which the application of fire is a prime method of conversion, and everything that touches fire is changed, often beyond recognition. The symbol of fire was a cue pointing towards transformation, the chemical variant being the symbol delta, which is also the classical symbol for fire. Conversion symbolized can be good, for example, refining raw crudities to gold, as seen in The Devil. Conversion can also be bad, as in The Tower, symbolizing a downfall due to anger. Fire is associated with the suit of rods/wands, and as such represents passion from inspiration. As an element, fire has mixed symbolism because it represents energy, which can be helpful when controlled, but volatile if left unchecked.

Modern witchcraft
Fire is one of the five elements that appear in most Wiccan traditions influenced by the Golden Dawn system of magic and Aleister Crowley's mysticism, which was in turn inspired by the Golden Dawn.

Freemasonry
In freemasonry, fire is present, for example, during the ceremony of winter solstice, a symbol also of renaissance and energy. Freemasonry takes the ancient symbolic meaning of fire and recognizes its double nature: creation and light, on the one hand, and destruction and purification, on the other.
See also Fire god Fire worship Pyrokinesis Pyromancy Pyromania References Further reading Frazer, Sir James George, Myths of the Origin of Fire, London: Macmillan, 1930. External links Different versions of the classical elements Overview the 5 elements Section on 4 elements in Buddhism a virtual exhibition about the history of fire Classical elements Numerology Esoteric cosmology Fire in culture Technical factors of astrology History of astrology
2,861
6,316
https://en.wikipedia.org/wiki/Water%20%28classical%20element%29
Water (classical element)
Water is one of the classical elements in ancient Greek philosophy, along with air, earth and fire, in the Asian Indian system Panchamahabhuta, and in the Chinese cosmological and physiological system Wu Xing. In contemporary esoteric traditions, it is commonly associated with the qualities of emotion and intuition.

Greek and Roman tradition
Water was one of many archai proposed by the pre-Socratics, most of whom tried to reduce all things to a single substance. However, Empedocles of Acragas (c. 495 – c. 435 BC) selected four archai for his four roots: air, fire, water and earth. Empedocles' roots became the four classical elements of Greek philosophy. Plato (427–347 BC) took over the four elements of Empedocles. In the Timaeus, his major cosmological dialogue, the Platonic solid associated with water is the icosahedron, which is formed from twenty equilateral triangles. This makes water the element with the greatest number of sides, which Plato regarded as appropriate because water flows out of one's hand when picked up, as if it is made of tiny little balls.

Plato's student Aristotle (384–322 BC) developed a different explanation for the elements based on pairs of qualities. The four elements were arranged concentrically around the center of the Universe to form the sublunary sphere. According to Aristotle, water is both cold and wet and occupies a place between air and earth among the elemental spheres.

In ancient Greek medicine, each of the four humours became associated with an element. Phlegm was the humor identified with water, since both were cold and wet. Other things associated with water and phlegm in ancient and medieval medicine included the season of winter, since it increased the qualities of cold and moisture; the phlegmatic temperament; the feminine; and the western point of the compass. In alchemy, the chemical element of mercury was often associated with water, and its alchemical symbol was a downward-pointing triangle.

Indian tradition
Ap is the Vedic Sanskrit term for water, which in Classical Sanskrit occurs only in the plural (sometimes re-analysed as a thematic singular), whence Hindi āp. The term derives from a Proto-Indo-European root meaning 'water'. In Hindu philosophy, the term refers to water as an element, one of the Panchamahabhuta, or "five great elements". In Hinduism, it is also the name of the deva, a personification of water (one of the Vasus in most later Puranic lists). The element water is also associated with Chandra or the moon and Shukra, who represent feelings, intuition and imagination.

Ceremonial magic
Water and the other Greek classical elements were incorporated into the Golden Dawn system. The elemental weapon of water is the cup. Each of the elements has several associated spiritual beings. The archangel of water is Gabriel, the angel is Taliahad, the ruler is Tharsis, the king is Nichsa and the water elementals are called Ondines. Water is referred to the upper right point of the pentagram in the Supreme Invoking Ritual of the Pentagram. Many of these associations have since spread throughout the occult community.

Modern witchcraft
Water is one of the five elements that appear in most Wiccan traditions. Wicca in particular was influenced by the Golden Dawn system of magic and Aleister Crowley's mysticism, which was in turn inspired by the Golden Dawn.

See also
Water
Sea and river deity

Notes

External links
Different versions of the classical elements

Classical elements
Water
Esoteric cosmology
History of astrology
Technical factors of astrology
2,863
6,317
https://en.wikipedia.org/wiki/Earth%20%28classical%20element%29
Earth (classical element)
Earth is one of the classical elements, in some systems being one of the four along with air, fire, and water.

European tradition
Earth is one of the four classical elements in ancient Greek philosophy and science. It was commonly associated with qualities of heaviness, matter and the terrestrial world. Due to the hero cults and chthonic underworld deities, the element of earth is also associated with the sensual aspects of both life and death in later occultism.

Empedocles of Acragas proposed four archai by which to understand the cosmos: fire, air, water, and earth. Plato (427–347 BCE) believed the elements were geometric forms (the platonic solids), and he assigned the cube to the element of earth in his dialogue Timaeus. Aristotle (384–322 BCE) believed earth was the heaviest element, and his theory of natural place suggested that any earth-laden substance would fall quickly, straight down, towards the center of the cosmos.

In Classical Greek and Roman myth, various goddesses represented the Earth, seasons, crops and fertility, including Demeter and Persephone; Ceres; the Horae (goddesses of the seasons); and Proserpina; as well as Hades (Pluto), who ruled the souls of the dead in the Underworld.

In ancient Greek medicine, each of the four humours became associated with an element. Black bile was the humor identified with earth, since both were cold and dry. Other things associated with earth and black bile in ancient and medieval medicine included the season of fall, since it increased the qualities of cold and aridity; the melancholic temperament (of a person dominated by the black bile humour); the feminine; and the southern point of the compass.

In alchemy, earth was believed to be primarily dry and secondarily cold (as per Aristotle). Beyond those classical attributes, the chemical substance salt was associated with earth, and its alchemical symbol was a downward-pointing triangle bisected by a horizontal line.

Indian tradition
Prithvi is the Hindu earth and mother goddess. According to one such tradition, she is the personification of the Earth itself; according to another, its actual mother, being Prithvi Tattwa, the essence of the element earth. As Prithvi Mata, or "Mother Earth", she contrasts with Dyaus Pita, "father sky". In the Rigveda, earth and sky are frequently addressed as a duality, often indicated by the idea of two complementary "half-shells." In addition, the element earth is associated with Budha or Mercury, who represents communication, business, mathematics and other practical matters.

Ceremonial magic
Earth and the other Greek classical elements were incorporated into the Golden Dawn system. Zelator is the elemental grade attributed to earth; this grade is also attributed to the Qabbalistic sphere Malkuth. The elemental weapon of earth is the Pentacle. Each of the elements has several associated spiritual beings. The archangel of earth is Uriel, the angel is Phorlakh, the ruler is Kerub, the king is Ghob, and the earth elementals (following Paracelsus) are called gnomes. Earth is considered to be passive; it is represented by the symbol for Taurus, and it is referred to the lower left point of the pentagram in the Supreme Invoking Ritual of the Pentagram. Many of these associations have since spread throughout the occult community. It is sometimes represented by its Tattva or by a downward-pointing triangle with a horizontal line through it.

Modern witchcraft
Earth is one of the five elements that appear in most Wiccan and Pagan traditions.
Wicca in particular was influenced by the Golden Dawn system of magic and Aleister Crowley's mysticism, which was in turn inspired by the Golden Dawn.

Other traditions
Earth is represented in the Aztec religion by a house; to the Hindus, a lotus; to the Scythians, a plough; to the Greeks, a wheel; and in Christian iconography, by bulls and birds.

See also
Gaia (mythology)
Mother goddess
Mother nature
Pherecydes of Syros

Notes

External links
Different versions of the classical elements

Earth in religion
Classical elements
Numerology
Esoteric cosmology
History of astrology
Technical factors of astrology
2,864
6,319
https://en.wikipedia.org/wiki/Blue%20Jam
Blue Jam
Blue Jam was an ambient, surreal dark comedy and horror radio programme created and directed by Chris Morris. It was broadcast on BBC Radio 1 in the early hours of the morning, for three series from 1997 to 1999. The programme gained cult status due to its unique mix of surreal monologue, ambient soundtrack, synthesised voices, heavily edited broadcasts and recurring sketches. It featured vocal performances by Kevin Eldon, Julia Davis, Mark Heap, David Cann and Amelia Bullmore, with Morris himself delivering disturbing monologues, one of which was revamped and made into the BAFTA-winning short film My Wrongs #8245–8249 & 117. Writers who contributed to the programme included Graham Linehan, Arthur Mathews, Peter Baynham, David Quantick, Jane Bussmann, Robert Katz and the cast. The programme was adapted into the TV series Jam, which aired in 2000. All episodes of Blue Jam are currently available for streaming and download on the Internet Archive and YouTube.

Production
On his inspiration for making the show, Morris commented: "It was so singular, and it came from a mood, quite a desolate mood. I had this misty, autumnal, boggy mood anyway, so I just went with that. But no doubt getting to the end of something like Brass Eye, where you've been forced to be a sort of surrogate lawyer, well, that's the most creatively stifling thing you could possibly do." Morris also described the show as being "like the nightmares you have when you fall asleep listening to the BBC World Service" (a reference to the World Service also appears in one of the monologues read by Morris). Morris originally requested that the show be broadcast at 3 a.m. on Radio 1 "because at that hour, on insomniac radio, the amplitude of terrible things is enormously overblown". As a compromise, the show was broadcast at midnight without much promotion. Morris reportedly included sketches too graphic or transgressive for radio that he knew would be cut, so as to make his other material seem less transgressive in comparison. During the airing of episode 6 of series one, a re-editing of the Archbishop of Canterbury's speech at Princess Diana's funeral was deemed too offensive for broadcast, and was switched with a different episode as it aired.

Format and style
Each episode opened (and closed) with a short spoken monologue (delivered by Morris) describing, in surreal, broken language, various bizarre feelings and situations (for example: "when you sick so sad you cry, and in crying cry a whole leopard from your eye"), set to ambient music interspersed with short clips of other songs and sounds. The introduction would always end with "welcome in Blue Jam", inviting the listener, who is presumably experiencing such feelings, to get lost in the program. (This format was replicated in the television adaptation Jam, often reusing opening monologues from series 3 of the radio series.) The sketches within dealt with heavy and taboo topics, such as murder, suicide, missing or dead children, and rape.

Common recurring sketches
Doctor (played by David Cann): "The Doctor" is a seemingly "normal" physician working in a standard British medical practice.
However, he has a habit of treating his patients in bizarre and often disturbing ways, such as prescribing heroin for a cold, kissing patients on various body parts to make swellings go away, making a man with a headache jump up and down to make his penis swing (while mirroring the patient's bewildered jumping himself), and making a patient leave and go into the next room so he can examine him over the telephone. His name is revealed to be Michael Perlin in several sketches.
The Monologue Man (played by Chris Morris): Short stories, often up to 10 minutes in length, written from the perspective of a lonely and socially inept man. Each story usually involves the protagonist's acquaintance Suzy in some capacity.
Michael Alexander St. John: A parody of hyperbolic and pun-laden radio presenting, St. John presents items such as the top 10 singles charts and the weekend's gigs.
Bad Sex: Short clips of two lovers (played by Julia Davis and Kevin Eldon) making increasingly bizarre erotic requests of one another, such as to "shit your leg off" and "make your spunk come out green".
The Interviewer (played by Chris Morris): conducting real interviews with celebrities such as Andrew Morton and Jerry Springer, Morris confuses and mocks his subjects with ambiguous and odd questions.
Mr. Ventham (played by Mark Heap): An extremely awkward man who requires one-to-one consultations with Mr. Reilly (played by David Cann), who seems to be his psychologist, for the most banal of matters.

The sketches not listed are often in the style of a documentary; characters speak as if being interviewed about a recent event. In one sketch, a character voiced by Morris describes a man attempting to commit suicide by jumping off a second-story balcony repeatedly; in another, an angry man (Eldon) shouts about how his car, after being picked up from the garage, is only four feet long.

Radio stings
Morris included a series of 'radio stings', bizarre sequences of sounds and prose as a parody of modern DJs' own soundbites and self-advertising pieces. Each one revolves around a contemporary DJ, such as Chris Moyles, Jo Whiley and Mark Goodier, typically involving each DJ dying in a graphic way or going mad in some form – for example, Chris Moyles covering himself in jam and hanging himself from the top of a building.

Episodes
Three series were produced, with a total of eighteen episodes. All episodes were originally broadcast weekly on BBC Radio 1. Series 1 was broadcast from 14 November to 19 December 1997; series 2 was broadcast from 27 March to 1 May 1998; and series 3 from 21 January to 25 February 1999.
Series 1 – (Fridays) 14 November 1997 to 19 December 1997, from 00:00 to 01:00.
Series 2 – (Fridays) 27 March 1998 to 1 May 1998, from 01:00 to 02:00.
Series 3 – (Thursdays) 21 January 1999 to 25 February 1999, from 00:00 to 01:00.
The first five episodes of series 1 of Blue Jam were repeated by BBC Radio 4 Extra in February and March 2014, and series 2 was rebroadcast in December.

Music
Blue Jam features songs, generally of a downtempo nature, interspersed between (and sometimes during) sketches. Artists featured include Massive Attack, Air, Morcheeba, The Chemical Brothers, Björk, Aphex Twin, Everything But the Girl and Dimitri from Paris, as well as various non-electronic artists including Sly and the Family Stone, Serge Gainsbourg, The Cardigans and Eels.

Reception
Blue Jam was favourably reviewed on several occasions by The Guardian and also received a positive review by The Independent.
Digital Spy wrote in 2014: "It's a heady cocktail that provokes an odd, unsettling reaction in the listener, yet Blue Jam is still thumpingly and frequently laugh-out-loud hilarious." Hot Press called it "as odd as comedy gets".

CD release
A CD of a number of Blue Jam sketches was released on 23 October 2000 by record label Warp. Although the CD claims to have 22 tracks, the last one, "www.bishopslips.com", is not a track, but rather a reference to the "Bishopslips" sketch, which was cut in the middle of a broadcast. Most of the sketches on the CD were remade for Jam.

Track listing
"Blue Jam Intro"
"Doc Phone"
"Lamacq sting"
"4 ft Car"
"Suicide Journalist"
"Acupuncture"
"Bad Sex"
"Mayo Sting"
"Unflustered Parents"
"Moyles Sting"
"TV Lizards"
"Doc Cock"
"Hobbs Sting"
"Morton Interview"
"Fix It Girl"
"Porn"
"Kids Party"
"Club News"
"Whiley Sting"
"Little Girl Balls"
"Blue Jam Outro"
"www.bishopslips.com" (not a real track)

Related shows
Blue Jam was later made for television and broadcast on Channel 4 as Jam. It used unusual editing techniques to achieve an unnerving ambience in keeping with the radio show. Many of the sketches were lifted from the radio version, even to the extent of simply setting images to the radio soundtrack. A subsequent "re-mixed" airing, called Jaaaaam, was even more extreme in its use of post-production gadgetry, often heavily distorting the footage. Blue Jam shares parallels with early editions of the US public radio show Joe Frank: Work in Progress from the mid-1980s, which Joe Frank did on the NPR affiliate station KCRW in Santa Monica, California.

References

External links
Blue Jam on the BBC Comedy site – repeats on BBC Radio 4 Extra

1997 radio programme debuts
BBC Radio comedy programmes
2,865
6,322
https://en.wikipedia.org/wiki/Carolina%20parakeet
Carolina parakeet
The Carolina parakeet (Conuropsis carolinensis), or Carolina conure, is an extinct species of small green neotropical parrot with a bright yellow head, reddish-orange face and pale beak that was native to the eastern, midwestern and plains states of the United States. It was the only indigenous parrot within its range, as well as one of only three parrot species native to the United States (the others being the thick-billed parrot, now extirpated, and the green parakeet, still present in Texas; a fourth parrot species, the red-crowned amazon, is debated). The Carolina parakeet was found from southern New York and Wisconsin to Kentucky, Tennessee and the Gulf of Mexico, from the Atlantic seaboard to as far west as eastern Colorado. It lived in old-growth forests along rivers and in swamps. It was called puzzi la née ("head of yellow") or pot pot chee by the Seminole and kelinky in Chickasaw. Though formerly prevalent within its range, the bird had become rare by the middle of the 19th century. The last confirmed sighting in the wild was of the ludovicianus subspecies in 1910. The last known specimen perished in captivity at the Cincinnati Zoo in 1918, and the species was declared extinct in 1939.

The earliest reference to these parrots was in 1583 in Florida, reported by Sir George Peckham in A True Report of the Late Discoveries of the Newfound Lands, concerning expeditions conducted by English explorer Sir Humphrey Gilbert, which notes that explorers in North America "doe testifie that they have found in those countryes; ... parrots." They were first scientifically described in English naturalist Mark Catesby's two-volume Natural History of Carolina, Florida and the Bahama Islands, published in London in 1731 and 1743.

Carolina parakeets were probably poisonous: American naturalist and painter John J. Audubon noted that cats apparently died from eating them, and they are known to have eaten the toxic seeds of cockleburs.

Taxonomy
Conuropsis carolinensis is a species of the genus Conuropsis, one of numerous genera of New World Neotropical parrots in the family Psittacidae of true parrots. The specific name Psittacus carolinensis was assigned by Swedish zoologist Carl Linnaeus in the 10th edition of Systema Naturae, published in 1758. The species was given its own genus, Conuropsis, by Italian zoologist and ornithologist Tommaso Salvadori in 1891 in his Catalogue of the Birds in the British Museum, volume 20. The name is derived from the Greek-ified conure ("parrot of the genus Conurus", an obsolete name of the genus Aratinga) + -opsis ("likeness of") and Latinized Carolina (from Carolana, an English colonial province) + -ensis (of or "from a place"), therefore a bird "like a conure from Carolina."

There are two recognized subspecies. The Louisiana subspecies of the Carolina parakeet, C. c. ludovicianus, was slightly different in color from the nominate subspecies, being more bluish-green and generally of a somewhat subdued coloration, and became extinct in much the same way, but at a somewhat earlier date (early 1910s). The Appalachian Mountains separated these birds from the eastern C. c. carolinensis.

Evolution
According to a study of mitochondrial DNA recovered from museum specimens, their closest living relatives include some of the South American Aratinga parakeets: the Nanday parakeet, the sun parakeet, and the golden-capped parakeet. The authors note the bright yellow and orange plumage and blue wing feathers found in Conuropsis carolinensis are traits shared by another species, the jandaya parakeet
jandaya), that was not sampled in the study but is generally thought to be closely related. To help resolve the divergence time, the whole genome of a preserved specimen has now been sequenced. The Carolina parakeet colonized North America about 5.5 million years ago. This was well before North America and South America were joined by the formation of the Panama land bridge about 3.5 mya. Since the Carolina parakeet's more distant relations are geographically closer to its historic range while its closest relatives are geographically more distant, these data are consistent with the generally accepted hypothesis that Central and North America were colonized at different times by distinct lineages of parrots – parrots that originally invaded South America from Antarctica some time after the breakup of Gondwana, where Neotropical parrots originated approximately 50 mya. The following cladogram shows the placement of the Carolina parakeet among its closest relatives, after a DNA study by Kirchman et al. (2012): A fossil parrot, designated Conuropsis fratercula, was described based on a single humerus from the Miocene Sheep Creek Formation (possibly late Hemingfordian, c. 16 mya, possibly later) of Snake River, Nebraska. It was a smaller bird, three-quarters the size of the Carolina parakeet. "The present species is of peculiar interest as it represents the first known parrot-like bird to be described as a fossil from North America." (Wetmore 1926; italics added) It is not completely certain that the species is correctly assigned to Conuropsis, but some authors consider it a paleosubspecies of the Carolina parakeet. Description The Carolina parakeet was a small green parrot very similar in size and coloration to the extant jenday parakeet and sun conure. The majority of the plumage was green with lighter green underparts, a bright yellow head and orange forehead and face extending to behind the eyes and upper cheeks (lores). The shoulders were yellow, continuing down the outer edge of the wings. The primary feathers were mostly green, but with yellow edges on the outer primaries. Thighs were green towards the top and yellow towards the feet. Male and female adults were identical in plumage; however, males were slightly larger than females (sexual dimorphism). The legs and feet were light brown. They had the zygodactyl feet of the parrot family. The skin around the eyes was white and the beak was pale flesh colored. These birds weighed about 3.5 oz., were 13 in. long, and had wingspans of 21–23 in. Young Carolina parakeets differed slightly in coloration from adults. The face and entire body were green, with paler underparts. They lacked yellow or orange plumage on the face, wings, and thighs. Hatchlings were covered in mouse-gray down until about 39–40 days, when green wings and tails appeared. Fledglings had full adult plumage at around 1 year of age. ("Nature Serve, Conuropsis carolinensis", 2005; Fuller, 2001; Mauler, 2001; Rising, 2004; Snyder and Russell, 2002) These birds were fairly long-lived, at least in captivity: a pair was kept at the Cincinnati Zoo for over 35 years. Distribution and habitat The Carolina parakeet had the northernmost range of any known parrot. It was found from southern New England and New York and Wisconsin to Kentucky, Tennessee and the Gulf of Mexico. It also had a wide distribution west of the Mississippi River, as far west as eastern Colorado. 
Its range was described by early explorers thus: the 43rd parallel as the northern limit, the 26th as the most southern, and the 73rd and 106th meridians as the eastern and western boundaries respectively; the range included all or portions of at least 28 states. Its habitats were old-growth wetland forests along rivers and in swamps, especially in the Mississippi-Missouri drainage basin, with large hollow trees, including cypress and sycamore, to use as roosting and nesting sites. Only very rough estimates of the birds' former prevalence can be made: with an estimated range of 20,000 to 2.5 million km2 and a population density of 0.5 to 2.0 parrots per km2, population estimates range from tens of thousands to a few million birds (though the densest populations occurred in Florida, covering 170,000 km2, so there may have been hundreds of thousands of the birds in that state alone). The species may have appeared as a very rare vagrant in places as far north as Southern Ontario. A few bones, including a pygostyle found at the Calvert Site in Southern Ontario, came from the Carolina parakeet. The possibility remains open that this specimen was taken to Southern Ontario for ceremonial purposes. Behavior and diet The bird lived in huge, noisy flocks of as many as 200–300 birds. It built its nest in a hollow tree, laying two to five (most accounts say two) round white eggs. It mostly ate the seeds of forest trees and shrubs, including those of cypress, hackberry, beech, sycamore, elm, pine, maple, oak, and other plants such as thistles and sandspurs (Cenchrus species). It also ate fruits, including apples, grapes and figs (often from orchards by the time of its decline), as well as flower buds and, occasionally, insects. It was especially noted for its predilection for cockleburs (Xanthium strumarium), a plant which contains a toxic glucoside, and it was considered to be an agricultural pest of grain crops. Extinction The last captive Carolina parakeet, Incas, died at the Cincinnati Zoo on February 21, 1918, in the same cage as Martha, the last passenger pigeon, who died in 1914. There were no scientific studies or surveys of this bird by American naturalists; most information about it comes from anecdotal accounts and museum specimens. Therefore, details of its prevalence and decline are unverified or speculative. There are extensive accounts of the pre-colonial and early colonial prevalence of this bird. The existence of flocks of gregarious, very colorful and raucous parrots could hardly have gone unnoted by European explorers, as parrots were virtually unknown in seafaring European nations in the 16th and 17th centuries. Later accounts, from the latter half of the 19th century onward, noted the birds' sparseness and absence. Genetic evidence indicates that while populations had been in decline since the last glacial maximum, the lack of evidence of inbreeding suggests that the final decline was very rapid. The birds' range collapsed from east to west with settlement and clearing of the eastern and southern deciduous forests. John J. Audubon commented as early as 1832 on the decline of the birds. The bird was rarely reported outside Florida after 1860. The last reported sighting east of the Mississippi River (except Florida) was in 1878 in Kentucky. By the turn of the century it was restricted to the swamps of central Florida. The last known wild specimen was killed in Okeechobee County, Florida, in 1904, and the last captive bird died at the Cincinnati Zoo on February 21, 1918. 
This was the male specimen, called "Incas," who died within a year of his mate, "Lady Jane." Additional reports of the bird were made in Okeechobee County, Florida, until the late 1920s, but these are not supported by specimens. It was not until 1939, however, that the American Ornithologists' Union declared that the Carolina parakeet had become extinct. The IUCN has listed the species as extinct since 1920. In 1937, three parakeets resembling this species were sighted and filmed in the Okefenokee Swamp of Georgia. However, the American Ornithologists' Union analyzed the film and concluded that they had probably filmed feral parakeets. A year later, in 1938, a flock of parakeets was apparently sighted by a group of experienced ornithologists in the swamps of the Santee River basin in South Carolina. However, this sighting was doubted by most other ornithologists. The birds were never seen again after this sighting, and shortly afterwards a portion of the area was destroyed to make way for power lines, making the species' continued existence unlikely. About 720 skins and 16 skeletons are housed in museums around the world, and analyzable DNA has been extracted from them. Reasons for extinction The evidence indicates that humans had at least a contributory role in the extinction of the Carolina parakeet, through a variety of means. Chief among these was deforestation in the 18th and 19th centuries. Hunting played a significant role, both for the decorative use of their colorful feathers, for example, adornment of women's hats, and for reduction of crop predation. This was partially offset by the recognition of their value in controlling invasive cockleburs. Minor roles were played by capture for the pet trade and, as noted in Pacific Standard, by the introduction for crop pollination of European honeybees that competed for nest sites. A factor that exacerbated their decline to extinction was the flocking behavior that led them to return to the vicinity of dead and dying birds (e.g., birds downed by hunting), enabling wholesale slaughter. The final extinction of the species in the early years of the 20th century is something of a mystery, as it happened so rapidly. Vigorous flocks with many juveniles and reproducing pairs were noted as late as 1896, and the birds were long-lived in captivity, but they had virtually disappeared by 1904. Sufficient nest sites remained intact, so deforestation was not the final cause. American ornithologist Noel F. Snyder speculates that the birds most likely succumbed to poultry disease, although no recent or historical records exist of New World parrot populations being afflicted by domestic poultry diseases. The modern poultry scourge Newcastle disease was not detected until 1926 in Indonesia, and only a subacute form of it was reported in the United States in 1938. In addition, genetic research on samples did not show any significant presence of bird viruses (though this does not entirely rule out disease). See also Incas (Carolina parakeet), the last Carolina parakeet alive in captivity. Thick-billed parrot, one of two living parrots that had a native range in the contiguous United States; now restricted to Mexico Green parakeet, the other living U.S. 
parrot, found in southern Texas Monk parakeet, a prevalent feral parrot in the United States, often incorrectly presumed to be native Feral parrots, other non-native parrots in the United States Footnotes References Further reading Cokinos, Christopher (2009) Hope Is the Thing with Feathers: A Personal Chronicle of Vanished Birds (Chapter 1: Carolina Parakeet), Tarcher Snyder, Noel (2004) The Carolina Parakeet: Glimpses of a Vanished Bird, Princeton University Press Hume, Julian P. and Walters, Michael (2012) Extinct Birds (p. 186), Poyser Monographs External links Species profile - World Parrot Trust Fact file – ARKive "Carolina Parakeet (Conuropsis carolinensis) and Passenger Pigeon (Ectopistes migratorius)" - Carolina Nature "Carolina Parakeet: Removal of a Menace" - Cornell Lab of Ornithology "The Extinct Carolina Parakeet" - Ivory Bill News - City Parrots
https://en.wikipedia.org/wiki/Cognate
Cognate
In historical linguistics, cognates or lexical cognates are sets of words in different languages that have been inherited in direct descent from an etymological ancestor in a common parent language. Because language change can have radical effects on both the sound and the meaning of a word, cognates may not be obvious, and often it takes rigorous study of historical sources and the application of the comparative method to establish whether lexemes are cognate. Cognates are distinguished from loanwords, where a word has been borrowed from another language. The term cognate derives from the Latin noun cognatus 'blood relative'. Characteristics Cognates need not have the same meaning, which may have changed as the languages developed independently. For example, English starve and Dutch sterven 'to die' or German sterben 'to die' all descend from the same Proto-Germanic verb, *sterbaną 'to die'. Cognates also do not need to look or sound similar: English father, French père, and Armenian հայր (hayr) all descend directly from Proto-Indo-European *ph₂tḗr. An extreme case is Armenian երկու (erku) and English two, which descend from Proto-Indo-European *dwóh₁; the sound change *dw > erk in Armenian is regular. Examples of cognates from the same Indo-European root are: night (English), nicht (Scots), Nacht (German), nacht (Dutch, Frisian), nag (Afrikaans), Naach (Colognian), natt (Swedish, Norwegian), nat (Danish), nátt (Faroese), nótt (Icelandic), noc (Czech, Slovak, Polish), ночь, noch (Russian), ноќ, noć (Macedonian), нощ, nosht (Bulgarian), ніч, nich (Ukrainian), ноч, noch/noč (Belarusian), noč (Slovene), noć (Serbo-Croatian), nakts (Latvian), naktis (Lithuanian), νύξ, nyx (Ancient Greek), νύχτα / nychta (Modern Greek), nakt- (Sanskrit), natë (Albanian), nox, gen. sg. noctis (Latin), nuit (French), noche (Spanish), nueche (Asturian), noite (Portuguese and Galician), notte (Italian), nit (Catalan), nuet/nit/nueit (Aragonese), nuèch / nuèit (Occitan) and noapte (Romanian). These all mean 'night' and derive from the Proto-Indo-European *nókʷts 'night'. The Indo-European languages have hundreds of such cognate sets, though few of them are as neat as this. The Arabic salām, the Hebrew shalom, the Assyrian Neo-Aramaic shlama and the Amharic selam 'peace' are cognates, derived from the Proto-Semitic *šalām- 'peace'. False cognates False cognates are pairs of words that appear to have a common origin, but which in fact do not. For example, Latin habēre and German haben both mean 'to have' and are phonetically similar. However, the words evolved from different Proto-Indo-European (PIE) roots: haben, like English have, comes from PIE *kh₂pyé- 'to grasp', and has the Latin cognate capere 'to seize, grasp, capture'. Habēre, on the other hand, is from PIE *gʰabʰ 'to give, to receive', and hence cognate with English give and German geben. Likewise, English much and Spanish mucho look similar and have a similar meaning, but are not cognates: much is from Proto-Germanic *mikilaz < PIE *meǵ- and mucho is from Latin multum < PIE *mel-. A true cognate of much is the archaic Spanish maño 'big'. Distinctions Cognates are distinguished from other kinds of relationships. Loanwords are words borrowed from one language into another, for example English beef is borrowed from Old French boef (meaning "ox"). Although they are part of a single etymological stemma, they are not cognates. Doublets are pairs of words in the same language which are derived from a single etymon, which may have similar but distinct meanings and uses. 
Often one is a loanword and the other is the native form, or they have developed in different dialects and then found themselves together in a modern standard language. For example, Old French boef is cognate with English cow, so English cow and beef are doublets. Translations, or semantic equivalents, are words in two different languages that have similar meanings. They may be cognate, but usually they are not. For example, the German equivalent of the English word cow is Kuh, which is also cognate, but the French equivalent is vache, which is unrelated. Related terms The etymon, or ancestor word, is the ultimate source word whence one or more cognates derive. For example, the etymon of both Welsh ceffyl and Irish capall would be the Proto-Celtic *kaballos (all meaning horse). Outside of historical linguistics, a parallel term for an etymon is a root or root word. In this usage, however, the analysis is limited to within a single language rather than across separate languages. Run, as such, can be said to be the root of both running and runs, while happy would be the root word of such others as unhappiness or happily. A derivative is any word coming from a particular etymon. Similar to the distinction between etymon and root above, a nuanced distinction can sometimes be made between a derivative and a descendant. Descendant can be used more narrowly within the context of historical linguistics to emphasize a word inherited across a language barrier. For example, Russian мо́ре and Polish morze are both descendants of Proto-Slavic *moře. By contrast, within the study of morphological derivation, unhappy, happily, and unhappily are all derivatives of the word happy. See also Homology (biology) Indo-European vocabulary References External links
https://en.wikipedia.org/wiki/Cucurbitaceae
Cucurbitaceae
The Cucurbitaceae, also called cucurbits or the gourd family, are a plant family consisting of about 965 species in around 95 genera, of which the most important to humans are: Cucurbita – squash, pumpkin, zucchini or courgette, some gourds Lagenaria – calabash, and others that are inedible Citrullus – watermelon (C. lanatus, C. colocynthis) and others Cucumis – cucumber (C. sativus), various melons and vines Momordica – bitter melon Luffa – the common name is also luffa, sometimes spelled loofah (when fully ripened, two species of this fibrous fruit are the source of the loofah scrubbing sponge) Cyclanthera – Caigua The plants in this family are grown around the tropics and in temperate areas, where those with edible fruits were among the earliest cultivated plants in both the Old and New Worlds. The family Cucurbitaceae ranks among the highest of plant families for number and percentage of species used as human food. The name Cucurbitaceae comes to international scientific vocabulary from New Latin, from Cucurbita, the type genus, + -aceae, a standardized suffix for plant family names in modern taxonomy. The genus name comes from the Classical Latin word cucurbita, "gourd". Description Most of the plants in this family are annual vines, but some are woody lianas, thorny shrubs, or trees (Dendrosicyos). Many species have large, yellow or white flowers. The stems are hairy and pentangular. Tendrils are present at 90° to the leaf petioles at nodes. Leaves are exstipulate, alternate, simple palmately lobed or palmately compound. The flowers are unisexual, with male and female flowers on different plants (dioecious) or on the same plant (monoecious). The female flowers have inferior ovaries. The fruit is often a kind of modified berry called a pepo. Fossil history One of the oldest fossil cucurbits so far is †Cucurbitaciphyllum lobatum from the Paleocene epoch, found at Shirley Canal, Montana. It was described for the first time in 1924 by the paleobotanist Frank Hall Knowlton. The fossil leaf is palmate, trilobed with rounded lobal sinuses and an entire or serrate margin. It has a leaf pattern similar to the members of the genera Kedrostis, Melothria and Zehneria. Classification Tribal classification The most recent classification of Cucurbitaceae delineates 15 tribes: Tribe Gomphogyneae Benth. & Hook.f. Alsomitra (Blume) Spach (1 sp.) Bayabusua (1 sp.) Gomphogyne Griff. (2 spp.) Gynostemma Blume (10 spp.) Hemsleya Cogn. ex F.B.Forbes & Hemsl. (30 spp.) Neoalsomitra Hutch. (12 spp.) Tribe Triceratieae A.Rich. Anisosperma Silva Manso (1 sp.) Cyclantheropsis Harms (3 spp.) Fevillea L. (8 spp.) Pteropepon (Cogn.) Cogn. (5 spp.) Sicydium Schltdl. (7 spp.) Tribe Zanonieae Benth. & Hook.f. Gerrardanthus Harvey in Hook.f. (3–5 spp.) Siolmatra Baill. (1 sp.) Xerosicyos Humbert (5 spp.) Zanonia L. (1 sp.) Tribe Actinostemmateae H.Schaef. & S.S.Renner Actinostemma Griff. (3 spp.) Tribe Indofevilleeae H.Schaef. & S.S.Renner Indofevillea Chatterjee (2 sp.) Tribe Thladiantheae H.Schaef. & S.S.Renner Baijiania A.M.Lu & J.Q.Li (30 spp.) Thladiantha Bunge 1833 (5 spp.) Tribe Siraitieae H. Schaef. & S.S. Renner Siraitia Merr. (3–4 spp.) Tribe Momordiceae H.Schaef. & S.S.Renner Momordica L. (60 spp.) Tribe Joliffieae Schrad. Ampelosicyos Thouars (5 spp.) Cogniauxia Baill. (2 spp.) Telfairia Hook. (3 spp.) Tribe Bryonieae Dumort. Austrobryonia H.Schaef. (4 spp.) Bryonia L. (10 spp.) Ecballium A.Rich. (1 sp.) Tribe Schizopeponeae C.Jeffrey Herpetospermum Wall. ex Hook.f. (3 spp.) Schizopepon Maxim. (6–8 spp.) 
Tribe Sicyoeae Schrad. Cyclanthera Schrad. (40 spp.) Echinocystis Torr. & A.Gray (1 sp.) Echinopepon Naudin (20 spp., including Brandegea Cogn.) Frantzia Pittier (5 spp.) Hanburia Seem. (7 spp.) Hodgsonia Hook.f. & Thomson (2 spp.) Linnaeosicyos H.Schaef. & Kocyan (1 sp.) Luffa Mill. (5–7 spp.) Marah Kellogg (7 spp.) Nothoalsomitra Hutch. (1 sp.) Sicyos L. (75 spp., including Sechium P.Browne) Trichosanthes L. (≤100 spp.) Tribe Coniandreae Endl. Apodanthera Arn. (16 spp.) Bambekea Cogn. (1 sp.) Ceratosanthes Adans. (4 spp.) Corallocarpus Welw. ex Benth. & Hook.f. (17 spp.) Cucurbitella Walp. (1 sp.) Dendrosicyos Balf.f. (1 sp.) Doyerea Grosourdy (1 sp.) Eureiandra Hook.f. (8 spp.) Gurania (Schltdl.) Cogn. (37 spp.) Halosicyos Mart.Crov (1 sp.) Helmontia Cogn. (2–4 spp.) Ibervillea Greene (9–10 spp.) Kedrostis Medik. (28 spp.) Melotrianthus M.Crovetto (1–3 spp.) Psiguria Neck. ex Arn. (6–12 spp.) Seyrigia Keraudren (6 spp.) Trochomeriopsis Cogn. (1 sp.) Tumamoca Rose (2 spp.) Wilbrandia Silva Manso (5 spp.) Tribe Benincaseae Ser. Acanthosicyos Welw. ex Hook.f. (1 sp.) Benincasa Savi (2 spp., including Praecitrullus Pangalo) Borneosicyos (1–2 spp.) Cephalopentandra Chiov. (1 sp.) Citrullus Schrad. (4 spp.) Coccinia Wight & Arn. (30 spp.) Ctenolepis Hook. f. 1867 (3 spp.) Cucumis L. (65 spp.) Dactyliandra Hook.f. (2 spp.) Diplocyclos (Endl.) T.Post & Kuntze (4 spp.) Indomelothria (2 spp.) Khmeriosicyos (1 sp.) Lagenaria Ser. (6 spp.) Lemurosicyos Keraudren (1 sp.) Melothria L. (12 spp., including M. scabra) Muellerargia Cogn. (2 spp.) Papuasicyos (8 spp.) Peponium Engl. (20 spp.) Raphidiocystis Hook.f. (5 spp.) Ruthalicia C.Jeffrey (2 spp.) Scopellaria W.J.de Wilde & Duyfjes (2 spp.) Solena Lour. (3 spp.) Trochomeria Hook.f. (8 spp.) Zehneria Endl. (ca. 60 spp.) Tribe Cucurbiteae Ser. Abobra Naudin (1 sp.) Calycophysum H.Karst. & Triana (5 spp.) Cayaponia Silva Manso (50–59 spp., including Selysia Cogn.) Cionosicys Griseb. (4–5 spp.) Cucurbita L. (15 spp.) Penelopeia Urb. (2 spp.) Peponopsis Naudin (1 sp.) Polyclathra Bertol. (6 spp.) Schizocarpum Schrad. (11 spp.) Sicana Naudin (4 spp.) Tecunumania Standl. & Steyerm. (1 sp.) Systematics Modern molecular phylogenetics suggest the following relationships: Pests and diseases Sweet potato whitefly is the vector of a number of cucurbit viruses that cause yellowing symptoms throughout the southern United States. References Further reading External links Cucurbitaceae in T.C. Andres (1995 onwards). Cucurbitaceae in L. Watson and M.J. Dallwitz (1992 onwards). The families of flowering plants: descriptions, illustrations, identification, information retrieval.
https://en.wikipedia.org/wiki/Carolyn%20Beug
Carolyn Beug
Carolyn Ann Mayer-Beug (December 11, 1952 – September 11, 2001) was a filmmaker and video producer from Santa Monica, California. She died in the September 11 attacks as a passenger on American Airlines Flight 11. Career In addition to her work as a video producer, Beug directed three music videos for country singer Dwight Yoakam: "Ain't That Lonely Yet", "A Thousand Miles from Nowhere" and "Fast as You." Beug co-directed the first two videos with Yoakam and was the sole director of the third. She won an MTV Video Music Award for the Van Halen music video of the song "Right Now", which she produced. She also served as senior vice president of Walt Disney Records. Personal life Beug lived in a Tudor-style home in the North 25th Street neighborhood. She hosted an annual backyard barbecue for the Santa Monica High School cross country and track team, which her daughters captained. Beug was a Latter-day Saint. Death and legacy Beug was killed at the age of 48 in the crash of American Airlines Flight 11 in the September 11, 2001 attacks. At the time of her death, she was working on a children's book about Noah's Ark, to be told from Noah's wife's point of view. On the plane with her was her mother, Mary Alice Wahlstrom. Beug was survived by her twin eighteen-year-old daughters, Lauren and Lindsey Mayer-Beug, her 13-year-old son, Nick, and her husband, John Beug, a senior vice president in charge of filmed production for Warner Brothers' record division. She was returning home from taking her daughters to college at the Rhode Island School of Design. At the National 9/11 Memorial, Beug is memorialized at the North Pool, on Panel N-1. References External links Van Halen News Desk article
https://en.wikipedia.org/wiki/Cell%20biology
Cell biology
Cell biology (also cellular biology or cytology) is a branch of biology that studies the structure, function, and behavior of cells. All living organisms are made of cells. A cell is the basic unit of life that is responsible for the living and functioning of organisms. Cell biology is the study of the structural and functional units of cells. Cell biology encompasses both prokaryotic and eukaryotic cells and has many subtopics, which may include the study of cell metabolism, cell communication, cell cycle, biochemistry, and cell composition. The study of cells is performed using several microscopy techniques, cell culture, and cell fractionation. These have allowed for and are currently being used for discoveries and research pertaining to how cells function, ultimately giving insight into understanding larger organisms. Knowing the components of cells and how cells work is fundamental to all biological sciences while also being essential for research in biomedical fields such as cancer and other diseases. Research in cell biology is interconnected with other fields such as genetics, molecular genetics, molecular biology, medical microbiology, immunology, and cytochemistry. History Cells were first seen in 17th century Europe with the invention of the compound microscope. In 1665, Robert Hooke termed the building blocks of all living organisms "cells" (published in Micrographia) after looking at a piece of cork and observing a cell-like structure; however, the cells were dead and gave no indication of the actual overall components of a cell. A few years later, in 1674, Anton van Leeuwenhoek was the first to analyze live cells in his examination of algae. All of this preceded the cell theory, which states that all living things are made up of cells and that cells are the functional and structural unit of organisms. This was ultimately concluded by plant scientist Matthias Schleiden and animal scientist Theodor Schwann in 1838, who viewed live cells in plant and animal tissue, respectively. 19 years later, Rudolf Virchow further contributed to the cell theory, adding that all cells come from the division of pre-existing cells. Viruses are not considered in cell biology – they lack the characteristics of a living cell, and instead are studied in the microbiology subclass of virology. Techniques Cell biology research looks at different ways to culture and manipulate cells outside of a living body to further research in human anatomy and physiology, and to derive medications. The techniques by which cells are studied have evolved. Due to advancements in microscopy, techniques and technology have allowed scientists to hold a better understanding of the structure and function of cells. Many techniques commonly used to study cell biology are listed below: Cell culture: Utilizes rapidly growing cells on media, which allows for a large amount of a specific cell type and an efficient way to study cells. Cell culture is one of the major tools used in cellular and molecular biology, providing excellent model systems for studying the normal physiology and biochemistry of cells (e.g., metabolic studies, aging), the effects of drugs and toxic compounds on the cells, and mutagenesis and carcinogenesis. It is also used in drug screening and development, and large-scale manufacturing of biological compounds (e.g., vaccines, therapeutic proteins). Fluorescence microscopy: Fluorescent markers, such as GFP, are used to label a specific component of the cell. 
Afterwards, a certain light wavelength is used to excite the fluorescent marker, which can then be visualized. Phase-contrast microscopy: Uses the optical aspect of light to represent the solid, liquid, and gas-phase changes as brightness differences. Confocal microscopy: Combines fluorescence microscopy with imaging by focusing light and snapshotting instances to form a 3-D image. Transmission electron microscopy: Involves metal staining and the passing of electrons through the cells, which will be deflected upon interaction with metal. This ultimately forms an image of the components being studied. Cytometry: The cells are placed in the machine, which uses a beam to scatter light off the cells and can therefore separate them based on size and content. Cells may also be tagged with GFP fluorescence and can be separated that way as well. Cell fractionation: This process requires breaking up the cell using high temperature or sonication, followed by centrifugation to separate the parts of the cell, allowing for them to be studied separately. Cell types There are two fundamental classifications of cells: prokaryotic and eukaryotic. Prokaryotic cells are distinguished from eukaryotic cells by the absence of a cell nucleus or other membrane-bound organelles. Prokaryotic cells are much smaller than eukaryotic cells, making them the smallest form of life. Prokaryotic cells include Bacteria and Archaea, and lack an enclosed cell nucleus. They reproduce through binary fission. Eukaryotic cells are found in plants, animals, fungi, and protists. They range from 10 to 100 μm in diameter, and their DNA is contained within a membrane-bound nucleus. Eukaryotes are organisms containing eukaryotic cells. The four eukaryotic kingdoms are Animalia, Plantae, Fungi, and Protista. Bacteria, the most prominent type of prokaryote, have several different shapes, although most are spherical or rod-shaped. Bacteria can be classed as either gram-positive or gram-negative depending on the cell wall composition. Gram-positive bacteria have a thicker peptidoglycan layer than gram-negative bacteria. Bacterial structural features include a flagellum that helps the cell to move, ribosomes for the translation of RNA to protein, and a nucleoid that holds all the genetic material in a circular structure. There are many processes that occur in prokaryotic cells that allow them to survive. In prokaryotes, mRNA synthesis is initiated at a promoter sequence on the DNA template comprising two consensus sequences that recruit RNA polymerase. The prokaryotic polymerase consists of a core enzyme of four protein subunits and a σ protein that assists only with initiation. For instance, in a process termed conjugation, the fertility factor allows the bacterium to possess a pilus which allows it to transmit DNA to another bacterium which lacks the F factor, permitting the transmission of resistance genes and allowing survival in certain environments. Structure and function Structure of eukaryotic cells Eukaryotic cells are composed of the following organelles: Nucleus: The nucleus of the cell functions as the genome and genetic information storage for the cell, containing all the DNA organized in the form of chromosomes. It is surrounded by a nuclear envelope, which includes nuclear pores allowing for the transportation of proteins between the inside and outside of the nucleus. This is also the site for replication of DNA as well as transcription of DNA to RNA. 
Afterwards, the RNA is modified and transported out to the cytosol to be translated into protein. Nucleolus: This structure is within the nucleus, usually dense and spherical in shape. It is the site of ribosomal RNA (rRNA) synthesis, which is needed for ribosomal assembly. Endoplasmic reticulum (ER): This functions to synthesize, store, and secrete proteins to the Golgi apparatus. Structurally, the endoplasmic reticulum is a network of membranes found throughout the cell and connected to the nucleus. The membranes are slightly different from cell to cell, and a cell's function determines the size and structure of the ER. Mitochondria: Commonly known as the powerhouse of the cell, the mitochondrion is a double-membrane-bound cell organelle. It functions to produce energy, or ATP, within the cell. Specifically, this is the place where the Krebs cycle or TCA cycle for the production of NADH and FADH2 occurs. Afterwards, these products are used within the electron transport chain (ETC) and oxidative phosphorylation for the final production of ATP. Golgi apparatus: This functions to further process, package, and secrete the proteins to their destination. The proteins contain a signal sequence that allows the Golgi apparatus to recognize and direct them to the correct place. The Golgi apparatus also produces glycoproteins and glycolipids. Lysosome: The lysosome functions to degrade material brought in from the outside of the cell or old organelles. It contains many acid hydrolases, proteases, nucleases, and lipases, which break down the various molecules. Autophagy is the process of degradation through lysosomes which occurs when a vesicle buds off from the ER and engulfs the material, then attaches and fuses with the lysosome to allow the material to be degraded. Ribosomes: Function to translate RNA into protein; they serve as the site of protein synthesis. Cytoskeleton: The cytoskeleton is a structure that helps to maintain the shape and general organization of the cytoplasm. It anchors organelles within the cell and makes up the structure and stability of the cell. The cytoskeleton is composed of three principal types of protein filaments: actin filaments, intermediate filaments, and microtubules, which are held together and linked to subcellular organelles and the plasma membrane by a variety of accessory proteins. Cell membrane: The cell membrane can be described as a phospholipid bilayer and also consists of lipids and proteins. Because the inside of the bilayer is hydrophobic, molecules that participate in reactions within the cell need to be able to cross this membrane layer to get into the cell, via osmotic pressure, diffusion, concentration gradients, and membrane channels. Centrioles: Function to produce spindle fibers, which are used to separate chromosomes during cell division. Eukaryotic cells may also be composed of the following molecular components: Chromatin: This makes up chromosomes and is a mixture of DNA with various proteins. Cilia: They help to propel substances and can also be used for sensory purposes. Cell metabolism Cell metabolism is necessary for the production of energy for the cell and therefore its survival, and includes many pathways. For cellular respiration, once glucose is available, glycolysis occurs within the cytosol of the cell to produce pyruvate. Pyruvate undergoes decarboxylation using the pyruvate dehydrogenase multi-enzyme complex to form acetyl-CoA, which can readily be used in the TCA cycle to produce NADH and FADH2. 
These products are involved in the electron transport chain to ultimately form a proton gradient across the inner mitochondrial membrane. This gradient can then drive the production of ATP and H2O during oxidative phosphorylation. Metabolism in plant cells includes photosynthesis, which is essentially the reverse of respiration, as it ultimately produces molecules of glucose. Cell signaling Cell signaling or cell communication is important for cell regulation and for cells to process information from the environment and respond accordingly. Signaling can occur through direct cell contact or endocrine, paracrine, and autocrine signaling. Direct cell-cell contact is when a receptor on a cell binds a molecule that is attached to the membrane of another cell. Endocrine signaling occurs through molecules secreted into the bloodstream. Paracrine signaling uses molecules diffusing between two cells to communicate. Autocrine signaling is a cell sending a signal to itself by secreting a molecule that binds to a receptor on its surface. Forms of communication can be through: Ion channels: Can be of different types, such as voltage- or ligand-gated ion channels. They allow for the outflow and inflow of molecules and ions. G-protein coupled receptor (GPCR): Is widely recognized to contain seven transmembrane domains. The ligand binds on the extracellular domain, and once the ligand binds, this signals a guanine nucleotide exchange factor to convert GDP to GTP and activate the G-α subunit. G-α can target other proteins such as adenylyl cyclase or phospholipase C, which ultimately produce secondary messengers such as cAMP, IP3, DAG, and calcium. These secondary messengers function to amplify signals and can target ion channels or other enzymes. One example of amplification of a signal is cAMP binding to and activating PKA by removing the regulatory subunits and releasing the catalytic subunit. The catalytic subunit has a nuclear localization sequence which prompts it to go into the nucleus and phosphorylate other proteins to either repress or activate gene activity. Receptor tyrosine kinases: Bind growth factors, further promoting the tyrosines on the intracellular portion of the protein to cross-phosphorylate. The phosphorylated tyrosine becomes a landing pad for proteins containing an SH2 domain, allowing for the activation of Ras and the involvement of the MAP kinase pathway. Growth and development Eukaryotic cell cycle Cells are the foundation of all organisms and are the fundamental units of life. The growth and development of cells are essential for the maintenance of the host and survival of the organism. For this process, the cell goes through the steps of the cell cycle and development, which involve cell growth, DNA replication, cell division, regeneration, and cell death. The cell cycle is divided into four distinct phases: G1, S, G2, and M. The G phases – the cell growth phases – make up approximately 95% of the cycle. The proliferation of cells is instigated by progenitors. All cells start out in an identical form and can essentially become any type of cell. Cell signaling such as induction can influence nearby cells to determine the type of cell they will become. Moreover, this allows cells of the same type to aggregate and form tissues, then organs, and ultimately systems. The G1, G2, and S phases (DNA replication, damage and repair) are considered to be the interphase portion of the cycle, while the M phase (mitosis) is the cell division portion of the cycle. 
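The checkpoint-gated ordering of these phases can be pictured as a small state machine. The following minimal Python sketch is purely illustrative and not biological software; the Phase enum, CYCLE list, and advance helper are hypothetical names invented for this example, with the gating behavior taken from the checkpoint description in this article.

```python
from enum import Enum

class Phase(Enum):
    G1 = "growth"
    S = "DNA replication"
    G2 = "growth and preparation for mitosis"
    M = "mitosis"

# Progression order of the four phases described above.
CYCLE = [Phase.G1, Phase.S, Phase.G2, Phase.M]

def advance(current: Phase, checkpoint_passed: bool) -> Phase:
    """Move to the next phase only if the current checkpoint is passed;
    otherwise the cell arrests in place (hypothetical helper)."""
    if not checkpoint_passed:
        return current  # e.g., arrest until DNA damage is repaired
    return CYCLE[(CYCLE.index(current) + 1) % len(CYCLE)]

# A cell in G2 with unrepaired DNA damage does not enter mitosis.
print(advance(Phase.G2, checkpoint_passed=False))  # Phase.G2
print(advance(Phase.G2, checkpoint_passed=True))   # Phase.M
```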
Mitosis is composed of many stages, which include prophase, metaphase, anaphase, telophase, and cytokinesis, respectively. The ultimate result of mitosis is the formation of two identical daughter cells. The cell cycle is regulated at cell cycle checkpoints by a series of signaling factors and complexes such as cyclins, cyclin-dependent kinases, and p53. When the cell has completed its growth process, and if it is found to be damaged or altered, it undergoes cell death, either by apoptosis or necrosis, to eliminate the threat it can cause to the organism's survival. Cell mortality, cell lineage immortality The ancestry of each present-day cell presumably traces back, in an unbroken lineage, for over 3 billion years to the origin of life. It is not actually cells that are immortal but multi-generational cell lineages. The immortality of a cell lineage depends on the maintenance of cell division potential. This potential may be lost in any particular lineage because of cell damage, terminal differentiation as occurs in nerve cells, or programmed cell death (apoptosis) during development. Maintenance of cell division potential over successive generations depends on the avoidance and the accurate repair of cellular damage, particularly DNA damage. In sexual organisms, continuity of the germline depends on the effectiveness of processes for avoiding DNA damage and repairing those DNA damages that do occur. Sexual processes in eukaryotes, as well as in prokaryotes, provide an opportunity for effective repair of DNA damages in the germ line by homologous recombination. Cell cycle phases The cell cycle is a four-stage process that a cell goes through as it develops and divides. It includes Gap 1 (G1), synthesis (S), Gap 2 (G2), and mitosis (M). The cell either restarts the cycle from G1 or leaves the cycle through G0 after completing it. The cell can progress from G0 through terminal differentiation. The interphase refers to the phases of the cell cycle that occur between one mitosis and the next, and includes G1, S, and G2. G1 phase: The cell grows in size, and the contents of the cell are replicated. S phase: Replication of DNA; the cell replicates each of the 46 chromosomes (23 pairs). G2 phase: The cell continues to grow; in preparation for cell division, organelles and proteins form. M phase: After mitosis, cytokinesis occurs (cell separation), forming two identical daughter cells. G0 phase: These cells leave G1 and enter G0, a resting stage. A cell in G0 is doing its job without actively preparing to divide. Pathology The scientific branch that studies and diagnoses diseases on the cellular level is called cytopathology. Cytopathology is generally used on samples of free cells or tissue fragments, in contrast to the pathology branch of histopathology, which studies whole tissues. Cytopathology is commonly used to investigate diseases involving a wide range of body sites, often to aid in the diagnosis of cancer but also in the diagnosis of some infectious diseases and other inflammatory conditions. For example, a common application of cytopathology is the Pap smear, a screening test used to detect cervical cancer and precancerous cervical lesions that may lead to cervical cancer. Cell cycle checkpoints and DNA damage repair system The cell cycle is composed of a number of well-ordered, consecutive stages that result in cellular division. The fact that cells do not begin the next stage until the last one is finished is a significant element of cell cycle regulation. 
Cell cycle checkpoints are characteristics that constitute an excellent monitoring strategy for accurate cell cycle progression and division. Cdks, associated cyclin counterparts, protein kinases, and phosphatases regulate cell growth and division from one stage to another. The cell cycle is controlled by the temporal activation of Cdks, which is governed by cyclin partner interaction, phosphorylation by particular protein kinases, and de-phosphorylation by Cdc25 family phosphatases. In response to DNA damage, a cell's DNA repair reaction is a cascade of signaling pathways that leads to checkpoint engagement, regulates the repair mechanism in DNA, and triggers cell cycle alterations and apoptosis. Among the numerous biochemical structures and processes that detect damage in DNA are ATM and ATR, which induce the DNA repair checkpoints. The cell cycle is a sequence of activities in which cell organelles are duplicated and subsequently separated into daughter cells with precision. There are major events that happen during a cell cycle. The processes that happen in the cell cycle include cell development, replication and segregation of chromosomes. The cell cycle checkpoints are surveillance systems that keep track of the cell cycle's integrity, accuracy, and chronology. Each checkpoint serves as an alternative cell cycle endpoint, wherein the cell's parameters are examined and only when desirable characteristics are fulfilled does the cell cycle advance through the distinct steps. The cell cycle's goal is to precisely copy each organism's DNA and afterwards equally split the cell and its components between the two new cells. Four main stages occur in the eukaryotes. In G1, the cell is usually active and continues to grow rapidly, while in G2, cell growth continues as protein molecules become ready for separation. These are not dormant times; they are when cells gain mass, integrate growth factor receptors, establish a replicated genome, and prepare for chromosome segregation. DNA replication is restricted to a separate synthesis phase in eukaryotes, which is also known as the S-phase. During mitosis, which is also known as the M-phase, the segregation of the chromosomes occurs. DNA, like every other molecule, is capable of undergoing a wide range of chemical reactions. Modifications in DNA's sequence, on the other hand, have a considerably bigger impact than modifications in other cellular constituents like RNAs or proteins, because DNA acts as a permanent copy of the cell genome. When erroneous nucleotides are incorporated during DNA replication, mutations can occur. The majority of DNA damage is fixed by removing the defective bases and then re-synthesizing the excised area. On the other hand, some DNA lesions can be mended by reversing the damage, which may be a more effective method of coping with common types of DNA damage. Only a few forms of DNA damage are mended in this fashion, including pyrimidine dimers caused by ultraviolet (UV) light and bases changed by the insertion of methyl or ethyl groups at the purine ring's O6 position. Mitochondrial membrane dynamics Mitochondria are commonly referred to as the cell's "powerhouses" because of their capacity to effectively produce ATP which is essential to maintain cellular homeostasis and metabolism. Moreover, researchers have gained a better knowledge of mitochondria's significance in cell biology because of the discovery of cell signaling pathways by mitochondria which are crucial platforms for cell function regulation such as apoptosis. 
Its physiological adaptability is strongly linked to the ongoing reconfiguration of the cell's mitochondrial network through a range of mechanisms known as mitochondrial membrane dynamics, which include endomembrane fusion and fragmentation (separation) as well as ultrastructural membrane remodeling. As a result, mitochondrial dynamics regulate and frequently choreograph not only metabolic but also complicated cell signaling processes such as stem cell pluripotency, proliferation, maturation, aging, and mortality. Reciprocally, post-translational alterations of the mitochondrial apparatus and the development of transmembrane contact sites between mitochondria and other structures both have the potential to link signals from diverse routes that affect mitochondrial membrane dynamics substantially. Mitochondria are wrapped by two membranes: an inner mitochondrial membrane (IMM) and an outer mitochondrial membrane (OMM), each with a distinctive function and structure, which parallels their dual role as cellular powerhouses and signaling organelles. The inner mitochondrial membrane divides the mitochondrial lumen into two parts: the inner boundary membrane, which runs parallel to the OMM, and the cristae, which are deeply twisted invaginations that give room for surface area enlargement and house the mitochondrial respiration apparatus. The outer mitochondrial membrane, on the other hand, is soft and permeable. It, therefore, acts as a foundation for cell signaling pathways to congregate, be deciphered, and be transported into mitochondria. Furthermore, the OMM connects to other cellular organelles, such as the endoplasmic reticulum (ER), lysosomes, endosomes, and the plasma membrane. Mitochondria play a wide range of roles in cell biology, which is reflected in their morphological diversity. Ever since the beginning of mitochondrial study, it has been well documented that mitochondria can have a variety of forms, with both their general and ultra-structural morphology varying greatly among cells, during the cell cycle, and in response to metabolic or cellular cues. Mitochondria can exist as independent organelles or as part of larger systems; they can also be unequally distributed in the cytosol through regulated mitochondrial transport and placement to meet the cell's localized energy requirements. Mitochondrial dynamics refers to the adaptive and variable aspects of mitochondria, including their shape and subcellular distribution. Autophagy Autophagy is a self-degradative mechanism that regulates energy sources during growth and reaction to dietary stress. Autophagy also cleans up after itself, clearing aggregated proteins, cleaning damaged structures including mitochondria and endoplasmic reticulum, and eradicating intracellular infections. Additionally, autophagy has antiviral and antibacterial roles within the cell, and it is involved at the beginning of distinctive and adaptive immune responses to viral and bacterial contamination. Some viruses include virulence proteins that prevent autophagy, while others utilize autophagy elements for intracellular development or cellular splitting. Macroautophagy, microautophagy, and chaperone-mediated autophagy are the three basic types of autophagy. When macroautophagy is triggered, an isolation membrane incorporates a section of the cytoplasm, generating the autophagosome, a distinctive double-membraned organelle. The autophagosome then joins the lysosome to create an autolysosome, with lysosomal enzymes degrading the components. 
In microautophagy, the lysosome or vacuole engulfs a piece of the cytoplasm by invaginating or protruding the lysosomal membrane to enclose the cytosol or organelles. Chaperone-mediated autophagy (CMA) ensures protein quality by digesting oxidized and altered proteins under stressful circumstances and supplying amino acids through protein denaturation. Autophagy is the primary intrinsic degradative system for peptides, fats, carbohydrates, and other cellular structures. In both physiologic and stressful situations, this cellular progression is vital for upholding the correct cellular balance. Autophagy instability leads to a variety of illness symptoms, including inflammation, biochemical disturbances, aging, and neurodegeneration, due to its involvement in controlling cell integrity. The modification of the autophagy-lysosomal networks is a typical hallmark of many neurological and muscular illnesses. As a result, autophagy has been identified as a potential strategy for the prevention and treatment of various disorders. Many of these disorders are prevented or improved by consuming polyphenols in the diet. As a result, natural compounds with the ability to modify the autophagy mechanism are seen as a potential therapeutic option. The creation of the double membrane (phagophore), a step known as nucleation, is the first step in macroautophagy. The phagophore encloses dysregulated polypeptides or defective organelles; its membrane derives from the cell membrane, Golgi apparatus, endoplasmic reticulum, and mitochondria. The phagophore's enlargement comes to an end with the completion of the autophagosome. The autophagosome combines with the lysosomal vesicles to form an autolysosome that degrades the encapsulated substances. Notable cell biologists Jean Baptiste Carnoy Peter Agre Günter Blobel Robert Brown Geoffrey M. Cooper Christian de Duve Henri Dutrochet Robert Hooke H. Robert Horvitz Marc Kirschner Anton van Leeuwenhoek Ira Mellman Peter D. Mitchell Rudolf Virchow Paul Nurse George Emil Palade Keith R. Porter Ray Rappaport Michael Swann Roger Tsien Edmund Beecher Wilson Kenneth R. Miller Matthias Jakob Schleiden Theodor Schwann Yoshinori Ohsumi Jan Evangelista Purkyně See also The American Society for Cell Biology Cell biophysics Cell disruption Cell physiology Cellular adaptation Cellular microbiology Institute of Molecular and Cell Biology (disambiguation) Meiomitosis Organoid Outline of cell biology Notes References Cell and Molecular Biology by Karp, 5th Ed. External links Aging Cell "Francis Harry Compton Crick (1916-2004)" by A. Andrei at the Embryo Project Encyclopedia "Biology Resource By Professor Lin."
https://en.wikipedia.org/wiki/Chloramphenicol
Chloramphenicol
Chloramphenicol is an antibiotic useful for the treatment of a number of bacterial infections. This includes use as an eye ointment to treat conjunctivitis. By mouth or by injection into a vein, it is used to treat meningitis, plague, cholera, and typhoid fever. Its use by mouth or by injection is only recommended when safer antibiotics cannot be used. Monitoring both blood levels of the medication and blood cell levels every two days is recommended during treatment. Common side effects include bone marrow suppression, nausea, and diarrhea. The bone marrow suppression may result in death. To reduce the risk of side effects, treatment duration should be as short as possible. People with liver or kidney problems may need lower doses. In young children, a condition known as gray baby syndrome may occur, which results in a swollen stomach and low blood pressure. Its use near the end of pregnancy and during breastfeeding is typically not recommended. Chloramphenicol is a broad-spectrum antibiotic that typically stops bacterial growth by stopping the production of proteins. Chloramphenicol was discovered after being isolated from Streptomyces venezuelae in 1947. Its chemical structure was identified and it was first synthesized in 1949. It is on the World Health Organization's List of Essential Medicines. It is available as a generic medication. Medical uses The original indication of chloramphenicol was in the treatment of typhoid, but the presence of multiple drug-resistant Salmonella Typhi has meant it is seldom used for this indication except when the organism is known to be sensitive. In low-income countries, the WHO no longer recommends oily chloramphenicol as first-line treatment for meningitis, but recognises it may be used with caution if there are no available alternatives. During the last decade, chloramphenicol has been re-evaluated as an old agent with potential against systemic infections due to multidrug-resistant gram-positive microorganisms (including vancomycin-resistant enterococci). In vitro data have shown activity against the majority (> 80%) of vancomycin-resistant E. faecium strains. In the context of preventing endophthalmitis, a complication of cataract surgery, a 2017 systematic review found moderate evidence that using chloramphenicol eye drops in addition to an antibiotic injection (cefuroxime or penicillin) will likely lower the risk of endophthalmitis, compared to eye drops or antibiotic injections alone. Spectrum Chloramphenicol has a broad spectrum of activity and has been effective in treating ocular infections such as conjunctivitis and blepharitis caused by a number of bacteria including Staphylococcus aureus, Streptococcus pneumoniae, and Escherichia coli. It is not effective against Pseudomonas aeruginosa. The following susceptibility data represent the minimum inhibitory concentrations for a few medically significant organisms: Escherichia coli: 0.015 – 10,000 μg/mL Staphylococcus aureus: 0.06 – 128 μg/mL Streptococcus pneumoniae: 2 – 16 μg/mL Each of these concentrations is dependent upon the bacterial strain being targeted. Some strains of E. coli, for example, show spontaneous emergence of chloramphenicol resistance. Resistance Three mechanisms of resistance to chloramphenicol are known: reduced membrane permeability, mutation of the 50S ribosomal subunit, and elaboration of chloramphenicol acetyltransferase. 
It is easy to select for reduced membrane permeability to chloramphenicol in vitro by serial passage of bacteria, and this is the most common mechanism of low-level chloramphenicol resistance. High-level resistance is conferred by the cat-gene; this gene codes for an enzyme called chloramphenicol acetyltransferase, which inactivates chloramphenicol by covalently linking one or two acetyl groups, derived from acetyl-S-coenzyme A, to the hydroxyl groups on the chloramphenicol molecule. The acetylation prevents chloramphenicol from binding to the ribosome. Resistance-conferring mutations of the 50S ribosomal subunit are rare. Chloramphenicol resistance may be carried on a plasmid that also codes for resistance to other drugs. One example is the ACCoT plasmid (A=ampicillin, C=chloramphenicol, Co=co-trimoxazole, T=tetracycline), which mediates multiple drug resistance in typhoid (also called R factors). As of 2014 some Enterococcus faecium and Pseudomonas aeruginosa strains are resistant to chloramphenicol. Some Veillonella spp. and Staphylococcus capitis strains have also developed resistance to chloramphenicol to varying degrees. Adverse effects Aplastic anemia The most serious side effect of chloramphenicol treatment is aplastic anaemia. This effect is rare but sometimes fatal. The risk of AA is high enough that alternatives should be strongly considered. Treatments are available but expensive. No way exists to predict who may or may not suffer this side effect. The effect usually occurs weeks or months after treatment has been stopped, and a genetic predisposition may be involved. It is not known whether monitoring the blood counts of patients can prevent the development of aplastic anaemia, but patients are recommended to have a baseline blood count with a repeat blood count every few days while on treatment. Chloramphenicol should be discontinued if the complete blood count drops. The highest risk is with oral chloramphenicol (affecting 1 in 24,000–40,000) and the lowest risk occurs with eye drops (affecting less than one in 224,716 prescriptions). Thiamphenicol, a related compound with a similar spectrum of activity, is available in Italy and China for human use, and has never been associated with aplastic anaemia. Thiamphenicol is available in the U.S. and Europe as a veterinary antibiotic, but is not approved for use in humans. Bone marrow suppression Chloramphenicol may cause bone marrow suppression during treatment; this is a direct toxic effect of the drug on human mitochondria. This effect manifests first as a fall in hemoglobin levels, which occurs quite predictably once a cumulative dose of 20 g has been given. The anaemia is fully reversible once the drug is stopped and does not predict future development of aplastic anaemia. Studies in mice have suggested existing marrow damage may compound any marrow damage resulting from the toxic effects of chloramphenicol. Leukemia Leukemia, a cancer of the blood or bone marrow, is characterized by an abnormal increase of immature white blood cells. The risk of childhood leukemia is increased, as demonstrated in a Chinese case–control study, and the risk increases with length of treatment. Gray baby syndrome Intravenous chloramphenicol use has been associated with the so-called gray baby syndrome. This phenomenon occurs in newborn infants because they do not yet have fully functional liver enzymes (i.e. UDP-glucuronyl transferase), so chloramphenicol remains unmetabolized in the body. 
This causes several adverse effects, including hypotension and cyanosis. The condition can be prevented by using the drug at the recommended doses and monitoring blood levels.

Hypersensitivity reactions

Fever, macular and vesicular rashes, angioedema, urticaria, and anaphylaxis may occur. Herxheimer's reactions have occurred during therapy for typhoid fever.

Neurotoxic reactions

Headache, mild depression, mental confusion, and delirium have been described in patients receiving chloramphenicol. Optic and peripheral neuritis have been reported, usually following long-term therapy. If this occurs, the drug should be promptly withdrawn.

Pharmacokinetics

Chloramphenicol is extremely lipid-soluble; it remains relatively unbound to protein and is a small molecule. It has a large apparent volume of distribution and penetrates effectively into all tissues of the body, including the brain. Distribution is not uniform, with highest concentrations found in the liver and kidney, and lowest in the brain and cerebrospinal fluid. The concentration achieved in brain and cerebrospinal fluid is around 30 to 50% of the overall average body concentration, even when the meninges are not inflamed; this increases to as high as 89% when the meninges are inflamed.

Chloramphenicol increases the absorption of iron.

Use in special populations

Chloramphenicol is metabolized by the liver to chloramphenicol glucuronate (which is inactive). In liver impairment, the dose of chloramphenicol must therefore be reduced. No standard dose reduction exists for chloramphenicol in liver impairment, and the dose should be adjusted according to measured plasma concentrations.

The majority of the chloramphenicol dose is excreted by the kidneys as the inactive metabolite, chloramphenicol glucuronate. Only a tiny fraction of the chloramphenicol is excreted by the kidneys unchanged. Plasma levels should be monitored in patients with renal impairment, but this is not mandatory. Chloramphenicol succinate ester (an intravenous prodrug form) is readily excreted unchanged by the kidneys, more so than chloramphenicol base, and this is the major reason why levels of chloramphenicol in the blood are much lower when given intravenously than orally.

Chloramphenicol passes into breast milk, and should therefore be avoided during breast feeding, if possible.

Dose monitoring

Plasma levels of chloramphenicol must be monitored in neonates and patients with abnormal liver function. Plasma levels should be monitored in all children under the age of four, the elderly, and patients with kidney failure. Because efficacy and toxicity of chloramphenicol are associated with a maximum serum concentration, peak levels (one hour after the intravenous dose is given) should be 10–20 µg/ml; trough levels (taken immediately before a dose) should be 5–10 µg/ml.

Drug interactions

Administration of chloramphenicol concomitantly with bone marrow depressant drugs is contraindicated, although concerns over aplastic anaemia associated with ocular chloramphenicol have largely been discounted. Chloramphenicol is a potent inhibitor of the cytochrome P450 isoforms CYP2C19 and CYP3A4 in the liver. Inhibition of CYP2C19 causes decreased metabolism and therefore increased levels of, for example, antidepressants, antiepileptics, proton-pump inhibitors, and anticoagulants if they are given concomitantly.
Inhibition of CYP3A4 causes increased levels of, for example, calcium channel blockers, immunosuppressants, chemotherapeutic drugs, benzodiazepines, azole antifungals, tricyclic antidepressants, macrolide antibiotics, SSRIs, statins, cardiac antiarrhythmics, antivirals, anticoagulants, and PDE5 inhibitors.

Drug antagonism

Chloramphenicol is antagonistic with most cephalosporins, and using both together should be avoided in the treatment of infections.

Drug synergism

Chloramphenicol has demonstrated a synergistic effect when combined with fosfomycin against clinical isolates of Enterococcus faecium.

Mechanism of action

Chloramphenicol is a bacteriostatic agent, inhibiting protein synthesis. It prevents protein chain elongation by inhibiting the peptidyl transferase activity of the bacterial ribosome. It specifically binds to the A2451 and A2452 residues in the 23S rRNA of the 50S ribosomal subunit, preventing peptide bond formation. Chloramphenicol directly interferes with substrate binding in the ribosome, whereas macrolides sterically block the progression of the growing peptide.

History

Chloramphenicol was first isolated from Streptomyces venezuelae in 1947, and in 1949 a team of scientists at Parke-Davis including Mildred Rebstock published their identification of the chemical structure and their synthesis.

In 1972, Senator Ted Kennedy cited the Tuskegee Syphilis Study and the 1958 Los Angeles infant chloramphenicol experiments as the initial subjects of a Senate Subcommittee investigation into dangerous medical experimentation on human subjects.

In 2007, the accumulation of reports associating aplastic anemia and blood dyscrasia with chloramphenicol eye drops led to their classification as a “probable human carcinogen” according to World Health Organization criteria, based on the known published case reports and the spontaneous reports submitted to the National Registry of Drug-Induced Ocular Side Effects.

Society and culture

Names

Chloramphenicol is available as a generic worldwide under many brand names and also under various generic names in eastern Europe and Russia, including chlornitromycin, levomycetin, and chloromycetin; the racemate is known as synthomycetin.

Formulations

Chloramphenicol is available as a capsule or as a liquid. In some countries, it is sold as chloramphenicol palmitate ester (CPE). CPE is inactive, and is hydrolysed to active chloramphenicol in the small intestine. No difference in bioavailability is noted between chloramphenicol and CPE.

Manufacture of oral chloramphenicol in the U.S. stopped in 1991, because the vast majority of chloramphenicol-associated cases of aplastic anaemia are associated with the oral preparation. No oral formulation of chloramphenicol is available in the U.S. for human use. In molecular biology, chloramphenicol is prepared in ethanol.

Intravenous

The intravenous (IV) preparation of chloramphenicol is the succinate ester. This creates a problem: chloramphenicol succinate ester is an inactive prodrug and must first be hydrolysed to chloramphenicol; however, the hydrolysis process is often incomplete, and 30% of the dose is lost and removed in the urine. Serum concentrations of IV chloramphenicol are only 70% of those achieved when chloramphenicol is given orally. For this reason, the dose needs to be increased to 75 mg/kg/day when administered IV to achieve levels equivalent to the oral dose.
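The dose arithmetic above lends itself to a short illustration. The following Python sketch is written for this article rather than taken from any clinical source: it shows how the 70% relative exposure figure implies the higher IV dose, and how measured levels compare against the peak and trough windows quoted in the dose-monitoring section. The function names and the 50 mg/kg/day oral regimen are illustrative assumptions only; this is not clinical software.

# Minimal sketch (not clinical software) of the numbers quoted above:
# IV serum levels are ~70% of oral, and target peak/trough windows are
# 10-20 and 5-10 ug/ml respectively.

IV_RELATIVE_EXPOSURE = 0.70  # IV serum levels ~70% of those from oral dosing

def iv_dose_for_oral_equivalence(oral_dose_mg_per_kg_day: float) -> float:
    """Scale an oral dose up so the IV route gives comparable serum levels."""
    return oral_dose_mg_per_kg_day / IV_RELATIVE_EXPOSURE

def classify_levels(peak_ug_ml: float, trough_ug_ml: float) -> str:
    """Compare measured levels against the target windows quoted above."""
    if peak_ug_ml > 20 or trough_ug_ml > 10:
        return "above target range"
    if peak_ug_ml < 10 or trough_ug_ml < 5:
        return "below target range"
    return "within target range"

print(iv_dose_for_oral_equivalence(50.0))              # ~71 mg/kg/day
print(classify_levels(peak_ug_ml=18, trough_ug_ml=7))  # within target range

Note that a hypothetical 50 mg/kg/day oral regimen scales to roughly 71 mg/kg/day, consistent with the 75 mg/kg/day IV figure quoted above once rounded up in practice.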
Oily

Oily chloramphenicol (or chloramphenicol oil suspension) is a long-acting preparation of chloramphenicol first introduced by Roussel in 1954; marketed as Tifomycine, it was originally used as a treatment for typhoid. Roussel stopped production of oily chloramphenicol in 1995; the International Dispensary Association Foundation has manufactured it since 1998, first in Malta and then in India from December 2004.

Oily chloramphenicol was first used to treat meningitis in 1975 and numerous studies since have demonstrated its efficacy. It is the cheapest treatment available for meningitis (US$5 per treatment course, compared to US$30 for ampicillin and US$15 for five days of ceftriaxone). It has the great advantage of requiring only a single injection, whereas ceftriaxone is traditionally given daily for five days. This recommendation may yet change, now that a single dose of ceftriaxone (cost US$3) has been shown to be equivalent to one dose of oily chloramphenicol.

Eye drops

Chloramphenicol is used in topical preparations (ointments and eye drops) for the treatment of bacterial conjunctivitis. Isolated case reports of aplastic anaemia following use of chloramphenicol eyedrops exist, but the risk is estimated to be of the order of less than one in 224,716 prescriptions. In Mexico, this is the treatment used prophylactically in newborns for neonatal conjunctivitis.

Veterinary uses

Although its use in veterinary medicine is highly restricted, chloramphenicol still has some important veterinary uses. It is currently considered the most useful treatment of chlamydial disease in koalas. The pharmacokinetics of chloramphenicol have been investigated in koalas.
2,882
6,354
https://en.wikipedia.org/wiki/Council%20of%20Trent
Council of Trent
The Council of Trent, held between 1545 and 1563 in Trent (or Trento), now in northern Italy, was the 19th ecumenical council of the Catholic Church. Prompted by the Protestant Reformation, it has been described as the embodiment of the Counter-Reformation. The Council issued condemnations of what it defined to be heresies committed by proponents of Protestantism, and also issued key statements and clarifications of the Church's doctrine and teachings, including scripture, the biblical canon, sacred tradition, original sin, justification, salvation, the sacraments, the Mass, and the veneration of saints.

The Council met for twenty-five sessions between 13 December 1545 and 4 December 1563. Pope Paul III, who convoked the Council, oversaw the first eight sessions (1545–47), while the twelfth to sixteenth sessions (1551–52) were overseen by Pope Julius III and the seventeenth to twenty-fifth sessions (1562–63) by Pope Pius IV.

The consequences of the Council were also significant with regard to the Church's liturgy and practices. In its decrees, the Council made the Latin Vulgate the official biblical text of the Roman Church (without prejudice to the original texts in Hebrew and Greek, nor to other traditional translations of the Church, but favoring the Latin language over vernacular translations, such as the controversial English-language Tyndale Bible). In doing so, it commissioned the creation of a revised and standardized Vulgate in light of textual criticism, although this was not achieved until the 1590s. The Council also officially affirmed (for the second time at an ecumenical council) the traditional Catholic Canon of biblical books in response to the increasing Protestant exclusion of the deuterocanonical books. The former dogmatic affirmation of the Canonical books was at the Council of Florence in the 1441 bull Cantate Domino, as affirmed by Pope Leo XIII in his 1893 encyclical Providentissimus Deus (#20).

In 1565, a year after the Council finished its work, Pius IV issued the Tridentine Creed (after Tridentum, Trent's Latin name) and his successor Pius V then issued the Roman Catechism and revisions of the Breviary and Missal in, respectively, 1566, 1568 and 1570. These, in turn, led to the codification of the Tridentine Mass, which remained the Church's primary form of the Mass for the next four hundred years. More than three hundred years passed until the next ecumenical council, the First Vatican Council, was convened in 1869.

Background information

Obstacles and events before the Council

On 15 March 1517, the Fifth Council of the Lateran closed its activities with a number of reform proposals (on the selection of bishops, taxation, censorship and preaching) but not on the major problems that confronted the Church in Germany and other parts of Europe. A few months later, on 31 October 1517, Martin Luther issued his 95 Theses in Wittenberg.

A general, free council in Germany

Luther's position on ecumenical councils shifted over time, but in 1520 he appealed to the German princes to oppose the papal Church, if necessary with a council in Germany, open and free of the Papacy. After the Pope condemned fifty-two of Luther's theses as heresy in Exsurge Domine, German opinion considered a council the best method to reconcile existing differences. German Catholics, diminished in number, hoped for a council to clarify matters.
It took a generation for the council to materialise, partly due to papal fears over potentially renewing a schism over conciliarism; partly because Lutherans demanded the exclusion of the papacy from the Council; partly because of ongoing political rivalries between France and the Holy Roman Empire; and partly due to the Turkish dangers in the Mediterranean.

Under Pope Clement VII (1523–34), troops of the Catholic Holy Roman Emperor Charles V sacked Papal Rome in 1527, "raping, killing, burning, stealing, the like had not been seen since the Vandals". Saint Peter's Basilica and the Sistine Chapel were used to stable horses. Pope Clement, fearful of the potential for more violence, delayed calling the Council.

Charles V strongly favoured a council but needed the support of King Francis I of France, who attacked him militarily. Francis I generally opposed a general council due to partial support of the Protestant cause within France. In 1532 he agreed to the Nuremberg Religious Peace granting religious liberty to the Protestants, and in 1533 he further complicated matters when he suggested a general council including both the Catholic and Protestant rulers of Europe that would devise a compromise between the two theological systems. This proposal met with the opposition of the Pope, for it gave recognition to Protestants and also elevated the secular princes of Europe above the clergy on church matters. Faced with a Turkish attack, Charles held the support of the Protestant German rulers, all of whom delayed the opening of the Council of Trent.

Occasion, sessions, and attendance

In reply to the Papal bull Exsurge Domine of Pope Leo X (1520), Martin Luther burned the document and appealed for a general council. In 1522 German diets joined in the appeal, with Charles V seconding and pressing for a council as a means of reunifying the Church and settling the Reformation controversies. Pope Clement VII (1523–1534) was vehemently against the idea of a council, agreeing with Francis I of France; Pope Pius II had already, in his bull Execrabilis (1460) and his reply to the University of Cologne (1463), set aside the theory of the supremacy of general councils laid down by the Council of Constance.

Pope Paul III (1534–1549), seeing that the Protestant Reformation was no longer confined to a few preachers, but had won over various princes, particularly in Germany, to its ideas, desired a council. Yet when he proposed the idea to his cardinals, it was almost unanimously opposed. Nonetheless, he sent nuncios throughout Europe to propose the idea. Paul III issued a decree for a general council to be held in Mantua, Italy, to begin on 23 May 1537. Martin Luther wrote the Smalcald Articles in preparation for the general council. The Smalcald Articles were designed to sharply define where the Lutherans could and could not compromise.

The council was ordered by the Emperor and Pope Paul III to convene in Mantua on 23 May 1537. It failed to convene after another war broke out between France and Charles V, resulting in the non-attendance of French prelates. Protestants refused to attend as well. Financial difficulties in Mantua led the Pope in the autumn of 1537 to move the council to Vicenza, where participation was poor. The Council was postponed indefinitely on 21 May 1539. Pope Paul III then initiated several internal Church reforms while Emperor Charles V convened with Protestants and Cardinal Gasparo Contarini at the Diet of Regensburg to reconcile differences.
Mediating and conciliatory formulations were developed on certain topics. In particular, a two-part doctrine of justification was formulated that would later be rejected at Trent. Unity failed between Catholic and Protestant representatives "because of different concepts of Church and justification".

However, the council was delayed until 1545 and, as it happened, convened right before Luther's death. Unable, however, to resist the urging of Charles V, the pope, after proposing Mantua as the place of meeting, convened the council at Trent (at that time ruled by a prince-bishop under the Holy Roman Empire), on 13 December 1545; the Pope's decision to transfer it to Bologna in March 1547 on the pretext of avoiding a plague failed to take effect, and the Council was indefinitely prorogued on 17 September 1549. None of the three popes reigning over the duration of the council ever attended, which had been a condition of Charles V. Papal legates were appointed to represent the Papacy.

Reopened at Trent on 1 May 1551 by the convocation of Pope Julius III (1550–1555), it was broken up by the sudden victory of Maurice, Elector of Saxony over Emperor Charles V and his march into the surrounding state of Tirol on 28 April 1552. There was no hope of reassembling the council while the strongly anti-Protestant Paul IV was Pope. The council was reconvened by Pope Pius IV (1559–1565) for the last time, meeting from 18 January 1562 at Santa Maria Maggiore, and continued until its final adjournment on 4 December 1563. It closed with a series of ritual acclamations honouring the reigning Pope, the Popes who had convoked the Council, the emperor and the kings who had supported it, the papal legates, the cardinals, the ambassadors present, and the bishops, followed by acclamations of acceptance of the faith of the Council and its decrees, and of anathema for all heretics.

The history of the council is thus divided into three distinct periods: 1545–1549, 1551–1552 and 1562–1563. During the second period, the Protestants present asked for a renewed discussion on points already defined and for bishops to be released from their oaths of allegiance to the Pope. When the last period began, all intention of conciliating the Protestants was gone and the Jesuits had become a strong force. This last period was begun especially as an attempt to prevent the formation of a general council including Protestants, as had been demanded by some in France.

The number of attending members in the three periods varied considerably. The council was small to begin with, opening with only about 30 bishops. It increased toward the close, but never reached the number of the First Council of Nicaea (which had 318 members) nor of the First Vatican Council (which numbered 744). The decrees were signed in 1563 by 255 members, the highest attendance of the whole council, including four papal legates, two cardinals, three patriarchs, twenty-five archbishops, and 168 bishops, two-thirds of whom were Italians. The Italian and Spanish prelates were vastly preponderant in power and numbers. At the passage of the most important decrees, not more than sixty prelates were present.

Although most Protestants did not attend, ambassadors and theologians of Brandenburg, Württemberg, and Strasbourg attended, having been granted an improved safe conduct. The French monarchy boycotted the entire council until the last minute, when a delegation led by Charles de Guise, Cardinal of Lorraine, finally arrived in November 1562.
The first outbreak of the French Wars of Religion had occurred earlier in the year, and the French Church, facing a significant and powerful Protestant minority in France, experienced iconoclastic violence over the use of sacred images. Such concerns were not primary in the Italian and Spanish Churches. The last-minute inclusion of a decree on sacred images was a French initiative, and the text, never discussed on the floor of the council or referred to council theologians, was based on a French draft.

Objectives and overall results

The main objectives of the council were twofold, although other issues were also discussed:

To condemn the principles and doctrines of Protestantism and to clarify the doctrines of the Catholic Church on all disputed points. This had not been done formally since the 1530 Confutatio Augustana. It is true that the emperor intended it to be a strictly general or truly ecumenical council, at which the Protestants should have a fair hearing. He secured, during the council's second period, 1551–52, an invitation, twice given, to the Protestants to be present, and the council issued a letter of safe conduct (thirteenth session) and offered them the right of discussion, but denied them a vote. Melanchthon and Johannes Brenz, with some other German Lutherans, actually started in 1552 on the journey to Trent. Brenz offered a confession, and Melanchthon, who got no farther than Nuremberg, took with him the Confessio Saxonica. But the refusal to give the Protestants the vote and the consternation produced by the success of Maurice in his campaign against Charles V in 1552 effectively put an end to Protestant cooperation.

To effect a reformation in discipline or administration. This object had been one of the causes calling forth the reformatory councils and had been lightly touched upon by the Fifth Council of the Lateran under Pope Julius II. The obvious corruption in the administration of the Church was one of the numerous causes of the Reformation.

Twenty-five public sessions were held, but nearly half of them were spent in solemn formalities. The chief work was done in committees or congregations. The entire management was in the hands of the papal legate. The liberal elements lost out in the debates and voting. The council abolished some of the most notorious abuses and introduced or recommended disciplinary reforms affecting the sale of indulgences, the morals of convents, the education of the clergy, the non-residence of bishops (also bishops having plurality of benefices, which was fairly common), and the careless fulmination of censures, and forbade duelling. Although evangelical sentiments were uttered by some of the members in favour of the supreme authority of the Scriptures and justification by faith, no concession whatsoever was made to Protestantism. The Council affirmed that the Church is the ultimate interpreter of Scripture, and that the Bible and church tradition (the tradition that composed part of the Catholic faith) are equally and independently authoritative.

The relationship of faith and works in salvation was defined, following controversy over Martin Luther's doctrine of "justification by faith alone". Other Catholic practices that drew the ire of reformers within the Church, such as indulgences, pilgrimages, the veneration of saints and relics, and the veneration of the Virgin Mary, were strongly reaffirmed, though abuses of them were forbidden.
Decrees concerning sacred music and religious art, though inexplicit, were subsequently amplified by theologians and writers to condemn many types of Renaissance and medieval styles and iconographies, impacting heavily on the development of these art forms.

The doctrinal decisions of the council are set forth in decrees (decreta), which are divided into chapters (capita), which contain the positive statement of the conciliar dogmas, and into short canons (canones), which condemn the dissenting Protestant views with the concluding anathema sit ("let him be anathema").

Decrees

The doctrinal acts are as follows: after reaffirming the Niceno-Constantinopolitan Creed (third session), the decree was passed (fourth session) confirming that the deuterocanonical books were on a par with the other books of the canon (against Luther's placement of these books in the Apocrypha of his edition) and coordinating church tradition with the Scriptures as a rule of faith. The Vulgate translation was affirmed to be authoritative for the text of Scripture.

Justification (sixth session) was declared to be offered upon the basis of human cooperation with divine grace, as opposed to the Protestant doctrine of passive reception of grace. Understanding the Protestant "faith alone" doctrine to be one of simple human confidence in Divine Mercy, the Council rejected the "vain confidence" of the Protestants, stating that no one can know who has received the grace of God. Furthermore, the Council affirmed—against some Protestants—that the grace of God can be forfeited through mortal sin.

The greatest weight in the Council's decrees is given to the sacraments. The seven sacraments were reaffirmed and the Eucharist pronounced to be a true propitiatory sacrifice as well as a sacrament, in which the bread and wine were consecrated into the Eucharist (thirteenth and twenty-second sessions). The term transubstantiation was used by the Council, but the specific Aristotelian explanation given by Scholasticism was not cited as dogmatic. Instead, the decree states that Christ is "really, truly, substantially present" in the consecrated forms. The sacrifice of the Mass was to be offered for dead and living alike, and in giving to the apostles the command "do this in remembrance of me," Christ conferred upon them a sacerdotal power. The practice of withholding the cup from the laity was confirmed (twenty-first session) as one which the Church Fathers had commanded for good and sufficient reasons; yet in certain cases the Pope was made the supreme arbiter as to whether the rule should be strictly maintained. On the language of the Mass, "contrary to what is often said", the council condemned the belief that only vernacular languages should be used, while insisting on the use of Latin.

Ordination (twenty-third session) was defined to imprint an indelible character on the soul. The priesthood of the New Testament takes the place of the Levitical priesthood. To the performance of its functions, the consent of the people is not necessary.

In the decrees on marriage (twenty-fourth session) the excellence of the celibate state was reaffirmed, concubinage condemned, and the validity of marriage made dependent upon the wedding taking place before a priest and two witnesses, although the lack of a requirement for parental consent ended a debate that had continued since the 12th century.
In the case of a divorce, the right of the innocent party to marry again was denied so long as the other party was alive, even if the other party had committed adultery. However, the council "refused … to assert the necessity or usefulness of clerical celibacy".

In the twenty-fifth and last session, the doctrines of purgatory, the invocation of saints and the veneration of relics were reaffirmed, as was also the efficacy of indulgences as dispensed by the Church according to the power given her, but with some cautionary recommendations, and a ban on the sale of indulgences. Short and rather inexplicit passages concerning religious images were to have great impact on the development of Catholic Church art. Much more than the Second Council of Nicaea (787), the Council fathers of Trent stressed the pedagogical purpose of Christian images.

The council appointed, in 1562 (eighteenth session), a commission to prepare a list of forbidden books (Index Librorum Prohibitorum), but it later left the matter to the Pope. The preparation of a catechism and the revision of the Breviary and Missal were also left to the pope. The catechism embodied the council's far-reaching results, including reforms and definitions of the sacraments, the Scriptures, church dogma, and duties of the clergy.

Ratification and promulgation

On adjourning, the Council asked the supreme pontiff to ratify all its decrees and definitions. This petition was complied with by Pope Pius IV, on 26 January 1564, in the papal bull Benedictus Deus, which enjoins strict obedience upon all Catholics and forbids, under pain of excommunication, all unauthorised interpretation, reserving this to the Pope alone, and threatens the disobedient with "the indignation of Almighty God and of his blessed apostles, Peter and Paul." Pope Pius appointed a commission of cardinals to assist him in interpreting and enforcing the decrees.

The Index Librorum Prohibitorum was announced in 1564, and the following books were issued with the papal imprimatur: the Profession of the Tridentine Faith and the Tridentine Catechism (1566), the Breviary (1568), the Missal (1570) and the Vulgate (1590 and then 1592).

The decrees of the council were acknowledged in Italy, Portugal, Poland and by the Catholic princes of Germany at the Diet of Augsburg in 1566. Philip II of Spain accepted them for Spain, the Netherlands and Sicily inasmuch as they did not infringe the royal prerogative. In France, they were officially recognised by the king only in their doctrinal parts. Although the disciplinary or moral reformatory decrees were never published by the throne, they received official recognition at provincial synods and were enforced by the bishops. Holy Roman Emperors Ferdinand I and Maximilian II never recognized the existence of any of the decrees. No attempt was made to introduce them into England. Pius IV sent the decrees to Mary, Queen of Scots, with a letter dated 13 June 1564, requesting her to publish them in Scotland, but she dared not do it in the face of John Knox and the Reformation.

These decrees were later supplemented by the First Vatican Council of 1870.
Publication of documents

A comprehensive history is found in Hubert Jedin's The History of the Council of Trent (Geschichte des Konzils von Trient), with about 2500 pages in four volumes:

The History of the Council of Trent: The fight for a Council (Vol I, 1951)
The History of the Council of Trent: The first Sessions in Trent (1545–1547) (Vol II, 1957)
The History of the Council of Trent: Sessions in Bologna 1547–1548 and Trento 1551–1552 (Vol III, 1970, 1998)
The History of the Council of Trent: Third Period and Conclusion (Vol IV, 1976)

The canons and decrees of the council have been published very often and in many languages. The first issue was by Paulus Manutius (Rome, 1564). Commonly used Latin editions are by Judocus Le Plat (Antwerp, 1779) and by Johann Friedrich von Schulte and Aemilius Ludwig Richter (Leipzig, 1853). Other editions are in vol. vii. of the Acta et decreta conciliorum recentiorum. Collectio Lacensis (7 vols., Freiburg, 1870–90), reissued as an independent volume (1892); Concilium Tridentinum: Diariorum, actorum, epistularum, … collectio, ed. Sebastianus Merkle (4 vols., Freiburg, 1901 sqq.); as well as Mansi, Concilia, xxxv. 345 sqq. Note also Carl Mirbt, Quellen, 2d ed, pp. 202–255. An English edition is by James Waterworth (London, 1848; With Essays on the External and Internal History of the Council).

The original acts and debates of the council, as prepared by its general secretary, Bishop Angelo Massarelli, in six large folio volumes, are deposited in the Vatican Library. They remained there unpublished for more than 300 years and were brought to light, though only in part, by Augustin Theiner, priest of the Oratory (d. 1874), in Acta genuina sancti et oecumenici Concilii Tridentini nunc primum integre edita (2 vols., Leipzig, 1874).

Most of the official documents and private reports, however, which bear upon the council, were made known in the 16th century and since. The most complete collection of them is that of J. Le Plat, Monumentorum ad historicam Concilii Tridentini collectio (7 vols., Leuven, 1781–87). New materials were brought to light (Vienna, 1872); by J. J. I. von Döllinger (Ungedruckte Berichte und Tagebücher zur Geschichte des Concilii von Trient) (2 parts, Nördlingen, 1876); and by August von Druffel, Monumenta Tridentina (Munich, 1884–97).

List of doctrinal decrees

Protestant response

Out of 87 books written between 1546 and 1564 attacking the Council of Trent, 41 were written by Pier Paolo Vergerio, a former papal nuncio turned Protestant Reformer. The 1565–73 Examen decretorum Concilii Tridentini (Examination of the Council of Trent) by Martin Chemnitz was the main Lutheran response to the Council of Trent. Making extensive use of scripture and patristic sources, it was presented in response to a polemical writing which Diogo de Payva de Andrada had directed against Chemnitz. The Examen had four parts:

Volume I examined sacred scripture, free will, original sin, justification, and good works.
Volume II examined the sacraments, including baptism, confirmation, the sacrament of the eucharist, communion under both kinds, the mass, penance, extreme unction, holy orders, and matrimony.
Volume III examined virginity, celibacy, purgatory, and the invocation of saints.
Volume IV examined the relics of the saints, images, indulgences, fasting, the distinction of foods, and festivals.

In response, Andrada wrote the five-part Defensio Tridentinæ fidei, which was published posthumously in 1578.
However, the Defensio did not circulate as extensively as the Examen, nor were any full translations ever published. A French translation of the Examen by Eduard Preuss was published in 1861. German translations were published in 1861, 1884, and 1972. In English, a complete translation by Fred Kramer, drawing from the original Latin and the 1861 German, was published beginning in 1971.

See also

Nicolas Psaume, bishop of Verdun
Black Legend (Spain)
Popery

Notes

References

Bühren, Ralf van: Kunst und Kirche im 20. Jahrhundert. Die Rezeption des Zweiten Vatikanischen Konzils (Konziliengeschichte, Reihe B: Untersuchungen), Paderborn 2008
O'Malley, John W., in The Sensuous in the Counter-Reformation Church, Eds: Marcia B. Hall, Tracy E. Cooper, 2013, Cambridge University Press
James Waterworth (ed.), The Canons and Decrees of the Sacred and Oecumenical Council of Trent (1848)

Further reading

Paolo Sarpi, Historia del Concilio Tridentino, London: John Bill, 1619 (History of the Council of Trent, English translation by Nathaniel Brent, London 1620, 1629 and 1676)
Francesco Sforza Pallavicino, Istoria del concilio di Trento. In Roma, nella stamperia d'Angelo Bernabò dal Verme erede del Manelfi: per Giovanni Casoni libraro, 1656–57
John W. O'Malley: Trent: What Happened at the Council, Cambridge (Massachusetts), The Belknap Press of Harvard University Press, 2013
Hubert Jedin: Entstehung und Tragweite des Trienter Dekrets über die Bilderverehrung, in: Tübinger Theologische Quartalschrift 116, 1935, pp. 143–88, 404–429
Hubert Jedin: Geschichte des Konzils von Trient, 4 vol., Freiburg im Breisgau 1949–1975 (A History of the Council of Trent, 2 vol., London 1957 and 1961)
Hubert Jedin: Konziliengeschichte, Freiburg im Breisgau 1959
Mullett, Michael A. "The Council of Trent and the Catholic Reformation", in his The Catholic Reformation (London: Routledge, 1999, pbk.), pp. 29–68. N.B.: The author also mentions the Council elsewhere in his book.
Schroeder, H. J., ed. and trans. The Canons and Decrees of the Council of Trent: English Translation, trans. [and introduced] by H. J. Schroeder. Rockford, Ill.: TAN Books and Publishers, 1978. N.B.: "The original 1941 edition contained [both] the Latin text and the English translation. This edition contains only the English translation..."; comprises only the Council's dogmatic decrees, excluding the purely disciplinary ones.
Mathias Mütel: Mit den Kirchenvätern gegen Martin Luther? Die Debatten um Tradition und auctoritas patrum auf dem Konzil von Trient, Paderborn 2017 (= Konziliengeschichte. Reihe B., Untersuchungen)

External links

The text of the Council of Trent translated by J. Waterworth, 1848 (also on Intratext)
Documents of the Council in Latin
ZIP version of the documents of the Council of Trent
2,886
6,355
https://en.wikipedia.org/wiki/Chloroplast
Chloroplast
A chloroplast is a type of membrane-bound organelle known as a plastid that conducts photosynthesis mostly in plant and algal cells. The photosynthetic pigment chlorophyll captures the energy from sunlight, converts it, and stores it in the energy-storage molecules ATP and NADPH while freeing oxygen from water in the cells. The ATP and NADPH are then used to make organic molecules from carbon dioxide in a process known as the Calvin cycle. Chloroplasts carry out a number of other functions, including fatty acid synthesis, amino acid synthesis, and the immune response in plants. The number of chloroplasts per cell varies from one, in unicellular algae, up to 100 in plants like Arabidopsis and wheat.

A chloroplast is characterized by its two membranes and a high concentration of chlorophyll. Other plastid types, such as the leucoplast and the chromoplast, contain little chlorophyll and do not carry out photosynthesis. Chloroplasts are highly dynamic—they circulate and are moved around within plant cells, and occasionally pinch in two to reproduce. Their behavior is strongly influenced by environmental factors like light color and intensity. Chloroplasts, like mitochondria, contain their own DNA, which is thought to be inherited from their ancestor—a photosynthetic cyanobacterium that was engulfed by an early eukaryotic cell. Chloroplasts cannot be made by the plant cell and must be inherited by each daughter cell during cell division.

With one exception (the amoeboid Paulinella chromatophora), all chloroplasts can probably be traced back to a single endosymbiotic event, when a cyanobacterium was engulfed by the eukaryote. Despite this, chloroplasts can be found in an extremely wide set of organisms, some not even directly related to each other—a consequence of many secondary and even tertiary endosymbiotic events.

The word chloroplast is derived from the Greek words chloros (χλωρός), which means green, and plastes (πλάστης), which means "the one who forms".

Discovery

The first definitive description of a chloroplast (Chlorophyllkörnen, "grain of chlorophyll") was given by Hugo von Mohl in 1837 as discrete bodies within the green plant cell. In 1883, Andreas Franz Wilhelm Schimper named these bodies "chloroplastids" (Chloroplastiden). In 1884, Eduard Strasburger adopted the term "chloroplasts" (Chloroplasten).

Lineages and evolution

Chloroplasts are one of many types of organelles in the plant cell. They are considered to have evolved from endosymbiotic cyanobacteria. Mitochondria are thought to have come from a similar endosymbiosis event, where an aerobic prokaryote was engulfed. This origin of chloroplasts was first suggested by the Russian biologist Konstantin Mereschkowski in 1905, after Andreas Franz Wilhelm Schimper observed in 1883 that chloroplasts closely resemble cyanobacteria. Chloroplasts are only found in plants, algae, and three species of amoeba – Paulinella chromatophora, P. micropora, and the marine P. longichromatophora.

Parent group: Cyanobacteria

Chloroplasts are considered endosymbiotic Cyanobacteria. Cyanobacteria are sometimes called blue-green algae even though they are prokaryotes. They are a diverse phylum of gram-negative bacteria capable of carrying out photosynthesis. Cyanobacteria also contain a peptidoglycan cell wall, which is thicker than in other gram-negative bacteria, and which is located between their two cell membranes. Like chloroplasts, they have thylakoids within them.
On the thylakoid membranes are photosynthetic pigments, including chlorophyll a. Phycobilins are also common cyanobacterial pigments, usually organized into hemispherical phycobilisomes attached to the outside of the thylakoid membranes (though phycobilins are not shared by all chloroplasts).

Primary endosymbiosis

Somewhere between 1 and 2 billion years ago, a free-living cyanobacterium entered an early eukaryotic cell, either as food or as an internal parasite, but managed to escape the phagocytic vacuole it was contained in. The two innermost lipid-bilayer membranes that surround all chloroplasts correspond to the outer and inner membranes of the ancestral cyanobacterium's gram-negative cell wall, and not the phagosomal membrane from the host, which was probably lost. The new cellular resident quickly became an advantage, providing food for the eukaryotic host, which allowed it to live within it. Over time, the cyanobacterium was assimilated, and many of its genes were lost or transferred to the nucleus of the host. From genomes that probably originally contained over 3000 genes, only about 130 genes remain in the chloroplasts of contemporary plants. Some of its proteins were then synthesized in the cytoplasm of the host cell, and imported back into the chloroplast (formerly the cyanobacterium). Separately, somewhere about 90–140 million years ago, it happened again and led to the amoeboid Paulinella chromatophora.

This event is called endosymbiosis, or "cell living inside another cell with a mutual benefit for both". The external cell is commonly referred to as the host, while the internal cell is called the endosymbiont. Chloroplasts are believed to have arisen after mitochondria, since all eukaryotes contain mitochondria, but not all have chloroplasts. This is called serial endosymbiosis—an early eukaryote engulfing the mitochondrion ancestor, and some descendants of it then engulfing the chloroplast ancestor, creating a cell with both chloroplasts and mitochondria.

Whether primary chloroplasts came from a single endosymbiotic event, or from many independent engulfments across various eukaryotic lineages, has long been debated. It is now generally held that organisms with primary chloroplasts share a single ancestor that took in a cyanobacterium 600–2000 million years ago. It has been proposed that the closest living relative of this bacterium is Gloeomargarita lithophora. The exception is the amoeboid Paulinella chromatophora, which descends from an ancestor that took in a Prochlorococcus cyanobacterium 90–500 million years ago.

These chloroplasts, which can be traced back directly to a cyanobacterial ancestor, are known as primary plastids ("plastid" in this context means almost the same thing as chloroplast). All primary chloroplasts belong to one of four chloroplast lineages—the glaucophyte chloroplast lineage, the amoeboid Paulinella chromatophora lineage, the rhodophyte (red algal) chloroplast lineage, or the chloroplastidan (green) chloroplast lineage. The rhodophyte and chloroplastidan lineages are the largest, with the chloroplastidan (green) lineage being the one that contains the land plants.

Glaucophyta

The endosymbiosis event is usually considered to have occurred in the Archaeplastida, within which the glaucophytes are possibly the earliest-diverging lineage. The glaucophyte chloroplast group is the smallest of the three primary chloroplast lineages, being found in only 13 species, and is thought to be the one that branched off the earliest.
Glaucophytes have chloroplasts that retain a peptidoglycan wall between their double membranes, like their cyanobacterial parent. For this reason, glaucophyte chloroplasts are also known as 'muroplasts' (besides 'cyanoplasts' or 'cyanelles'). Glaucophyte chloroplasts also contain concentric unstacked thylakoids, which surround a carboxysome – an icosahedral structure in which glaucophyte chloroplasts and cyanobacteria keep their carbon fixation enzyme RuBisCO. The starch that they synthesize collects outside the chloroplast. Like cyanobacteria, glaucophyte and rhodophyte chloroplast thylakoids are studded with light-collecting structures called phycobilisomes. For these reasons, glaucophyte chloroplasts are considered a primitive intermediate between cyanobacteria and the more evolved chloroplasts in red algae and plants.

Rhodophyceae (red algae)

The rhodophyte, or red algae, chloroplast group is another large and diverse chloroplast lineage. Rhodophyte chloroplasts are also called rhodoplasts, literally "red chloroplasts". Rhodoplasts have a double membrane with an intermembrane space and phycobilin pigments organized into phycobilisomes on the thylakoid membranes, preventing their thylakoids from stacking. Some contain pyrenoids. Rhodoplasts have chlorophyll a and phycobilins for photosynthetic pigments; the phycobilin phycoerythrin is responsible for giving many red algae their distinctive red color. However, since they also contain the blue-green chlorophyll a and other pigments, many are reddish to purple from the combination. The red phycoerythrin pigment is an adaptation to help red algae catch more sunlight in deep water—as such, some red algae that live in shallow water have less phycoerythrin in their rhodoplasts, and can appear more greenish. Rhodoplasts synthesize a form of starch called floridean starch, which collects into granules outside the rhodoplast, in the cytoplasm of the red alga.

Chloroplastida (green algae and plants)

The chloroplastida chloroplasts, or green chloroplasts, are another large, highly diverse primary chloroplast lineage. Their host organisms are commonly known as green algae and land plants. They differ from glaucophyte and red algal chloroplasts in that they have lost their phycobilisomes, and contain chlorophyll b instead. Most green chloroplasts are green, though some are not, like some forms of Hæmatococcus pluvialis, due to accessory pigments that override the chlorophylls' green colors. Chloroplastida chloroplasts have lost the peptidoglycan wall between their double membrane, leaving an intermembrane space. Some plants seem to have kept the genes for the synthesis of the peptidoglycan layer, though they have been repurposed for use in chloroplast division instead.

Green algae and plants keep their starch inside their chloroplasts, and in plants and some algae, the chloroplast thylakoids are arranged in grana stacks. Some green algal chloroplasts contain a structure called a pyrenoid, which is functionally similar to the glaucophyte carboxysome in that it is where RuBisCO and CO2 are concentrated in the chloroplast.

Helicosporidium is a genus of nonphotosynthetic parasitic green algae that is thought to contain a vestigial chloroplast. Genes from a chloroplast, and nuclear genes indicating the presence of a chloroplast, have been found in Helicosporidium even though the chloroplast itself has never been observed.
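The contrasts drawn above between the three main primary lineages can be restated compactly as data. The following Python sketch is purely illustrative—a summary of the text for this section, not an authoritative classification—and the field names are invented for this example.

# Minimal summary of the primary chloroplast lineages described above.
# The entries simply restate features from the text; they are not a
# complete or authoritative classification.

PRIMARY_LINEAGES = {
    "glaucophyte": {
        "other_names": ["muroplast", "cyanoplast", "cyanelle"],
        "peptidoglycan_wall": True,
        "phycobilisomes": True,
        "pigments": ["chlorophyll a", "phycobilins"],
        "starch_location": "outside the chloroplast",
    },
    "rhodoplast (red algae)": {
        "other_names": ["rhodoplast"],
        "peptidoglycan_wall": False,
        "phycobilisomes": True,
        "pigments": ["chlorophyll a", "phycobilins (e.g. phycoerythrin)"],
        "starch_location": "cytoplasm (floridean starch)",
    },
    "chloroplastidan (green algae and plants)": {
        "other_names": ["green chloroplast"],
        "peptidoglycan_wall": False,
        "phycobilisomes": False,
        "pigments": ["chlorophyll a", "chlorophyll b"],
        "starch_location": "inside the chloroplast",
    },
}

for name, traits in PRIMARY_LINEAGES.items():
    print(name, "->", ", ".join(traits["pigments"]))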
Paulinella chromatophora

While most chloroplasts originate from that first set of endosymbiotic events, Paulinella chromatophora is an exception that acquired a photosynthetic cyanobacterial endosymbiont more recently. It is not clear whether that symbiont is closely related to the ancestral chloroplast of other eukaryotes. Being in the early stages of endosymbiosis, Paulinella chromatophora can offer some insights into how chloroplasts evolved. Paulinella cells contain one or two sausage-shaped blue-green photosynthesizing structures called chromatophores, descended from the cyanobacterium Synechococcus. Chromatophores cannot survive outside their host.

Chromatophore DNA is about a million base pairs long, containing around 850 protein-encoding genes—far less than the three million base pair Synechococcus genome, but much larger than the approximately 150,000 base pair genome of the more assimilated chloroplast. Chromatophores have transferred much less of their DNA to the nucleus of their host. About 0.3–0.8% of the nuclear DNA in Paulinella is from the chromatophore, compared with 11–14% from the chloroplast in plants.

Secondary and tertiary endosymbiosis

Many other organisms obtained chloroplasts from the primary chloroplast lineages through secondary endosymbiosis—engulfing a red or green alga that contained a chloroplast. These chloroplasts are known as secondary plastids. While primary chloroplasts have a double membrane from their cyanobacterial ancestor, secondary chloroplasts have additional membranes outside of the original two, as a result of the secondary endosymbiotic event, when a nonphotosynthetic eukaryote engulfed a chloroplast-containing alga but failed to digest it—much like the cyanobacterium at the beginning of this story. The engulfed alga was broken down, leaving only its chloroplast, and sometimes its cell membrane and nucleus, forming a chloroplast with three or four membranes—the two cyanobacterial membranes, sometimes the eaten alga's cell membrane, and the phagosomal vacuole from the host's cell membrane.

The genes in the phagocytosed eukaryote's nucleus are often transferred to the secondary host's nucleus. Cryptomonads and chlorarachniophytes retain the phagocytosed eukaryote's nucleus, an object called a nucleomorph, located between the second and third membranes of the chloroplast. All secondary chloroplasts come from green and red algae—no secondary chloroplasts from glaucophytes have been observed, probably because glaucophytes are relatively rare in nature, making them less likely to have been taken up by another eukaryote.

Green algal derived chloroplasts

Green algae have been taken up by the euglenids, chlorarachniophytes, a lineage of dinoflagellates, and possibly the ancestor of the CASH lineage (cryptomonads, alveolates, stramenopiles and haptophytes) in three or four separate engulfments. Many green algal derived chloroplasts contain pyrenoids, but unlike chloroplasts in their green algal ancestors, storage product collects in granules outside the chloroplast.

Euglenophytes

Euglenophytes are a group of common flagellated protists that contain chloroplasts derived from a green alga. Euglenophyte chloroplasts have three membranes—it is thought that the membrane of the primary endosymbiont was lost, leaving the cyanobacterial membranes, and the secondary host's phagosomal membrane. Euglenophyte chloroplasts have a pyrenoid and thylakoids stacked in groups of three.
Photosynthetic product is stored in the form of paramylon, which is contained in membrane-bound granules in the cytoplasm of the euglenophyte.

Chlorarachniophytes

Chlorarachniophytes are a rare group of organisms that also contain chloroplasts derived from green algae, though their story is more complicated than that of the euglenophytes. The ancestor of chlorarachniophytes is thought to have been a eukaryote with a red algal derived chloroplast. It is then thought to have lost its first red algal chloroplast, and later engulfed a green alga, giving it its second, green algal derived chloroplast.

Chlorarachniophyte chloroplasts are bounded by four membranes, except near the cell membrane, where the chloroplast membranes fuse into a double membrane. Their thylakoids are arranged in loose stacks of three. Chlorarachniophytes have a form of polysaccharide called chrysolaminarin, which they store in the cytoplasm, often collected around the chloroplast pyrenoid, which bulges into the cytoplasm. Chlorarachniophyte chloroplasts are notable because the green alga they are derived from has not been completely broken down—its nucleus still persists as a nucleomorph found between the second and third chloroplast membranes—the periplastid space, which corresponds to the green alga's cytoplasm.

Prasinophyte-derived dinophyte chloroplast

Lepidodinium viride and its close relatives are dinophytes (see below) that lost their original peridinin chloroplast and replaced it with a green algal derived chloroplast (more specifically, a prasinophyte). Lepidodinium is the only dinophyte that has a chloroplast that is not from the rhodoplast lineage. The chloroplast is surrounded by two membranes and has no nucleomorph—all the nucleomorph genes have been transferred to the dinophyte nucleus. The endosymbiotic event that led to this chloroplast was serial secondary endosymbiosis rather than tertiary endosymbiosis—the endosymbiont was a green alga containing a primary chloroplast (making a secondary chloroplast).

Red algal derived chloroplasts

Cryptophytes

Cryptophytes, or cryptomonads, are a group of algae that contain a red-algal derived chloroplast. Cryptophyte chloroplasts contain a nucleomorph that superficially resembles that of the chlorarachniophytes. Cryptophyte chloroplasts have four membranes, the outermost of which is continuous with the rough endoplasmic reticulum. They synthesize ordinary starch, which is stored in granules found in the periplastid space—outside the original double membrane, in the place that corresponds to the red alga's cytoplasm. Inside cryptophyte chloroplasts is a pyrenoid and thylakoids in stacks of two. Their chloroplasts do not have phycobilisomes, but they do have phycobilin pigments which they keep in their thylakoid space, rather than anchored on the outside of their thylakoid membranes. Cryptophytes may have played a key role in the spreading of red algal based chloroplasts.

Haptophytes

Haptophytes are similar and closely related to cryptophytes or heterokontophytes. Their chloroplasts lack a nucleomorph, their thylakoids are in stacks of three, and they synthesize chrysolaminarin sugar, which they store completely outside of the chloroplast, in the cytoplasm of the haptophyte.

Heterokontophytes (stramenopiles)

The heterokontophytes, also known as the stramenopiles, are a very large and diverse group of eukaryotes.
The photoautotrophic lineage, Ochrophyta, including the diatoms and the brown algae, golden algae, and yellow-green algae, also contains red algal derived chloroplasts. Heterokont chloroplasts are very similar to haptophyte chloroplasts, containing a pyrenoid, triplet thylakoids, and, with some exceptions, a four-layered plastidic envelope, with the outermost epiplastid membrane connected to the endoplasmic reticulum. Like haptophytes, heterokontophytes store sugar in chrysolaminarin granules in the cytoplasm. Heterokontophyte chloroplasts contain chlorophyll a and, with a few exceptions, chlorophyll c, but also have carotenoids which give them their many colors.

Apicomplexans, chromerids, and dinophytes

The alveolates are a major clade of unicellular eukaryotes with both autotrophic and heterotrophic members. The most notable shared characteristic is the presence of cortical (outer-region) alveoli (sacs). These are flattened vesicles packed into a continuous layer just under the membrane and supporting it, typically forming a flexible pellicle (thin skin). In dinoflagellates they often form armor plates. Many members contain a red-algal derived plastid. One notable characteristic of this diverse group is the frequent loss of photosynthesis. However, a majority of these heterotrophs continue to possess a non-photosynthetic plastid.

Apicomplexans

Apicomplexans are a group of alveolates. Like the Helicosporidia, they are parasitic and have a nonphotosynthetic chloroplast. They were once thought to be related to the Helicosporidia, but it is now known that the Helicosporidia are green algae rather than part of the CASH lineage. The apicomplexans include Plasmodium, the malaria parasite. Many apicomplexans keep a vestigial red algal derived chloroplast called an apicoplast, which they inherited from their ancestors. Other apicomplexans like Cryptosporidium have lost the chloroplast completely. Apicomplexans store their energy in amylopectin granules that are located in their cytoplasm, even though they are nonphotosynthetic.

Apicoplasts have lost all photosynthetic function, and contain no photosynthetic pigments or true thylakoids. They are bounded by four membranes, but the membranes are not connected to the endoplasmic reticulum. The fact that apicomplexans still keep their nonphotosynthetic chloroplast around demonstrates how the chloroplast carries out important functions other than photosynthesis. Plant chloroplasts provide plant cells with many important things besides sugar, and apicoplasts are no different—they synthesize fatty acids, isopentenyl pyrophosphate, and iron-sulfur clusters, and carry out part of the heme pathway. This makes the apicoplast an attractive target for drugs to cure apicomplexan-related diseases. The most important apicoplast function is isopentenyl pyrophosphate synthesis—in fact, apicomplexans die when something interferes with this apicoplast function, and when apicomplexans are grown in an isopentenyl pyrophosphate-rich medium, they dump the organelle.

Chromerids

The Chromerida is a newly discovered group of algae from Australian corals which comprises some close photosynthetic relatives of the apicomplexans. The first member, Chromera velia, was discovered and first isolated in 2001. The discovery of Chromera velia, with a similar structure to the apicomplexans, provides an important link in the evolutionary history of the apicomplexans and dinophytes.
Their plastids have four membranes, lack chlorophyll c, and use the type II form of RuBisCO obtained from a horizontal transfer event.

Dinophytes

The dinoflagellates are yet another very large and diverse group of protists, around half of which are (at least partially) photosynthetic. Most dinophyte chloroplasts are secondary red algal derived chloroplasts. Many other dinophytes have lost the chloroplast (becoming the nonphotosynthetic kind of dinoflagellate), or replaced it through tertiary endosymbiosis—the engulfment of another eukaryotic alga containing a red algal derived chloroplast. Others replaced their original chloroplast with a green algal derived one.

Most dinophyte chloroplasts contain form II RuBisCO and, at a minimum, the photosynthetic pigments chlorophyll a, chlorophyll c2, beta-carotene, and at least one dinophyte-unique xanthophyll (peridinin, dinoxanthin, or diadinoxanthin), giving many a golden-brown color. All dinophytes store starch in their cytoplasm, and most have chloroplasts with thylakoids arranged in stacks of three.

The most common dinophyte chloroplast is the peridinin-type chloroplast, characterized by the carotenoid pigment peridinin, along with chlorophyll a and chlorophyll c2. Peridinin is not found in any other group of chloroplasts. The peridinin chloroplast is bounded by three membranes (occasionally two), having lost the red algal endosymbiont's original cell membrane. The outermost membrane is not connected to the endoplasmic reticulum. They contain a pyrenoid, and have triplet-stacked thylakoids. Starch is found outside the chloroplast. An important feature of these chloroplasts is that their chloroplast DNA is highly reduced and fragmented into many small circles. Most of the genome has migrated to the nucleus, and only critical photosynthesis-related genes remain in the chloroplast. The peridinin chloroplast is thought to be the dinophytes' "original" chloroplast, which has been lost, reduced, replaced, or has company in several other dinophyte lineages.

Fucoxanthin-containing (haptophyte-derived) dinophyte chloroplasts

The fucoxanthin dinophyte lineages (including Karlodinium and Karenia) lost their original red algal derived chloroplast, and replaced it with a new chloroplast derived from a haptophyte endosymbiont. Karlodinium and Karenia probably took up different haptophytes. Because the haptophyte chloroplast has four membranes, tertiary endosymbiosis would be expected to create a six-membraned chloroplast, adding the haptophyte's cell membrane and the dinophyte's phagosomal vacuole. However, the haptophyte was heavily reduced, stripped of a few membranes and its nucleus, leaving only its chloroplast (with its original double membrane), and possibly one or two additional membranes around it. Fucoxanthin-containing chloroplasts are characterized by having the pigment fucoxanthin (actually 19′-hexanoyloxy-fucoxanthin and/or 19′-butanoyloxy-fucoxanthin) and no peridinin. Fucoxanthin is also found in haptophyte chloroplasts, providing evidence of ancestry.

Diatom-derived dinophyte chloroplasts

Some dinophytes, like Kryptoperidinium and Durinskia, have a diatom (heterokontophyte)-derived chloroplast. These chloroplasts are bounded by up to five membranes (depending on whether the entire diatom endosymbiont is counted as the chloroplast, or just the red algal derived chloroplast inside it).
The diatom endosymbiont has been reduced relatively little—it still retains its original mitochondria, and it has endoplasmic reticulum, ribosomes, a nucleus, and of course, red algal derived chloroplasts—practically a complete cell, all inside the host's endoplasmic reticulum lumen. However, the diatom endosymbiont can't store its own food—its storage polysaccharide is found in granules in the dinophyte host's cytoplasm instead. The diatom endosymbiont's nucleus is present, but it probably can't be called a nucleomorph because it shows no sign of genome reduction, and might even have been expanded. Diatoms have been engulfed by dinoflagellates at least three times. The diatom endosymbiont is bounded by a single membrane; inside it are chloroplasts with four membranes. Like the diatom endosymbiont's diatom ancestor, the chloroplasts have triplet thylakoids and pyrenoids. In some of these genera, the diatom endosymbiont's chloroplasts aren't the only chloroplasts in the dinophyte. The original three-membraned peridinin chloroplast is still present, converted to an eyespot.
Kleptoplasty
In some groups of mixotrophic protists, like some dinoflagellates (e.g. Dinophysis), chloroplasts are separated from a captured alga and used temporarily. These klepto chloroplasts may only have a lifetime of a few days and are then replaced.
Cryptophyte-derived dinophyte chloroplast
Members of the genus Dinophysis have a phycobilin-containing chloroplast taken from a cryptophyte. However, the cryptophyte is not an endosymbiont—only the chloroplast seems to have been taken, and the chloroplast has been stripped of its nucleomorph and outermost two membranes, leaving just a two-membraned chloroplast. Cryptophyte chloroplasts require their nucleomorph to maintain themselves, and Dinophysis species grown in cell culture alone cannot survive, so it is possible (but not confirmed) that the Dinophysis chloroplast is a kleptoplast—if so, Dinophysis chloroplasts wear out and Dinophysis species must continually engulf cryptophytes to obtain new chloroplasts to replace the old ones.
Chloroplast DNA
Chloroplasts, like other types of plastid, contain a genome separate from that in the cell nucleus. The existence of chloroplast DNA (cpDNA) was identified biochemically in 1959, and confirmed by electron microscopy in 1962. The discoveries that the chloroplast contains ribosomes and performs protein synthesis revealed that the chloroplast is genetically semi-autonomous. Chloroplast DNA was first sequenced in 1986. Since then, hundreds of chloroplast DNAs from various species have been sequenced, but they are mostly those of land plants and green algae—glaucophytes, red algae, and other algal groups are extremely underrepresented, potentially introducing some bias in views of "typical" chloroplast DNA structure and content.
Molecular structure
With few exceptions, most chloroplasts have their entire chloroplast genome combined into a single large circular DNA molecule, typically 120,000–170,000 base pairs long. They can have a contour length of around 30–60 micrometers and a mass of about 80–130 million daltons. While usually thought of as a circular molecule, there is some evidence that chloroplast DNA molecules more often take on a linear shape.
Inverted repeats
Many chloroplast DNAs contain two inverted repeats, which separate a long single copy section (LSC) from a short single copy section (SSC).
While a given pair of inverted repeats is rarely completely identical, the two repeats are always very similar to each other, apparently resulting from concerted evolution. The inverted repeats vary widely in length, ranging from 4,000 to 25,000 base pairs long each and containing as few as four or as many as over 150 genes. Inverted repeats in plants tend to be at the upper end of this range, each being 20,000–25,000 base pairs long. The inverted repeat regions are highly conserved among land plants, and accumulate few mutations. Similar inverted repeats exist in the genomes of cyanobacteria and the other two chloroplast lineages (glaucophyta and rhodophyceae), suggesting that they predate the chloroplast, though some chloroplast DNAs have since lost or flipped the inverted repeats (making them direct repeats). It is possible that the inverted repeats help stabilize the rest of the chloroplast genome, as chloroplast DNAs which have lost some of the inverted repeat segments tend to get rearranged more.
Nucleoids
New chloroplasts may contain up to 100 copies of their DNA, though the number of chloroplast DNA copies decreases to about 15–20 as the chloroplasts age. They are usually packed into nucleoids, which can contain several identical chloroplast DNA rings. Many nucleoids can be found in each chloroplast. In primitive red algae, the chloroplast DNA nucleoids are clustered in the center of the chloroplast, while in green plants and green algae, the nucleoids are dispersed throughout the stroma. Though chloroplast DNA is not associated with true histones, in red algae, similar proteins that tightly pack each chloroplast DNA ring into a nucleoid have been found.
DNA repair
In chloroplasts of the moss Physcomitrella patens, the DNA mismatch repair protein Msh1 interacts with the recombinational repair proteins RecA and RecG to maintain chloroplast genome stability. In chloroplasts of the plant Arabidopsis thaliana, the RecA protein maintains the integrity of the chloroplast's DNA by a process that likely involves the recombinational repair of DNA damage.
DNA replication
The mechanism for chloroplast DNA (cpDNA) replication has not been conclusively determined, but two main models have been proposed. Scientists have attempted to observe chloroplast replication via electron microscopy since the 1970s. The results of the microscopy experiments led to the idea that chloroplast DNA replicates using a double displacement loop (D-loop). As the D-loop moves through the circular DNA, it adopts a theta intermediary form, also known as a Cairns replication intermediate, and completes replication with a rolling circle mechanism. Replication starts at specific points of origin. Multiple replication forks open up, allowing replication machinery to copy the DNA. As replication continues, the forks grow and eventually converge. The new cpDNA structures separate, creating daughter cpDNA chromosomes.
In addition to the early microscopy experiments, this model is also supported by the amounts of deamination seen in cpDNA. Deamination occurs when an amino group is lost; it is a mutation that often results in base changes. When adenine is deaminated, it becomes hypoxanthine. Hypoxanthine can bind to cytosine, and when the hypoxanthine-cytosine (XC) base pair is replicated, it becomes a GC pair (thus, an A → G base change). In cpDNA, there are several A → G deamination gradients. DNA becomes susceptible to deamination events when it is single stranded.
When replication forks form, the strand not being copied is single stranded, and thus at risk for A → G deamination. Therefore, gradients in deamination indicate that replication forks were most likely present and the direction in which they initially opened (the highest gradient is most likely nearest the start site, because that region was single stranded for the longest amount of time). This mechanism is still the leading theory today; however, a second theory suggests that most cpDNA is actually linear and replicates through homologous recombination, forming replication structures similar to the linear and circular DNA structures of bacteriophage T4. It further contends that only a minority of the genetic material is kept in circular chromosomes, while the rest is in branched, linear, or other complex structures. It has been established that some plants, such as maize, have linear cpDNA, and that more species still contain complex structures that scientists do not yet understand. When the original experiments on cpDNA were performed, scientists did notice linear structures; however, they attributed these linear forms to broken circles. If the branched and complex structures seen in cpDNA experiments are real and not artifacts of concatenated circular DNA or broken circles, then a D-loop mechanism of replication is insufficient to explain how those structures would replicate. At the same time, homologous recombination does not explain the multiple A → G gradients seen in plastomes. Because of this failure to explain the deamination gradients, as well as the numerous plant species that have been shown to have circular cpDNA, the predominant theory continues to hold that most cpDNA is circular and most likely replicates via a D-loop mechanism.
Gene content and protein synthesis
The chloroplast genome most commonly includes around 100 genes that code for a variety of things, mostly to do with the protein pipeline and photosynthesis. As in prokaryotes, genes in chloroplast DNA are organized into operons. Unlike prokaryotic DNA molecules, chloroplast DNA molecules contain introns (plant mitochondrial DNAs do too, but not human mtDNAs). Among land plants, the contents of the chloroplast genome are fairly similar.
Chloroplast genome reduction and gene transfer
Over time, many parts of the chloroplast genome were transferred to the nuclear genome of the host, a process called endosymbiotic gene transfer. As a result, the chloroplast genome is heavily reduced compared to that of free-living cyanobacteria. Chloroplasts may contain 60–100 genes, whereas cyanobacteria often have more than 1500 genes in their genome. Recently, a plastid without a genome was found, demonstrating that chloroplasts can lose their genome entirely during the endosymbiotic gene transfer process. Endosymbiotic gene transfer is how we know about the lost chloroplasts in many CASH lineages. Even if a chloroplast is eventually lost, the genes it donated to the former host's nucleus persist, providing evidence for the lost chloroplast's existence. For example, while diatoms (heterokontophytes) now have a red algal derived chloroplast, the presence of many green algal genes in the diatom nucleus provides evidence that the diatom ancestor had a green algal derived chloroplast at some point, which was subsequently replaced by the red chloroplast.
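The logic of this kind of inference can be sketched with a toy example: tally the apparent phylogenetic origin of a host's nuclear genes and look for a footprint too large to be noise. The gene names and lineage labels below are invented purely for illustration; real analyses rely on careful phylogenetic tree building rather than simple best-hit counting.
from collections import Counter

# Hypothetical best-hit lineage assignments for a few nuclear genes of a
# diatom-like host (invented data, for illustration only).
best_hits = {
    "gene001": "red alga",
    "gene002": "green alga",
    "gene003": "red alga",
    "gene004": "green alga",
    "gene005": "cyanobacterium",
    "gene006": "green alga",
    "gene007": "host lineage",
}

counts = Counter(best_hits.values())
total = len(best_hits)
for lineage, n in counts.most_common():
    print(f"{lineage}: {n} genes ({100 * n / total:.0f}%)")

# A sizable green algal fraction in a host whose current chloroplast is
# red algal derived hints at an earlier, since-lost green plastid.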
In land plants, some 11–14% of the DNA in their nuclei can be traced back to the chloroplast, up to 18% in Arabidopsis, corresponding to about 4,500 protein-coding genes. There have been a few recent transfers of genes from the chloroplast DNA to the nuclear genome in land plants. Of the approximately 3,000 proteins found in chloroplasts, some 95% of them are encoded by nuclear genes. Many of the chloroplast's protein complexes consist of subunits from both the chloroplast genome and the host's nuclear genome. As a result, protein synthesis must be coordinated between the chloroplast and the nucleus. The chloroplast is mostly under nuclear control, though chloroplasts can also give out signals regulating gene expression in the nucleus, called retrograde signaling.
Protein synthesis
Protein synthesis within chloroplasts relies on two RNA polymerases. One is coded by the chloroplast DNA; the other is of nuclear origin. The two RNA polymerases may recognize and bind to different kinds of promoters within the chloroplast genome. The ribosomes in chloroplasts are similar to bacterial ribosomes.
Protein targeting and import
Because so many chloroplast genes have been moved to the nucleus, many proteins that would originally have been translated in the chloroplast are now synthesized in the cytoplasm of the plant cell. These proteins must be directed back to the chloroplast and imported through at least two chloroplast membranes. Curiously, around half of the protein products of transferred genes aren't even targeted back to the chloroplast. Many became exaptations, taking on new functions like participating in cell division, protein routing, and even disease resistance. A few chloroplast genes found new homes in the mitochondrial genome—most became nonfunctional pseudogenes, though a few tRNA genes still work in the mitochondrion. Some transferred chloroplast DNA protein products get directed to the secretory pathway: many secondary plastids are bounded by an outermost membrane derived from the host's cell membrane, and are therefore topologically outside of the cell, because reaching the chloroplast from the cytosol requires crossing the cell membrane, which signifies entrance into the extracellular space. In those cases, chloroplast-targeted proteins initially travel along the secretory pathway. Because the cell acquiring a chloroplast already had mitochondria (and peroxisomes, and a cell membrane for secretion), the new chloroplast host had to develop a unique protein targeting system to avoid chloroplast proteins being sent to the wrong organelle. In most, but not all cases, nuclear-encoded chloroplast proteins are translated with a cleavable transit peptide that is added to the N-terminus of the protein precursor. Sometimes the transit sequence is found on the C-terminus of the protein, or within the functional part of the protein.
Transport proteins and membrane translocons
After a chloroplast polypeptide is synthesized on a ribosome in the cytosol, an enzyme specific to chloroplast proteins phosphorylates (adds a phosphate group to) many, but not all, of them in their transit sequences. Phosphorylation helps many proteins bind the polypeptide, keeping it from folding prematurely. This is important because it prevents chloroplast proteins from assuming their active form and carrying out their chloroplast functions in the wrong place—the cytosol. At the same time, they have to keep just enough shape so that they can be recognized by the chloroplast.
These proteins also help the polypeptide get imported into the chloroplast. From here, chloroplast proteins bound for the stroma must pass through two protein complexes—the TOC complex, or translocon on the outer chloroplast membrane, and the TIC complex, or translocon on the inner chloroplast membrane. Chloroplast polypeptide chains probably often travel through the two complexes at the same time, but the TIC complex can also retrieve preproteins lost in the intermembrane space.
Structure
In land plants, chloroplasts are generally lens-shaped, 3–10 μm in diameter and 1–3 μm thick. Corn seedling chloroplasts are about 20 µm³ in volume. Greater diversity in chloroplast shapes exists among the algae, which often contain a single chloroplast that can be shaped like a net (e.g., Oedogonium), a cup (e.g., Chlamydomonas), a ribbon-like spiral around the edges of the cell (e.g., Spirogyra), or slightly twisted bands at the cell edges (e.g., Sirogonium). Some algae have two chloroplasts in each cell; they are star-shaped in Zygnema, or may follow the shape of half the cell in the order Desmidiales. In some algae, the chloroplast takes up most of the cell, with pockets for the nucleus and other organelles; for example, some species of Chlorella have a cup-shaped chloroplast that occupies much of the cell.
All chloroplasts have at least three membrane systems—the outer chloroplast membrane, the inner chloroplast membrane, and the thylakoid system. Chloroplasts that are the product of secondary endosymbiosis may have additional membranes surrounding these three. Inside the outer and inner chloroplast membranes is the chloroplast stroma, a semi-gel-like fluid that makes up much of a chloroplast's volume, and in which the thylakoid system floats.
There are some common misconceptions about the outer and inner chloroplast membranes. The fact that chloroplasts are surrounded by a double membrane is often cited as evidence that they are the descendants of endosymbiotic cyanobacteria. This is often interpreted as meaning the outer chloroplast membrane is the product of the host's cell membrane infolding to form a vesicle to surround the ancestral cyanobacterium—which is not true; both chloroplast membranes are homologous to the cyanobacterium's original double membrane. The chloroplast double membrane is also often compared to the mitochondrial double membrane. This is not a valid comparison—the inner mitochondrial membrane is used to run proton pumps and carry out oxidative phosphorylation to generate ATP energy. The only chloroplast structure that can be considered analogous to it is the internal thylakoid system. Even so, in terms of "in-out", the direction of chloroplast H+ ion flow is opposite to that of oxidative phosphorylation in mitochondria. In addition, in terms of function, the inner chloroplast membrane, which regulates metabolite passage and synthesizes some materials, has no counterpart in the mitochondrion.
Outer chloroplast membrane
The outer chloroplast membrane is a semi-porous membrane that small molecules and ions can easily diffuse across. However, it is not permeable to larger proteins, so chloroplast polypeptides being synthesized in the cell cytoplasm must be transported across the outer chloroplast membrane by the TOC complex, or translocon on the outer chloroplast membrane. The chloroplast membranes sometimes protrude out into the cytoplasm, forming a stromule, or stroma-containing tubule.
Stromules are very rare in chloroplasts, and are much more common in other plastids like chromoplasts and amyloplasts in petals and roots, respectively. They may exist to increase the chloroplast's surface area for cross-membrane transport, because they are often branched and tangled with the endoplasmic reticulum. When they were first observed in 1962, some plant biologists dismissed the structures as artifactual, claiming that stromules were just oddly shaped chloroplasts with constricted regions or dividing chloroplasts. However, there is a growing body of evidence that stromules are functional, integral features of plant cell plastids, not merely artifacts.
Intermembrane space and peptidoglycan wall
Usually, a thin intermembrane space about 10–20 nanometers thick exists between the outer and inner chloroplast membranes. Glaucophyte algal chloroplasts have a peptidoglycan layer between the chloroplast membranes. It corresponds to the peptidoglycan cell wall of their cyanobacterial ancestors, which is located between their two cell membranes. These chloroplasts are called muroplasts (from Latin "mura", meaning "wall"). Other chloroplasts were assumed to have lost the cyanobacterial wall, leaving an intermembrane space between the two chloroplast envelope membranes, but a peptidoglycan layer has since also been found in mosses, lycophytes, and ferns.
Inner chloroplast membrane
The inner chloroplast membrane borders the stroma and regulates passage of materials in and out of the chloroplast. After passing through the TOC complex in the outer chloroplast membrane, polypeptides must pass through the TIC complex (translocon on the inner chloroplast membrane), which is located in the inner chloroplast membrane. In addition to regulating the passage of materials, the inner chloroplast membrane is where fatty acids, lipids, and carotenoids are synthesized.
Peripheral reticulum
Some chloroplasts contain a structure called the chloroplast peripheral reticulum. It is often found in the chloroplasts of C4 plants, though it has also been found in some C3 angiosperms, and even some gymnosperms. The chloroplast peripheral reticulum consists of a maze of membranous tubes and vesicles continuous with the inner chloroplast membrane that extends into the internal stromal fluid of the chloroplast. Its purpose is thought to be to increase the chloroplast's surface area for cross-membrane transport between its stroma and the cell cytoplasm. The small vesicles sometimes observed may serve as transport vesicles to shuttle material between the thylakoids and the intermembrane space.
Stroma
The protein-rich, alkaline, aqueous fluid within the inner chloroplast membrane and outside of the thylakoid space is called the stroma, which corresponds to the cytosol of the original cyanobacterium. Nucleoids of chloroplast DNA, chloroplast ribosomes, the thylakoid system with plastoglobuli, starch granules, and many proteins can be found floating around in it. The Calvin cycle, which fixes CO2 into G3P, takes place in the stroma.
Chloroplast ribosomes
Chloroplasts have their own ribosomes, which they use to synthesize a small fraction of their proteins. Chloroplast ribosomes are about two-thirds the size of cytoplasmic ribosomes (around 17 nm vs 25 nm). They take mRNAs transcribed from the chloroplast DNA and translate them into protein. While chloroplast ribosomes are similar to bacterial ribosomes, chloroplast translation is more complex than translation in bacteria, so chloroplast ribosomes include some chloroplast-unique features.
Small subunit ribosomal RNAs in several Chlorophyta and euglenid chloroplasts lack motifs for Shine-Dalgarno sequence recognition, which is considered essential for translation initiation in most chloroplasts and prokaryotes. Such loss is also rarely observed in other plastids and prokaryotes. An additional 4.5S rRNA with homology to the 3' tail of the 23S rRNA is found in "higher" plants.
Plastoglobuli
Plastoglobuli (singular plastoglobulus; sometimes spelled plastoglobule(s)) are spherical bubbles of lipids and proteins about 45–60 nanometers across. They are surrounded by a lipid monolayer. Plastoglobuli are found in all chloroplasts, but become more common when the chloroplast is under oxidative stress, or when it ages and transitions into a gerontoplast. Plastoglobuli also exhibit a greater size variation under these conditions. They are also common in etioplasts, but decrease in number as the etioplasts mature into chloroplasts.
Plastoglobuli contain both structural proteins and enzymes involved in lipid synthesis and metabolism. They contain many types of lipids, including plastoquinone, vitamin E, carotenoids, and chlorophylls.
Plastoglobuli were once thought to be free-floating in the stroma, but it is now thought that they are permanently attached either to a thylakoid or to another plastoglobulus attached to a thylakoid, a configuration that allows a plastoglobulus to exchange its contents with the thylakoid network. In normal green chloroplasts, the vast majority of plastoglobuli occur singly, attached directly to their parent thylakoid. In old or stressed chloroplasts, plastoglobuli tend to occur in linked groups or chains, still always anchored to a thylakoid. Plastoglobuli form when a bubble appears between the layers of the lipid bilayer of the thylakoid membrane, or bud from existing plastoglobuli—though they never detach and float off into the stroma. Practically all plastoglobuli form on or near the highly curved edges of the thylakoid disks or sheets. They are also more common on stromal thylakoids than on granal ones.
Starch granules
Starch granules are very common in chloroplasts, typically taking up 15% of the organelle's volume, though in some other plastids like amyloplasts, they can be big enough to distort the shape of the organelle. Starch granules are simply accumulations of starch in the stroma, and are not bounded by a membrane. Starch granules appear and grow throughout the day, as the chloroplast synthesizes sugars, and are consumed at night to fuel respiration and continue sugar export into the phloem, though in mature chloroplasts, it is rare for a starch granule to be completely consumed or for a new granule to accumulate. Starch granules vary in composition and location across different chloroplast lineages. In red algae, starch granules are found in the cytoplasm rather than in the chloroplast. In C4 plants, mesophyll chloroplasts, which do not synthesize sugars, lack starch granules.
RuBisCO
The chloroplast stroma contains many proteins, though the most common and important is RuBisCO, which is probably also the most abundant protein on the planet. RuBisCO is the enzyme that fixes CO2 into sugar molecules. In C3 plants, RuBisCO is abundant in all chloroplasts, though in C4 plants, it is confined to the bundle sheath chloroplasts, where the Calvin cycle is carried out.
Pyrenoids
The chloroplasts of some hornworts and algae contain structures called pyrenoids. They are not found in higher plants.
Pyrenoids are roughly spherical and highly refractive bodies which are a site of starch accumulation in plants that contain them. They consist of a matrix opaque to electrons, surrounded by two hemispherical starch plates. The starch is accumulated as the pyrenoids mature. In algae with carbon concentrating mechanisms, the enzyme RuBisCO is found in the pyrenoids. Starch can also accumulate around the pyrenoids when CO2 is scarce. Pyrenoids can divide to form new pyrenoids, or be produced "de novo".
Thylakoid system
Suspended within the chloroplast stroma is the thylakoid system, a highly dynamic collection of small interconnected membranous sacs called thylakoids (sometimes spelled thylakoïds), in whose membranes chlorophyll is found and the light reactions of photosynthesis take place. The word thylakoid comes from the Greek word thylakos, which means "sack". In most vascular plant chloroplasts, the thylakoids are arranged in stacks called grana, though in certain C4 plant chloroplasts and some algal chloroplasts, the thylakoids are free floating.
Thylakoid structure
Using a light microscope, it is just barely possible to see tiny green granules—which were named grana. With electron microscopy, it became possible to see the thylakoid system in more detail, revealing it to consist of stacks of flat thylakoids which made up the grana, and long interconnecting stromal thylakoids which linked different grana. In the transmission electron microscope, thylakoid membranes appear as alternating light-and-dark bands, 8.5 nanometers thick. For a long time, the three-dimensional structure of the thylakoid membrane system was unknown or disputed. Many models have been proposed, the most prevalent being the helical model, in which granum stacks of thylakoids are wrapped by helical stromal thylakoids. Another model, known as the "bifurcation model", which was based on the first electron tomography study of plant thylakoid membranes, depicts the stromal membranes as wide lamellar sheets perpendicular to the grana columns which bifurcate into multiple parallel discs forming the granum-stroma assembly. The helical model was supported by several additional works, but ultimately it was determined in 2019 that features from both the helical and bifurcation models are consolidated by newly discovered left-handed helical membrane junctions. Likely for simplicity, the thylakoid system is still commonly depicted by older "hub and spoke" models where the grana are connected to each other by tubes of stromal thylakoids.
Grana consist of stacks of flattened circular granal thylakoids that resemble pancakes. Each granum can contain anywhere from two to a hundred thylakoids, though grana with 10–20 thylakoids are most common. Wrapped around the grana are multiple parallel right-handed helical stromal thylakoids, also known as frets or lamellar thylakoids. The helices ascend at an angle of ~20°, connecting to each granal thylakoid at a bridge-like slit junction. The stroma lamellae extend as large sheets perpendicular to the grana columns. These sheets are connected to the right-handed helices either directly or through bifurcations that form left-handed helical membrane surfaces. The left-handed helical surfaces have a similar tilt angle to the right-handed helices (~20°), but ¼ the pitch.
Approximately four left-handed helical junctions are present per granum, resulting in a pitch-balanced array of right- and left-handed helical membrane surfaces of different radii and pitch that consolidate the network with minimal surface and bending energies. While different parts of the thylakoid system contain different membrane proteins, the thylakoid membranes are continuous and the thylakoid space they enclose forms a single continuous labyrinth.
Thylakoid composition
Embedded in the thylakoid membranes are important protein complexes which carry out the light reactions of photosynthesis. Photosystem II and photosystem I contain light-harvesting complexes with chlorophyll and carotenoids that absorb light energy and use it to energize electrons. Molecules in the thylakoid membrane use the energized electrons to pump hydrogen ions into the thylakoid space, decreasing the pH and turning it acidic. ATP synthase is a large protein complex that harnesses the concentration gradient of the hydrogen ions in the thylakoid space to generate ATP energy as the hydrogen ions flow back out into the stroma—much like a dam turbine.
There are two types of thylakoids—granal thylakoids, which are arranged in grana, and stromal thylakoids, which are in contact with the stroma. Granal thylakoids are pancake-shaped circular disks about 300–600 nanometers in diameter. Stromal thylakoids are helicoid sheets that spiral around grana. The flat tops and bottoms of granal thylakoids contain only the relatively flat photosystem II protein complex. This allows them to stack tightly, forming grana with many layers of tightly appressed membrane, called granal membrane, increasing stability and surface area for light capture. In contrast, photosystem I and ATP synthase are large protein complexes which jut out into the stroma. They can't fit in the appressed granal membranes, and so are found in the stromal thylakoid membrane—the edges of the granal thylakoid disks and the stromal thylakoids. These large protein complexes may act as spacers between the sheets of stromal thylakoids.
The number of thylakoids and the total thylakoid area of a chloroplast are influenced by light exposure. Shaded chloroplasts contain larger and more numerous grana with more thylakoid membrane area than chloroplasts exposed to bright light, which have smaller and fewer grana and less thylakoid area. Thylakoid extent can change within minutes of light exposure or removal.
Pigments and chloroplast colors
Inside the photosystems embedded in chloroplast thylakoid membranes are various photosynthetic pigments, which absorb and transfer light energy. The types of pigments found are different in various groups of chloroplasts, and are responsible for a wide variety of chloroplast colorations. Paper chromatography of spinach leaf extract separates the pigments present in its chloroplasts into bands of xanthophylls, chlorophyll a, and chlorophyll b.
Chlorophylls
Chlorophyll a is found in all chloroplasts, as well as their cyanobacterial ancestors. Chlorophyll a is a blue-green pigment partially responsible for giving most cyanobacteria and chloroplasts their color. Other forms of chlorophyll exist, such as the accessory pigments chlorophyll b, chlorophyll c, chlorophyll d, and chlorophyll f. Chlorophyll b is an olive green pigment found only in the chloroplasts of plants, green algae, any secondary chloroplasts obtained through the secondary endosymbiosis of a green alga, and a few cyanobacteria.
It is the chlorophylls a and b together that make most plant and green algal chloroplasts green. Chlorophyll c is mainly found in secondary endosymbiotic chloroplasts that originated from a red alga, although it is not found in chloroplasts of red algae themselves. Chlorophyll c is also found in some green algae and cyanobacteria. Chlorophylls d and f are pigments found only in some cyanobacteria.
Carotenoids
In addition to chlorophylls, another group of yellow-orange pigments called carotenoids is also found in the photosystems. There are about thirty photosynthetic carotenoids. They help transfer and dissipate excess energy, and their bright colors sometimes override the chlorophyll green, as during the fall, when the leaves of some land plants change color. β-carotene is a bright red-orange carotenoid found in nearly all chloroplasts, like chlorophyll a. Xanthophylls, especially the orange-red zeaxanthin, are also common. Many other forms of carotenoids exist that are only found in certain groups of chloroplasts.
Phycobilins
Phycobilins are a third group of pigments found in cyanobacteria, and in glaucophyte, red algal, and cryptophyte chloroplasts. Phycobilins come in all colors, though phycoerythrin is one of the pigments that makes many red algae red. Phycobilins often organize into relatively large protein complexes about 40 nanometers across called phycobilisomes. Like photosystem I and ATP synthase, phycobilisomes jut into the stroma, preventing thylakoid stacking in red algal chloroplasts. Cryptophyte chloroplasts and some cyanobacteria don't have their phycobilin pigments organized into phycobilisomes, and keep them in their thylakoid space instead.
Specialized chloroplasts in C4 plants
To fix carbon dioxide into sugar molecules in the process of photosynthesis, chloroplasts use an enzyme called RuBisCO. RuBisCO has trouble distinguishing between carbon dioxide and oxygen, so at high oxygen concentrations, RuBisCO starts accidentally adding oxygen to sugar precursors. This has the result of ATP energy being wasted and CO2 being released, all with no sugar being produced. This is a big problem, since O2 is produced by the initial light reactions of photosynthesis, causing issues down the line in the Calvin cycle which uses RuBisCO. C4 plants evolved a way to solve this—by spatially separating the light reactions and the Calvin cycle. The light reactions, which store light energy in ATP and NADPH, are done in the mesophyll cells of a leaf. The Calvin cycle, which uses the stored energy to make sugar using RuBisCO, is done in the bundle sheath cells, a layer of cells surrounding a vein in a leaf. As a result, chloroplasts in mesophyll cells and bundle sheath cells are specialized for each stage of photosynthesis. In mesophyll cells, chloroplasts are specialized for the light reactions, so they lack RuBisCO, and have normal grana and thylakoids, which they use to make ATP and NADPH, as well as oxygen. They store CO2 in a four-carbon compound, which is why the process is called C4 photosynthesis. The four-carbon compound is then transported to the bundle sheath chloroplasts, where it drops off CO2 and returns to the mesophyll. Bundle sheath chloroplasts do not carry out the light reactions, preventing oxygen from building up in them and disrupting RuBisCO activity.
Because of this, they lack thylakoids organized into grana stacks—though bundle sheath chloroplasts still have free-floating thylakoids in the stroma, where they carry out cyclic electron flow, a light-driven method of synthesizing ATP to power the Calvin cycle without generating oxygen. They lack photosystem II, and only have photosystem I—the only protein complex needed for cyclic electron flow. Because the job of bundle sheath chloroplasts is to carry out the Calvin cycle and make sugar, they often contain large starch grains. Both types of chloroplast contain large amounts of chloroplast peripheral reticulum, which they use to gain more surface area for transporting material in and out of them. Mesophyll chloroplasts have a little more peripheral reticulum than bundle sheath chloroplasts.
Location
Distribution in a plant
Not all cells in a multicellular plant contain chloroplasts. All green parts of a plant contain chloroplasts—the chloroplasts, or more specifically, the chlorophyll in them, are what make the photosynthetic parts of a plant green. The plant cells which contain chloroplasts are usually parenchyma cells, though chloroplasts can also be found in collenchyma tissue. A plant cell which contains chloroplasts is known as a chlorenchyma cell. A typical chlorenchyma cell of a land plant contains about 10 to 100 chloroplasts.
In some plants such as cacti, chloroplasts are found in the stems, though in most plants, chloroplasts are concentrated in the leaves. One square millimeter of leaf tissue can contain half a million chloroplasts. Within a leaf, chloroplasts are mainly found in the mesophyll layers, and in the guard cells of stomata. Palisade mesophyll cells can contain 30–70 chloroplasts per cell, while stomatal guard cells contain only around 8–15 per cell, as well as much less chlorophyll. Chloroplasts can also be found in the bundle sheath cells of a leaf, especially in C4 plants, which carry out the Calvin cycle in their bundle sheath cells. They are often absent from the epidermis of a leaf.
Cellular location
Chloroplast movement
The chloroplasts of plant and algal cells can orient themselves to best suit the available light. In low-light conditions, they will spread out in a sheet—maximizing the surface area to absorb light. Under intense light, they will seek shelter by aligning in vertical columns along the plant cell's cell wall or turning sideways so that light strikes them edge-on. This reduces exposure and protects them from photooxidative damage. This ability to distribute chloroplasts so that they can take shelter behind each other or spread out may be the reason why land plants evolved to have many small chloroplasts instead of a few big ones. Chloroplast movement is considered one of the most closely regulated stimulus-response systems found in plants. Mitochondria have also been observed to follow chloroplasts as they move. In higher plants, chloroplast movement is run by phototropins, blue light photoreceptors also responsible for plant phototropism. In some algae, mosses, ferns, and flowering plants, chloroplast movement is influenced by red light in addition to blue light, though very long red wavelengths inhibit movement rather than speeding it up. Blue light generally causes chloroplasts to seek shelter, while red light draws them out to maximize light absorption.
Studies of Vallisneria gigantea, an aquatic flowering plant, have shown that chloroplasts can begin moving within five minutes of light exposure, though they don't initially show any net directionality. They may move along microfilament tracks, and the fact that the microfilament mesh changes shape to form a honeycomb structure surrounding the chloroplasts after they have moved suggests that microfilaments may help to anchor chloroplasts in place.
Function and chemistry
Guard cell chloroplasts
Unlike most epidermal cells, the guard cells of plant stomata contain relatively well-developed chloroplasts. However, exactly what they do is controversial.
Plant innate immunity
Plants lack specialized immune cells—all plant cells participate in the plant immune response. Chloroplasts, along with the nucleus, cell membrane, and endoplasmic reticulum, are key players in pathogen defense. Due to the chloroplast's role in a plant cell's immune response, pathogens frequently target it. Plants have two main immune responses—the hypersensitive response, in which infected cells seal themselves off and undergo programmed cell death, and systemic acquired resistance, where infected cells release signals warning the rest of the plant of a pathogen's presence. Chloroplasts stimulate both responses by purposely damaging their photosynthetic system, producing reactive oxygen species. High levels of reactive oxygen species will cause the hypersensitive response. The reactive oxygen species also directly kill any pathogens within the cell. Lower levels of reactive oxygen species initiate systemic acquired resistance, triggering defense-molecule production in the rest of the plant. In some plants, chloroplasts are known to move closer to the infection site and the nucleus during an infection.
Chloroplasts can serve as cellular sensors. After detecting stress in a cell, which might be due to a pathogen, chloroplasts begin producing molecules like salicylic acid, jasmonic acid, nitric oxide, and reactive oxygen species, which can serve as defense signals. As cellular signals, reactive oxygen species are unstable molecules, so they probably don't leave the chloroplast, but instead pass on their signal to an unknown second messenger molecule. All these molecules initiate retrograde signaling—signals from the chloroplast that regulate gene expression in the nucleus. In addition to defense signaling, chloroplasts, with the help of the peroxisomes, help synthesize an important defense molecule, jasmonate. Chloroplasts synthesize all the fatty acids in a plant cell—linoleic acid, a fatty acid, is a precursor to jasmonate.
Photosynthesis
One of the main functions of the chloroplast is its role in photosynthesis, the process by which light is transformed into chemical energy, to subsequently produce food in the form of sugars. Water (H2O) and carbon dioxide (CO2) are used in photosynthesis, and sugar and oxygen (O2) are made, using light energy. Photosynthesis is divided into two stages—the light reactions, where water is split to produce oxygen, and the dark reactions, or Calvin cycle, which builds sugar molecules from carbon dioxide. The two phases are linked by the energy carriers adenosine triphosphate (ATP) and nicotinamide adenine dinucleotide phosphate (NADP+).
Light reactions
The light reactions take place on the thylakoid membranes. They take light energy and store it in NADPH, a form of NADP+, and ATP to fuel the dark reactions.
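Taken together, the two stages carry out the overall transformation conventionally summarized with glucose standing in for the sugars ultimately produced (the chloroplast's immediate product is actually the three-carbon sugar G3P, described under the dark reactions below):
6 CO2 + 6 H2O + light energy → C6H12O6 + 6 O2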
Energy carriers
ATP is the phosphorylated version of adenosine diphosphate (ADP), which stores energy in a cell and powers most cellular activities. ATP is the energized form, while ADP is the (partially) depleted form. NADP+ is an electron carrier which ferries high-energy electrons. In the light reactions, it gets reduced, meaning it picks up electrons, becoming NADPH.
Photophosphorylation
Like mitochondria, chloroplasts use the potential energy stored in an H+ (hydrogen ion) gradient to generate ATP energy. The two photosystems capture light energy to energize electrons taken from water, and release them down an electron transport chain. The molecules between the photosystems harness the electrons' energy to pump hydrogen ions into the thylakoid space, creating a concentration gradient, with more hydrogen ions (up to a thousand times as many) inside the thylakoid system than in the stroma. The hydrogen ions in the thylakoid space then diffuse back down their concentration gradient, flowing back out into the stroma through ATP synthase. ATP synthase uses the energy from the flowing hydrogen ions to phosphorylate adenosine diphosphate into adenosine triphosphate, or ATP. Because chloroplast ATP synthase projects out into the stroma, the ATP is synthesized there, in position to be used in the dark reactions.
NADP+ reduction
Electrons are often removed from the electron transport chains to charge NADP+ with electrons, reducing it to NADPH. Like ATP synthase, ferredoxin-NADP+ reductase, the enzyme that reduces NADP+, releases the NADPH it makes into the stroma, right where it is needed for the dark reactions. Because NADP+ reduction removes electrons from the electron transport chains, they must be replaced—the job of photosystem II, which splits water molecules (H2O) to obtain the electrons from its hydrogen atoms.
Cyclic photophosphorylation
While photosystem II photolyzes water to obtain and energize new electrons, photosystem I simply reenergizes depleted electrons at the end of an electron transport chain. Normally, the reenergized electrons are taken by NADP+, though sometimes they can flow back down more H+-pumping electron transport chains to transport more hydrogen ions into the thylakoid space to generate more ATP. This is termed cyclic photophosphorylation because the electrons are recycled. Cyclic photophosphorylation is common in C4 plants, which need more ATP than NADPH.
Dark reactions
The Calvin cycle, also known as the dark reactions, is a series of biochemical reactions that fixes CO2 into G3P sugar molecules and uses the energy and electrons from the ATP and NADPH made in the light reactions. The Calvin cycle takes place in the stroma of the chloroplast. While named "the dark reactions", in most plants, they take place in the light, since the dark reactions are dependent on the products of the light reactions.
Carbon fixation and G3P synthesis
The Calvin cycle starts by using the enzyme RuBisCO to fix CO2 into five-carbon ribulose bisphosphate (RuBP) molecules. The result is unstable six-carbon molecules that immediately break down into three-carbon molecules called 3-phosphoglyceric acid, or 3-PGA. The ATP and NADPH made in the light reactions are used to convert the 3-PGA into glyceraldehyde-3-phosphate, or G3P sugar molecules. Most of the G3P molecules are recycled back into RuBP using energy from more ATP, but one out of every six produced leaves the cycle—the end product of the dark reactions.
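Counting through three turns of the cycle makes this bookkeeping concrete: fixing three CO2 molecules onto three RuBP acceptors yields six G3P, five of which are recycled to regenerate the three RuBP molecules, so one G3P exits per three CO2 fixed. Using the standard textbook stoichiometry (protons and water omitted for simplicity), the net balance is:
3 CO2 + 9 ATP + 6 NADPH → 1 G3P + 9 ADP + 8 Pi + 6 NADP+
The 3:2 ratio of ATP to NADPH consumed is one reason cyclic photophosphorylation, which makes extra ATP without producing NADPH, is useful.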
Sugars and starches
Glyceraldehyde-3-phosphate can double up to form larger sugar molecules like glucose and fructose. These molecules are processed, and from them, the still larger sucrose, a disaccharide commonly known as table sugar, is made, though this process takes place outside of the chloroplast, in the cytoplasm. Alternatively, glucose monomers in the chloroplast can be linked together to make starch, which accumulates into the starch grains found in the chloroplast. Under conditions such as high atmospheric CO2 concentrations, these starch grains may grow very large, distorting the grana and thylakoids. The starch granules displace the thylakoids, but leave them intact. Waterlogged roots can also cause starch buildup in the chloroplasts, possibly due to less sucrose being exported out of the chloroplast (or more accurately, the plant cell). This depletes a plant's free phosphate supply, which indirectly stimulates chloroplast starch synthesis. While linked to low photosynthesis rates, the starch grains themselves may not necessarily interfere significantly with the efficiency of photosynthesis, and might simply be a side effect of another photosynthesis-depressing factor.
Photorespiration
Photorespiration can occur when the oxygen concentration is too high. RuBisCO cannot distinguish between oxygen and carbon dioxide very well, so it can accidentally add O2 instead of CO2 to RuBP. This process reduces the efficiency of photosynthesis—it consumes ATP and oxygen, releases CO2, and produces no sugar. It can waste up to half the carbon fixed by the Calvin cycle. Several mechanisms have evolved in different lineages that raise the carbon dioxide concentration relative to oxygen within the chloroplast, increasing the efficiency of photosynthesis. These mechanisms are called carbon dioxide concentrating mechanisms, or CCMs. They include Crassulacean acid metabolism, C4 carbon fixation, and pyrenoids. Chloroplasts in C4 plants are notable as they exhibit a distinct chloroplast dimorphism.
pH
Because of the H+ gradient across the thylakoid membrane, the interior of the thylakoid is acidic, with a pH around 4, while the stroma is slightly basic, with a pH of around 8. The optimal stroma pH for the Calvin cycle is 8.1, with the reaction nearly stopping when the pH falls below 7.3. CO2 in water can form carbonic acid, which can disturb the pH of isolated chloroplasts, interfering with photosynthesis, even though CO2 is used in photosynthesis. However, chloroplasts in living plant cells are not affected by this as much. Chloroplasts can pump K+ and H+ ions in and out of themselves using a poorly understood light-driven transport system. In the presence of light, the pH of the thylakoid lumen can drop up to 1.5 pH units, while the pH of the stroma can rise by nearly one pH unit.
Amino acid synthesis
Chloroplasts alone make almost all of a plant cell's amino acids in their stroma, except the sulfur-containing ones like cysteine and methionine. Cysteine is made in the chloroplast (the proplastid too), but it is also synthesized in the cytosol and mitochondria, probably because it has trouble crossing membranes to get to where it is needed. The chloroplast is known to make the precursors to methionine, but it is unclear whether the organelle carries out the last leg of the pathway or if it happens in the cytosol.
Other nitrogen compounds
Chloroplasts make all of a cell's purines and pyrimidines—the nitrogenous bases found in DNA and RNA.
They also convert nitrite (NO2−) into ammonia (NH3), which supplies the plant with nitrogen to make its amino acids and nucleotides.
Other chemical products
The plastid is the site of diverse and complex lipid synthesis in plants. The carbon used to form the majority of the lipid is from acetyl-CoA, which is the decarboxylation product of pyruvate. Pyruvate may enter the plastid from the cytosol by passive diffusion through the membrane after production in glycolysis. Pyruvate is also made in the plastid from phosphoenolpyruvate, a metabolite made in the cytosol from pyruvate or PGA. Acetate in the cytosol is unavailable for lipid biosynthesis in the plastid. The typical lengths of fatty acids produced in the plastid are 16 or 18 carbons, with 0–3 cis double bonds.
The biosynthesis of fatty acids from acetyl-CoA primarily requires two enzymes. Acetyl-CoA carboxylase creates malonyl-CoA, used in both the first step and the extension steps of synthesis. Fatty acid synthase (FAS) is a large complex of enzymes and cofactors, including acyl carrier protein (ACP), which holds the acyl chain as it is synthesized. The initiation of synthesis begins with the condensation of malonyl-ACP with acetyl-CoA to produce ketobutyryl-ACP. Two reductions involving the use of NADPH and one dehydration create butyryl-ACP. Extension of the fatty acid comes from repeated cycles of malonyl-ACP condensation, reduction, and dehydration.
Other lipids are derived from the methyl-erythritol phosphate (MEP) pathway and consist of gibberellins, sterols, abscisic acid, phytol, and innumerable secondary metabolites.
Differentiation, replication, and inheritance
Chloroplasts are a special type of plant cell organelle called a plastid, though the two terms are sometimes used interchangeably. There are many other types of plastids, which carry out various functions. All chloroplasts in a plant are descended from undifferentiated proplastids found in the zygote, or fertilized egg. Proplastids are commonly found in an adult plant's apical meristems. Chloroplasts do not normally develop from proplastids in root tip meristems—instead, the formation of starch-storing amyloplasts is more common.
In shoots, proplastids from shoot apical meristems can gradually develop into chloroplasts in photosynthetic leaf tissues as the leaf matures, if exposed to the required light. This process involves invaginations of the inner plastid membrane, forming sheets of membrane that project into the internal stroma. These membrane sheets then fold to form thylakoids and grana.
If angiosperm shoots are not exposed to the required light for chloroplast formation, proplastids may develop into an etioplast stage before becoming chloroplasts. An etioplast is a plastid that lacks chlorophyll and has inner membrane invaginations that form a lattice of tubes in its stroma, called a prolamellar body. While etioplasts lack chlorophyll, they have a stock of a yellow chlorophyll precursor. Within a few minutes of light exposure, the prolamellar body begins to reorganize into stacks of thylakoids, and chlorophyll starts to be produced. This process, where the etioplast becomes a chloroplast, takes several hours. Gymnosperms do not require light to form chloroplasts. Light, however, does not guarantee that a proplastid will develop into a chloroplast. Whether a proplastid develops into a chloroplast or some other kind of plastid is mostly controlled by the nucleus and is largely influenced by the kind of cell it resides in.
Plastid interconversion
Plastid differentiation is not permanent; in fact, many interconversions are possible. Chloroplasts may be converted to chromoplasts, which are pigment-filled plastids responsible for the bright colors seen in flowers and ripe fruit. Starch-storing amyloplasts can also be converted to chromoplasts, and it is possible for proplastids to develop straight into chromoplasts. Chromoplasts and amyloplasts can also become chloroplasts, like what happens when a carrot or a potato is illuminated. If a plant is injured, or something else causes a plant cell to revert to a meristematic state, chloroplasts and other plastids can turn back into proplastids. Chloroplast, amyloplast, chromoplast, proplastid, etc., are not absolute states—intermediate forms are common.
Division
Most chloroplasts in a photosynthetic cell do not develop directly from proplastids or etioplasts. In fact, a typical shoot meristematic plant cell contains only 7–20 proplastids. These proplastids differentiate into chloroplasts, which divide to create the 30–70 chloroplasts found in a mature photosynthetic plant cell. If the cell divides, chloroplast division provides the additional chloroplasts to partition between the two daughter cells.
In single-celled algae, chloroplast division is the only way new chloroplasts are formed. There is no proplastid differentiation—when an algal cell divides, its chloroplast divides along with it, and each daughter cell receives a mature chloroplast. Almost all chloroplasts in a cell divide, rather than a small group of rapidly dividing chloroplasts. Chloroplasts have no definite S-phase—their DNA replication is not synchronized or limited to that of their host cells. Much of what we know about chloroplast division comes from studying organisms like Arabidopsis and the red alga Cyanidioschyzon merolae.
The division process starts when the proteins FtsZ1 and FtsZ2 assemble into filaments and, with the help of a protein ARC6, form a structure called a Z-ring within the chloroplast's stroma. The Min system manages the placement of the Z-ring, ensuring that the chloroplast is cleaved more or less evenly. The protein MinD prevents FtsZ from linking up and forming filaments. Another protein, ARC3, may also be involved, but it is not very well understood. These proteins are active at the poles of the chloroplast, preventing Z-ring formation there, but near the center of the chloroplast, MinE inhibits them, allowing the Z-ring to form.
Next, the two plastid-dividing rings, or PD rings, form. The inner plastid-dividing ring is located on the inner side of the chloroplast's inner membrane, and is formed first. The outer plastid-dividing ring is found wrapped around the outer chloroplast membrane. It consists of filaments about 5 nanometers across, arranged in rows 6.4 nanometers apart, and shrinks to squeeze the chloroplast. This is when chloroplast constriction begins. In a few species like Cyanidioschyzon merolae, chloroplasts have a third plastid-dividing ring located in the chloroplast's intermembrane space.
Late into the constriction phase, dynamin proteins assemble around the outer plastid-dividing ring, helping provide force to squeeze the chloroplast. Meanwhile, the Z-ring and the inner plastid-dividing ring break down. During this stage, the many copies of chloroplast DNA floating around in the stroma are partitioned and distributed to the two forming daughter chloroplasts.
Later, the dynamins migrate under the outer plastid-dividing ring, into direct contact with the chloroplast's outer membrane, to cleave the chloroplast into two daughter chloroplasts. A remnant of the outer plastid-dividing ring remains floating between the two daughter chloroplasts, and a remnant of the dynamin ring remains attached to one of the daughter chloroplasts. Of the five or six rings involved in chloroplast division, only the outer plastid-dividing ring is present for the entire constriction and division phase—while the Z-ring forms first, constriction does not begin until the outer plastid-dividing ring forms.
Regulation
In species of algae that contain a single chloroplast, regulation of chloroplast division is extremely important to ensure that each daughter cell receives a chloroplast—chloroplasts can't be made from scratch. In organisms like plants, whose cells contain multiple chloroplasts, coordination is looser and less important. It is likely that chloroplast and cell division are somewhat synchronized, though the mechanisms for it are mostly unknown.
Light has been shown to be a requirement for chloroplast division. Chloroplasts can grow and progress through some of the constriction stages under poor-quality green light, but are slow to complete division—they require exposure to bright white light to complete division. Spinach leaves grown under green light have been observed to contain many large dumbbell-shaped chloroplasts. Exposure to white light can stimulate these chloroplasts to divide and reduce the population of dumbbell-shaped chloroplasts.
Chloroplast inheritance
Like mitochondria, chloroplasts are usually inherited from a single parent. Biparental chloroplast inheritance—where plastid genes are inherited from both parent plants—occurs at very low levels in some flowering plants. Many mechanisms prevent biparental chloroplast DNA inheritance, including selective destruction of chloroplasts or their genes within the gamete or zygote, and chloroplasts from one parent being excluded from the embryo. Parental chloroplasts can be sorted so that only one type is present in each offspring.
Gymnosperms, such as pine trees, mostly pass on chloroplasts paternally, while flowering plants often inherit chloroplasts maternally. Flowering plants were once thought to only inherit chloroplasts maternally. However, there are now many documented cases of angiosperms inheriting chloroplasts paternally. Angiosperms which pass on chloroplasts maternally have many ways to prevent paternal inheritance. Most of them produce sperm cells that do not contain any plastids. There are many other documented mechanisms that prevent paternal inheritance in these flowering plants, such as different rates of chloroplast replication within the embryo. Among angiosperms, paternal chloroplast inheritance is observed more often in hybrids than in offspring from parents of the same species. This suggests that incompatible hybrid genes might interfere with the mechanisms that prevent paternal inheritance.
Transplastomic plants
Recently, chloroplasts have caught the attention of developers of genetically modified crops. Since, in most flowering plants, chloroplasts are not inherited from the male parent, transgenes in these plastids cannot be disseminated by pollen. This makes plastid transformation a valuable tool for the creation and cultivation of genetically modified plants that are biologically contained, thus posing significantly lower environmental risks.
This biological containment strategy is therefore suitable for establishing the coexistence of conventional and organic agriculture. While the reliability of this mechanism has not yet been studied for all relevant crop species, recent results in tobacco plants are promising, showing a containment failure rate for transplastomic plants of 3 in 1,000,000. References External links Chloroplast – Cell Centered Database Co-Extra research on chloroplast transformation NCBI full chloroplast genome Organelles Photosynthesis Endosymbiotic events
2,887
6,420
https://en.wikipedia.org/wiki/Corona%20Borealis
Corona Borealis
Corona Borealis is a small constellation in the Northern Celestial Hemisphere. It is one of the 48 constellations listed by the 2nd-century astronomer Ptolemy, and remains one of the 88 modern constellations. Its brightest stars form a semicircular arc. Its Latin name, inspired by its shape, means "northern crown". In classical mythology Corona Borealis generally represented the crown given by the god Dionysus to the Cretan princess Ariadne and set by him in the heavens. Other cultures likened the pattern to a circle of elders, an eagle's nest, a bear's den or a smokehole. Ptolemy also listed a southern counterpart, Corona Australis, with a similar pattern. The brightest star is the magnitude 2.2 Alpha Coronae Borealis. The yellow supergiant R Coronae Borealis is the prototype of a rare class of giant stars—the R Coronae Borealis variables—that are extremely hydrogen deficient, and thought to result from the merger of two white dwarfs. T Coronae Borealis, also known as the Blaze Star, is another unusual type of variable star known as a recurrent nova. Normally of magnitude 10, it last flared up to magnitude 2 in 1946. ADS 9731 and Sigma Coronae Borealis are multiple star systems with six and five components respectively. Five star systems have been found to have Jupiter-sized exoplanets. Abell 2065 is a highly concentrated galaxy cluster one billion light-years from the Solar System containing more than 400 members, and is itself part of the larger Corona Borealis Supercluster. Characteristics Covering 179 square degrees and hence 0.433% of the sky, Corona Borealis ranks 73rd of the 88 modern constellations by area. Its position in the Northern Celestial Hemisphere means that the whole constellation is visible to observers north of 50°S. It is bordered by Boötes to the north and west, Serpens Caput to the south, and Hercules to the east. The three-letter abbreviation for the constellation, as adopted by the International Astronomical Union in 1922, is "CrB". The official constellation boundaries, as set by Belgian astronomer Eugène Delporte in 1930, are defined by a polygon of eight segments. In the equatorial coordinate system, the declination coordinates of these borders lie between 39.71° and 25.54°. It has a counterpart—Corona Australis—in the Southern Celestial Hemisphere. Features Stars The seven stars that make up the constellation's distinctive crown-shaped pattern are all 4th-magnitude stars except for the brightest of them, Alpha Coronae Borealis. The other six stars are Theta, Beta, Gamma, Delta, Epsilon and Iota Coronae Borealis. The German cartographer Johann Bayer gave twenty stars in Corona Borealis Bayer designations from Alpha to Upsilon in his 1603 star atlas Uranometria. Zeta Coronae Borealis was noted to be a double star by later astronomers and its components designated Zeta1 and Zeta2. John Flamsteed did likewise with Nu Coronae Borealis; classed by Bayer as a single star, it was noted to be two close stars by Flamsteed. He named them 20 and 21 Coronae Borealis in his catalogue, alongside the designations Nu1 and Nu2 respectively. Chinese astronomers deemed nine stars to make up the asterism, adding Pi and Rho Coronae Borealis. Within the constellation's borders, there are 37 stars brighter than or equal to apparent magnitude 6.5. Alpha Coronae Borealis (officially named Alphecca by the IAU, but sometimes also known as Gemma) appears as a blue-white star of magnitude 2.2.
In fact, it is an Algol-type eclipsing binary that varies by 0.1 magnitude with a period of 17.4 days. The primary is a white main-sequence star of spectral type A0V that is 2.91 times the mass of the Sun and 57 times as luminous, and is surrounded by a debris disk out to a radius of around 60 astronomical units (AU). The secondary companion is a yellow main-sequence star of spectral type G5V that is slightly smaller than the Sun, with around 0.9 times its diameter. Lying 75±0.5 light-years from Earth, Alphecca is believed to be a member of the Ursa Major Moving Group of stars that have a common motion through space. Located 112±3 light-years away, Beta Coronae Borealis or Nusakan is a spectroscopic binary system whose two components are separated by 10 AU and orbit each other every 10.5 years. The brighter component is a rapidly oscillating Ap star, pulsating with a period of 16.2 minutes. It is of spectral type A5V, with a surface temperature of around 7980 K and a radius of around 2.6 solar radii. The smaller star is of spectral type F2V with a surface temperature of around 6750 K. Near Nusakan is Theta Coronae Borealis, a binary system that shines with a combined magnitude of 4.13 located 380±20 light-years distant. The brighter component, Theta Coronae Borealis A, is a blue-white star that spins extremely rapidly—at a rate of around 393 km per second. A Be star, it is surrounded by a debris disk. Flanking Alpha to the east is Gamma Coronae Borealis, yet another binary star system, whose components orbit each other every 92.94 years and are roughly as far apart from each other as the Sun and Neptune; a rough consistency check on these figures via Kepler's third law appears below. The brighter component has been classed as a Delta Scuti variable star, though this view is not universal. The components are main sequence stars of spectral types B9V and A3V. Located 170±2 light-years away, 4.06-magnitude Delta Coronae Borealis is a yellow giant star of spectral type G3.5III with a surface temperature of 5180 K. For most of its existence, Delta Coronae Borealis was a blue-white main-sequence star of spectral type B before it ran out of hydrogen fuel in its core. Its luminosity and spectrum suggest it has just crossed the Hertzsprung gap, having finished burning core hydrogen and just begun burning hydrogen in a shell that surrounds the core. Zeta Coronae Borealis is a double star with two blue-white components 6.3 arcseconds apart that can be readily separated at 100x magnification. The primary is of magnitude 5.1 and the secondary is of magnitude 6.0. Nu Coronae Borealis is an optical double, whose components are a similar distance from Earth but have different radial velocities, hence are assumed to be unrelated. The primary, Nu1 Coronae Borealis, is a red giant of spectral type M2III and magnitude 5.2, lying 640±30 light-years distant, and the secondary, Nu2 Coronae Borealis, is an orange-hued giant star of spectral type K5III and magnitude 5.4, estimated to be 590±30 light-years away. Sigma Coronae Borealis, on the other hand, is a true multiple star system divisible by small amateur telescopes. It is actually a complex system composed of two stars around as massive as the Sun that orbit each other every 1.14 days, orbited by a third Sun-like star every 726 years. The fourth and fifth components are a binary red dwarf system that is 14,000 AU distant from the other three stars.
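The figures quoted for Gamma Coronae Borealis permit the promised rough consistency check via Kepler's third law. Taking the Sun–Neptune comparison to mean a semi-major axis of about 30 AU (an assumed round number, not a value given in the source), the implied total system mass in solar units is

$$M_\text{total} \approx \frac{a^3}{P^2} = \frac{30^3}{92.94^2} \approx \frac{27{,}000}{8{,}638} \approx 3\,M_\odot,$$

which is the right order of magnitude (a few solar masses) for a pair of B9V and A3V main-sequence stars, so the quoted period and separation are mutually consistent.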
ADS 9731 is an even rarer multiple system in the constellation, composed of six stars, two of which are spectroscopic binaries. Corona Borealis is home to two remarkable variable stars. T Coronae Borealis is a cataclysmic variable star also known as the Blaze Star. Normally placid around magnitude 10—it has a minimum of 10.2 and maximum of 9.9—it brightens to magnitude 2 in a period of hours, caused by a nuclear chain reaction and the subsequent explosion. T Coronae Borealis is one of a handful of stars called recurrent novae, which include T Pyxidis and U Scorpii. An outburst of T Coronae Borealis was first recorded in 1866; its second recorded outburst was in February 1946. T Coronae Borealis is a binary star with a red-hued giant primary and a white dwarf secondary, the two stars orbiting each other over a period of approximately 8 months. R Coronae Borealis is a yellow-hued variable supergiant star, over 7000 light-years from Earth, and the prototype of a class of stars known as R Coronae Borealis variables. Normally of magnitude 6, its brightness periodically drops as low as magnitude 15 and then slowly increases over the next several months. These declines in magnitude come about as dust that has been ejected from the star obscures it. Direct imaging with the Hubble Space Telescope shows extensive dust clouds out to a radius of around 2000 AU from the star, corresponding with a stream of fine dust (composed of grains 5 nm in diameter) associated with the star's stellar wind and coarser dust (composed of grains with a diameter of around 0.14 µm) ejected periodically. There are several other variables of reasonable brightness for amateur astronomers to observe, including three Mira-type long-period variables: S Coronae Borealis ranges between magnitudes 5.8 and 14.1 over a period of 360 days. Located around 1946 light-years distant, it shines with a luminosity 16,643 times that of the Sun and has a surface temperature of 3033 K. One of the reddest stars in the sky, V Coronae Borealis is a cool star with a surface temperature of 2877 K that shines with a luminosity 102,831 times that of the Sun and is a remote 8810 light-years distant from Earth. Varying between magnitudes 6.9 and 12.6 over a period of 357 days, it is located near the junction of the border of Corona Borealis with Hercules and Boötes. Located 1.5° northeast of Tau Coronae Borealis, W Coronae Borealis ranges between magnitudes 7.8 and 14.3 over a period of 238 days. Another red giant, RR Coronae Borealis, is an M3-type semiregular variable star that varies between magnitudes 7.3 and 8.2 over 60.8 days. RS Coronae Borealis is yet another semiregular variable red giant, which ranges between magnitudes 8.7 and 11.6 over 332 days. It is unusual in that it is a red star with a high proper motion (greater than 50 milliarcseconds a year). Meanwhile, U Coronae Borealis is an Algol-type eclipsing binary star system whose magnitude varies between 7.66 and 8.79 over a period of 3.45 days. TY Coronae Borealis is a pulsating white dwarf (of ZZ Ceti type), which is around 70% as massive as the Sun, yet has only 1.1% of its diameter. Discovered in 1990, UW Coronae Borealis is a low-mass X-ray binary system composed of a star less massive than the Sun and a neutron star surrounded by an accretion disk that draws material from the companion star.
It varies in brightness in an unusually complex manner: the two stars orbit each other every 111 minutes, yet there is another cycle of 112.6 minutes, which corresponds to the orbit of the disk around the degenerate star. The beat period of 5.5 days indicates the time the accretion disk—which is asymmetrical—takes to precess around the star. Extrasolar planetary systems Extrasolar planets have been confirmed in five star systems, four of which were found by the radial velocity method. The spectrum of Epsilon Coronae Borealis was analysed for seven years from 2005 to 2012, revealing a planet around 6.7 times as massive as Jupiter orbiting every 418 days at an average distance of around 1.3 AU. Epsilon itself is an orange giant of spectral type K2III. Kappa Coronae Borealis is a spectral type K1IV orange subgiant nearly twice as massive as the Sun; around it lies a dust debris disk, and one planet with a period of 3.4 years. The dimensions of the debris disk indicate it is likely there is a second substellar companion. Omicron Coronae Borealis is a K-type clump giant with one confirmed planet that orbits every 187 days—one of the two least massive planets known around clump giants. HD 145457 is an orange giant of spectral type K0III found to have one planet. Discovered by the Doppler method in 2010, the planet takes 176 days to complete an orbit. XO-1 is a magnitude 11 yellow main-sequence star of spectral type G1V with a mass and radius similar to the Sun. In 2006 the hot Jupiter exoplanet XO-1b was discovered orbiting XO-1 by the transit method using the XO Telescope. Roughly the size of Jupiter, it completes an orbit around its star every three days. The discovery of a Jupiter-sized planetary companion was announced in 1997 via analysis of the radial velocity of Rho Coronae Borealis, a yellow main sequence star and Solar analog of spectral type G0V, around 57 light-years distant from Earth. More accurate measurement of data from the Hipparcos satellite subsequently showed it instead to be a low-mass star somewhere between 100 and 200 times the mass of Jupiter. Possible stable planetary orbits in the habitable zone were calculated for the binary star Eta Coronae Borealis, which is composed of two stars—yellow main sequence stars of spectral type G1V and G3V respectively—similar in mass and spectrum to the Sun. No planet has been found, but a brown dwarf companion about 63 times as massive as Jupiter with a spectral type of L8 was discovered at a distance of 3640 AU from the pair in 2001. Deep-sky objects Corona Borealis contains few galaxies observable with amateur telescopes. NGC 6085 and 6086 are a faint spiral and elliptical galaxy respectively, close enough to each other to be seen in the same visual field through a telescope. Abell 2142 is a huge (six million light-year diameter), X-ray luminous galaxy cluster that is the result of an ongoing merger between two galaxy clusters. It has a redshift of 0.0909 (meaning it is moving away from us at 27,250 km/s) and a visual magnitude of 16.0. It is about 1.2 billion light-years away. Another galaxy cluster in the constellation, RX J1532.9+3021, is approximately 3.9 billion light-years from Earth. At the cluster's center is a large elliptical galaxy containing one of the most massive and most powerful supermassive black holes yet discovered.
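The recession velocity quoted above for Abell 2142 follows from its redshift by the first-order relation between redshift and radial velocity (adequate at this modest redshift, where relativistic corrections enter only at the few-percent level):

$$v \approx cz = 0.0909 \times 299{,}792\ \text{km/s} \approx 27{,}250\ \text{km/s}.$$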
Abell 2065 is a highly concentrated galaxy cluster containing more than 400 members, the brightest of which are 16th magnitude; the cluster is more than one billion light-years from Earth. On a larger scale still, Abell 2065, along with Abell 2061, Abell 2067, Abell 2079, Abell 2089, and Abell 2092, makes up the Corona Borealis Supercluster. Another galaxy cluster, Abell 2162, is a member of the Hercules Superclusters. Mythology In Greek mythology, Corona Borealis was linked to the legend of Theseus and the Minotaur. It was generally considered to represent a crown given by Dionysus to Ariadne, the daughter of Minos of Crete, after she had been abandoned by the Athenian prince Theseus. When she wore the crown at her marriage to Dionysus, he placed it in the heavens to commemorate their wedding. An alternate version has the besotted Dionysus give the crown to Ariadne, who in turn gives it to Theseus after he arrives in Crete to kill the Minotaur, which the Cretans fed with tribute demanded from Athens. The hero uses the crown's light to escape the labyrinth after disposing of the creature, and Dionysus later sets it in the heavens. The Latin author Hyginus linked it to a crown or wreath worn by Bacchus (Dionysus) to disguise his appearance when first approaching Mount Olympus and revealing himself to the gods, having been previously hidden as yet another child of Jupiter's trysts with a mortal, in this case Semele. Corona Borealis was one of the 48 constellations mentioned in the Almagest of classical astronomer Ptolemy. In Mesopotamia, Corona Borealis was associated with the goddess Nanaya. In Welsh mythology, it was called Caer Arianrhod, "the Castle of the Silver Circle", and was the heavenly abode of the Lady Arianrhod. To the ancient Balts, Corona Borealis was known as Darželis, the "flower garden." The Arabs called the constellation Alphecca (a name later given to Alpha Coronae Borealis), meaning "separated" or "broken up", a reference to the resemblance of the stars of Corona Borealis to a loose string of jewels. This was also interpreted as a broken dish. Among the Bedouins, the constellation was known as "the dish/bowl of the poor people". The Skidi people of North America saw the stars of Corona Borealis as representing a council of stars whose chief was Polaris. The constellation also symbolised the smokehole over a fireplace, which conveyed their messages to the gods, as well as how chiefs should come together to consider matters of importance. The Shawnee people saw the stars as the Heavenly Sisters, who descended from the sky every night to dance on earth. Alphecca signifies the youngest and most comely sister, who was seized by a hunter who transformed into a field mouse to get close to her. They married, though she later returned to the sky, her heartbroken husband and son following her later. The Mi'kmaq of eastern Canada saw Corona Borealis as Mskegwǒm, the den of the celestial bear (Alpha, Beta, Gamma and Delta Ursae Majoris). Polynesian peoples often recognized Corona Borealis; the people of the Tuamotus named it Na Kaua-ki-tokerau and probably Te Hetu. The constellation was likely called Kaua-mea in Hawaii, Rangawhenua in New Zealand, and Te Wale-o-Awitu in the Cook Islands atoll of Pukapuka. Its name in Tonga was uncertain; it was either called Ao-o-Uvea or Kau-kupenga. In Australian Aboriginal astronomy, the constellation is called womera ("the boomerang") due to the shape of the stars.
The Wailwun people of northwestern New South Wales saw Corona Borealis as mullion wollai, "eagle's nest", with Altair and Vega—each called mullion—the pair of eagles accompanying it. The Wardaman people of northern Australia held the constellation to be a gathering point where Men's Law, Women's Law and the Law of both sexes come together to consider matters of existence. Later references Corona Borealis was renamed Corona Firmiana in honour of the Archbishop of Salzburg in the 1730 Atlas Mercurii Philosophicii Firmamentum Firminianum Descriptionem by Corbinianus Thomas, but this was not taken up by subsequent cartographers. The constellation was featured as a main plot ingredient in the short story "Hypnos" by H. P. Lovecraft, published in 1923; it is the object of fear of one of the protagonists. Finnish band Cadacross released an album titled Corona Borealis in 2002. See also Corona Borealis (Chinese astronomy) Notes References Cited texts External links Warburg Institute Iconographic Database (medieval and early modern images of Corona Borealis) Constellations Constellations listed by Ptolemy Northern constellations Ariadne
2,901
6,436
https://en.wikipedia.org/wiki/Chamaeleon
Chamaeleon
Chamaeleon () is a small constellation in the deep southern sky. It is named after the chameleon, a kind of lizard. It was first defined in the 16th century. History Chamaeleon was one of twelve constellations created by Petrus Plancius from the observations of Pieter Dirkszoon Keyser and Frederick de Houtman. It first appeared on a 35-cm diameter celestial globe published in 1597 (or 1598) in Amsterdam by Plancius and Jodocus Hondius. Johann Bayer was the first uranographer to put Chamaeleon in a celestial atlas. It was one of many constellations created by European explorers in the 15th and 16th centuries out of unfamiliar Southern Hemisphere stars. Features Stars There are four bright stars in Chamaeleon that form a compact diamond shape approximately 10 degrees from the south celestial pole and about 15 degrees south of Acrux, along the axis formed by Acrux and Gamma Crucis. Alpha Chamaeleontis is a white-hued star of magnitude 4.1, 63 light-years from Earth. Beta Chamaeleontis is a blue-white hued star of magnitude 4.2, 271 light-years from Earth. Gamma Chamaeleontis is a red-hued giant star of magnitude 4.1, 413 light-years from Earth. The other bright star in Chamaeleon is Delta Chamaeleontis, a wide double star. The brighter star is Delta2 Chamaeleontis, a blue-hued star of magnitude 4.4. Delta1 Chamaeleontis, the dimmer component, is an orange-hued giant star of magnitude 5.5. They both lie about 350 light-years away. Chamaeleon is also the location of Cha 110913, a unique dwarf star or proto-solar system. Deep-sky objects In 1999, a nearby open cluster was discovered centered on the star η Chamaeleontis. The cluster, known as either the Eta Chamaeleontis cluster or Mamajek 1, is 8 million years old, and lies 316 light-years from Earth. The constellation contains a number of molecular clouds (the Chamaeleon dark clouds) that are forming low-mass T Tauri stars. The cloud complex lies some 400 to 600 light-years from Earth, and contains tens of thousands of solar masses of gas and dust. The most prominent cluster of T Tauri stars and young B-type stars is in the Chamaeleon I cloud, and is associated with the reflection nebula IC 2631. Chamaeleon contains one planetary nebula, NGC 3195, which is fairly faint. It appears in a telescope at about the same apparent size as Jupiter. Equivalents In Chinese astronomy, the stars that form Chamaeleon were classified as the Little Dipper (小斗, Xiǎodǒu) among the Southern Asterisms (近南極星區, Jìnnánjíxīngqū) by Xu Guangqi. Chamaeleon is sometimes also called the Frying Pan in Australia. See also Chamaeleon (Chinese astronomy) IAU-recognized constellations Citations References External links The Deep Photographic Guide to the Constellations: Chamaeleon The clickable Chamaeleon "The eta Chamaeleontis Cluster: A Remarkable New Nearby Young Open Cluster" (Mamajek, Lawson, & Feigelson 1999) "WEBDA open cluster database entry for Mamajek 1" Star Tales – Chamaeleon Southern constellations Constellations listed by Petrus Plancius Dutch celestial cartography in the Age of Discovery Astronomy in the Dutch Republic 1590s in the Dutch Republic
2,914
6,437
https://en.wikipedia.org/wiki/Cholesterol
Cholesterol
Cholesterol is an organic molecule of the lipid class; specifically, it is a sterol (or modified steroid). Cholesterol is biosynthesized by all animal cells and is an essential structural component of animal cell membranes. When chemically isolated, it is a yellowish crystalline solid. Cholesterol also serves as a precursor for the biosynthesis of steroid hormones, bile acids and vitamin D. Cholesterol is the principal sterol synthesized by all animals. In vertebrates, hepatic cells typically produce the greatest amounts. It is absent among prokaryotes (bacteria and archaea), although there are some exceptions, such as Mycoplasma, which require cholesterol for growth. François Poulletier de la Salle first identified cholesterol in solid form in gallstones in 1769. However, it was not until 1815 that chemist Michel Eugène Chevreul named the compound "cholesterine". Etymology The word cholesterol comes from Ancient Greek chole- 'bile' and stereos 'solid', followed by the chemical suffix -ol for an alcohol. Physiology Cholesterol is essential for all animal life, with each cell capable of synthesizing it by way of a complex 37-step process. This begins with the mevalonate or HMG-CoA reductase pathway, the target of statin drugs, which encompasses the first 18 steps. This is followed by 19 additional steps to convert the resulting lanosterol into cholesterol. A human male weighing 68 kg (150 lb) normally synthesizes about 1 gram (1,000 mg) of cholesterol per day, and his body contains about 35 g, mostly contained within the cell membranes. Typical daily cholesterol dietary intake for a man in the United States is 307 mg. Most ingested cholesterol is esterified, which causes it to be poorly absorbed by the gut. The body also compensates for absorption of ingested cholesterol by reducing its own cholesterol synthesis. For these reasons, cholesterol in food, seven to ten hours after ingestion, has little, if any, effect on concentrations of cholesterol in the blood. However, during the first seven hours after ingestion of cholesterol, as absorbed fats are being distributed around the body within extracellular water by the various lipoproteins (which transport all fats in the water outside cells), the concentrations increase. Plants make cholesterol in very small amounts. In larger quantities they produce phytosterols, chemically similar substances which can compete with cholesterol for reabsorption in the intestinal tract, thus potentially reducing cholesterol reabsorption. When intestinal lining cells absorb phytosterols in place of cholesterol, they usually excrete the phytosterol molecules back into the GI tract, an important protective mechanism. The intake of naturally occurring phytosterols, which encompass plant sterols and stanols, ranges between ≈200 and 300 mg/day depending on eating habits. Specially designed vegetarian experimental diets have been produced yielding upwards of 700 mg/day. Function Membranes Cholesterol composes about 30% of all animal cell membranes. It is required to build and maintain membranes and modulates membrane fluidity over the range of physiological temperatures. The hydroxyl group of each cholesterol molecule interacts with water molecules surrounding the membrane, as do the polar heads of the membrane phospholipids and sphingolipids, while the bulky steroid and the hydrocarbon chain are embedded in the membrane, alongside the nonpolar fatty-acid chain of the other lipids.
Through the interaction with the phospholipid fatty-acid chains, cholesterol increases membrane packing, which both alters membrane fluidity and maintains membrane integrity so that animal cells do not need to build cell walls (like plants and most bacteria). The membrane remains stable and durable without being rigid, allowing animal cells to change shape and animals to move. The structure of the tetracyclic ring of cholesterol contributes to the fluidity of the cell membrane, as the molecule is in a trans conformation making all but the side chain of cholesterol rigid and planar. In this structural role, cholesterol also reduces the permeability of the plasma membrane to neutral solutes, hydrogen ions, and sodium ions. Substrate presentation Cholesterol regulates the biological process of substrate presentation and the enzymes that use substrate presentation as a mechanism of their activation. Phospholipase D2 (PLD2) is a well-defined example of an enzyme activated by substrate presentation. The enzyme is palmitoylated, causing it to traffic to cholesterol-dependent lipid domains sometimes called "lipid rafts". The substrate of phospholipase D is phosphatidylcholine (PC), which is unsaturated and of low abundance in lipid rafts. PC localizes to the disordered region of the cell along with the polyunsaturated lipid phosphatidylinositol 4,5-bisphosphate (PIP2). PLD2 has a PIP2 binding domain. When PIP2 concentration in the membrane increases, PLD2 leaves the cholesterol-dependent domains and binds to PIP2, where it then gains access to its substrate PC and commences catalysis based on substrate presentation. Signaling Cholesterol is also implicated in cell signaling processes, assisting in the formation of lipid rafts in the plasma membrane, which brings receptor proteins in close proximity with high concentrations of second messenger molecules. In multiple layers, cholesterol and phospholipids, both electrical insulators, can facilitate speed of transmission of electrical impulses along nerve tissue. For many neuron fibers, a myelin sheath, rich in cholesterol since it is derived from compacted layers of Schwann cell or oligodendrocyte membranes, provides insulation for more efficient conduction of impulses. Demyelination (loss of myelin) is believed to be part of the basis for multiple sclerosis. Cholesterol binds to and affects the gating of a number of ion channels such as the nicotinic acetylcholine receptor, GABAA receptor, and the inward-rectifier potassium channel. Cholesterol also activates the estrogen-related receptor alpha (ERRα), and may be the endogenous ligand for the receptor. The constitutively active nature of the receptor may be explained by the fact that cholesterol is ubiquitous in the body. Inhibition of ERRα signaling by reduction of cholesterol production has been identified as a key mediator of the effects of statins and bisphosphonates on bone, muscle, and macrophages. On the basis of these findings, it has been suggested that ERRα should be de-orphanized and classified as a receptor for cholesterol. Chemical precursor Within cells, cholesterol is also a precursor molecule for several biochemical pathways. For example, it is the precursor molecule for the synthesis of vitamin D in the calcium metabolism and all steroid hormones, including the adrenal gland hormones cortisol and aldosterone, as well as the sex hormones progesterone, estrogens, and testosterone, and their derivatives. Epidermis The stratum corneum is the outermost layer of the epidermis.
It is composed of terminally differentiated and enucleated corneocytes that reside within a lipid matrix, like "bricks and mortar." Together with ceramides and free fatty acids, cholesterol forms the lipid mortar, a water-impermeable barrier that prevents evaporative water loss. As a general rule of thumb, the epidermal lipid matrix is composed of an equimolar mixture of ceramides (~50% by weight), cholesterol (~25% by weight), and free fatty acids (~15% by weight), with smaller quantities of other lipids also being present. Cholesterol sulfate reaches its highest concentration in the granular layer of the epidermis. Steroid sulfatase then decreases its concentration in the stratum corneum, the outermost layer of the epidermis. The relative abundance of cholesterol sulfate in the epidermis varies across different body sites, with the heel of the foot having the lowest concentration. Metabolism Cholesterol is recycled in the body. The liver excretes cholesterol into biliary fluids, which are then stored in the gallbladder, which then excretes them in a non-esterified form (via bile) into the digestive tract. Typically, about 50% of the excreted cholesterol is reabsorbed by the small intestine back into the bloodstream. Biosynthesis and regulation Biosynthesis All animal cells (exceptions exist within the invertebrates) manufacture cholesterol, for both membrane structure and other uses, with relative production rates varying by cell type and organ function. About 80% of total daily cholesterol production occurs in the liver and the intestines; other sites of higher synthesis rates include the brain, the adrenal glands, and the reproductive organs. Synthesis within the body starts with the mevalonate pathway, where two molecules of acetyl CoA condense to form acetoacetyl-CoA. This is followed by a second condensation between acetyl CoA and acetoacetyl-CoA to form 3-hydroxy-3-methylglutaryl CoA (HMG-CoA). This molecule is then reduced to mevalonate by the enzyme HMG-CoA reductase. Production of mevalonate is the rate-limiting and irreversible step in cholesterol synthesis and is the site of action for statins (a class of cholesterol-lowering drugs). Mevalonate is finally converted to isopentenyl pyrophosphate (IPP) through two phosphorylation steps and one decarboxylation step that requires ATP. Three molecules of isopentenyl pyrophosphate condense to form farnesyl pyrophosphate through the action of geranyl transferase. Two molecules of farnesyl pyrophosphate then condense to form squalene by the action of squalene synthase in the endoplasmic reticulum. Oxidosqualene cyclase then cyclizes squalene to form lanosterol. Finally, lanosterol is converted to cholesterol via either of two pathways, the Bloch pathway or the Kandutsch-Russell pathway. The final 19 steps to cholesterol involve NADPH and oxygen to help oxidize methyl groups for removal of carbons, mutases to move alkene groups, and NADH to help reduce ketones. Konrad Bloch and Feodor Lynen shared the Nobel Prize in Physiology or Medicine in 1964 for their discoveries concerning some of the mechanisms and methods of regulation of cholesterol and fatty acid metabolism. Regulation of cholesterol synthesis Biosynthesis of cholesterol is directly regulated by the cholesterol levels present, though the homeostatic mechanisms involved are only partly understood. A higher intake of cholesterol from food leads to a net decrease in endogenous production, whereas a lower intake from food has the opposite effect.
The main regulatory mechanism is the sensing of intracellular cholesterol in the endoplasmic reticulum by the proteins SREBP-1 and SREBP-2 (sterol regulatory element-binding proteins 1 and 2). In the presence of cholesterol, SREBP is bound to two other proteins: SCAP (SREBP cleavage-activating protein) and INSIG-1. When cholesterol levels fall, INSIG-1 dissociates from the SREBP-SCAP complex, which allows the complex to migrate to the Golgi apparatus. Here SREBP is cleaved by S1P and S2P (site-1 protease and site-2 protease), two enzymes that are activated by SCAP when cholesterol levels are low. The cleaved SREBP then migrates to the nucleus and acts as a transcription factor to bind to the sterol regulatory element (SRE), which stimulates the transcription of many genes. Among these are the low-density lipoprotein (LDL) receptor and HMG-CoA reductase. The LDL receptor scavenges circulating LDL from the bloodstream, whereas HMG-CoA reductase leads to an increase in endogenous production of cholesterol. A large part of this signaling pathway was clarified by Dr. Michael S. Brown and Dr. Joseph L. Goldstein in the 1970s. In 1985, they received the Nobel Prize in Physiology or Medicine for their work. Their subsequent work shows how the SREBP pathway regulates the expression of many genes that control lipid formation and metabolism and body fuel allocation. Cholesterol synthesis can also be turned off when cholesterol levels are high. HMG-CoA reductase contains both a cytosolic domain (responsible for its catalytic function) and a membrane domain. The membrane domain senses signals for its degradation. Increasing concentrations of cholesterol (and other sterols) cause a change in this domain's oligomerization state, which makes it more susceptible to destruction by the proteasome. This enzyme's activity can also be reduced by phosphorylation by an AMP-activated protein kinase. Because this kinase is activated by AMP, which is produced when ATP is hydrolyzed, it follows that cholesterol synthesis is halted when ATP levels are low. Plasma transport and regulation of absorption As an isolated molecule, cholesterol is only minimally soluble in water. Because of this, it dissolves in blood at exceedingly small concentrations. To be transported effectively, cholesterol is instead packaged within lipoproteins, complex discoidal particles with exterior amphiphilic proteins and lipids, whose outward-facing surfaces are water-soluble and inward-facing surfaces are lipid-soluble. This allows it to travel through the blood via emulsification. Unbound cholesterol, being amphipathic, is transported in the monolayer surface of the lipoprotein particle along with phospholipids and proteins. Cholesterol esters bound to fatty acid, on the other hand, are transported within the fatty hydrophobic core of the lipoprotein, along with triglyceride. There are several types of lipoproteins in the blood. In order of increasing density, they are chylomicrons, very-low-density lipoprotein (VLDL), intermediate-density lipoprotein (IDL), low-density lipoprotein (LDL), and high-density lipoprotein (HDL). Lower protein/lipid ratios make for less dense lipoproteins. Cholesterol within different lipoproteins is identical, although some is carried as its native "free" alcohol form (with the cholesterol-OH group facing the water surrounding the particles), while the rest is carried as fatty acyl esters, also known as cholesterol esters, within the particles.
Lipoprotein particles are organized by complex apolipoproteins, typically 80–100 different proteins per particle, which can be recognized and bound by specific receptors on cell membranes, directing their lipid payload into specific cells and tissues currently ingesting these fat transport particles. These surface receptors serve as unique molecular signatures, which then help determine fat distribution delivery throughout the body. Chylomicrons, the least dense cholesterol transport molecules, contain apolipoprotein B-48, apolipoprotein C, and apolipoprotein E (the principal cholesterol carrier in the brain) in their shells. Chylomicrons carry fats from the intestine to muscle and other tissues in need of fatty acids for energy or fat production. Unused cholesterol remains in more cholesterol-rich chylomicron remnants, and is taken up from there into the bloodstream by the liver. VLDL molecules are produced by the liver from triacylglycerol and cholesterol which was not used in the synthesis of bile acids. These molecules contain apolipoprotein B100 and apolipoprotein E in their shells, and can be degraded by lipoprotein lipase on the artery wall to IDL. This arterial wall cleavage allows absorption of triacylglycerol and increases the concentration of circulating cholesterol. IDL molecules are then consumed in two processes: half is metabolized by HTGL and taken up by the LDL receptor on the liver cell surfaces, while the other half continues to lose triacylglycerols in the bloodstream until they become cholesterol-laden LDL particles. LDL particles are the major blood cholesterol carriers. Each one contains approximately 1,500 molecules of cholesterol ester. LDL molecule shells contain just one molecule of apolipoprotein B100, recognized by LDL receptors in peripheral tissues. Upon binding of apolipoprotein B100, many LDL receptors concentrate in clathrin-coated pits. Both LDL and its receptor form vesicles within a cell via endocytosis. These vesicles then fuse with a lysosome, where the lysosomal acid lipase enzyme hydrolyzes the cholesterol esters. The cholesterol can then be used for membrane biosynthesis or esterified and stored within the cell, so as not to interfere with the cell membranes. LDL receptors are used up during cholesterol absorption, and their synthesis is regulated by SREBP, the same protein that controls the synthesis of cholesterol de novo, according to its presence inside the cell. A cell with abundant cholesterol will have its LDL receptor synthesis blocked, to prevent new cholesterol in LDL molecules from being taken up. Conversely, LDL receptor synthesis proceeds when a cell is deficient in cholesterol. When this process becomes unregulated, LDL molecules without receptors begin to appear in the blood. These LDL molecules are oxidized and taken up by macrophages, which become engorged and form foam cells. These foam cells often become trapped in the walls of blood vessels and contribute to atherosclerotic plaque formation. Differences in cholesterol homeostasis affect the development of early atherosclerosis (carotid intima-media thickness). These plaques are the main causes of heart attacks, strokes, and other serious medical problems, leading to the association of so-called LDL cholesterol (actually a lipoprotein) with "bad" cholesterol. HDL particles are thought to transport cholesterol back to the liver, either for excretion or for other tissues that synthesize hormones, in a process known as reverse cholesterol transport (RCT).
Large numbers of HDL particles correlate with better health outcomes, whereas low numbers of HDL particles are associated with atheromatous disease progression in the arteries. Metabolism, recycling and excretion Cholesterol is susceptible to oxidation and easily forms oxygenated derivatives called oxysterols. Three different mechanisms can form these: autoxidation, secondary oxidation to lipid peroxidation, and cholesterol-metabolizing enzyme oxidation. A great interest in oxysterols arose when they were shown to exert inhibitory actions on cholesterol biosynthesis. This finding became known as the "oxysterol hypothesis". Additional roles for oxysterols in human physiology include their participation in bile acid biosynthesis, function as transport forms of cholesterol, and regulation of gene transcription. In biochemical experiments, radiolabelled forms of cholesterol, such as tritiated cholesterol, are used. These derivatives undergo degradation upon storage, and it is essential to purify cholesterol prior to use. Cholesterol can be purified using small Sephadex LH-20 columns. Cholesterol is oxidized by the liver into a variety of bile acids. These, in turn, are conjugated with glycine, taurine, glucuronic acid, or sulfate. A mixture of conjugated and nonconjugated bile acids, along with cholesterol itself, is excreted from the liver into the bile. Approximately 95% of the bile acids are reabsorbed from the intestines, and the remainder are lost in the feces. The excretion and reabsorption of bile acids forms the basis of the enterohepatic circulation, which is essential for the digestion and absorption of dietary fats. Under certain circumstances, when more concentrated, as in the gallbladder, cholesterol crystallises and is the major constituent of most gallstones (lecithin and bilirubin gallstones also occur, but less frequently). Every day, up to 1 g of cholesterol enters the colon. This cholesterol originates from the diet, bile, and desquamated intestinal cells, and can be metabolized by the colonic bacteria. Cholesterol is converted mainly into coprostanol, a nonabsorbable sterol that is excreted in the feces. Although cholesterol is a steroid generally associated with mammals, the human pathogen Mycobacterium tuberculosis is able to completely degrade this molecule and contains a large number of genes that are regulated by its presence. Many of these cholesterol-regulated genes are homologues of fatty acid β-oxidation genes, but have evolved in such a way as to bind large steroid substrates like cholesterol. Dietary sources Animal fats are complex mixtures of triglycerides, with lesser amounts of both the phospholipids and cholesterol molecules from which all animal (and human) cell membranes are constructed. Since all animal cells manufacture cholesterol, all animal-based foods contain cholesterol in varying amounts. Major dietary sources of cholesterol include red meat, egg yolks and whole eggs, liver, kidney, giblets, fish oil, and butter. Human breast milk also contains significant quantities of cholesterol. Plant cells synthesize cholesterol as a precursor for other compounds, such as phytosterols and steroidal glycoalkaloids, with cholesterol remaining in plant foods only in minor amounts or absent. Some plant foods, such as avocado, flax seeds and peanuts, contain phytosterols, which compete with cholesterol for absorption in the intestines, and reduce the absorption of both dietary and bile cholesterol.
A typical diet contributes on the order of 0.2 gram of phytosterols, which is not enough to have a significant impact on blocking cholesterol absorption. Phytosterol intake can be supplemented through the use of phytosterol-containing functional foods or dietary supplements that are recognized as having potential to reduce levels of LDL-cholesterol. Medical guidelines and recommendations In 2015, the United States Department of Agriculture Dietary Guidelines Advisory Committee (DGAC) recommended that Americans eat as little dietary cholesterol as possible, because most foods that are rich in cholesterol are also high in saturated fat and thereby may increase the risk of cardiovascular disease. A 2013 report by the American Heart Association and the American College of Cardiology recommended focusing on healthy dietary patterns rather than specific cholesterol limits, as these are hard for clinicians and consumers to implement. They recommend the DASH and Mediterranean diets, which are low in cholesterol. A 2017 review by the American Heart Association recommends switching saturated fats for polyunsaturated fats to reduce cardiovascular disease risk. Some supplemental guidelines have recommended doses of phytosterols in the 1.6–3.0 grams per day range (Health Canada, EFSA, ATP III, FDA), and a recent meta-analysis demonstrated a 12% reduction in LDL-cholesterol at a mean dose of 2.1 grams per day. However, the benefits of a diet supplemented with phytosterols have also been questioned. Clinical significance Hypercholesterolemia According to the lipid hypothesis, elevated levels of cholesterol in the blood lead to atherosclerosis, which may increase the risk of heart attack, stroke, and peripheral artery disease. Since higher blood LDL – especially higher LDL concentrations and smaller LDL particle size – contributes to this process more than the cholesterol content of the HDL particles, LDL particles are often termed "bad cholesterol". High concentrations of functional HDL, which can remove cholesterol from cells and atheromas, offer protection and are commonly referred to as "good cholesterol". These balances are mostly genetically determined, but can be changed by body composition, medications, diet, and other factors. A 2007 study demonstrated that blood total cholesterol levels have an exponential effect on cardiovascular and total mortality, with the association more pronounced in younger subjects. Because cardiovascular disease is relatively rare in the younger population, the impact of high cholesterol on health is larger in older people. Elevated levels of the lipoprotein fractions LDL, IDL and VLDL, rather than the total cholesterol level, correlate with the extent and progress of atherosclerosis. Conversely, the total cholesterol can be within normal limits, yet be made up primarily of small LDL and small HDL particles, under which conditions atheroma growth rates are high. A post hoc analysis of the IDEAL and the EPIC prospective studies found an association between high levels of HDL cholesterol (adjusted for apolipoprotein A-I and apolipoprotein B) and increased risk of cardiovascular disease, casting doubt on the cardioprotective role of "good cholesterol". About one in 250 individuals can have a genetic mutation for the LDL cholesterol receptor that causes them to have familial hypercholesterolemia. Inherited high cholesterol can also include genetic mutations in the PCSK9 gene and the gene for apolipoprotein B.
Elevated cholesterol levels are treated with a strict diet consisting of foods low in saturated fat, free of trans fat, and low in cholesterol, often followed by one of various hypolipidemic agents, such as statins, fibrates, cholesterol absorption inhibitors, monoclonal antibody therapy (PCSK9 inhibitors), nicotinic acid derivatives or bile acid sequestrants. There are several international guidelines on the treatment of hypercholesterolaemia. Human trials using HMG-CoA reductase inhibitors, known as statins, have repeatedly confirmed that changing lipoprotein transport patterns from unhealthy to healthier patterns significantly lowers cardiovascular disease event rates, even for people with cholesterol values currently considered low for adults. Studies have shown that reducing LDL cholesterol levels by about 38.7 mg/dL (1 mmol/L) with the use of statins can reduce cardiovascular disease and stroke risk by about 21%. Studies have also found that statins reduce atheroma progression. As a result, people with a history of cardiovascular disease may derive benefit from statins irrespective of their cholesterol levels (total cholesterol below 5.0 mmol/L [193 mg/dL]), and in men without cardiovascular disease, there is benefit from lowering abnormally high cholesterol levels ("primary prevention"). Primary prevention in women was originally practiced only by extension of the findings in studies on men, since, in women, none of the large statin trials conducted prior to 2007 demonstrated a significant reduction in overall mortality or in cardiovascular endpoints. Meta-analyses have demonstrated significant reductions in all-cause and cardiovascular mortality, without significant heterogeneity by sex. The 1987 report of the National Cholesterol Education Program Adult Treatment Panel suggests the total blood cholesterol level should be: below 200 mg/dL, normal blood cholesterol; 200–239 mg/dL, borderline-high; over 240 mg/dL, high cholesterol. The American Heart Association provides a similar set of guidelines for total (fasting) blood cholesterol levels and risk for heart disease. Statins are effective in lowering LDL cholesterol and are widely used for primary prevention in people at high risk of cardiovascular disease, as well as in secondary prevention for those who have developed cardiovascular disease. More current testing methods determine LDL ("bad") and HDL ("good") cholesterol separately, allowing cholesterol analysis to be more nuanced. The desirable LDL level is considered to be less than 100 mg/dL (2.6 mmol/L), although a newer upper limit of 70 mg/dL (1.8 mmol/L) can be considered in higher-risk individuals based on some of the above-mentioned trials. A ratio of total cholesterol to HDL—another useful measure—of far less than 5:1 is thought to be healthier. Total cholesterol is defined as the sum of HDL, LDL, and VLDL. Usually, only the total, HDL, and triglycerides are measured. For cost reasons, the VLDL is usually estimated as one-fifth of the triglycerides and the LDL is estimated using the Friedewald formula (or a variant): estimated LDL = [total cholesterol] − [total HDL] − [estimated VLDL]; a short sketch of this arithmetic appears below. Direct LDL measures are used when triglycerides exceed 400 mg/dL, since the estimated VLDL and LDL have more error above that level. In the Framingham Heart Study, each 10 mg/dL (0.26 mmol/L) increase in total cholesterol levels increased 30-year overall mortality by 5% and CVD mortality by 9%.
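The panel arithmetic just described—VLDL estimated as one-fifth of triglycerides, LDL derived by the Friedewald formula, and total cholesterol binned against the 1987 Adult Treatment Panel cut-offs—can be sketched in a few lines of Python. This is a minimal illustration rather than a clinical tool; the function names are invented for the example, and the 38.67 mg/dL per mmol/L conversion factor comes from cholesterol's molar mass (~386.65 g/mol) rather than from the text above.

# Minimal sketch of the lipid-panel arithmetic described above (hypothetical helper names).
# All concentrations are in mg/dL unless noted otherwise.

MGDL_PER_MMOLL = 38.67  # cholesterol: 1 mmol/L is about 38.67 mg/dL

def estimate_ldl(total_chol, hdl, triglycerides):
    """Friedewald estimate: LDL = total - HDL - VLDL, with VLDL ~ triglycerides / 5."""
    if triglycerides > 400:
        # The estimate is unreliable here; direct LDL measurement is used instead.
        raise ValueError("Friedewald formula not valid above 400 mg/dL triglycerides")
    vldl = triglycerides / 5  # cost-saving estimate of VLDL
    return total_chol - hdl - vldl

def classify_total(total_chol):
    """Bin total cholesterol per the 1987 NCEP Adult Treatment Panel cut-offs."""
    if total_chol < 200:
        return "normal"
    if total_chol < 240:
        return "borderline-high"
    return "high"

# Worked example: total 220, HDL 50, triglycerides 150 mg/dL
ldl = estimate_ldl(220, 50, 150)                 # 220 - 50 - 30 = 140.0 mg/dL
print(ldl, classify_total(220))                  # 140.0 borderline-high
print(round(ldl / MGDL_PER_MMOLL, 2), "mmol/L")  # 3.62 mmol/L

On the same arithmetic, the statin figure quoted above (a 38.7 mg/dL reduction in LDL) corresponds to a reduction of 1 mmol/L, which is how such trial results are commonly reported.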
In the same Framingham cohort, subjects over the age of 50 showed an 11% increase in overall mortality, and a 14% increase in cardiovascular disease mortality, per 1 mg/dL (0.026 mmol/L) per year drop in total cholesterol levels. The researchers attributed this phenomenon to reverse causation, whereby the disease itself increases risk of death, as well as changes a myriad of factors, such as weight loss and the inability to eat, which lower serum cholesterol. This effect was also shown in men of all ages and women over 50 in the Vorarlberg Health Monitoring and Promotion Programme. These groups were more likely to die of cancer, liver diseases, and mental diseases with very low total cholesterol, of 186 mg/dL (4.8 mmol/L) and lower. This result indicates the low-cholesterol effect occurs even among younger respondents, contradicting the previous assessment among cohorts of older people that this is a marker for frailty occurring with age. Hypocholesterolemia Abnormally low levels of cholesterol are termed hypocholesterolemia. Research into the causes of this state is relatively limited, but some studies suggest a link with depression, cancer, and cerebral hemorrhage. In general, the low cholesterol levels seem to be a consequence, rather than a cause, of an underlying illness. A genetic defect in cholesterol synthesis causes Smith–Lemli–Opitz syndrome, which is often associated with low plasma cholesterol levels. Hyperthyroidism, or any other endocrine disturbance which causes upregulation of the LDL receptor, may result in hypocholesterolemia. Cholesterol testing The American Heart Association recommends testing cholesterol every 4–6 years for people aged 20 years or older. A separate set of American Heart Association guidelines issued in 2013 indicates that people taking statin medications should have their cholesterol tested 4–12 weeks after their first dose and then every 3–12 months thereafter. For men ages 45 to 65 and women ages 55 to 65, a cholesterol test should occur every 1–2 years, and for seniors over age 65, an annual test should be performed. A blood sample taken by a healthcare professional from an arm vein after 12 hours of fasting is used to measure a lipid profile for a) total cholesterol, b) HDL cholesterol, c) LDL cholesterol, and d) triglycerides. Results may be expressed as "calculated", indicating a calculation of total cholesterol, HDL, and triglycerides. Cholesterol levels are considered "normal" or "desirable" if a person has a total cholesterol of 5.2 mmol/L or less (200 mg/dL), an HDL value of more than 1 mmol/L (40 mg/dL; "the higher, the better"), an LDL value of less than 2.6 mmol/L (100 mg/dL), and a triglycerides level of less than 1.7 mmol/L (150 mg/dL). Blood cholesterol in people with lifestyle, aging, or cardiovascular risk factors, such as diabetes mellitus, hypertension, family history of coronary artery disease, or angina, is evaluated at different levels. Cholesteric liquid crystals Some cholesterol derivatives (among other simple cholesteric lipids) are known to generate the liquid crystalline "cholesteric phase". The cholesteric phase is, in fact, a chiral nematic phase, and it changes colour when its temperature changes. This makes cholesterol derivatives useful for indicating temperature in liquid-crystal display thermometers and in temperature-sensitive paints.
Stereoisomers Cholesterol has 256 stereoisomers that arise from its eight stereocenters (2^8 = 256), although only two of the stereoisomers are of biochemical significance (nat-cholesterol and ent-cholesterol, for natural and enantiomer, respectively), and only one occurs naturally (nat-cholesterol). See also Arcus senilis "Cholesterol ring" in the eyes Cardiovascular disease Cholesterol embolism Cholesterol total synthesis Familial hypercholesterolemia Hypercholesterolemia "High Cholesterol" Hypocholesterolemia "Low Cholesterol" Janus-faced molecule List of cholesterol in foods Niemann–Pick disease Type C Oxycholesterol Remnant cholesterol References External links Cholestanes GABAA receptor positive allosteric modulators Lipid disorders Neurosteroids Nutrition Receptor agonists Sterols
2,915
6,444
https://en.wikipedia.org/wiki/Cleopatra%20%28disambiguation%29
Cleopatra (disambiguation)
Cleopatra (69–30 BC) was the last active Ptolemaic ruler of Egypt before it became a Roman province. Cleopatra may also refer to: Given name From the Greek name Κλεοπάτρα (Kleopatra) meaning "glory of the father", derived from κλέος (kleos) meaning "glory" combined with πατήρ (pater) meaning "father" (genitive πατρός). Cleopatra (given name), a list of people and fictional characters Cleopatra (Greek singer) (born 1963), represented Greece in the 1992 Eurovision Song Contest Cleopatra (Greek myth), a list of mythological figures Film Cleopatra (1912 film), silent film by Helen Gardner Cleopatra (1917 film), American film by J. Gordon Edwards Cleopatra (1928 film), short film Cleopatra (1934 film), American film by Cecil B. DeMille Cleopatra (1963 film), American film by Joseph L. Mankiewicz Cleopatra (1970 film), Japanese anime film Cleopatra (2003 film), Argentine film by Eduardo Mignogna Cleopatra (2005 film), South Indian Tamil film Cleopatra (2007 film), Brazilian film by Júlio Bressane Cleopatra (2013 film), South Indian Malayalam film Literature Cleopatra (Rider Haggard novel) (1889) Cleopatra (Gardner novel), a 1962 novel by Jeffrey K. Gardner La Cleopatra (poem), an epic poem by Girolamo Graziani the title character of Cleopatra in Space, an American graphic novel series for children by Mike Maihack Music Classical music Cleopatra (Cimarosa), a 1789 opera seria by Domenico Cimarosa Cleopatra (Rossi), an 1876 opera by Lauro Rossi Cleopatra, an opera by Johann Mattheson Cleopatra, a composition by Luigi Mancinelli Cleopatra, a symphonic poem by George Whitefield Chadwick Popular music Cleopatra Records, an American record label Cleopatra (group), a British girl group Albums Cleopatra (album), a 2016 album by The Lumineers Cleopatra (1963 soundtrack), a soundtrack by Alex North Cleopatra, a 2004 album by Isabel Bayrakdarian Handel: Cleopatra, a 2011 album by Natalie Dessay Songs "Cleopatra" (Frankie Avalon song) (1963) "Cleopatra" (Jerome Kern song) (1917) "Cleopatra" (Samira Efendi song), Azerbaijan's 2020 Eurovision song submission "Cleopatra" (The Lumineers song) (2016) "Cleopatra" (Weezer song) (2014) "Cleopatra (I've Got to Get You Off My Mind)", a song by The Tennors "Cleopatra", a song by Adam and the Ants from their 1979 album Dirk Wears White Sox "Cleopatra", a song by Nico Fidenco "Cleopatra", a song by David Vendetta Paintings Cleopatra (Artemisia Gentileschi, Ferrara), by Artemisia Gentileschi, c. 1620 Cleopatra (Artemisia Gentileschi, Milan), by Artemisia Gentileschi, 1613 or 1621–1622 Cleopatra (Artemisia Gentileschi, Rome), by Artemisia Gentileschi, c.
1633–35 Places Cleopatra (neighborhood), a neighborhood of Alexandria, Egypt Cleopatra, Kentucky, United States, an unincorporated community Cleopatra, Missouri, United States, an unincorporated community Cleopatra (crater), an impact crater on Venus Plants and animals Cleopatra (horse), an American racehorse Cleopatra (gastropod), a genus of freshwater snails Gonepteryx cleopatra or cleopatra, a species of butterfly Neoguillauminia cleopatra, a species of tree from New Caledonia Ships HMS Cleopatra, various Royal Navy ships Cleopatra, an East India Company paddle frigate built in 1839 and sunk by a tropical cyclone in 1847 Cleopatra (cylinder ship), a vessel constructed to convey Cleopatra's Needle from Alexandria to London in 1877 a mixed passenger liner and animal carrier, originally named Cleopatra, which sank in 1898 a World War II Victory cargo ship renamed Cleopatra in 1956 Television Cleopatra (miniseries), a 1999 American miniseries produced by Hallmark Entertainment Cleopatra 2525, an American science fiction television series The Cleopatras, a 1983 British series Cleopatra in Space, an animated television series from DreamWorks Animation Television Other uses Cleopatra (cigarette), an Egyptian brand See also Foxxy Cleopatra, a character in Austin Powers in Goldmember Cleopatra Algemene Studentenvereniging Groningen, a student association in Groningen, the Netherlands Cleopatra's Needle, a pair of Egyptian obelisks Kleopatra (disambiguation) Cleo (disambiguation)
2,920
6,508
https://en.wikipedia.org/wiki/Cyril
Cyril
Cyril (also Cyrillus or Cyryl) is a masculine given name. It is derived from the Greek name Κύριλλος (Kýrillos), meaning 'lordly, masterful', which in turn derives from Greek κυριος (kýrios) 'lord'. There are various variant forms of the name Cyril such as Cyrill, Cyrille, Ciril, Kirill, Kiryl, Kirillos, Kyrylo, Kiril, Kiro, and Kyrill. It may also refer to: Christian patriarchs or bishops Cyril of Jerusalem (c. 313 – 386), theologian and bishop Cyril of Alexandria (c. 376 – 444), Patriarch of Alexandria Cyril the Philosopher (see Saints Cyril and Methodius), 9th century Greek missionary, co-invented the Slavic alphabet, translated the Bible into Old Church Slavonic Pope Cyril II of Alexandria reigned 1078–1092 Greek Patriarch Cyril II of Alexandria reigned in the 12th century Cyril of Turaw (1130–1182), Belorussian bishop and Orthodox saint Pope Cyril III of Alexandria reigned 1235–1243 Cyril, Metropolitan of Moscow (died 1572) Cyril Lucaris (Patriarch Cyril I of Constantinople), reigned for six terms between 1612 and 1638 Cyril II of Constantinople, patriarch in 1633, 1635–1636, 1638–1639 Patriarch Cyril III of Constantinople, patriarch in 1652 and 1654 Cyril IV of Constantinople, patriarch 1711–1713 Cyril V Zaim, Melkite patriarch of Antioch (died 1720) Cyril VI Tanas, Melkite patriarch of Antioch 1724–1760 Patriarch Cyril V of Constantinople, patriarch in 1748–1751, 1752–1757 Cyril VII Siaj, Melkite patriarch of Antioch 1794–1796 Patriarch Cyril VI of Constantinople, patriarch in 1813–1818 Patriarch Cyril II of Jerusalem, reigned 1845–1875 Patriarch Cyril VII of Constantinople, patriarch in 1855–1860 Pope Cyril IV of Alexandria reigned 1854–1861 Pope Cyril V of Alexandria reigned 1874–1921 Cyril VIII Jaha, Melkite patriarch of Antioch 1902–1916 Cyril IX Moghabghab, Melkite patriarch of Antioch 1925–1946 Patriarch Cyril of Bulgaria, reigned 1953–1971 Pope Cyril VI of Alexandria, reigned 1959–1971 Other individuals Cyrillus, 5th century Greek jurist Cyril Abiteboul (born 1977), French motor racing engineer and manager, formerly the Managing Director of Renault Sport F1 Team Cyril Almeida, Pakistani journalist Cyril Eugene Attygalle, Sri Lankan Sinhala politician Cyril Benson, founder of British company Bensons for Beds Cyril Bourlon de Rouvre (born 1945), French businessman and politician Sir Cyril Burt (1883–1971), psychologist Cyril Connolly (1903–1974), English literary critic and writer Cyril Delevanti (1889–1975), British actor Cyril Despres (born 1974), French motorcycle rider Cyril De Zoysa (1896–1978), Sri Lankan businessman and Buddhist revivalist Cyril Dissanayaka, Sri Lankan Sinhala senior police officer Cyril Dodd (1844–1913), British politician Cyril Domoraud (born 1971), Ivorian football player (senior career 1992–2008) who played for the Côte d'Ivoire national team (1995–2006) Cyril Fernando (1895–1974), Sri Lankan Sinhala clinician and researcher Cyril Fletcher (1913–2005), English comedian, actor and businessman Cyril Gautier (born 1987), French racing cyclist Cyril Goulden (1897–1981), Welsh/Canadian geneticist, statistician, and agronomist Cyril Grayson (born 1993), American football player Cyril Haran (1931–2014), Gaelic footballer and manager, priest, scholar and schoolteacher Cyril Stanley Harrison (1915–1998), English cricketer Cyril Leo Heraclius, Prince Toumanoff (born Toumanishvili) (1913–1997), Russian-born historian and genealogist who was a Professor Emeritus at Georgetown University Cyril Herath (died 2011), Inspector-General of Sri Lanka Police from 1985 to
1988 Cyril Jordan (born 1948), American guitarist and founder of the Flamin' Groovies Cyril Knowles (1944–1991), English footballer Cyril Lawrence (1920–2020), English footballer Cyril Lewis (1909–1999), Welsh footballer Sister M. Cyril Mooney (born 1936), educational innovator in India Cyril Nicholas (1898–1961), Sri Lankan Burgher army captain, civil servant, and forester Elder Cyril Pavlov (1919–2017), Russian Orthodox Christian monk, mystic and wonder-worker Cyril Perkins (1911–2013), English cricketer Cyril C. Perera (1923–2016), Sri Lankan Sinhala author, translator of world literature into Sinhala Cyril E. S. Perera (1892–1968), Sri Lankan Sinhala member of the Ceylon House of Representatives Cyril Pinto Jayatilake Seneviratne (1918–1984), Sri Lankan Sinhala military officer and politician Cyril Ponnamperuma (1923–1994), Sri Lankan Sinhala scientist in the fields of chemical evolution and the origin of life Cyril Ramaphosa (born 1952), South African president, businessman, and trade unionist Cyril Ranatunga, Sri Lankan Sinhala army general Cyril Richardson (born 1990), American football player Cyril Rioli (born 1989), Australian rules footballer Cyril Smith (1928–2010), English Liberal politician Cyril Takayama (born 1973), American-Japanese magician Cyril Wickramage (born 1932), Sri Lankan Sinhala actor, director, and vocalist Cyril Sogoni (born 1997), Kenyan photographer, videographer, and nutritionist Fictional characters Cyril "Blakey" Blake, the bus depot inspector from the 1970s British comedy TV series On the Buses Cyril Fielding, character in E. M. Forster's novel A Passage to India Cyril Figgis, character in the TV series Archer Cyril Gray, character from the film Nanny McPhee and the Big Bang, played by Eros Vlahos Cyril Kinnear, the menacing and urbane mastermind from the 1971 British crime film Get Carter Cyril Orchard, the murder victim in the 1948 Nero Wolfe mystery And Be a Villain Cyril Playfair, the reverend from the 1952 film The Quiet Man Cyril Proudbottom, Mr. Toad's horse from the 1949 film The Adventures of Ichabod and Mr. Toad Cyril O'Reily, character from television series Oz Cyril Sneer, the villain aardvark of the 1980s cartoon series The Raccoons Cyril Woodcock, from the film Phantom Thread, played by Lesley Manville Cyril the Fogman, a character from the television series Thomas & Friends Cyril, a character from Doctor Who Cyril, a character from Fire Emblem: Three Houses Cyril the Ice Dragon, from The Legend of Spyro Cyril the Squirrel, from Maisy Cyril, the main character in The Heart's Invisible Furies by John Boyne Cyril, a giant squirrel kaiju from Rampage: Total Destruction See also Cyrille Cyrillus (crater) on the moon Cirillo Kyril Kyrylo Given names of Greek language origin Masculine given names Unisex given names
2,936
6,512
https://en.wikipedia.org/wiki/Coercion
Coercion
Coercion is compelling a party to act in an involuntary manner by the use of threats, including threats to use force against that party. It involves a set of forceful actions which violate the free will of an individual in order to induce a desired response. These actions may include extortion, blackmail, or even torture and sexual assault. For example, a bully may demand lunch money from a student where refusal results in the student getting beaten. In common law systems, the act of violating a law while under coercion is codified as a duress crime. Coercion can be used as leverage to force the victim to act in a way contrary to their own interests. Coercion can involve not only the infliction of bodily harm, but also psychological abuse (the latter intended to enhance the perceived credibility of the threat). The threat of further harm may also lead to the acquiescence of the person being coerced. The concepts of coercion and persuasion are similar, but various factors distinguish the two. These include the intent, the willingness to cause harm, the result of the interaction, and the options available to the coerced party. John Rawls, Thomas Nagel, Ronald Dworkin, and other political authors argue that the state is coercive. Max Weber defined a state as "a community which has a monopoly on the legitimate use of force." Morris argues that the state can operate through incentives rather than coercion. In healthcare, informal coercion may be used to make a patient adhere to a doctor's treatment plan. Under certain circumstances, physical coercion is used to treat a patient involuntarily. Overview The purpose of coercion is to substitute one's own aims for those of the victim. For this reason, many social philosophers have considered coercion the polar opposite of freedom. Various forms of coercion are distinguished: first on the basis of the kind of injury threatened, second according to its aims and scope, and finally according to its effects, on which its legal, social, and ethical implications mostly depend. Physical Physical coercion is the most commonly considered form of coercion, where the content of the conditional threat is the use of force against a victim, their relatives or property. An often used example is "putting a gun to someone's head" (at gunpoint) or putting a "knife under the throat" (at knifepoint or cut-throat) to compel action under the threat that non-compliance may result in the attacker harming or even killing the victim. These are so common that they are also used as metaphors for other forms of coercion. Armed forces in many countries use firing squads to maintain discipline and intimidate the masses, or opposition, into submission or silent compliance. However, there also are nonphysical forms of coercion, where the threatened injury does not immediately imply the use of force. Byman and Waxman (2000) define coercion as "the use of threatened force, including the limited use of actual force to back up the threat, to induce an adversary to behave differently than it otherwise would." In many cases, coercion does not amount to destruction of property or life, since compliance is the goal. Psychological In psychological coercion, the threatened injury regards the victim's relationships with other people. The most obvious example is blackmail, where the threat consists of the dissemination of damaging information. However, many other types are possible, e.g.
"emotional blackmail", which typically involves threats of rejection from or disapproval by a peer-group, or creating feelings of guilt/obligation via a display of anger or hurt by someone whom the victim loves or respects. See also Notes References Lifton, Robert J. (1961) Thought Reform and the Psychology of Totalism, Penguin Books. External links . Carter, Barry E. Economic Coercion, Max Planck Encyclopedia of Public International Law (subscription required) Abuse Authority Bullying Legal terminology Psychological abuse Interrogation techniques Power (social and political) concepts
2,938
6,516
https://en.wikipedia.org/wiki/Cosmological%20argument
Cosmological argument
A cosmological argument, in natural theology, is an argument which claims that the existence of God can be inferred from facts concerning causation, explanation, change, motion, contingency, dependency, or finitude with respect to the universe or some totality of objects. A cosmological argument can also sometimes be referred to as an argument from universal causation, an argument from first cause, the causal argument, or prime mover argument. Whichever term is employed, there are two basic variants of the argument, each with subtle yet important distinctions: in esse (essentiality), and in fieri (becoming). The basic premises of all of these arguments involve the concept of causation. The conclusion of these arguments is that there exists a first cause, subsequently analysed to be God. The argument goes back to Aristotle or earlier, was developed in Neoplatonism and early Christianity and later in medieval Islamic theology during the 9th to 12th centuries, and was re-introduced to medieval Christian theology in the 13th century by Thomas Aquinas. The cosmological argument is closely related to the principle of sufficient reason as addressed by Gottfried Leibniz and Samuel Clarke, itself a modern exposition of the claim that "nothing comes from nothing" attributed to Parmenides. Contemporary defenders of cosmological arguments include William Lane Craig, Robert Koons, and Alexander Pruss. History Plato (c. 427–347 BC) and Aristotle (c. 384–322 BC) both posited first cause arguments, though each had certain notable caveats. In The Laws (Book X), Plato posited that all movement in the world and the Cosmos was "imparted motion". This required a "self-originated motion" to set it in motion and to maintain it. In Timaeus, Plato posited a "demiurge" of supreme wisdom and intelligence as the creator of the Cosmos. Aristotle argued against the idea of a first cause, often confused with the idea of a "prime mover" or "unmoved mover" (primus motor) in his Physics and Metaphysics. Aristotle argued in favor of the idea of several unmoved movers, one powering each celestial sphere, which he believed lived beyond the sphere of the fixed stars, and explained why motion in the universe (which he believed was eternal) had continued for an infinite period of time. Aristotle argued the atomists' assertion of a non-eternal universe would require a first uncaused cause – in his terminology, an efficient first cause – an idea he considered a nonsensical flaw in the reasoning of the atomists. Like Plato, Aristotle believed in an eternal cosmos with no beginning and no end (which in turn follows Parmenides' famous statement that "nothing comes from nothing"). In what he called "first philosophy" or metaphysics, Aristotle did intend a theological correspondence between the prime mover and deity (presumably Zeus); functionally, however, he provided an explanation for the apparent motion of the "fixed stars" (now understood as the daily rotation of the Earth). According to his theses, immaterial unmoved movers are eternal unchangeable beings that constantly think about thinking, but being immaterial, they are incapable of interacting with the cosmos and have no knowledge of what transpires therein. From an "aspiration or desire", the celestial spheres imitate that purely intellectual activity as best they can, by uniform circular motion.
The unmoved movers inspiring the planetary spheres are no different in kind from the prime mover; they merely suffer a dependency of relation to the prime mover. Correspondingly, the motions of the planets are subordinate to the motion inspired by the prime mover in the sphere of fixed stars. Aristotle's natural theology admitted no creation or capriciousness from the immortal pantheon, but maintained a defense against dangerous charges of impiety. Plotinus, a third-century Platonist, taught that the One transcendent absolute caused the universe to exist simply as a consequence of its existence (creatio ex deo). His disciple Proclus stated "The One is God". Centuries later, the Islamic philosopher Avicenna (c. 980–1037) inquired into the question of being, in which he distinguished between essence (māhiyya) and existence (wuǧūd). He argued that the fact of existence could not be inferred from or accounted for by the essence of existing things, and that form and matter by themselves could not originate and interact with the movement of the Universe or the progressive actualization of existing things. Thus, he reasoned that existence must be due to an agent cause that necessitates, imparts, gives, or adds existence to an essence. To do so, the cause must coexist with its effect and be an existing thing. Steven Duncan writes that it "was first formulated by a Greek-speaking Syriac Christian neo-Platonist, John Philoponus, who claims to find a contradiction between the Greek pagan insistence on the eternity of the world and the Aristotelian rejection of the existence of any actual infinite". Referring to the argument as the "'Kalam' cosmological argument", Duncan asserts that it "received its fullest articulation at the hands of [medieval] Muslim and Jewish exponents of Kalam ('the use of reason by believers to justify the basic metaphysical presuppositions of the faith')". Thomas Aquinas (c. 1225–1274) adapted and enhanced the argument he found in his reading of Aristotle, Avicenna (the Proof of the Truthful), and Maimonides to form one of the most influential versions of the cosmological argument. His conception of first cause was the idea that the Universe must be caused by something that is itself uncaused, which he claimed is that which we call God. Importantly, Aquinas' Five Ways, given in the second question of his Summa Theologica, are not the entirety of Aquinas' demonstration that the Christian God exists. The Five Ways form only the beginning of Aquinas' Treatise on the Divine Nature. Versions of the argument Argument from contingency In the scholastic era, Aquinas formulated the "argument from contingency", following Aristotle in claiming that there must be something to explain why the Universe exists. Since the Universe could, under different circumstances, conceivably not exist (contingency), its existence must have a cause – not merely another contingent thing, but something that exists by necessity (something that must exist in order for anything else to exist). In other words, even if the Universe has always existed, it still owes its existence to an uncaused cause; Aquinas further said: "... and this we understand to be God." Aquinas's argument from contingency allows for the possibility of a Universe that has no beginning in time. It is a form of argument from universal causation. Aquinas observed that, in nature, there were things with contingent existences. Since it is possible for such things not to exist, there must be some time at which these things did not in fact exist.
Thus, according to Aquinas, there must have been a time when nothing existed. If this is so, there would exist nothing that could bring anything into existence. Contingent beings, therefore, are insufficient to account for the existence of contingent beings: there must exist a necessary being whose non-existence is an impossibility, and from which the existence of all contingent beings is ultimately derived. The German philosopher Gottfried Leibniz made a similar argument with his principle of sufficient reason in 1714. "There can be found no fact that is true or existent, or any true proposition," he wrote, "without there being a sufficient reason for its being so and not otherwise, although we cannot know these reasons in most cases." He formulated the cosmological argument succinctly: "Why is there something rather than nothing? The sufficient reason ... is found in a substance which ... is a necessary being bearing the reason for its existence within itself." Leibniz's argument from contingency is one of the most popular cosmological arguments in philosophy of religion. It attempts to prove the existence of a necessary being and infer that this being is God. Alexander Pruss formulates the argument as follows: Every contingent fact has an explanation. There is a contingent fact that includes all other contingent facts. Therefore, there is an explanation of this fact. This explanation must involve a necessary being. This necessary being is God. Premise 1 is a form of the principle of sufficient reason stating that all contingently true sentences (i.e. contingent facts) have a sufficient explanation as to why they are the case. Premise 2 refers to what is known as the Big Conjunctive Contingent Fact (abbreviated BCCF), and the BCCF is generally taken to be the logical conjunction of all contingent facts. It can be thought of as the sum total of all contingent reality. Statement 3 then concludes that the BCCF has an explanation, as every contingency does (in virtue of the PSR). It follows that this explanation is non-contingent (i.e. necessary); no contingency can explain the BCCF, because every contingent fact is a part of the BCCF. Statement 5, which is either seen as a premise or a conclusion, infers that the necessary being which explains the totality of contingent facts is God. Several philosophers of religion, such as Joshua Rasmussen and T. Ryan Byerly, have argued for the inference from (4) to (5). In esse and in fieri The difference between the arguments from causation in fieri and in esse is a fairly important one. In fieri is generally translated as "becoming", while in esse is generally translated as "in essence". In fieri, the process of becoming, is similar to building a house. Once it is built, the builder walks away, and it stands of its own accord; compare the watchmaker analogy. (It may require occasional maintenance, but that is beyond the scope of the first cause argument.) In esse (essence) is more akin to the light from a candle or the liquid in a vessel. George Hayward Joyce, SJ, explained that, "where the light of the candle is dependent on the candle's continued existence, not only does a candle produce light in a room in the first instance, but its continued presence is necessary if the illumination is to continue. If it is removed, the light ceases. Again, a liquid receives its shape from the vessel in which it is contained; but were the pressure of the containing sides withdrawn, it would not retain its form for an instant."
This form of the argument is far more difficult to separate from a purely first cause argument than is the example of the house's maintenance above, because here the first cause is insufficient without the candle's or vessel's continued existence. The philosopher Robert Koons has stated a new variant on the cosmological argument. He says that to deny causation is to deny all empirical ideas – for example, if we know our own hand, we know it because of a chain of causes: light is reflected into our eyes, stimulates the retina and sends a message through the optic nerve into our brain. He summarised the purpose of the argument as "that if you don't buy into theistic metaphysics, you're undermining empirical science. The two grew up together historically and are culturally and philosophically inter-dependent ... If you say I just don't buy this causality principle – that's going to be a big big problem for empirical science." This in fieri version of the argument therefore does not intend to prove God, but only to disprove objections involving science, and the idea that contemporary knowledge disproves the cosmological argument. Kalām cosmological argument William Lane Craig, who was principally responsible for re-popularizing this argument in Western philosophy, presents it in the following general form: Whatever begins to exist has a cause of its existence. The universe began to exist. Therefore, the universe has a cause of its existence. Craig analyses this cause, in 'The Blackwell Companion to Natural Theology', and says that this cause must be uncaused, beginningless, changeless, timeless, spaceless, unimaginably powerful, and personal. Craig defends the second premise, that the Universe had a beginning, starting with Al-Ghazali's proof that an actual infinity is impossible. If the universe never had a beginning, Craig claims, there would be an actual infinite, namely an infinite number of cause-and-effect events. Hence, the Universe had a beginning. Metaphysical argument for the existence of God Duns Scotus, the influential medieval Christian theologian, created a metaphysical argument for the existence of God. Though it was inspired by Aquinas' argument from motion, he, like other philosophers and theologians, believed that his statement for God's existence could be considered separate from Aquinas'. His explanation for God's existence is long, and can be summarised as follows: Something can be produced. It is produced by itself, by nothing, or by another. Not by nothing, because nothing causes nothing. Not by itself, because an effect never causes itself. Therefore, by another, A. If A is first then we have reached the conclusion. If A is not first, then we return to 2). From 3) and 4), we produce another, B. The ascending series is either infinite or finite. An infinite series is not possible. Therefore, God exists. Scotus deals immediately with two objections he can see: first, that there cannot be a first, and second, that the argument falls apart when 1) is questioned. He states that infinite regress is impossible, because it provokes unanswerable questions, like, in modern English, "What is infinity minus infinity?" The second he states can be answered if the question is rephrased using modal logic, meaning that the first statement is instead "It is possible that something can be produced." Depending on its formulation, the cosmological argument is an example of a positive infinite regress argument.
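Before turning to the regress in detail, it may help to separate the logical form of the kalam argument above from its contentious content. The following Lean sketch (a hypothetical formalization, not from the source; all identifiers are illustrative) shows that the syllogism is deductively valid, so the philosophical dispute concerns only the truth of its two premises:

```lean
-- Hypothetical formalization (not from the source): the kalam syllogism
-- stated over an abstract domain of "things". All names are illustrative.
variable {Thing : Type} (beginsToExist hasCause : Thing → Prop) (u : Thing)

-- Premise 1: whatever begins to exist has a cause.
-- Premise 2: the universe (here, `u`) began to exist.
-- Conclusion: the universe has a cause.
theorem kalam
    (premise1 : ∀ x, beginsToExist x → hasCause x)
    (premise2 : beginsToExist u) :
    hasCause u :=
  premise1 u premise2
```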
An infinite regress is an infinite series of entities governed by a recursive principle that determines how each entity in the series depends on or is produced by its predecessor. An infinite regress argument is an argument against a theory based on the fact that this theory leads to an infinite regress. A positive infinite regress argument employs the regress in question to argue in support of a theory by showing that its alternative involves a vicious regress. The regress relevant for the cosmological argument is the regress of causes: an event occurred because it was caused by another event that occurred before it, which was itself caused by a previous event, and so on. For an infinite regress argument to be successful, it has to demonstrate not just that the theory in question entails an infinite regress but also that this regress is vicious. Once the viciousness of the regress of causes is established, the cosmological argument can proceed to its positive conclusion by holding that it is necessary to posit a first cause in order to avoid it. A regress can be vicious due to metaphysical impossibility, implausibility or explanatory failure. It is sometimes held that the regress of causes is vicious because it is metaphysically impossible, i.e. that it involves an outright contradiction. But it is difficult to see where this contradiction lies unless an additional assumption is accepted: that actual infinity is impossible. But this position is opposed to infinity in general, not just specifically to the regress of causes. A more promising view is that the regress of causes is to be rejected because it is implausible. Such an argument can be based on empirical observation, e.g. that, to the best of our knowledge, our universe had a beginning in the form of the Big Bang. But it can also be based on more abstract principles, like Ockham's razor (parsimony), which posits that we should avoid ontological extravagance by not multiplying entities without necessity. A third option is to see the regress of causes as vicious due to explanatory failure, i.e. that it does not solve the problem it was formulated to solve or that it assumes already in disguised form what it was supposed to explain. According to this position, we seek to explain one event in the present by citing an earlier event that caused it. But this explanation is incomplete unless we can come to understand why this earlier event occurred, which is itself explained by its own cause and so on. At each step, the occurrence of an event has to be assumed. So it fails to explain why anything at all occurs, why there is a chain of causes to begin with. Objections and counterarguments What caused the first cause? One objection to the argument is that it leaves open the question of why the first cause is unique in that it does not require any causes. Proponents argue that the first cause is exempt from having a cause, while opponents argue that this is special pleading or otherwise untrue. Critics often press that arguing for the first cause's exemption raises the question of why the first cause is indeed exempt, whereas defenders maintain that this question has been answered by the various arguments, emphasizing that none of its major forms rest on the premise of everything having a cause. 
William Lane Craig, who popularised and is notable for defending the Kalam cosmological argument, argues that the infinite is impossible, whichever perspective the viewer takes, and so there must always have been one unmoved thing to begin the universe. He uses Hilbert's paradox of the Grand Hotel and the question "What is infinity minus infinity?" to illustrate the idea that the infinite is metaphysically, mathematically, and even conceptually impossible. Other reasons include the fact that it is impossible to count down from infinity, and that, had the universe existed for an infinite amount of time, every possible event, including the final end of the universe, would already have occurred. He therefore states his argument in three points: firstly, everything that begins to exist has a cause of its existence; secondly, the universe began to exist; thirdly, therefore, the universe has a cause of its existence. Craig argues in the Blackwell Companion to Natural Theology that there cannot be an infinite regress of causes and thus there must be a first uncaused cause, even if one posits a plurality of causes of the universe. He argues Occam's razor may be employed to remove unneeded further causes of the universe to leave a single uncaused cause. Secondly, it is argued that the premise of causality has been arrived at via a posteriori (inductive) reasoning, which is dependent on experience. David Hume highlighted this problem of induction and argued that causal relations were not true a priori. However, whether inductive or deductive reasoning is more valuable remains a matter of debate, with the general conclusion being that neither is prominent. Opponents of the argument tend to argue that it is unwise to draw conclusions from an extrapolation of causality beyond experience. Andrew Loke replies that, according to the Kalam cosmological argument, only things which begin to exist require a cause. On the other hand, something that is without beginning has always existed and therefore does not require a cause. The Kalam and the Thomistic cosmological argument posit that there cannot be an actual infinite regress of causes, therefore there must be an uncaused first cause that is beginningless and does not require a cause. Not evidence for a theistic God According to this objection, the basic cosmological argument merely establishes that a first cause exists, not that it has the attributes of a theistic god, such as omniscience, omnipotence, and omnibenevolence. This is why the argument is often expanded to assert that at least some of these attributes are necessarily true, for instance in the modern Kalam argument given above. Existence of causal loops A causal loop is a form of predestination paradox arising where traveling backwards in time is deemed a possibility. A sufficiently powerful entity in such a world would have the capacity to travel backwards in time to a point before its own existence, and to then create itself, thereby initiating everything which follows from it. The usual reason given to refute the possibility of a causal loop is that it requires that the loop as a whole be its own cause. Richard Hanley argues that causal loops are not logically, physically, or epistemically impossible: "[In timed systems,] the only possibly objectionable feature that all causal loops share is that coincidence is required to explain them."
However, Andrew Loke argues that a causal loop of the type that is supposed to avoid a first cause suffers from the problem of vicious circularity and thus would not work. Existence of infinite causal chains David Hume and later Paul Edwards have invoked a similar principle in their criticisms of the cosmological argument. William L. Rowe has called this the Hume-Edwards principle. Nevertheless, David White argues that the notion of an infinite causal regress providing a proper explanation is fallacious. Furthermore, in Hume's Dialogues Concerning Natural Religion, the character Demea states that even if the succession of causes is infinite, the whole chain still requires a cause. To explain this, suppose there exists a causal chain of infinite contingent beings. If one asks the question, "Why are there any contingent beings at all?", it does not help to be told that "There are contingent beings because other contingent beings caused them." That answer would just presuppose additional contingent beings. An adequate explanation of why some contingent beings exist would invoke a different sort of being, a necessary being that is not contingent. A response might suppose each individual is contingent but the infinite chain as a whole is not, or the whole infinite causal chain is its own cause. Severinsen argues that there is an "infinite" and complex causal structure. White tried to introduce an argument "without appeal to the principle of sufficient reason and without denying the possibility of an infinite causal regress". A number of other arguments have been offered to demonstrate that an actual infinite regress cannot exist, viz. the argument for the impossibility of concrete actual infinities, the argument for the impossibility of traversing an actual infinite, the argument from the lack of capacity to begin to exist, and various arguments from paradoxes. Big Bang cosmology Some cosmologists and physicists argue that a challenge to the cosmological argument is the nature of time: "One finds that time just disappears from the Wheeler–DeWitt equation" (Carlo Rovelli). The Big Bang theory states that it is the point at which all dimensions came into existence, the start of both space and time. Then, the question "What was there before the Universe?" makes no sense; the concept of "before" becomes meaningless when considering a situation without time. This has been put forward by J. Richard Gott III, James E. Gunn, David N. Schramm, and Beatrice Tinsley, who said that asking what occurred before the Big Bang is like asking what is north of the North Pole. However, some cosmologists and physicists do attempt to investigate causes for the Big Bang, using such scenarios as the collision of membranes. Philosopher Edward Feser argues that most of the classical philosophers' cosmological arguments for the existence of God do not depend on the Big Bang or whether the universe had a beginning. The question is not about what got things started or how long they have been going, but rather what keeps them going. See also Argument Biblical cosmology Chaos Cosmogony Creation myth Dating Creation Determinism Creatio ex nihilo Ex nihilo nihil fit First cause First Principle Infinitism Logos Present Psychology Semantics Semiotics Unmoved mover Quinque viae Temporal finitism Timeline of the Big Bang Transtheism References External links Arguments for the existence of God Philosophy of religion Causality
2,941
6,517
https://en.wikipedia.org/wiki/Clutch
Clutch
A clutch is a mechanical device that allows the output shaft to be disconnected from the rotating input shaft. The clutch's input shaft is typically attached to a motor, while the clutch's output shaft is connected to the mechanism that does the work. In a motor vehicle, the clutch acts as a mechanical linkage between the engine and transmission. By disengaging the clutch, the engine speed (RPM) is no longer determined by the speed of the driven wheels. Another example of clutch usage is in electric drills. The clutch's input shaft is driven by a motor and the output shaft is connected to the drill bit (via several intermediate components). The clutch allows the drill bit to either spin at the same speed as the motor (clutch engaged), spin at a lower speed than the motor (clutch slipping) or remain stationary while the motor is spinning (clutch disengaged). Types Dry clutch A dry clutch uses dry friction to transfer power from the input shaft to the output shaft, for example a friction disk pressing on a car engine's flywheel. The majority of clutches are dry clutches, especially in vehicles with manual transmissions. Slippage of a friction clutch (where the clutch is partially engaged but the shafts are rotating at different speeds) is sometimes required, such as when a motor vehicle accelerates from a standstill; however, the slippage should be minimised to avoid increased wear rates. In a pull-type clutch, pressing the pedal pulls the release bearing to disengage the clutch. In a push-type clutch, pressing the pedal pushes the release bearing to disengage the clutch. A multi-plate clutch consists of several friction plates arranged concentrically. In some cases, it is used instead of a larger-diameter clutch. Drag racing cars use multi-plate clutches to control the rate of power transfer to the wheels as the vehicle accelerates from a standing start. Some clutch discs include springs designed to change the natural frequency of the clutch disc, in order to reduce noise, vibration, and harshness (NVH) within the vehicle. Also, some clutches for manual transmission cars use a clutch delay valve to avoid abrupt engagements of the clutch. Wet clutch In a wet clutch, the friction material sits in an oil bath (or has flow-through oil) which cools and lubricates the clutch. This can provide smoother engagement and a longer lifespan of the clutch; however, wet clutches can have a lower efficiency due to some energy being transferred to the oil. Since the surfaces of a wet clutch can be slippery (as with a motorcycle clutch bathed in engine oil), stacking multiple clutch discs can compensate for the lower coefficient of friction and so eliminate slippage under power when fully engaged (a rough torque calculation illustrating this appears below). Wet clutches often use a composite paper material. Centrifugal clutch A centrifugal clutch automatically engages as the speed of the input shaft increases and disengages as the input shaft speed decreases. Applications include small motorcycles, motor scooters, chainsaws, and some older automobiles. Cone clutch A cone clutch is similar to a dry friction plate clutch, except the friction material is applied to the outside of a conical shaped object. A common application for cone clutches is the synchronizer ring in a manual transmission. Dog clutch A dog clutch is a non-slip design of clutch which is used in non-synchronous transmissions.
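The trade-off between plate count, friction coefficient, and clamping force described above can be made concrete with a standard textbook estimate, T = n·μ·F·r_mean, where n is the number of friction surfaces. The Python sketch below is illustrative only (the uniform-wear formula and all parameter values are assumptions, not figures from the source):

```python
def clutch_torque_capacity(n_surfaces: int, mu: float, clamp_force_n: float,
                           r_outer_m: float, r_inner_m: float) -> float:
    """Torque capacity (N·m) of a plate clutch under the uniform-wear
    assumption, using the mean friction radius (r_outer + r_inner) / 2."""
    r_mean = (r_outer_m + r_inner_m) / 2.0
    return n_surfaces * mu * clamp_force_n * r_mean

# A dry single-plate clutch: two friction surfaces (one per disc face).
dry = clutch_torque_capacity(2, mu=0.35, clamp_force_n=4000,
                             r_outer_m=0.12, r_inner_m=0.08)
# A wet multi-plate pack: lower friction coefficient and smaller radius,
# but many surfaces, showing why stacking plates offsets the oil bath.
wet = clutch_torque_capacity(8, mu=0.12, clamp_force_n=4000,
                             r_outer_m=0.07, r_inner_m=0.05)
print(f"dry: {dry:.0f} N·m, wet: {wet:.0f} N·m")  # comparable capacities
```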
Single-revolution clutch The single-revolution clutch was developed in the 19th century to power machinery such as shears or presses where a single pull of the operating lever or (later) press of a button would trip the mechanism, engaging the clutch between the power source and the machine's crankshaft for exactly one revolution before disengaging the clutch. When the clutch is disengaged, the driven member is stationary. Early designs were typically dog clutches with a cam on the driven member used to disengage the dogs at the appropriate point. Greatly simplified single-revolution clutches were developed in the 20th century, requiring much smaller operating forces and, in some variations, allowing for a fixed fraction of a revolution per operation. Fast-action friction clutches replaced dog clutches in some applications, eliminating the problem of impact loading on the dogs every time the clutch engaged. In addition to their use in heavy manufacturing equipment, single-revolution clutches were applied to numerous small machines. In tabulating machines, for example, pressing the operate key would trip a single-revolution clutch to process the most recently entered number. In typesetting machines, pressing any key selected a particular character and also engaged a single-rotation clutch to cycle the mechanism to typeset that character. Similarly, in teleprinters, the receipt of each character tripped a single-revolution clutch to operate one cycle of the print mechanism. In 1928, Frederick G. Creed developed a single-turn spring clutch (a wrap-spring design; see below) that was particularly well suited to the repetitive start-stop action required in teleprinters. In 1942, two employees of Pitney Bowes Postage Meter Company developed an improved single-turn spring clutch. In these clutches, a coil spring is wrapped around the driven shaft and held in an expanded configuration by the trip lever. When tripped, the spring rapidly contracts around the power shaft, engaging the clutch. At the end of one revolution, if the trip lever has been reset, it catches the end of the spring (or a pawl attached to it), and the angular momentum of the driven member releases the tension on the spring. These clutches have long operating lives—many have performed tens and perhaps hundreds of millions of cycles without the need of maintenance other than occasional lubrication. Cascaded-pawl single-revolution clutches superseded wrap-spring single-revolution clutches in page printers, such as teleprinters, including the Teletype Model 28 and its successors, using the same design principles. IBM Selectric typewriters also used them. These are typically disc-shaped assemblies mounted on the driven shaft. Inside the hollow disc-shaped drive drum are two or three freely floating pawls arranged so that when the clutch is tripped, the pawls spring outward much like the shoes in a drum brake. When engaged, the load torque on each pawl transfers to the others to keep them engaged. These clutches do not slip once locked up, and they engage very quickly, on the order of milliseconds. A trip projection extends out from the assembly. If the trip lever engages this projection, the clutch is disengaged. When the trip lever releases this projection, internal springs and friction engage the clutch. The clutch then rotates one or more turns, stopping when the trip lever again engages the trip projection.
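The trip-engage-disengage cycle described above is essentially a small state machine, which is clearest in the teleprinter case where each received character trips exactly one revolution. The toy Python model below is purely illustrative (not from the source; the class and method names are invented for this sketch):

```python
class SingleRevolutionClutch:
    """Toy model: once tripped, the clutch stays engaged for exactly one
    360-degree revolution, then disengages (assuming the trip lever has
    been reset, as in the spring and cascaded-pawl designs above)."""
    def __init__(self):
        self.engaged = False
        self.angle = 0.0  # degrees turned since the last trip

    def trip(self):
        """An operator keypress or received character releases the trip lever."""
        if not self.engaged:
            self.engaged = True
            self.angle = 0.0

    def rotate(self, degrees: float):
        """Advance the drive shaft; the driven member turns only while engaged."""
        if self.engaged:
            self.angle += degrees
            if self.angle >= 360.0:  # trip lever catches the projection again
                self.engaged = False

clutch = SingleRevolutionClutch()
clutch.trip()                # e.g. a teleprinter receives one character
for _ in range(36):
    clutch.rotate(10.0)      # one full revolution of the mechanism...
print(clutch.engaged)        # False: ...after which the clutch drops out
```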
Other designs Kickback clutch-brakes: Found in some types of synchronous-motor-driven electric clocks built before the 1940s, to prevent the clock from running backwards. The clutch consisted of a wrap-spring clutch-brake that was coupled to the rotor by one or two stages of reduction gearing. The clutch-brake locked up when rotated backwards, but also had some spring action. The inertia of the rotor going backwards engaged the clutch and wound the spring. As it unwound, it restarted the motor in the correct direction. Belt clutch: used on agricultural equipment, lawnmowers, tillers, and snow blowers. Engine power is transmitted via a set of belts that are slack when the engine is idling, but an idler pulley can tighten the belts to increase friction between the belts and the pulleys. BMA clutch: Invented by Waldo J Kelleigh in 1949, used for transmitting torque between two shafts, consisting of a fixed driving member secured to one of the shafts and a movable driving member having a contacting surface with a plurality of indentations. Electromagnetic clutch: typically engaged by an electromagnet that is an integral part of the clutch assembly. Another type, the magnetic particle clutch, contains magnetically influenced particles in a chamber between driving and driven members—application of direct current makes the particles clump together and adhere to the operating surfaces. Engagement and slippage are notably smooth. Wrap-spring clutch: has a helical spring, typically wound with square-cross-section wire. These were developed in the late 19th and early 20th centuries. In simple form the spring is fastened at one end to the driven member; its other end is unattached. The spring fits closely around a cylindrical driving member. If the driving member rotates in the direction that would unwind the spring, the spring expands minutely and slips, although with some drag. Because of this, spring clutches must typically be lubricated with light oil. Rotating the driving member the other way makes the spring wrap itself tightly around the driving surface and the clutch locks up very quickly. The torque required to make a spring clutch slip grows exponentially with the number of turns in the spring, obeying the capstan equation (made precise in the short sketch after this section). Usage in automobiles Manual transmissions Most cars and trucks with a manual transmission use a dry clutch, which is operated by the driver using the left-most pedal. The motion of the pedal is transferred to the clutch using hydraulics (master and slave cylinders) or a cable. The clutch is only disengaged while the driver is pressing on the clutch pedal; the default state, therefore, is for the transmission to be connected to the engine. A "neutral" gear position is provided, so that the clutch pedal can be released with the vehicle remaining stationary. The clutch is required for standing starts and is usually (but not always) used to assist in synchronising the speeds of the engine and transmission during gear changes, i.e. while reducing the engine speed (RPM) during upshifts and increasing the engine speed during downshifts. The clutch is usually mounted directly to the face of the engine's flywheel, as this already provides a convenient large-diameter steel disk that can act as one driving plate of the clutch. Some racing clutches use small multi-plate disk packs that are not part of the flywheel. Both clutch and flywheel are enclosed in a conical bellhousing for the gearbox.
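Returning to the wrap-spring clutch described under Other designs: the capstan equation makes the claimed exponential growth precise. A minimal statement in standard textbook form (the symbols are illustrative, not from the source), where T_hold is the small restraining torque at the free end of the spring, μ the coefficient of friction, and φ the total wrap angle in radians:

```latex
T_{\text{slip}} = T_{\text{hold}} \, e^{\mu \varphi},
\qquad \varphi = 2\pi N \ \text{for } N \text{ full turns}
```

Each extra turn thus multiplies the holding capacity by a factor of e^(2πμ) (roughly 2.6 for μ ≈ 0.15), which is why a light trip lever can control a comparatively large drive torque.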
The friction material used for the clutch disk varies, with a common material being an organic compound resin with a copper wire facing or a ceramic material. Automatic transmissions In an automatic transmission, the role of the clutch is performed by a torque converter. However, the transmission itself often includes internal clutches, such as a lock-up clutch to prevent slippage of the torque converter, in order to reduce the energy loss through the transmission and therefore improve fuel economy. Fans and compressors Older belt-driven engine cooling fans often use a heat-activated clutch, in the form of a bimetallic strip. When the temperature is low, the spring winds and closes the valve, which lets the fan spin at about 20% to 30% of the crankshaft speed. As the temperature of the spring rises, it unwinds and opens the valve, allowing fluid past the valve, making the fan spin at about 60% to 90% of crankshaft speed. A vehicle's air-conditioning compressor often uses magnetic clutches to engage the compressor as required. Usage in motorcycles Motorcycles typically employ a wet clutch with the clutch riding in the same oil as the transmission. These clutches are usually made up of a stack of alternating friction plates and steel plates. The friction plates have lugs on their outer diameters that lock them into a basket that is turned by the crankshaft. The steel plates have lugs on their inner diameters that lock them to the transmission input shaft. A set of coil springs or a diaphragm spring plate force the plates together when the clutch is engaged. On motorcycles the clutch is operated by a hand lever on the left handlebar. No pressure on the lever means that the clutch plates are engaged (driving), while pulling the lever back towards the rider disengages the clutch plates through cable or hydraulic actuation, allowing the rider to shift gears or coast. Racing motorcycles often use slipper clutches to eliminate the effects of engine braking, which, being applied only to the rear wheel, can cause instability. See also Clutch control Coupling Freewheel Gear shift Torque converter Torque limiter References Automotive transmission technologies
2,942
6,537
https://en.wikipedia.org/wiki/Celestines
Celestines
The Celestines were a Roman Catholic monastic order, a branch of the Benedictines, founded in 1244. At the foundation of the new rule, they were called Hermits of St Damiano, or Moronites (or Murronites), and did not assume the appellation of Celestines until after the election of their founder, Peter of Morone (Pietro Murrone), to the Papacy as Celestine V. They used the post-nominal initials O.S.B. Cel. By order of Pius VI in 1776, the order was absorbed into the Order of the Most Holy Annunciation from 1778; in 1810 the last Celestines were transferred. Founding The fame of the holy life and the austerities practised by Pietro Morone in his solitude on the Mountain of Majella, near Sulmona, attracted many visitors, several of whom were moved to remain and share his mode of life. They built a small convent on the spot inhabited by the holy hermit, which became too small for the accommodation of those who came to share their life of privations. Peter of Morone (later Pope Celestine V), their founder, built a number of other small oratories in that neighborhood. Around the year 1254, Peter of Morone gave the order a rule formulated in accordance with his own practices. In 1264 the new institution was approved as a branch of the Benedictines by Urban IV; however, the next pope, Gregory X, had commanded that all orders founded since the prior Lateran Council should not be further multiplied. Hearing a rumor that the order was to be suppressed, the reclusive Peter traveled to Lyon, where the Pope was holding a council. There he persuaded Gregory to approve his new order, making it a branch of the Benedictines and following the rule of Saint Benedict, but adding to it additional severities and privations. Gregory took it under the Papal protection, assured to it the possession of all property it might acquire, and endowed it with exemption from the authority of the ordinary. Nothing more was needed to ensure the rapid spread of the new association, and Peter the hermit of Morone lived to see himself "Superior-General" to thirty-six monasteries and more than six hundred monks. As soon as he had seen his new order thus consolidated, he gave up the government of it to a certain Robert, and retired once again to an even more remote site to devote himself to solitary penance and prayer. Shortly afterwards, in a chapter of the order held in 1293, the original monastery of Majella being judged too desolate and exposed to too rigorous a climate, it was decided that the Abbey of the Holy Spirit at Monte Morrone, located in Sulmona, should be the headquarters of the order and the residence of the General-Superior, where it continued for centuries. The next year Peter of Morone, despite his reluctance, was elected Pope under the name of Celestine V. From then on, the order he had founded took the name of Celestines. During his short reign as Pope, the former hermit confirmed the rule of the order, which he had himself composed, and conferred on the society a variety of special graces and privileges. In his only creation of cardinals, two of the twelve raised to the purple were monks of his order. He also personally visited the Benedictine monastery on Monte Cassino, where he persuaded the monks to accept his more rigorous rule. He sent fifty monks of his order to introduce it, who remained there, however, for only a few months.
After the death of the founder the order was favoured and privileged by Benedict XI, and rapidly spread through Italy, Germany, Flanders, and France, where they were received by Philip the Fair in 1300. The administration of the order was carried on somewhat after the pattern of Cluny, that is all monasteries were subject to the Abbey of the Holy Ghost at Sulmona, and these dependent houses were divided into provinces. The Celestines had ninety-six houses in Italy, twenty-one in France, and a few in Germany. Subsequently, the French Celestines, with the consent of the Italian superiors of the order, and of Pope Martin V in 1427, obtained the privilege of making new constitutions for themselves, which they did in the 17th century in a series of regulations accepted by the provincial chapter in 1667. At that time the French congregation of the order was composed of twenty-one monasteries, the head of which was that of Paris, and was governed by a Provincial with the authority of General. Paul V was a notable benefactor of the order. The order became extinct in the eighteenth century. Description of order According to their special constitutions the Celestines were bound to say matins in the choir at two o'clock in the morning, and always to abstain from eating meat, save in illness. The distinct rules of their order with regard to fasting are numerous, but not more severe than those of similar congregations, though much more so than is required by the old Benedictine rule. In reading their minute directions for divers degrees of abstinence on various days, it is impossible to avoid being struck by the conviction that the great object of the framers of these rules was the general purpose of ensuring an ascetic mode of life. The Celestines wore a white woollen cassock bound with a linen band, and a leathern girdle of the same colour, with a scapular unattached to the body of the dress, and a black hood. It was not permitted to them to wear any shirt save of serge. Their dress in short was very like that of the Cistercians. But it is a tradition in the order that in the time of the founder they wore a coarse brown cloth. The church and monastery of San Pietro in Montorio originally belonged to the Celestines in Rome; but they were turned out of it by Sixtus IV to make way for Franciscans, receiving from the Pope in exchange the Church of St Eusebius of Vercelli with the adjacent mansion for a monastery. References External links 1244 establishments in Europe Catholic orders and societies Religious organizations established in the 1240s Christian religious orders established in the 13th century
2,950
6,539
https://en.wikipedia.org/wiki/Cessna
Cessna
Cessna is an American brand of general aviation aircraft owned by Textron Aviation since 2014, headquartered in Wichita, Kansas. Originally, it was a brand of the Cessna Aircraft Company, an American general aviation aircraft manufacturing corporation also headquartered in Wichita. The company produced small, piston-powered aircraft, as well as business jets. For much of the mid-to-late 20th century, Cessna was one of the highest-volume and most diverse producers of general aviation aircraft in the world. It was founded in 1927 by Clyde Cessna and Victor Roos and was purchased by General Dynamics in 1985, then by Textron, Inc. in 1992. In March 2014, when Textron purchased the Beechcraft and Hawker Aircraft corporations, Cessna ceased operations as a subsidiary company, and joined the others as one of the three distinct brands produced by Textron Aviation. Throughout its history, and especially in the years following World War II, Cessna became best known for producing high-wing, small piston aircraft. Its most popular and iconic aircraft is the Cessna 172, delivered since 1956 (with a break from 1986 to 1996), with more sold than any other aircraft in history. Since the first model was delivered in 1972, the brand has also been well known for its Citation family of low-wing business jets, which vary in size. History Origins Clyde Cessna, a farmer in Rago, Kansas, built his own aircraft and flew it in June 1911. He was the first person to build and fly an aircraft between the Mississippi River and the Rocky Mountains. Cessna started his wood-and-fabric aircraft ventures in Enid, Oklahoma, testing many of his early planes on the salt flats. When bankers in Enid refused to lend him more money to build his planes, he moved to Wichita. Cessna Aircraft was formed when Clyde Cessna and Victor Roos became partners in the Cessna-Roos Aircraft Company in 1927. Roos resigned just one month into the partnership, selling back his interest to Cessna. Shortly afterward, Roos's name was dropped from the company name. The Cessna DC-6 earned certification on the same day as the stock market crash of 1929, October 29, 1929. In 1932, the Cessna Aircraft Company closed due to the Great Depression. However, the Cessna CR-3 custom racer made its first flight in 1933. The plane won the 1933 American Air Race in Chicago and later set a new world speed record for engines smaller than 500 cubic inches. Cessna's nephews, brothers Dwane and Dwight Wallace, bought the company from Cessna in 1934. They reopened it and began the process of building it into what would become a global success. The Cessna C-37 was introduced in 1937 as Cessna's first seaplane when equipped with Edo floats. In 1940, Cessna received its largest order to date, when it signed a contract with the U.S. Army for 33 specially equipped Cessna T-50s. Later in 1940, the Royal Canadian Air Force placed an order for 180 T-50s. Postwar boom Cessna returned to commercial production in 1946, after the revocation of wartime production restrictions (L-48), with the release of the Model 120 and Model 140. The approach was to introduce a new line of all-metal aircraft that used production tools, dies and jigs, rather than the hand-built tube-and-fabric construction process used before the war. The Model 140 was named by the US Flight Instructors Association as the "Outstanding Plane of the Year" in 1948. Cessna's first helicopter, the Cessna CH-1, received FAA type certification in 1955. Cessna introduced the Cessna 172 in 1956.
It became the most produced airplane in history. During the post-World War II era, Cessna was known as one of the "Big Three" in general aviation aircraft manufacturing, along with Piper and Beechcraft.

In 1959, Cessna acquired Aircraft Radio Corporation (ARC), of Boonton, New Jersey, a leading manufacturer of aircraft radios. During these years, Cessna expanded the ARC product line and rebranded ARC radios as "Cessna" radios, making them the "factory option" for avionics in new Cessnas. However, during this time ARC radios suffered a severe decline in quality and popularity. Cessna kept ARC as a subsidiary until 1983, when it sold ARC to avionics-maker Sperry.

In 1960, Cessna acquired McCauley Industrial Corporation, of Ohio, a leading manufacturer of propellers for light aircraft. McCauley became the world's leading producer of general aviation aircraft propellers, largely through their installation on Cessna airplanes. Also in 1960, Cessna affiliated itself with Reims Aviation of Reims, France.

In 1963, Cessna produced its 50,000th airplane, a Cessna 172. Cessna's first business jet, the Cessna Citation I, performed its maiden flight on September 15, 1969. Cessna produced its 100,000th single-engine airplane in 1975.

In 1985, Cessna ceased to be an independent company. It was purchased by General Dynamics Corporation and became a wholly owned subsidiary. Production of the Cessna Caravan began. General Dynamics in turn sold Cessna to Textron in 1992.

Late in 2007, Cessna purchased the bankrupt Columbia Aircraft company for US$26.4M and continued production of the Columbia 350 and 400 as the Cessna 350 and Cessna 400 at the Columbia factory in Bend, Oregon. However, production of both aircraft had ended by 2018.

Chinese production controversy

On November 27, 2007, Cessna announced that the then-new Cessna 162 would be built in the People's Republic of China by Shenyang Aircraft Corporation, a subsidiary of the China Aviation Industry Corporation I (AVIC I), a Chinese government-owned consortium of aircraft manufacturers. Cessna reported that the decision was made to save money and that the company had no more plant capacity in the United States at the time. Cessna received much negative feedback for this decision, with complaints centering on the then-recent quality problems with Chinese production of other consumer products, China's human rights record, the export of jobs, and China's less-than-friendly political relationship with the United States. The customer backlash surprised Cessna and resulted in a company public relations campaign. In early 2009, the company attracted further criticism for continuing plans to build the 162 in China while laying off large numbers of workers in the United States. In the end, the Cessna 162 was not a commercial success, and only a small number were delivered before production was cancelled.

2008–2010 economic crisis

The company's business suffered notably during the late-2000s recession, and it laid off more than half its workforce between January 2009 and September 2010. On November 4, 2008, Cessna's parent company, Textron, indicated that Citation production would be reduced from the original 2009 target of 535 "due to continued softening in the global economic environment" and that this would result in an undetermined number of lay-offs at Cessna.
On November 8, 2008, at the Aircraft Owners and Pilots Association (AOPA) Expo, CEO Jack Pelton indicated that sales of Cessna aircraft to individual buyers had fallen, but that piston and turboprop sales to businesses had not. "While the economic slowdown has created a difficult business environment, we are encouraged by brisk activity from new and existing propeller fleet operators placing almost 200 orders for 2009 production aircraft," Pelton stated.

Beginning in January 2009, a total of 665 jobs were cut at Cessna's Wichita and Bend, Oregon plants. The Cessna factory at Independence, Kansas, which builds the Cessna piston-engined aircraft and the Cessna Mustang, did not see any layoffs, but one third of the workforce at the former Columbia Aircraft facility in Bend was laid off. This included 165 of the 460 employees who built the Cessna 350 and 400. The remaining 500 jobs were eliminated at the main Cessna Wichita plant.

Later in January 2009, the company laid off an additional 2,000 employees, bringing the total to 4,600. The job cuts included 120 at the Bend, Oregon facility, reducing the plant that built the Cessna 350 and 400 to fewer than half the number of workers it had when Cessna bought it. Other cuts included 200 at the Independence, Kansas plant that builds the single-engined Cessnas and the Mustang, reducing that facility to 1,300 workers.

On April 29, 2009, the company suspended the Citation Columbus program and closed the Bend, Oregon facility. The Columbus program was finally cancelled in early July 2009. The company reported, "Upon additional analysis of the business jet market related to this product offering, we decided to formally cancel further development of the Citation Columbus". With the 350 and 400 production moving to Kansas, the company indicated that it would lay off 1,600 more workers, including the remaining 150 employees at the Bend plant and up to 700 workers from the Columbus program.

In early June 2009, Cessna laid off an additional 700 salaried employees, bringing the total number of lay-offs to 7,600, more than half the company's workers at the time.

The company closed its three Columbus, Georgia manufacturing facilities between June 2010 and December 2011. The closures included the new facility opened in August 2008 at a cost of US$25M, plus the McCauley Propeller Systems plant. These closures resulted in total job losses of 600 in Georgia. Some of the work was relocated to Cessna's Independence, Kansas, or Mexican facilities.

Cessna's parent company, Textron, posted a loss of US$8M in the first quarter of 2010, largely driven by continuing low sales at Cessna, which were down 44%. Half of Cessna's workforce remained laid off, and CEO Jack Pelton stated that he expected the recovery to be long and slow.

In September 2010, a further 700 employees were laid off, bringing the total to 8,000 jobs lost. CEO Jack Pelton indicated this round of layoffs was due to a "stalled [and] lackluster economy" and noted that while the number of cancelled jet orders had been decreasing, new orders had not met expectations. Pelton added, "Our strategy is to defend and protect our current markets while investing in products and services to secure our future, but we can do this only if we succeed in restructuring our processes and reducing our costs."

2010s

On May 2, 2011, CEO Jack J. Pelton retired. The new CEO, Scott A. Ernest, started on May 31, 2011.
Ernest joined Textron after 29 years at General Electric, where he had most recently served as vice president and general manager, global supply chain for GE Aviation. Ernest had previously worked for Textron CEO Scott Donnelly when both were at General Electric.

In September 2011, the Federal Aviation Administration (FAA) proposed a US$2.4 million fine against the company for its failure to follow quality assurance requirements while producing fiberglass components at its plant in Chihuahua, Mexico. Excess humidity meant that the parts did not cure correctly, and quality assurance did not detect the problems. The failure to follow procedures resulted in the in-flight delamination of a section of one Cessna 400's wing skin from the spar while the aircraft was being flown by an FAA test pilot. The aircraft was landed safely. The FAA also discovered 82 other aircraft parts that had been incorrectly made and not detected by the company's quality assurance. The investigation resulted in an emergency Airworthiness Directive that affected 13 Cessna 400s.

Since March 2012, Cessna has been pursuing a plan to build business jets in China as part of a joint venture with the Aviation Industry Corporation of China (AVIC). The company stated that it intends to eventually build all aircraft models in China, saying, "The agreements together pave the way for a range of business jets, utility single-engine turboprops and single-engine piston aircraft to be manufactured and certified in China."

In late April 2012, the company added 150 workers in Wichita as a result of anticipated increased demand for aircraft production. Overall, it had cut more than 6,000 jobs at the Wichita plant since 2009.

In March 2014, Cessna ceased operations as a company and instead became a brand of Textron Aviation.

Marketing initiatives

During the 1950s and 1960s, Cessna's marketing department followed the lead of Detroit automakers and came up with many unique marketing terms in an effort to differentiate its product line from those of its competitors. Other manufacturers and the aviation press widely ridiculed and spoofed many of the marketing terms, but Cessna built and sold more aircraft than any other manufacturer during the boom years of the 1960s and 1970s.

Generally, the names of Cessna models do not follow a theme, but there is usually logic to the numbering: the 100 series are the light singles, the 200s are the heftier singles, the 300s are light to medium twins, the 400s have "wide oval" cabin-class accommodation, and the 500s are jets. Many Cessna models have names starting with C for the sake of alliteration (e.g. Citation, Crusader, Chancellor).

Company terminology

Cessna marketing terminology includes:

Para-Lift Flaps – Large Fowler flaps Cessna introduced on the 170B in 1952, replacing the narrow-chord plain flaps then in use.
Land-O-Matic – In 1956, Cessna introduced sprung-steel tricycle landing gear on the 172. The marketing department chose "Land-O-Matic" to imply that these aircraft were much easier to land and take off than the preceding conventional landing gear equipped Cessna 170. They even went as far as to say pilots could do "drive-up take-offs and drive-in landings", implying that flying these aircraft was as easy as driving a car. In later years, some Cessna models had their steel sprung landing gear replaced with steel tube gear legs. The 206 retains the original spring steel landing gear today.
Omni-Vision – The rear windows on some Cessna singles, starting with the 182 and 210 in 1962 and followed by the 172 and 150 in 1963 and 1964 respectively. The term was intended to make the pilot feel visibility was improved on the notably poor-visibility Cessna line. The introduction of the rear window caused a loss of cruise speed in most models due to the extra drag, while not adding any useful visibility.
Cushioned Power – The rubber mounts on the cowling of the 1967 model 150, in addition to the rubber mounts isolating the engine from the cabin.
Omni-Flash – The flashing beacon on the tip of the fin that could be seen all around.
Open-View – This referred to the removal of the top section of the control wheel in 1967 models. These had been rectangular; they now became "ram's horn" shaped, thus not blocking the instrument panel as much.
Quick-Scan – Cessna introduced a new instrument panel layout in the 1960s, and this buzzword was to indicate that Cessna's panels were ahead of the competition.
Nav-O-Matic – The name of the Cessna autopilot system, which implied the system was relatively simple.
Camber-Lift – A marketing name used to describe Cessna aircraft wings starting in 1972, when the aerodynamics designers at Cessna added a slightly drooped leading edge to the standard NACA 2412 airfoil used on most of the light aircraft fleet. Writer Joe Christy described the name as "stupid" and added, "Is there any other kind [of lift]?"
Stabila-Tip – Cessna started commonly using wingtip fuel tanks, carefully shaped for aerodynamic effect rather than being tubular-shaped. Tip tanks do have the advantage of reducing the free surface effect of fuel on the balance of the aircraft in rolling manoeuvres.

Aircraft models

In October 2020, Textron Aviation was producing the following Cessna-branded models:

Cessna 172 Skyhawk – high-wing, single piston-engined, four-seat aircraft in production since 1956
Cessna 182 Skylane – high-wing, single piston-engined, four-seat aircraft in production since 1956
Cessna 206 Stationair – high-wing, single piston-engined, six-seat utility aircraft in production since 1962
Cessna 208 Caravan – high-wing single-turboprop utility aircraft in production since 1984
Cessna 408 SkyCourier – high-wing twin-turboprop utility aircraft in production since 2022
Cessna Citation family – twin-engined business jets:
Cessna Citation 525 M2/CJ series – in production since 1991
Cessna Citation 560XL Excel – in production since 1996
Cessna Citation 680 Sovereign – out of production since 2021
Cessna Citation 680A Latitude – in production since 2014
Cessna Citation 700 Longitude – in production since 2019

References

External links

Mort Brown Cessna Special Collection – Personal collection of documents belonging to a former chief test pilot

Aircraft manufacturers of the United States
Manufacturing companies based in Kansas
General Dynamics
Textron
Companies based in Wichita, Kansas
American companies established in 1927
Vehicle manufacturing companies established in 1927
1927 establishments in Kansas
Collier Trophy recipients
1985 mergers and acquisitions
1992 mergers and acquisitions
https://en.wikipedia.org/wiki/Carnivore
Carnivore
A carnivore, or meat-eater (from Latin caro, genitive carnis, meaning "meat" or "flesh", and vorare, meaning "to devour"), is an animal or plant whose food and energy requirements derive from animal tissues (mainly muscle, fat and other soft tissues), whether through hunting or scavenging.

Nomenclature

Mammal order

The technical term for mammals in the order Carnivora is carnivoran, and they are so named because most member species in the group have a carnivorous diet; the similarity of the name of the order and the name of the diet causes confusion. Many but not all carnivorans are meat eaters; a few, such as the large and small cats (Felidae), are obligate carnivores (see below). Other carnivoran families are highly variable. Among the ursids, for example, the Arctic polar bear eats meat almost exclusively (more than 90% of its diet is meat), while almost all other bear species are omnivorous, and one species, the giant panda, is nearly exclusively herbivorous. Dietary carnivory is not a distinguishing trait of the order: many mammals with highly carnivorous diets are not members of the order Carnivora. Cetaceans, for example, all eat other animals, but are paradoxically members of the almost exclusively plant-eating hooved mammals.

Carnivorous diet

Animals that depend solely on animal flesh for their nutrient requirements are called hypercarnivores or obligate carnivores, while those that also consume non-animal food are called mesocarnivores, facultative carnivores, or omnivores (there are no clear distinctions). A carnivore at the top of the food chain (an adult not preyed upon by other animals) is termed an apex predator, regardless of whether it is an obligate or facultative carnivore.

Outside the animal kingdom, there are several genera containing carnivorous plants (predominantly insectivores) and several phyla containing carnivorous fungi (preying mostly on microscopic invertebrates, such as nematodes, amoebae, and springtails).

Subcategories of carnivory

Carnivores are sometimes characterized by their type of prey. For example, animals that eat mainly insects and similar invertebrates are called insectivores, while those that eat mainly fish are called piscivores. Carnivores may alternatively be classified according to the percentage of meat in their diet. The diet of a hypercarnivore consists of more than 70% meat, that of a mesocarnivore 30–70%, and that of a hypocarnivore less than 30%, with the balance consisting of non-animal foods, such as fruits, other plant material, or fungi. Omnivores also consume both animal and non-animal food, and apart from this more general definition, there is no clearly defined ratio of plant to animal material that distinguishes a facultative carnivore from an omnivore.
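Because these bands are defined by simple percentage thresholds, they can be expressed in a few lines of code. The following Python sketch is purely illustrative: the function name, the representation of the diet as a fraction, and the handling of the boundary values are choices made here, not details drawn from the sources above.

```python
def classify_carnivory(meat_fraction: float) -> str:
    """Classify a diet by its fraction of meat, using the rough bands
    described above. The 30% and 70% cutoffs are conventions, not sharp
    biological boundaries; here the exact boundary values are assigned
    to the mesocarnivore band as an arbitrary choice."""
    if not 0.0 <= meat_fraction <= 1.0:
        raise ValueError("meat_fraction must be between 0 and 1")
    if meat_fraction > 0.7:
        return "hypercarnivore"
    if meat_fraction >= 0.3:
        return "mesocarnivore"
    return "hypocarnivore"

# Example: a polar bear's diet is more than 90% meat.
print(classify_carnivory(0.9))   # -> hypercarnivore
print(classify_carnivory(0.5))   # -> mesocarnivore
print(classify_carnivory(0.1))   # -> hypocarnivore
```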
Obligate carnivores

Obligate or "true" carnivores are those whose diet requires nutrients found only in animal flesh. While obligate carnivores might be able to ingest small amounts of plant matter, they lack the physiology required to fully digest it. Some obligate carnivorous mammals will ingest vegetation as an emetic, to self-induce vomiting of food that has upset their stomachs. Obligate carnivores are diverse. The amphibian axolotl consumes mainly worms and larvae in its environment, but will consume algae if necessary. All felids, including the domestic cat, require a diet of primarily animal flesh and organs. Specifically, cats have high protein requirements, and their metabolisms appear unable to synthesize essential nutrients such as retinol, arginine, taurine, and arachidonic acid; thus, in nature, they must consume flesh to supply these nutrients.

Characteristics of carnivores

Characteristics commonly associated with carnivores include strength, speed, and keen senses for hunting, as well as teeth and claws for capturing and tearing prey. However, some carnivores do not hunt and are scavengers, lacking the physical characteristics to bring down prey; in addition, most hunting carnivores will scavenge when the opportunity arises. Carnivores have comparatively short digestive systems, as they are not required to break down the tough cellulose found in plants. Many hunting animals have evolved eyes facing forward, enabling depth perception; this is almost universal among mammalian predators, while most reptile and amphibian predators have eyes facing sideways.

Prehistory of carnivory

Predation (the eating of one living creature by another for nutrition) predates the rise of commonly recognized carnivores by hundreds of millions (perhaps billions) of years. It began with single-celled organisms, before multicellular creatures, and so carnivory predates the clear distinction between plants and animals (herbivory/carnivory).

Proterozoic origin

The earliest predators were microbial organisms, which engulfed or grazed on others. Because the earliest fossil record is the poorest, these first predators could date back anywhere between 1 and over 2.7 Gya (billion years ago). The rise of eukaryotic cells at around 2.7 Gya, the rise of multicellular organisms at about 2 Gya, and the rise of mobile predators (around 600 Mya – 2 Gya, probably around 1 Gya) have all been attributed to early predatory behavior, and many very early remains show evidence of boreholes or other markings attributed to small predator species.

Devonian land-predators

Among more familiar species, the first vertebrate carnivores were fish, and then amphibians that moved onto land. The first tetrapods, or land-dwelling vertebrates, were large amphibious piscivores called labyrinthodonts. They gave rise to insectivorous vertebrates and, later, to predators of other tetrapods. Some scientists assert that Dimetrodon "was the first terrestrial vertebrate to develop the curved, serrated teeth that enable a predator to eat prey much larger than itself." While amphibians continued to feed on fish and later insects, reptiles began exploring two new food types: tetrapods (carnivory) and then plants (herbivory). Carnivory was a natural transition from insectivory for medium and large tetrapods, requiring minimal adaptation; in contrast, a complex set of adaptations was necessary for feeding on highly fibrous plant materials.

Mesozoic

In the Mesozoic, some theropod dinosaurs such as Tyrannosaurus rex are thought likely to have been obligate carnivores. Though the theropods were the larger carnivores, several carnivorous mammal groups were already present. Most notable are the gobiconodontids, the triconodontid Jugulator, the deltatheroidans and Cimolestes. Many of these, such as Repenomamus, Jugulator and Cimolestes, were among the largest mammals in their faunal assemblages, capable of attacking dinosaurs.
Cenozoic

In the early-to-mid Cenozoic, the dominant predator forms were mammals: hyaenodonts, oxyaenids, entelodonts, ptolemaiidans, arctocyonids and mesonychians, representing a great diversity of eutherian carnivores in the northern continents and Africa. In South America, sparassodonts were dominant, while Australia saw the presence of several marsupial predators, such as the dasyuromorphs and thylacoleonids. From the Miocene to the present, the dominant carnivorous mammals have been carnivoramorphs.

Most carnivorous mammals, from dogs to deltatheridiums, share several dental adaptations, such as carnassialiform teeth, long canines and similar tooth replacement patterns. Most aberrant are the thylacoleonids, with a diprotodontan dentition completely unlike that of any other mammal, and eutriconodonts like gobiconodontids and Jugulator, with a three-cusp anatomy which nevertheless functioned similarly to carnassials.

See also

Mesocarnivore
Herbivore

References

Further reading

Biological interactions
Animals by eating behaviors
Ethology
https://en.wikipedia.org/wiki/English%20in%20the%20Commonwealth%20of%20Nations
English in the Commonwealth of Nations
The use of the English language in current and former member countries of the Commonwealth of Nations was largely inherited from British colonisation, with some exceptions. English serves as the medium of inter-Commonwealth relations.

Many regions, notably Australia, Brunei, Canada, Hong Kong, India, Ireland, Malaysia, New Zealand, Pakistan, Singapore, South Africa, Sri Lanka and the Caribbean, have developed their own native varieties of the language. Mozambique, which joined the Commonwealth in 1996, is a special case: English is widely spoken there despite its being a former Portuguese colony (though the port of Chinde was leased by Britain from 1891 to 1923). Likewise, in Cyprus, English does not have official status but is widely used as a lingua franca.

English is spoken as a first or second language in most of the Commonwealth. Written English in the current and former Commonwealth generally favours British spelling as opposed to American, with some exceptions, particularly in Canada, where there are strong influences from neighbouring American English. Apart from Canada and Australia, few Commonwealth countries have produced their own variant English dictionaries and style guides; most rely on those produced in other countries.

Native varieties

Southern Hemisphere native varieties of English began to develop during the 18th century, with the colonisation of Australasia and South Africa. Australian English and New Zealand English are closely related to each other and share some similarities with South African English (though South African English has unique influences from indigenous African languages, as well as the Dutch influences it inherited along with the development of Afrikaans from Dutch).

Canadian English contains elements of British English and American English, as well as many Canadianisms and some French influences. It is the product of several waves of immigration and settlement, from Britain, Ireland, France, the United States, and around the world, over a period of almost two centuries. Modern Canadian English has taken significant vocabulary and spelling from the shared political and social institutions of Commonwealth countries.

Caribbean

Caribbean English is influenced by the English-based Creole varieties spoken in the region, but the two are not one and the same. There is a great deal of variation in the way English is spoken, with a "Standard English" at one end of a bipolar linguistic continuum and Creole languages at the other. These dialects have roots in 17th-century British and Irish English, and African languages, plus localised influences from other colonial languages including French, Spanish, and Dutch; unlike most native varieties of English, West Indian dialects often tend to be syllable-timed rather than stress-timed.

Non-native varieties

Second-language varieties of English in Africa and Asia have often undergone "indigenisation"; that is, each English-speaking community has developed (or is in the process of developing) its own standards of usage, often under the influence of local languages. These dialects are sometimes referred to as New Englishes (McArthur, p. 36); most of them inherited non-rhoticity from Southern British English.

Africa

Several dialects of West African English exist, with a great deal of regional variation and some influence from indigenous languages. West African English tends to be syllable-timed, and its phoneme inventory is much simpler than that of Received Pronunciation; this sometimes affects mutual intelligibility with native varieties of English.
A distinctive East African English, often with significant influences from Bantu languages such as Swahili, is spoken in countries such as Kenya and Tanzania, particularly in Nairobi and other cities where there is an expanding middle class, for whom English is increasingly used in the home as the first language. Small communities of native English speakers can be found in Zimbabwe, Botswana, and Namibia; the dialects spoken are similar to native South African English.

Asia

Indian subcontinent

English was introduced to the subcontinent by the British Raj. Among the post-independence countries of the partition, India has the largest English-speaking population in the Commonwealth, although comparatively few speakers of Indian English are first-language speakers. The same is true of English spoken in other parts of South Asia, e.g. Pakistani English, Sri Lankan English, Bangladeshi English and Myanmar English. South Asian English phonology is highly variable; stress, rhythm and intonation are generally different from those of native varieties. There are also several peculiarities at the levels of morphology, syntax and usage, some of which can also be found among educated speakers.

Malay Archipelago

Southeast Asian English comprises Singapore English, Malaysian English, and Brunei English; it features some influence from Malay and Chinese languages, as well as Indian English.

Hong Kong ceased to be part of the Commonwealth in 1997. Nonetheless, the English language there still enjoys status as an official language.

See also

British English
American English
EF English Proficiency Index
English-speaking world

Other languages:
Community of Portuguese Language Countries
Dutch Language Union
La Francophonie
Latin Union
List of countries by spoken languages

References

McArthur, Tom (2002). The Oxford Guide to World English. Oxford: Oxford University Press.
Peters, Pam (2004). The Cambridge Guide to English Usage. Cambridge: Cambridge University Press.
Trudgill, Peter & Hannah, Jean (2002). International English: A Guide to the Varieties of Standard English, 4th ed. London: Arnold.

Commonwealth of Nations
Symbols of the Commonwealth of Nations
https://en.wikipedia.org/wiki/Cadillac%2C%20Michigan
Cadillac, Michigan
Cadillac is a city in and the county seat of Wexford County in the U.S. state of Michigan. The population was 10,371 at the 2020 census, which makes it the third most populated city in the Northern Michigan region after Traverse City and Alpena. Cadillac was settled as early as 1871 and was known as the village of Clam Lake before incorporating as a city in 1877. The city lies at the junction of several major highways, including U.S. Route 131, M-55, and M-115. The geographic center of Michigan is approximately north-northwest of Cadillac. Cadillac is the central city of the Cadillac micropolitan area, which includes all of Wexford County and Missaukee County to the east and had a population of 48,725 at the 2020 census.

History

Village of Clam Lake

European explorers and fur traders visited this area from the 18th century, most of them initially French and French-Canadians who traded with regional Native Americans. More permanent communities were not established until the late 19th century, when initial settlements developed from logging camps and the logging industry.

In 1871, the first sawmill began operations at Clam Lake. Originally called the Pioneer Mill, it was built by John R. Yale. That same year, George A. Mitchell, a prominent local banker and railroad entrepreneur, and Adam Gallinger, a local carpenter, formed the Clam Lake Canal Improvement and Construction Company. Two years later, the Clam Lake Canal was constructed between Big and Little Clam lakes, known today as Lake Mitchell and Lake Cadillac. Sawmill owners used the canal to transport timber from Big Clam Lake to the mills and railroad sites on Little Clam Lake. The Grand Rapids and Indiana Railroad (G.R. & I. Railroad) had reached the area in 1872.

The settlement of Clam Lake was incorporated as a village in 1874, and George Mitchell was elected as the first mayor. The village was incorporated as a city in 1877 and renamed Cadillac, after Antoine Laumet de La Mothe, sieur de Cadillac, the French colonist who started the first permanent settlement at Detroit in 1701.

Battle of Manton

The Wexford County seat of government, originally located in Sherman, was moved to Manton in 1881 as the result of a compromise between the feuding residents of Cadillac and Sherman. Cadillac partisans, however, won the county seat in a county-wide vote in April 1882. The day following the election, a sheriff's posse left the city for Manton by special train to seize the county records. After they arrived and collected a portion of the materials, an angry crowd confronted the Cadillac men and drove them out of town. When the sheriff returned to Cadillac, he encountered a force of several hundred armed men, a group that reportedly included a brass band. The sheriff's force, some of whom may have been intoxicated, traveled back to Manton to seize the remaining records. Although Manton residents confronted the Cadillac men and barricaded the courthouse, the posse successfully seized the documents. They returned to Cadillac in dubious glory.

City of Cadillac

In 1878, Ephraim Shay perfected his Shay locomotive, which was particularly effective in its ability to climb steep grades, maneuver sharp turns, and accommodate imperfections in railroad tracks. Cadillac was home to the Michigan Iron Works Company, which manufactured the Shay locomotive for a short time in the early 1880s.

The lumber industry continued to dominate the city, attracting a large immigrant labor force, most of whom were Swedish.
(Cadillac later made sister-city arrangements with Mölnlycke, Sweden, and Rovaniemi, Finland.) In 1899, the Cadillac Club formed, the forerunner of the Cadillac Area Chamber of Commerce. Gradually, various manufacturing firms found success in Cadillac. By the early 20th century, with the lumber depleted, the timber industry was in decline. Industrial development soon dominated the local economy, and it continues to do so today. Cadillac's range of industries includes the manufacture of pleasure boats, automotive parts, water-well components, vacuum cleaners, and rubber products.

In 1936, during the Great Depression, the U.S. Forest Service and the Civilian Conservation Corps developed the Caberfae Ski Area as an investment in future economic development. This resulted in promotion of the area as a tourist center. Caberfae remains in operation today as the oldest ski resort in the Midwest. Tourism and outdoor recreation have since become an important sector of Cadillac's economy. In the summer, tourists travel to the city and region for boating, fishing, hiking, mountain biking, and camping. During the fall, hunting and color tours are popular. The winter is possibly the busiest season, when the area fills with downhill skiers, cross-country skiers, ice-fishers, snow-shoers and, most of all, snowmobilers. The North American Snowmobile Festival (NASF) is held on frozen Lake Cadillac every winter.

Thirsty's, a gas station on M-55 west of Cadillac, was the home of Samantha, or "Sam the Bear", from the 1970s through the late 1990s, when Sam died of old age. Sam was the only brown bear in captivity in the US at the time to hibernate naturally. Sam lived in a large cage in front of the gas station and was fed ice cream cones by tourists every summer.

In October 1975, the rock group Kiss visited Cadillac and performed at the Cadillac High School gymnasium. They played the concert to honor the Cadillac High School football team. In previous years, the team had compiled a record of sixteen consecutive victories, but the 1974 squad opened the season with two losses. The assistant coach, Jim Neff, an English teacher and rock'n'roll fan, thought to inspire the team by playing Kiss music in the locker room. He also connected the team's game plan, K-I-S-S or "Keep It Simple, Stupid", with the band. The team went on to win seven straight games and their conference co-championship. After learning of their association with the team's success, the band decided to visit the school and play for the homecoming game.

Historic landmarks

Cadillac maintains a number of state historic landmarks, most marked with a green "Michigan Historical Marker" sign that includes a description of the landmark. Six sites within the city are marked: Cadillac Carnegie Library, Charles T. Mitchell House, Clam Lake Canal, Cobbs & Mitchell Building, Cobbs & Mitchell No. 1, and the Shay Locomotive. Two more are in the near-Cadillac area: Caberfae Ski Resort and Greenwood Disciples of Christ Church; another two in surrounding Wexford County mark the Battle of Manton and the First Wexford County Court House.

Geography

Topography

Lake Cadillac is entirely within the city limits. The larger Lake Mitchell is nearby on the west side of the city, with shoreline within the city's municipal boundary. The lakes were connected by a stream, which was replaced in 1873 by the Clam Lake Canal.
The canal was featured on Ripley's Believe It or Not in the 1970s for a curious phenomenon: in winter, the canal freezes before the lakes do, and then, after the lakes freeze, the canal thaws and remains unfrozen for the rest of the winter.

Cadillac is located at the eastern edge of what is now managed as the Manistee National Forest. The surrounding area is heavily wooded, with mixed hardwood and conifer forests. Christmas tree farming has been important to the area's agricultural industry; Cadillac was chosen in 1988 to donate the holiday tree installed on the lawn of the U.S. Capitol building in Washington, D.C.

The area surrounding Cadillac is primarily rural and is considered to be part of Northern Michigan. Given the small size of nearby communities, the city is a major commercial and industrial hub of the region.

Cityscape

The commercial center of the city is located on the eastern edge of Lake Cadillac. Most downtown buildings range from two to five stories in height. Many face Mitchell Street, the city's tree-lined main street and traditional corridor of travel through town. The downtown contains a movie theater, gift shops, restaurants, a bookstore, specialty food stores, jewelers, clothing retailers, and various other businesses. The Courthouse Hill Historic District, recognized in April 2005, lies adjacent to the city's commercial center. The district contains a number of large Victorian-style residences built by the lumber barons and businessmen who helped develop the city in the 1870s. Population and building density are highest in this area.

On the western bank of Lake Cadillac, where M-55 intersects M-115, is what is locally referred to as Cadillac West. This is a small commercial district bordering Mitchell State Park and the two lakes; it caters mostly to tourists and contains a number of motels and restaurants.

Along the northern and southern stretches of the lake are the main residential areas of the city. They are generally of low to moderate density, characterized primarily by single-family structures.

Climate

Cadillac experiences a typical northern Michigan climate, undergoing temperate seasonal changes influenced by the presence of Lake Michigan and the resulting lake effect. Winters are generally cold with large amounts of snowfall, and summers are warm. Snowfall typically occurs between the months of November and March. According to the Köppen climate classification system, Cadillac has a humid continental climate, abbreviated "Dfb" on climate maps.

Superfund sites

Cadillac has two Superfund sites, according to the U.S. Environmental Protection Agency. One is located at 1100 Wright Street, the former site of Kysor Industrial Corp, whose operations resulted in toxic wastes. The other is located at 1002 6th Street, the former site of Northernaire Plating, whose operations also produced hazardous wastes that contaminated the site.

Demographics

2010 census

As of the census of 2010, there were 10,355 people, 4,280 households, and 2,625 families residing in the city, with 4,927 housing units. The racial makeup of the city was 95.6% White, 0.5% African American, 0.6% Native American, 1.0% Asian, 0.4% from other races, and 1.8% from two or more races. Hispanic or Latino residents of any race were 1.8% of the population.
There were 4,280 households, of which 32.9% had children under the age of 18 living with them, 39.2% were married couples living together, 16.4% had a female householder with no husband present, 5.7% had a male householder with no wife present, and 38.7% were non-families. 32.0% of all households were made up of individuals, and 14% had someone living alone who was 65 years of age or older. The average household size was 2.34 and the average family size was 2.90.

The median age in the city was 36.5 years. 24.7% of residents were under the age of 18; 10% were between the ages of 18 and 24; 24.4% were from 25 to 44; 23.8% were from 45 to 64; and 17.1% were 65 years of age or older. The gender makeup of the city was 47.4% male and 52.6% female.

2000 census

As of the census of 2000, there were 10,000 people, 4,118 households, and 2,577 families residing in the city, with 4,466 housing units. The racial makeup of the city was 96.55% White, 0.21% Black or African American, 0.92% Native American, 0.63% Asian, 0.03% Pacific Islander, 0.28% from other races, and 1.38% from two or more races. 1.18% of the population were Hispanic or Latino of any race.

There were 4,118 households, out of which 32.2% had children under the age of 18 living with them, 43.9% were married couples living together, 14.2% had a female householder with no husband present, and 37.4% were non-families. 31.8% of all households were made up of individuals, and 14.4% had someone living alone who was 65 years of age or older. The average household size was 2.37 and the average family size was 2.96.

In the city, the population was spread out, with 26.2% under the age of 18, 9.6% from 18 to 24, 27.9% from 25 to 44, 19.6% from 45 to 64, and 16.7% who were 65 years of age or older. The median age was 36 years. For every 100 females, there were 91.4 males. For every 100 females age 18 and over, there were 84.4 males.

The median income for a household in the city was $29,899, and the median income for a family was $36,825. Males had a median income of $29,773 versus $21,283 for females. The per capita income for the city was $16,801. About 10.9% of families and 13.7% of the population were below the poverty line, including 15.4% of those under age 18 and 13.3% of those age 65 or over.

Government

Cadillac was incorporated as a city in 1877. It is a home rule city with a council-manager form of government. Current council members are Shari Spoelman, Antoinette Schippers, Arthur Stevens, James Dean and Carla Filkins (mayor). The present city manager is Marcus Peccia. Cadillac is located in Michigan's 4th congressional district, represented by Republican John Moolenaar.

Economy

Manufacturing has been the greatest employer in Cadillac since the logging era. More than 26% of the city's labor force is employed in manufacturing. Three industrial parks are located within the city limits, comprising 7% of the total land use in Cadillac; their operations generate 47% of the city's tax base. Much of the city's economic performance is determined by the fortunes of local industry.

The center of the city is generally perceived to have a "small-town feel". In the summer, the downtown fills with tourists, many from southern Michigan. The city center is one block from Lake Cadillac, and for visitors who dock at the public docks it is nearly as accessible by boat as by car.
The city's immediate proximity to two lakes, as well as Manistee National Forest, Pere Marquette State Forest, Mitchell State Park and a number of major highways, has established tourism as a significant sector of the local economy. During the winter months, Lake Cadillac and Lake Mitchell freeze over and the city becomes covered with snow. Cadillac is connected to a number of trail systems popular with winter recreation enthusiasts and integrates unusually well into the corridors of travel created by snowmobilers.

Cadillac is also known as Chestnut Town, USA. The local area has a relatively high number of American chestnut trees, planted by pioneers from New York and Pennsylvania who settled in western Michigan. A blight in the early 20th century killed nearly every American chestnut tree, but those in western Michigan had developed a mysterious resistance and survived.

Top employers

The city's 2019 Comprehensive Annual Financial Report lists the principal employers in the city.

Education

Cadillac's public education system has a total of 10 schools, with approximately 3,100 students and 166 teachers, a student:teacher ratio of 19.1:1. Cadillac has 4 private primary and secondary schools with approximately 394 students, 20 teachers and a student:teacher ratio of 20:1.

Cadillac Area Public Schools (CAPS)

The city has two high schools: Cadillac High School and Innovation High School. The area also has a junior high school, covering grades 7 and 8, located adjacent to the high school, and a middle school, Mackinaw Trail Middle School, covering grades 5 and 6. There are four elementary schools: Forest View Elementary, Franklin Elementary, Kenwood Elementary, and Lincoln Elementary.

Cadillac also has an alternative high school, located in the building that formerly housed Cooley Elementary School. Adult high school and GED courses are offered there as well. As a whole, the programs at Cooley are part of a curriculum that helps individuals overcome exceptional obstacles to their educational and workforce goals.

Vocational career training is available free of charge to high school students in Cadillac and nearby schools at the Wexford–Missaukee Intermediate School District (ISD) Career Tech Center (formerly the Wexford-Missaukee Vocational Center, or Voc-Tech). Students are bussed for part of the day to the Career Tech Center from their respective schools and receive credits toward high school graduation. Students are also able to earn certification in a chosen trade. Courses include:

Agriscience
Allied Health Technologies
Automotive
Building Trades
Business Management and Administration
Computers and Electronics
Digital Media Productions
Electrical Occupations (formerly Robotics and Automation)
Heavy Equipment
Hospitality Retailing and Entrepreneurship
Machine Trades
Metal Fabrication
Power Sports and Equipment

Cosmetology is offered through the Career Tech Center, but at an off-campus location in downtown Cadillac. Adults can attend the vocational or cosmetology school with tuition or financial aid for certification.

Cadillac hosts the Wexford-Missaukee ISD Special Education program for residents of the two counties who are in need of special services. This school is on the same campus as the Career Tech Center.

The class of 2006 was the largest class to go through Cadillac Public Schools.

Private schools

Cadillac offers several options for private religious education. Cadillac Heritage Christian offers nondenominational Christian education from pre-K through 12th grade.
It is a coed school with 98 students and a teacher:student ratio of 1:11. Graduating classes are typically between 3 and 12 students.

Northview Adventist School has 16 students in grades 1–10 as of 2020. It is a coed Seventh-day Adventist school operating in a one-room format, with one teacher who doubles as the principal and one or two teacher's assistants. A number of volunteers run a library, a band, and physical education, among other programs. The school does not participate in competitive sports.

Noah's Ark Day School is a small alternative non-denominational Christian school for students in pre-K through first grade only. It is coed, with 42 students and 1 teacher.

Cadillac's largest and best-known private school is St. Ann School, a coed private Roman Catholic school with 236 students in grades pre-K through 7. The teacher:student ratio is 1:26. St. Ann is a member of the National Catholic Education Association. No Catholic high school education is offered at St. Ann School, and students typically attend public school for grades 8–12.

Training schools

Northwoods Aviation, located at Wexford County Airport, offers training programs for piloting and servicing aircraft, including primary instruction for those interested in sport pilot, private, and commercial certificates.

The Cadillac Institute of Cosmetology (formerly the Cadillac Academy of Beauty) is a full-service teaching salon in downtown Cadillac that offers training for general cosmetologists and specialized technicians to high school students through a partnership with the Wexford-Missaukee Intermediate School District. Training is also available to adult students through private courses on a tuition basis. Upon completion of the program, students are qualified to take the state board exam to become a licensed cosmetologist or specialty technician.

Colleges

The Baker College-Cadillac campus lies just outside the City of Cadillac. The school has an enrollment of more than 1,300 students and offers associate and bachelor's degrees, in addition to professional certifications.

Transportation

Major highways

Cadillac is situated at the confluence of three highways: US 131, M-55 and M-115. Prior to 2001, the northern end of the freeway portion of US 131 was located at the southern entrance to Cadillac. With the construction of a bypass, the US 131 freeway was extended around the east side of the city, and the former route of the highway through downtown Cadillac was redesignated as BUS US 131. In the city, BUS US 131 is named Mitchell Street, after George Mitchell, but may be referred to as Main Street.

US 131 bypasses the city to the east. The freeway continues southerly toward Big Rapids and Grand Rapids, and northerly toward Manton before transitioning to a two-lane highway for the remainder of the distance to Petoskey.
BUS US 131 is a loop route through downtown, running largely along the former route of US 131 through the city.
M-55 is a major two-lane east–west route across the state, connecting with Manistee on the west and Lake City, Houghton Lake, West Branch, and Tawas City on the east.
M-115, another major two-lane route, runs diagonally from Clare to the southeast to Frankfort to the northwest.

Rail

The city is serviced by rail via the Great Lakes Central Railroad. This is primarily a freight line, although passenger service is expected in the future.

Public transit

Cadillac and Wexford County jointly operate a local public bus service.
The Cadillac/Wexford Transit Authority (CWTA) is a demand-response public transportation system that has been in operation since 1974. Indian Trails provides daily intercity bus service between Grand Rapids and St. Ignace, stopping in Cadillac.

Non-motorized transportation

The White Pine Trail's northern terminus is in Cadillac. The trail, which originates in Comstock Park, follows an abandoned railroad bed into the center of the city. The trail is paved for the 16 miles from the village of Leroy north to Cadillac.

Local media

Newspapers

The Cadillac News

Radio

WTCM (580 AM, Traverse City) – news and talk
WLDR (1210 AM, Kingsley-Traverse City) – classic country
WATT (1240 AM) – news and talk
WLJW (1370 AM) – religious
WIAA (88.7 FM, Interlochen) – classical music "IPR Music Radio"
WOLW (91.1 FM) – religious "Northern Christian Radio"
WGCP (91.9 FM) – religious Strong Tower Radio
WJZQ (92.9 FM) – Top 40 "Z-93"
WKAD (93.7 FM) – "The Ticket" (Fox Sports Radio)
WLXV (96.7 FM) – 96.7 The Bull
WUPS (98.5 FM, Houghton Lake) – classic hits
WLDR (101.9 FM, Traverse City) – country music "101.9 Sunny Country"
WTCM (103.5 FM, Traverse City) – country music
WAIR (104.9 FM) – contemporary Christian "Smile-FM"
WCKC (107.1 FM) – classic rock "The Drive"
WCDY (107.9 FM) – hot AC "107.9 CDY"

Television

WPBN (channel 7, Traverse City) – NBC, branded as "TV 7 & 4"
WWTV (channel 9) – CBS, branded as "9&10 News"
WMNN (channel 26) – flagship station of national news network NewsNet, branded as "NewsNet Northern Michigan"
WCMV (channel 27) – PBS, satellite of WCMU in Mount Pleasant, Michigan
WGTU (channel 29, Traverse City) – ABC, branded as "ABC 29 & 8"
WFQX (channel 32) – Fox, branded as "Local 32"
WFQX-DT2 (channel 32.2) – The CW Plus, branded as "The CW Northern Michigan"
W23EB-D (channels 23.1–23.7) – 3ABN, Amazing Facts TV, Strong Tower Radio

Notable people

Jim Bowman, NFL player
Jan Harold Brunvand, American folklorist; born in Cadillac
Larry Joe Campbell, actor (According to Jim); born in Cadillac
George A. Mitchell, father of the city of Cadillac (first developer)
Jackie Swanson, actress (Cheers); attended high school in Cadillac
Guy Vander Jagt, U.S. congressman from Michigan's 9th congressional district; born in Cadillac
Luke Winslow-King, musician; born in Cadillac
Ad Wolgast, professional boxer; born in Cadillac

References

Further reading

External links

City of Cadillac
Cadillac Area Chamber of Commerce

Cities in Wexford County, Michigan
County seats in Michigan
Populated places established in 1872
1872 establishments in Michigan
https://en.wikipedia.org/wiki/Crete
Crete
Crete (Greek: Κρήτη) is the largest and most populous of the Greek islands, the 88th largest island in the world and the fifth largest island in the Mediterranean Sea, after Sicily, Sardinia, Cyprus, and Corsica. Crete rests south of the Greek mainland and southwest of Anatolia. Crete has an area of 8,336 km2 (3,219 sq mi) and a coastline of 1,046 km (650 mi). It bounds the southern border of the Aegean Sea, with the Sea of Crete (or North Cretan Sea) to the north and the Libyan Sea (or South Cretan Sea) to the south.

Crete and a number of islands and islets that surround it constitute the Region of Crete, which is the southernmost of the 13 top-level administrative units of Greece, and the fifth most populous of Greece's regions. Its capital and largest city is Heraklion, on the north shore of the island. As of 2020, the region had a population of 636,504. The Dodecanese are located to the northeast of Crete, while the Cyclades are situated to the north, separated by the Sea of Crete. The Peloponnese is to the region's northwest.

Humans have inhabited the island since at least 130,000 years ago, during the Paleolithic age. Crete was the centre of Europe's first advanced civilization, the Minoans, from 2700 to 1420 BC. The Minoan civilization was overrun by the Mycenaean civilization from mainland Greece. Crete was later ruled by Rome, then successively by the Byzantine Empire, Andalusian Arabs, the Venetian Republic, and the Ottoman Empire. In 1898 Crete, whose people had for some time wanted to join the Greek state, achieved independence from the Ottomans, formally becoming the Cretan State. Crete became part of Greece in December 1913.

The island is mostly mountainous, and its character is defined by a high mountain range crossing from west to east. It includes Crete's highest point, Mount Ida; the range of the White Mountains (Lefka Ori), with 30 summits above 2,000 metres in altitude; and the Samaria Gorge, a World Biosphere Reserve. Crete forms a significant part of the economy and cultural heritage of Greece, while retaining its own local cultural traits (such as its own poetry and music). The Nikos Kazantzakis airport at Heraklion and the Daskalogiannis airport at Chania serve international travelers. The palace of Knossos, a Bronze Age settlement and ancient Minoan city, is also located in Heraklion.

Name

The earliest references to the island of Crete come from texts from the Syrian city of Mari dating from the 18th century BC, where the island is referred to as Kaptara. This is repeated later in Neo-Assyrian records and the Bible (Caphtor). It was known in ancient Egyptian as Keftiu, strongly suggesting a similar Minoan name for the island.

The current name Crete is first attested in the 15th century BC in Mycenaean Greek texts, written in Linear B, through the words ke-re-te (later Greek Κρῆτες, plural of Κρής) and ke-re-si-jo (later Greek Κρήσιος, 'Cretan'). In Ancient Greek, the name Crete (Κρήτη) first appears in Homer's Odyssey. Its etymology is unknown. One proposal derives it from a hypothetical Luwian word (compare words meaning 'island' and 'cutting, sliver'). Another proposal suggests that it derives from the ancient Greek word "κραταιή" (krataie̅), meaning strong or powerful, the reasoning being that Crete was the strongest thalassocracy during ancient times. In Latin, the name of the island became Creta.
The original Arabic name of Crete was Iqrīṭiš (from the Greek Κρήτη), but after the Emirate of Crete established its new capital at rabḍ al-ḫandaq (modern Heraklion), both the city and the island became known as Chandax (Χάνδαξ) or Chandakas (Χάνδακας), which gave Latin, Italian, and Venetian Candia, from which were derived French Candie and English Candy or Candia. Under Ottoman rule, in Ottoman Turkish, Crete was called Girit. In the Hebrew Bible, Crete is referred to as kretim.

Physical geography

Crete is the largest island in Greece and the fifth largest island in the Mediterranean Sea. It is located in the southern part of the Aegean Sea, separating the Aegean from the Libyan Sea.

Island morphology

The island has an elongated shape: it spans 260 km (160 mi) from east to west, is 60 km (37 mi) at its widest point, and narrows to as little as 12 km (7.5 mi) close to Ierapetra. Crete covers an area of 8,336 km2 (3,219 sq mi), with a coastline of 1,046 km (650 mi); to the north, it broaches the Sea of Crete; to the south, the Libyan Sea; in the west, the Myrtoan Sea; and toward the east, the Carpathian Sea. It lies south of the Greek mainland.

Mountains and valleys

Crete is mountainous, and its character is defined by a high mountain range crossing from west to east, formed by six different groups of mountains:

The White Mountains or Lefka Ori
The Idi Range (Psiloritis)
Asterousia Mountains
Kedros
The Dikti Mountains
Thripti

These mountains lavish Crete with valleys, such as Amari valley; fertile plateaus, such as Lasithi plateau, Omalos and Nidha; caves, such as Gourgouthakas, Diktaion, and Idaion (the birthplace of the ancient Greek god Zeus); and a number of gorges.

Mountains in Crete are the object of tremendous fascination both for locals and tourists. The mountains have been seen as a key feature of the island's distinctiveness, especially since the time of Romantic travellers' writing. Contemporary Cretans distinguish between highlanders and lowlanders; the former often claim to reside in places affording a higher/better climatic but also moral environment. In keeping with the legacy of Romantic authors, the mountains are seen as having determined their residents' 'resistance' to past invaders, which relates to the oft-encountered idea that highlanders are 'purer' in terms of fewer intermarriages with occupiers. For residents of mountainous areas, such as Sfakia in western Crete, the aridness and rockiness of the mountains is emphasised as an element of pride and is often compared to the allegedly soft-soiled mountains of other parts of Greece or the world.

Gorges, rivers and lakes

The island has a number of gorges, such as the Samariá Gorge, Imbros Gorge, Kourtaliotiko Gorge, Ha Gorge, Platania Gorge, the Gorge of the Dead (at Kato Zakros, Sitia) and Richtis Gorge, with its waterfall, at Exo Mouliana in Sitia.

The rivers of Crete include the Ieropotamos River, the Koiliaris, the Anapodiaris, the Almiros, the Giofyros, and Megas Potamos. There are only two freshwater lakes in Crete: Lake Kournas and Lake Agia, both in the Chania regional unit. Lake Voulismeni, on the coast at Aghios Nikolaos in Lasithi, was formerly a freshwater lake but is now connected to the sea. Three artificial lakes created by dams also exist in Crete: the lake of Aposelemis Dam, the lake of Potamos Dam, and the lake of Mpramiana Dam.

Surrounding islands

A large number of islands, islets, and rocks hug the coast of Crete. Many are visited by tourists; some are visited only by archaeologists and biologists. Some are environmentally protected.
A small sample of the islands includes:
Gramvousa (Kissamos, Chania), the pirate island opposite the Balos lagoon
Elafonisi (Chania), which commemorates a shipwreck and an Ottoman massacre
Chrysi island (Ierapetra, Lasithi), which hosts the largest natural Juniperus macrocarpa forest in Europe
Paximadia island (Agia Galini, Rethymno), where the god Apollo and the goddess Artemis were born
The Venetian fort and leper colony at Spinalonga, opposite the beach and shallow waters of Elounda (Agios Nikolaos, Lasithi)
The Dionysades islands, which lie in an environmentally protected region together with the Palm Beach Forest of Vai in the municipality of Sitia, Lasithi
Off the south coast, the island of Gavdos is located south of Hora Sfakion and is the southernmost point of Europe.
Climate
Crete straddles two climatic zones, the Mediterranean and the North African, mainly falling within the former. As such, the climate in Crete is primarily Mediterranean. The atmosphere can be quite humid, depending on the proximity to the sea, while winter is fairly mild. Snowfall is common on the mountains between November and May, but rare in the low-lying areas. While some mountain tops are snow-capped for most of the year, near the coast snow only stays on the ground for a few minutes or hours. However, a truly exceptional cold snap swept the island in February 2004, during which the whole island was blanketed with snow. During the Cretan summer, average temperatures reach the high 20s to low 30s Celsius (mid 80s to mid 90s Fahrenheit), with maxima touching the upper 30s to mid 40s.
The south coast, including the Mesara Plain and the Asterousia Mountains, falls in the North African climatic zone, and thus enjoys significantly more sunny days and higher temperatures throughout the year. There, date palms bear fruit, and swallows remain year-round rather than migrating to Africa. The fertile region around Ierapetra, on the southeastern corner of the island, is renowned for its exceptional year-round agricultural production, with all kinds of summer vegetables and fruit produced in greenhouses throughout the winter. Western Crete (Chania province) receives more rain, and its soils suffer more erosion, than the eastern part of Crete. According to the data of the Hellenic National Meteorological Service, southern Crete receives the most sunshine in Greece, locally more than 3,257 hours of sunshine per year.
Human geography
Crete is the most populous island in Greece, with a population of more than 600,000 people. Approximately 42% live in Crete's main cities and towns, whilst 45% live in rural areas.
Administration
Crete with its nearby islands forms the Crete Region, one of the 13 regions of Greece, which were established in the 1987 administrative reform. Under the 2010 Kallikratis plan, the powers and authority of the regions were redefined and extended. The region is based at Heraklion and is divided into four regional units (pre-Kallikratis prefectures). From west to east these are: Chania, Rethymno, Heraklion, and Lasithi. These are further subdivided into 24 municipalities. The region's governor has been, since 1 January 2011, Stavros Arnaoutakis, who was elected in the November 2010 local administration elections for the Panhellenic Socialist Movement.
Cities
Heraklion is the largest city and capital of Crete, holding more than a fourth of the island's population. Chania was the capital until 1971.
The principal cities are:
Heraklion (Iraklion or Candia) (144,422 inhabitants)
Chania (Haniá) (53,910 inhabitants)
Rethymno (34,300 inhabitants)
Ierapetra (23,707 inhabitants)
Agios Nikolaos (20,679 inhabitants)
Sitia (14,338 inhabitants)
Demographics
The region's population shrank by 5,705 people between 2011 and 2021, a loss of 0.9%.
Economy
The economy of Crete is predominantly based on services and tourism. However, agriculture also plays an important role, and Crete is one of the few Greek islands that can support itself independently of a tourism industry. The economy began to change visibly during the 1970s as tourism gained in importance. Although an emphasis remains on agriculture and stock breeding, because of the climate and terrain of the island, there has been a drop in manufacturing and an observable expansion in the service industries (mainly tourism-related). All three sectors of the Cretan economy (agriculture/farming, processing-packaging, and services) are directly connected and interdependent. The island has a per capita income much higher than the Greek average, and unemployment is at approximately 4%, one-sixth of that of the country overall.
As in many regions of Greece, viticulture and olive groves are significant; oranges, citrons and avocados are also cultivated. Until recently there were restrictions on the import of bananas to Greece, so bananas were grown on the island, predominantly in greenhouses. Dairy products are important to the local economy, and there are a number of speciality cheeses such as mizithra, anthotyros, and kefalotyri. 20% of Greek wine is produced in Crete, mostly in the region of Peza.
The gross domestic product (GDP) of the region was €9.4 billion in 2018, accounting for 5.1% of Greek economic output. GDP per capita adjusted for purchasing power was €17,800, or 59% of the EU27 average, in the same year. GDP per employee was 68% of the EU average. Crete is the Greek region with the fifth highest GDP per capita.
Transport infrastructure
Airports
The island has three significant airports: Nikos Kazantzakis at Heraklion, Daskalogiannis at Chania, and a smaller one in Sitia. The first two serve international routes, acting as the main gateways to the island for travellers. There is a long-standing plan to replace Heraklion airport with a completely new airport at Kastelli, where there is presently an air force base.
Ferries
The island is well served by ferries, mostly from Piraeus, operated by companies such as Minoan Lines and ANEK Lines. Seajets operates routes to the Cyclades.
Road network
Almost every part of the island is covered by the road network, but modern highways are lacking, although this is gradually changing with the completion of the northern coastal spine highway. In addition, a European Union study has proposed a modern highway to connect the northern and southern parts of the island via a tunnel. The study proposal includes a section of road between the villages of Agia Varvara and Agia Deka in central Crete. It is expected to benefit both tourists and locals by improving the connections to the southern part of the island and by reducing accidents. The new road section forms part of the route between Messara in the south and Crete's largest city, Heraklion, which houses the island's airport and principal ferry links with mainland Greece.
Traffic speeds on the new road will increase by 19 km/h (from 29 km/h to 48 km/h), which should reduce journey times between Messara and Heraklion by 55 minutes. The scheme is also expected to improve road safety by cutting the number of accidents along the route. Building works include the construction of three road tunnels, five bridges and three junctions. The project is expected to create 44 jobs during the implementation phase.
The investment falls under Greece's "Improvement of Accessibility" Operational Programme, which aims to improve the country's transport infrastructure as well as its international connections. The Operational Programme works to link Greece's more prosperous and less developed regions, and thus helps to promote greater territorial cohesion. Total investment for the project "Completion of construction of the section of Ag. Varvara - Ag. Deka (Kasteli) (22+170 km to 37+900 km) of the vertical road axis Irakleio – Messara in the prefecture of Irakleio, Kriti" is EUR 102 273 321, of which the EU's European Regional Development Fund is contributing EUR 86 932 323 from the Operational Programme "Improvement of Accessibility" for the 2007 to 2013 programming period. The work falls under the priority "Road Transport – trans-European and trans-regional route network of the regions on the Convergence objective".
Railway
During the 1930s there was a narrow-gauge industrial railway in Heraklion, running from Giofyros on the west side of the city to the port. There are now no railway lines on Crete. The government is planning the construction of a line from Chania to Heraklion via Rethymno.
Development
The construction sector in Crete held up well during the pandemic and has come out strong in the post-recession recovery period. Total construction spending has recovered and appears set to peak at a record high (approximately 8% above 2019 average levels), signalling consistent expansion in construction projects and real-estate investment in Crete. The evolution of the private sector in Crete is tightly linked to the demand for tourism-related investment. Moreover, the recovery of the tourism sector is expected to lead to further growth in housing prices and rental demand.
Newspapers have reported that the Ministry of Mercantile Marine is ready to support the agreement between Greece, South Korea, Dubai Ports World and China for the construction of a large international container port and free trade zone in southern Crete near Tympaki; the plan involves expropriating land for the site. The port would handle two million containers per year, but the project has not been universally welcomed because of its environmental, economic and cultural impact. As of January 2013, the project had still not been confirmed, although there is mounting pressure to approve it, arising from Greece's difficult economic situation.
There are plans for underwater cables running from mainland Greece to Israel and Egypt, passing by Crete and Cyprus: the EuroAfrica Interconnector and the EuroAsia Interconnector. They would connect Crete electrically with mainland Greece, ending the island's energy isolation. At present Greece covers electricity cost differences for Crete of around €300 million per year.
History
Hominids settled in Crete at least 130,000 years ago. In the later Neolithic and Bronze Age periods, under the Minoans, Crete had a highly developed, literate civilization.
It has been ruled by various ancient Greek entities, the Roman Empire, the Byzantine Empire, the Emirate of Crete, the Republic of Venice and the Ottoman Empire. After a brief period of independence (1897–1913) under a provisional Cretan government, it joined the Kingdom of Greece. It was occupied by Nazi Germany during the Second World War.
Prehistory
In 2002, the paleontologist Gerard Gierlinski discovered fossil footprints possibly left by ancient human relatives 5,600,000 years ago. The first human settlement in Crete dates to more than 130,000 years ago, during the Paleolithic age. Settlements dating to the aceramic Neolithic in the 7th millennium BC used cattle, sheep, goats, pigs and dogs, as well as domesticated cereals and legumes; ancient Knossos was the site of one of these major Neolithic (and later Minoan) sites. Other Neolithic settlements include those at Kephala, Magasa, and Trapeza.
Minoan civilization
During the Bronze Age, Crete was the centre of the Minoan civilization, notable for its art, its writing systems such as Linear A, and its massive building complexes, including the palace at Knossos. Its economy benefited from a network of trade around much of the Mediterranean, and Minoan cultural influence extended to Cyprus, Canaan, and Egypt. Some scholars have speculated that legends such as that of the Minotaur have a historical basis in Minoan times.
Mycenaean civilization
In 1420 BC, the Minoan civilization was subsumed by the Mycenaean civilization from mainland Greece. The oldest samples of writing in the Greek language, as identified by Michael Ventris, are the Linear B archives from Knossos, dated approximately to 1425–1375 BC.
Archaic and Classical period
After the Bronze Age collapse, Crete was settled by new waves of Greeks from the mainland. A number of city states developed in the Archaic period. There was very limited contact with mainland Greece, and Greek historiography shows little interest in Crete; as a result, there are very few literary sources. During the 6th to 4th centuries BC, Crete was comparatively free from warfare. The Gortyn code (5th century BC) is evidence for how codified civil law established a balance between aristocratic power and civil rights.
In the late 4th century BC, the aristocratic order began to collapse due to endemic infighting among the elite, and Crete's economy was weakened by prolonged wars between city states. During the 3rd century BC, Gortyn, Kydonia (Chania), Lyttos and Polyrrhenia challenged the primacy of ancient Knossos. While the cities continued to prey upon one another, they invited into their feuds mainland powers like Macedon and its rivals Rhodes and Ptolemaic Egypt. In 220 BC the island was tormented by a war between two opposing coalitions of cities. As a result, the Macedonian king Philip V gained hegemony over Crete, which lasted to the end of the Cretan War (205–200 BC), when the Rhodians opposed the rise of Macedon and the Romans started to interfere in Cretan affairs. In the 2nd century BC Ierapytna (Ierapetra) gained supremacy in eastern Crete.
Roman rule
Crete was involved in the Mithridatic Wars, initially repelling an attack by the Roman general Marcus Antonius Creticus in 71 BC. Nevertheless, a ferocious three-year campaign soon followed under Quintus Caecilius Metellus, equipped with three legions, and Crete was finally conquered by Rome in 69 BC, earning Metellus the title "Creticus".
Gortyn was made capital of the island, and Crete, together with Cyrenaica, became the Roman province of Creta et Cyrenaica. Archaeological remains suggest that Crete under Roman rule witnessed prosperity and increased connectivity with other parts of the Empire. In the 2nd century AD, at least three cities in Crete (Lyttos, Gortyn, Hierapytna) joined the Panhellenion, a league of Greek cities founded by the emperor Hadrian. When Diocletian redivided the Empire, Crete was placed, along with Cyrene, under the diocese of Moesia; it was later moved by Constantine I to the diocese of Macedonia.
Byzantine Empire – first period
Crete was eventually separated from Cyrenaica. It remained a province within the eastern half of the Roman Empire, usually referred to as the Eastern Roman (Byzantine) Empire after the establishment of a second capital in Constantinople by Constantine in 330. Crete was subjected to an attack by Vandals in 467, the great earthquakes of 365 and 415, a raid by Slavs in 623, and Arab raids in 654, the 670s, and again in the 8th century. Around 732, the Emperor Leo III the Isaurian transferred the island from the jurisdiction of the Pope to that of the Patriarchate of Constantinople.
Andalusian Arab rule
In the 820s, after 900 years as a Roman island, Crete was captured by Andalusian Muwallads led by Abu Hafs, who established the Emirate of Crete. The Byzantines launched a campaign under Theoktistos that took most of the island back in 842 and 843. Further Byzantine campaigns in 911 and 949 failed. In 960–61, Nikephoros Phokas' campaign completely restored Crete to the Byzantine Empire, after a century and a half of Arab control.
Byzantine Empire – second period
In 961, Nikephoros Phokas returned the island to Byzantine rule after expelling the Arabs. Extensive efforts at conversion of the populace were undertaken, led by John Xenos and Nikon "the Metanoeite". The reconquest of Crete was a major achievement for the Byzantines, as it restored Byzantine control over the Aegean littoral and diminished the threat of Saracen pirates, for whom Crete had provided a base of operations.
In 1204, the Fourth Crusade seized and sacked the imperial capital of Constantinople. Crete was initially granted to the leading Crusader Boniface of Montferrat in the partition of spoils that followed. However, Boniface sold his claim to the Republic of Venice, whose forces made up the majority of the Crusade. Venice's rival, the Republic of Genoa, immediately seized the island, and it was not until 1212 that Venice secured Crete as a colony.
Venetian rule
From 1212, during Venice's rule, which lasted more than four centuries, a Renaissance swept through the island, as is evident from the plethora of artistic works dating to that period. Known as the Cretan School, or Post-Byzantine Art, it is among the last flowerings of the artistic traditions of the fallen empire. The most notable representatives of this Cretan renaissance were the painter El Greco and the writers Nicholas Kalliakis (1645–1707), Georgios Kalafatis (d. 1720), Andreas Musalus (d. 1721) and Vitsentzos Kornaros.
Under the rule of the Catholic Venetians, the city of Candia was reputed to be the best fortified city of the Eastern Mediterranean. The three main forts were located at Gramvousa, Spinalonga, and Fortezza at Rethymno. Other fortifications include the Kazarma fortress at Sitia. In 1492, Jews expelled from Spain settled on the island. In 1574–77, Crete was under the rule of Giacomo Foscarini as Proveditor General, Sindace and Inquisitor.
According to Starr's 1942 article, the rule of Giacomo Foscarini was a Dark Age for Jews and Greeks. Under his rule, non-Catholics had to pay high taxes with no allowances. In 1627, there were 800 Jews in the city of Candia, about seven percent of the city's population. Marco Foscarini was the Doge of Venice during this time period.
Ottoman rule
The Ottomans conquered Crete (Girit Eyâleti) in 1669, after the siege of Candia. Many Greek Cretans fled to other regions of the Republic of Venice after the Ottoman–Venetian Wars, some even prospering, such as the family of Simone Stratigo (c. 1733 – c. 1824), who migrated to Dalmatia from Crete in 1669. Islamic presence on the island, aside from the interlude of the Arab occupation, was cemented by the Ottoman conquest. Most Cretan Muslims were local Greek converts who spoke Cretan Greek, but in the island's 19th-century political context they came to be viewed by the Christian population as Turks. Contemporary estimates vary, but on the eve of the Greek War of Independence (1821–1830), as much as 45% of the population of the island may have been Muslim. A number of Sufi orders were widespread throughout the island, the Bektashi order being the most prevalent, possessing at least five tekkes. Many Cretan Turks fled Crete because of the unrest, settling in Turkey, Rhodes, Syria, Libya and elsewhere. By 1900, 11% of the population was Muslim. Those remaining were relocated in the 1924 population exchange between Greece and Turkey.
During Easter of 1770, a notable revolt against Ottoman rule in Crete was started by Daskalogiannis, a shipowner from Sfakia who had been promised support by Orlov's fleet, which never arrived. Daskalogiannis eventually surrendered to the Ottoman authorities. Today, the airport at Chania is named after him.
During the Greek War of Independence, Sultan Mahmud II granted rule over Crete to Egypt's ruler Muhammad Ali Pasha in exchange for his military support. Crete was subsequently left out of the new Greek state established under the London Protocol of 1830. Its administration by Muhammad Ali was confirmed in the Convention of Kütahya of 1833, but direct Ottoman rule was re-established by the Convention of London of 3 July 1840.
By the 17th century, Heraklion was surrounded by high walls and bastions and had extended westward and southward. The most opulent area of the city was the northeastern quadrant, where the elite were gathered together. Under Ottoman rule the city received another name, "the deserted city". The urban policy that the Ottomans applied to Candia was two-pronged. The first prong was religious endowments, through which the Ottoman elite contributed to building and rehabilitating the ruined city. The other was to boost the population and the urban revenue by selling off urban properties. According to Molly Greene (2001), there were numerous records of real-estate transactions during Ottoman rule. In the deserted city, minorities received equal rights in purchasing property; Christians and Jews were also able to buy and sell in the real-estate market.
The Cretan Revolt of 1866–1869, or Great Cretan Revolution, was a three-year uprising against Ottoman rule, the third and largest in a series of revolts between the end of the Greek War of Independence in 1830 and the establishment of the independent Cretan State in 1898. A particular event which caused strong reactions among the liberal circles of western Europe was the Holocaust of Arkadi.
The event occurred in November 1866, as a large Ottoman force besieged the Arkadi Monastery, which served as the headquarters of the rebellion. In addition to its 259 defenders, over 700 women and children had taken refuge in the monastery. After a few days of hard fighting, the Ottomans broke into the monastery. At that point, the abbot of the monastery set fire to the gunpowder stored in the monastery's vaults, causing the death of most of the rebels and of the women and children sheltered there.
Cretan State 1898–1908
Following the repeated uprisings in 1841, 1858, 1889, 1895 and 1897 by the Cretan people, who wanted to join Greece, the Great Powers decided to restore order and in February 1897 sent in troops. The island was subsequently garrisoned by troops from Great Britain, France, Italy and Russia; Germany and Austria-Hungary withdrew from the occupation in early 1898. During this period Crete was governed through a committee of admirals from the remaining four Powers. In March 1898 the Powers decreed, with the very reluctant consent of the Sultan, that the island would be granted autonomy under Ottoman suzerainty in the near future.
In September 1898 the Candia massacre in Candia (modern Heraklion) left over 500 Cretan Christians and 14 British servicemen dead at the hands of Muslim irregulars. As a result, the Admirals ordered the expulsion of all Ottoman troops and administrators from the island, a move that was ultimately completed by early November. The decision to grant autonomy to the island was enforced and a High Commissioner, Prince George of Greece, was appointed, arriving to take up his post in December 1898. The flag of the Cretan State was chosen by the Powers, with the white star representing the Ottoman suzerainty over the island.
In 1905, disagreements between Prince George and minister Eleftherios Venizelos over the question of enosis (union with Greece), aggravated by the Prince's autocratic style of government, resulted in the Theriso revolt, of which Venizelos was one of the leaders. Prince George resigned as High Commissioner and was replaced by Alexandros Zaimis, a former Greek prime minister, in 1906. In 1908, taking advantage of domestic turmoil in Turkey as well as the timing of Zaimis's vacation away from the island, the Cretan deputies unilaterally declared union with Greece. With the outbreak of the First Balkan War, the Greek government declared that Crete was now Greek territory. This was not recognised internationally until 1 December 1913.
Second World War
During World War II, the island was the scene of the famous Battle of Crete in May 1941. The initial 11-day battle was bloody and left more than 11,000 soldiers and civilians killed or wounded. As a result of the fierce resistance from both Allied forces and civilian Cretan locals, the invasion force suffered heavy casualties, and Adolf Hitler forbade further large-scale paratroop operations for the rest of the war. During the initial and subsequent occupation, German firing squads routinely executed male civilians in reprisal for the deaths of German soldiers; civilians were rounded up randomly in local villages for the mass killings, as at the Massacre of Kondomari and the Viannos massacres. Two German generals were later tried and executed for their roles in the killing of 3,000 of the island's inhabitants.
Civil War
In the aftermath of the Dekemvriana in Athens, Cretan leftists were targeted by the right-wing paramilitary organization National Organization of Rethymno (EOR), which in January 1945 carried out attacks in the villages of Koxare and Melampes, as well as in Rethymno. Those attacks did not escalate into a full-scale insurgency as they did on the Greek mainland, and the Cretan ELAS did not surrender its weapons after the Treaty of Varkiza. An uneasy truce was maintained until 1947, with a series of arrests of notable communists in Chania and Heraklion. Encouraged by orders from the central organization in Athens, the KKE launched an insurgency in Crete, marking the beginning of the Greek Civil War on the island.
In eastern Crete the Democratic Army of Greece (DSE) struggled to establish its presence in Dikti and Psilorites. On 1 July 1947, the 55 surviving fighters of the DSE were ambushed south of Psilorites; the few survivors of the unit managed to join the rest of the DSE in Lefka Ori. The Lefka Ori region in the west offered more favourable conditions for the DSE's insurgency. In the summer of 1947 the DSE raided and looted the Maleme Airport and a motor depot at Chrysopigi. Its numbers swelled to approximately 300 fighters, but this growth, compounded by crop failure on the island, created serious logistical problems for the insurgents. The communists resorted to cattle rustling and crop confiscations, which solved the problem only temporarily. In the autumn of 1947, the Greek government offered generous amnesty terms to Cretan DSE fighters and mountain bandits, many of whom opted to abandon the armed struggle or even defect to the nationalists. On 4 July 1948, government troops launched a large-scale offensive on the Samariá Gorge. Many DSE soldiers were killed in the fighting, while the survivors broke into small armed bands. In October 1948, the secretary of the Cretan KKE, Giorgos Tsitilos, was killed in an ambush. By the following month only 34 DSE fighters remained active in Lefka Ori. The insurgency in Crete gradually withered away, with the last two holdouts surrendering in 1974, 25 years after the conclusion of the war in mainland Greece.
Tourism
Crete is one of the most popular holiday destinations in Greece. 15% of all arrivals in Greece come through the city of Heraklion (port and airport), while charter journeys to Heraklion make up about 20% of all charter flights in Greece. The number of hotel beds on the island increased by 53% between 1986 and 1991. Today, the island's tourism infrastructure caters to all tastes, offering a very wide range of accommodation, from large luxury hotels with complete amenities, swimming pools, sports and recreation, to smaller family-owned apartments and campsites. Visitors reach the island via two international airports in Heraklion and Chania and a smaller airport in Sitia (international charter and domestic flights starting May 2012), or by boat to the main ports of Heraklion, Chania, Rethymno, Agios Nikolaos and Sitia.
Popular tourist attractions include the archaeological sites of the Minoan civilisation, the Venetian old city and port of Chania, the Venetian castle at Rethymno, the gorge of Samaria, the islands of Chrysi, Elafonisi, Gramvousa and Spinalonga, and the Palm Beach of Vai, which is the largest natural palm forest in Europe.
Transportation
Crete has an extensive bus system with regular services across the north of the island and from north to south.
There are two regional bus stations in Heraklion. Bus routes and timetables can be found on the KTEL website.
Holiday homes and immigration
Crete's mild climate attracts interest from northern Europeans who want a holiday home or residence on the island. EU citizens have the right to freely buy property and reside with little formality. In the cities of Heraklion and Chania, the average price per square metre of apartments ranges from €1,670 to €1,700. A growing number of real estate companies cater mainly to British immigrants, followed by Dutch, German, Scandinavian and other European nationalities wishing to own a home in Crete. The British immigrants are concentrated in the western regional units of Chania and Rethymno and, to a lesser extent, in Heraklion and Lasithi.
Archaeological sites and museums
The area has a large number of archaeological sites, including the Minoan sites of Knossos, Malia (not to be confused with the town of the same name), Petras and Phaistos, the classical site of Gortys, and the diverse archaeology of the island of Koufonisi, which includes Minoan, Roman, and World War II era ruins (due to conservation concerns, access to Koufonisi has been restricted in recent years, so it is best to check before heading to a port). There are a number of museums throughout Crete. The Heraklion Archaeological Museum displays most of the archaeological finds from the Minoan era and was reopened in 2014.
Harmful effects
Helen Briassoulis, in a qualitative analysis published in the Journal of Sustainable Tourism, proposed that tourism pressures Crete to develop at an unhealthy rate, and that the island's informal, internal systems are forced to adapt. According to her, these forces have strengthened in three stages: from 1960 to 1970, from 1970 to 1990, and from 1990 to the present. During the first period, tourism was a largely positive force, pushing modern developments like running water and electricity onto the largely rural countryside. However, beginning in the second period, and especially in the third period leading up to the present day, tourist companies became more aggressive, contributing to the deforestation and pollution of Crete's natural resources. The island is thus pulled into an uneasy balance, in which these companies maintain only those natural resources that are directly essential to their industry.
Fauna and flora
Fauna
Crete is isolated from mainland Europe, Asia, and Africa, and this is reflected in the diversity of its fauna and flora. As a result, the fauna and flora of Crete offer many clues to the evolution of species. In contrast to other parts of Greece, there are no animals on the island of Crete that are dangerous to humans. Indeed, the ancient Greeks attributed the lack of large mammals such as bears, wolves, jackals, and venomous snakes to the labour of Hercules (who took a live Cretan bull to the Peloponnese): Hercules wanted to honor the birthplace of Zeus by removing all "harmful" and "venomous" animals from Crete. Later, Cretans believed that the island was cleared of dangerous creatures by the Apostle Paul, who lived on the island of Crete for two years, through his exorcisms and blessings. There is a natural history museum, the Natural History Museum of Crete, operating under the direction of the University of Crete, and two aquariums – Aquaworld in Hersonissos and Cretaquarium in Gournes – displaying sea creatures common in Cretan waters.
Other interesting and rare mammals that live on the island are the Cretan badger and the Cretan wildcat.
Prehistoric fauna
Dwarf elephants, dwarf hippopotamuses, dwarf mammoths, dwarf deer, and giant flightless owls were native to Pleistocene Crete.
Mammals
Mammals of Crete include the vulnerable kri-kri (Capra aegagrus cretica), which can be seen in the national park of the Samaria Gorge and on Thodorou, Dia and Agioi Pantes (islets off the north coast), the Cretan wildcat, and the Cretan spiny mouse. Other terrestrial mammals include subspecies of the Cretan marten, the Cretan weasel, the Cretan badger, the long-eared hedgehog, and the edible dormouse.
The Cretan shrew, a type of white-toothed shrew, is considered endemic to the island of Crete because this species of shrew is unknown elsewhere. It is a relic species of the Crocidura shrews, of which fossils have been found that date to the Pleistocene era. In the present day it can only be found in the highlands of Crete. It is considered to be the only surviving remnant of the endemic species of the Pleistocene Mediterranean islands.
Bat species include: Blasius's horseshoe bat, the lesser horseshoe bat, the greater horseshoe bat, the lesser mouse-eared bat, Geoffroy's bat, the whiskered bat, Kuhl's pipistrelle, the common pipistrelle, Savi's pipistrelle, the serotine bat, the long-eared bat, Schreibers' bat and the European free-tailed bat.
Birds
A large variety of birds includes eagles (which can be seen in Lasithi), swallows (throughout Crete in the summer and year-round in the south of the island), pelicans (along the coast), and common cranes (including on Gavdos and Gavdopoula). The Cretan mountains and gorges are refuges for the endangered lammergeier vulture. Bird species include: the golden eagle, Bonelli's eagle, the bearded vulture or lammergeier, the griffon vulture, Eleonora's falcon, the peregrine falcon, the lanner falcon, the European kestrel, the tawny owl, the little owl, the hooded crow, the alpine chough, the red-billed chough, and the Eurasian hoopoe. The population of griffon vultures in Crete is the largest insular population of the species in the world and constitutes the majority of the griffon vulture population in Greece.
Reptiles and amphibians
Tortoises can be seen throughout the island. Snakes can be found hiding under rocks. Toads and frogs reveal themselves when it rains. Reptiles include the Aegean wall lizard, the Balkan green lizard, the common chameleon, the ocellated skink, the snake-eyed skink, the Moorish gecko, the Turkish gecko, Kotschy's gecko, the spur-thighed tortoise, and the Caspian turtle. There are four species of snake on the island, none of which is dangerous to humans: the leopard snake (locally known as Ochendra), the Balkan whip snake (locally called Dendrogallia), the dice snake (called Nerofido in Greek), and the cat snake, the only venomous species, a nocturnal hunter that has evolved to deliver a weak venom at the back of its mouth to paralyse geckos and small lizards and that poses no danger to humans.
Sea turtles include the green turtle and the loggerhead turtle, which are both threatened species. The loggerhead turtle nests and hatches on north-coast beaches around Rethymno and Chania, and on south-coast beaches along the gulf of Mesara. Amphibians include the European green toad, the American bullfrog (introduced), the European tree frog, and the Cretan marsh frog (endemic).
Arthropods
Crete has an unusual variety of insects.
Cicadas, known locally as Tzitzikia, make a distinctive repetitive tzi tzi sound that becomes louder and more frequent on hot summer days. Butterfly species include the swallowtail butterfly. Moth species include the hummingbird moth. There are several species of scorpion, such as Euscorpius carpathicus, whose venom is generally no more potent than a mosquito bite.
Crustaceans and molluscs
River crabs include the semi-terrestrial Potamon potamios crab. Edible snails are widespread and can cluster in the hundreds waiting for rainfall to reinvigorate them.
Sealife
Apart from terrestrial mammals, the seas around Crete are rich in large marine mammals, a fact unknown to most Greeks at present, although reported since ancient times. Indeed, the Minoan frescoes depicting dolphins in the Queen's Megaron at Knossos indicate that the Minoans were well aware of and celebrated these creatures. Apart from the famous endangered Mediterranean monk seal, which lives along almost all the coasts of the country, Greece hosts whales, sperm whales, dolphins and porpoises. These are either permanent residents of the Mediterranean or just occasional visitors. The area south of Crete, known as the Greek Abyss, hosts many of them. Squid and octopus can be found along the coast, and sea turtles and hammerhead sharks swim in the sea around the coast. The Cretaquarium and the Aquaworld Aquarium are two of only three aquariums in the whole of Greece. They are located in Gournes and Hersonissos respectively. Examples of the local sealife can be seen there. Some of the fish that can be seen in the waters around Crete include: scorpion fish, dusky grouper, east Atlantic peacock wrasse, five-spotted wrasse, weever fish, common stingray, brown ray, Mediterranean black goby, pearly razorfish, star-gazer, painted comber, damselfish, and the flying gurnard.
Flora
The Minoans contributed to the deforestation of Crete. Further deforestation occurred in the 1600s, "so that no more local supplies of firewood were available". Common wildflowers include: camomile, daisy, gladiolus, hyacinth, iris, poppy, cyclamen and tulip, among others. There are more than 200 different species of wild orchid on the island, including 14 varieties of Ophrys cretica. Crete has a rich variety of indigenous herbs, including common sage, rosemary, thyme, and oregano. Rare herbs include the endemic Cretan dittany and ironwort (Sideritis syriaca), known as Malotira (Μαλοτήρα). Varieties of cactus include the edible prickly pear. Common trees on the island include the chestnut, cypress, oak, olive tree, pine, plane, and tamarisk. Trees tend to be taller in the west of the island, where water is more abundant.
Environmentally protected areas
There are a number of environmentally protected areas. One such area is located at the island of Elafonisi on the coast of southwestern Crete. The palm forest of Vai in eastern Crete and the Dionysades islands (both in the municipality of Sitia, Lasithi) also have diverse animal and plant life. Vai has a palm beach and is the largest natural palm forest in Europe. The island of Chrysi, south of Ierapetra, has the largest naturally grown Juniperus macrocarpa forest in Europe. Samaria Gorge is a World Biosphere Reserve, and Richtis Gorge is protected for its landscape diversity.
Mythology
Crete has a strong association with ancient Greek gods but is also connected with the Minoan civilization. According to Greek mythology, the Diktaean Cave at Mount Dikti was the birthplace of the god Zeus.
The Paximadia islands were the birthplace of the goddess Artemis and the god Apollo. Their mother, the goddess Leto, was worshipped at Phaistos. The goddess Athena bathed in Lake Voulismeni. Zeus launched a lightning bolt at a giant lizard that was threatening Crete; the lizard immediately turned to stone and became the lizard-shaped island of Dia, which can be seen from Knossos. The islets of Lefkai were the result of a musical contest between the Sirens and the Muses. The Muses were so anguished to have lost that they plucked the feathers from the wings of their rivals; the Sirens turned white and fell into the sea at Aptera ("featherless"), where they formed the islands in the bay that were called Lefkai (the islands of Souda and Leon). Heracles, in one of his labors, took the Cretan bull to the Peloponnese. Europa and Zeus made love at Gortys and conceived the kings of Crete: Rhadamanthys, Sarpedon, and Minos. The labyrinth of the Palace of Knossos was the setting for the myth of Theseus and the Minotaur, in which the Minotaur was slain by Theseus. Icarus and Daedalus were captives of King Minos and crafted wings to escape. After his death, King Minos became a judge of the dead in Hades, while Rhadamanthys became the ruler of the Elysian fields.
Culture
Crete has its own distinctive Mantinades poetry. The island is known for its Mantinades-based music (typically performed with the Cretan lyra and the laouto) and has many indigenous dances, the most noted of which is the Pentozali. Since the 1980s, and certainly from the 1990s onwards, there has been a proliferation of Cultural Associations that teach dancing (in western Crete many focus on rizitiko singing). These associations often perform in official events but also become stages for people to meet up and engage in traditionalist practices. The topic of tradition, and the role of Cultural Associations in reviving it, is very often debated throughout Crete.
Cretan authors have made important contributions to Greek literature throughout the modern period; major names include Vitsentzos Kornaros, creator of the 17th-century epic romance Erotokritos (Greek Ερωτόκριτος), and, in the 20th century, Nikos Kazantzakis. In the Renaissance, Crete was the home of the Cretan School of icon painting, which influenced El Greco and, through him, subsequent European painting.
Cretans are fiercely proud of their island and customs, and men often don elements of traditional dress in everyday life: knee-high black riding boots (stivania), vráka breeches tucked into the boots at the knee, a black shirt, and a black headdress consisting of a fishnet-weave kerchief worn wrapped around the head or draped on the shoulders (sariki). Men often grow large mustaches as a mark of masculinity.
Cretan society is known in Greece and internationally for family and clan vendettas, which persist on the island to this day. Cretans also have a tradition of keeping firearms at home, a tradition dating from the era of resistance against the Ottoman Empire. Nearly every rural household on Crete has at least one unregistered gun. Guns are subject to strict regulation by the Greek government, and in recent years a great deal of effort to control firearms in Crete has been undertaken by the Greek police, but with limited success.
Sports
Crete has many football clubs playing in the local leagues.
During the 2011–12 season, OFI Crete, which plays at the Theodoros Vardinogiannis Stadium (Iraklion), and Ergotelis F.C., which plays at the Pankritio Stadium (Iraklion), were both members of the Greek Superleague. During the 2012–13 season, OFI Crete and Platanias F.C., which plays at the Perivolia Municipal Stadium near Chania, were both members of the Greek Superleague.
Notable people
Notable people from Crete include:
Nikos Kazantzakis, author, born in Heraklion, suggested seven times for the Nobel Prize
Odysseas Elytis, poet, awarded the Nobel Prize in Literature in 1979, born in Heraklion
Georgios Chortatzis, Renaissance author
Vitsentzos Kornaros, Renaissance author from Sitia, who lived in Heraklion (then Candia)
Domenikos Theotokopoulos (El Greco), Renaissance artist, born in Heraklion
Nikos Xilouris, composer and singer
Psarantonis, Cretan folk singer and Cretan lyra player, brother of Nikos Xilouris
Nana Mouskouri, singer, born in Chania
Eleftherios Venizelos, former Greek Prime Minister, born in Chania Prefecture
Konstantinos Mitsotakis, nephew of Eleftherios Venizelos and Prime Minister of Greece
Daskalogiannis, leader of the Orlov Revolt in Crete in 1770
Michalis Kourmoulis, leader of the Greek War of Independence from Messara
Eleni Daniilidou, tennis player, born in Chania
Louis Tikas, Greek-American labor union leader
Tess Fragoulis, Greek-Canadian writer, born in Heraklion
Nick Dandolos, a.k.a. Nick the Greek, professional gambler and high roller
Joseph Sifakis, computer scientist, laureate of the 2007 Turing Award, born in Heraklion in 1946
Constantinos Daskalakis, Associate Professor in MIT's Electrical Engineering and Computer Science department
George Karniadakis, Professor of Applied Mathematics at Brown University and Research Scientist at MIT
John Aniston (Giannis Anastasakis), Greek-American actor, father of Jennifer Aniston
George Psychoundakis, shepherd, war hero and author
Ahmed Resmî Efendi, 18th-century Ottoman statesman, diplomat and author (notably of two sefâretnâme), and Turkey's first-ever ambassador in Berlin (during the reign of Frederick the Great); he was born into a Muslim family of Greek descent in the Cretan town of Rethymno in 1700
Giritli Ali Aziz Efendi, Turkey's third ambassador in Berlin and arguably the first Turkish author to have written in novelistic form
Al-Husayn I ibn Ali at-Turki, founder of the Husainid Dynasty, which ruled Tunisia until 1957
Salacıoğlu (1750, Hanya – 1825, Kandiye), one of the most important 18th-century poets of Turkish folk literature
Giritli Sırrı Pasha, Ottoman administrator, Leyla Saz's husband, and a notable man of letters in his own right
Vedat Tek, representative figure of the First National Architecture Movement in Turkish architecture, son of Leyla Saz and Giritli Sırrı Pasha
Paul Mulla (alias Mollazade Mehmed Ali), born Muslim, converted to Christianity and became a Roman Catholic monsignor and author
Rahmizâde Bahaeddin Bediz, the first Turkish photographer by profession; the thousands of photographs he took, based from 1895 successively in Crete, İzmir, Istanbul and Ankara (as Head of the Photography Department of the Turkish Historical Society), have immense historical value
Salih Zeki, Turkish photographer in Chania
Ali Nayip Zade, associate of Eleftherios Venizelos; Prefect of Drama and Kavala, Adrianople, and Lasithi
Ismail Fazil Pasha (1856–1921), descended from the long-established Cebecioğlu family of Söke, who had settled in Crete; he was the first Minister of Public Works in the government of the Grand National Assembly in 1920, and the father of Ali Fuad and Mehmed Ali
Mehmet Atıf Ateşdağlı (1876–1947), Turkish officer
Mustafa Ertuğrul Aker (1892–1961), Turkish officer who sank HMS Ben-my-Chree
Cevat Şakir Kabaağaçlı, alias Halikarnas Balıkçısı (The Fisherman of Halicarnassus), writer; although born in Crete and often cited as Cretan, he descended from a family of Ottoman aristocracy with roots in Afyonkarahisar; his father had been an Ottoman High Commissioner in Crete and later ambassador in Athens. Likewise, as stated above, Mustafa Naili Pasha was Albanian/Egyptian.
Bülent Arınç (born 25 May 1948), a Deputy Prime Minister of Turkey since 2009; he is of Cretan Muslim heritage, his ancestors having arrived in Turkey as Cretan refugees at the time of Sultan Abdul Hamid II, and he is fluent in Cretan Greek; Arınç is a proponent of reconverting the Hagia Sophia into a mosque, which has caused diplomatic protestations from Greece
Yoseph Shlomo Delmedigo, Renaissance rabbi, mathematician, astronomer and philosopher
Zach Galifianakis, whose paternal grandparents, Mike Galifianakis and Sophia Kastrinakis, were from Crete
Vicky Psarakis, vocalist for the Canadian metal band The Agonist, from Crete
Georgos Kalaitzakis, Greek professional basketball player for the Milwaukee Bucks of the National Basketball Association, from Heraklion, Crete
See also
Cretan Greek
Cretan lyra
Cretan Turks
Cretan wine
List of novels set in Crete
List of rulers of Crete
Mantinades
Citations
General and cited sources
Francis, Jane and Anna Kouremenos (eds.) 2016. Roman Crete: New Perspectives. Oxford: Oxbow.
External links
Natural History Museum of Crete at the University of Crete.
Cretaquarium Thalassocosmos in Heraklion.
Aquaworld Aquarium in Hersonissos.
Ancient Crete at Oxford Bibliographies Online: Classics.
Official Greek National Tourism Organisation website
Interactive Virtual Tour of Crete
Aegean islands
Crete and Cyrenaica
Islands of Greece
Mediterranean islands
Minoan geography
Territories of the Republic of Venice
2,976
6,596
https://en.wikipedia.org/wiki/Computer%20vision
Computer vision
Computer vision tasks include methods for acquiring, processing, analyzing and understanding digital images, and for extracting high-dimensional data from the real world in order to produce numerical or symbolic information, e.g. in the form of decisions. Understanding in this context means the transformation of visual images (the input of the retina) into descriptions of the world that make sense to thought processes and can elicit appropriate action. This image understanding can be seen as the disentangling of symbolic information from image data using models constructed with the aid of geometry, physics, statistics, and learning theory.
The scientific discipline of computer vision is concerned with the theory behind artificial systems that extract information from images. The image data can take many forms, such as video sequences, views from multiple cameras, or multi-dimensional data from a 3D scanner or medical scanning devices. The technological discipline of computer vision seeks to apply its theories and models to the construction of computer vision systems.
Sub-domains of computer vision include scene reconstruction, object detection, event detection, video tracking, object recognition, 3D pose estimation, learning, indexing, motion estimation, visual servoing, 3D scene modeling, and image restoration.
Adopting computer vision technology can be painstaking for organizations, as there is no single point solution for it, and very few companies provide a unified, distributed platform or operating system on which computer vision applications can be easily deployed and managed.
Definition
Computer vision is an interdisciplinary field that deals with how computers can be made to gain high-level understanding from digital images or videos. From the perspective of engineering, it seeks to automate tasks that the human visual system can do. "Computer vision is concerned with the automatic extraction, analysis and understanding of useful information from a single image or a sequence of images. It involves the development of a theoretical and algorithmic basis to achieve automatic visual understanding." As a scientific discipline, computer vision is concerned with the theory behind artificial systems that extract information from images. The image data can take many forms, such as video sequences, views from multiple cameras, or multi-dimensional data from a medical scanner. As a technological discipline, computer vision seeks to apply its theories and models to the construction of computer vision systems.
History
In the late 1960s, computer vision began at universities that were pioneering artificial intelligence. It was meant to mimic the human visual system, as a stepping stone to endowing robots with intelligent behavior. In 1966, it was believed that this could be achieved through a summer project, by attaching a camera to a computer and having it "describe what it saw". What distinguished computer vision from the prevalent field of digital image processing at that time was a desire to extract three-dimensional structure from images with the goal of achieving full scene understanding. Studies in the 1970s formed the early foundations for many of the computer vision algorithms that exist today, including extraction of edges from images, labeling of lines, non-polyhedral and polyhedral modeling, representation of objects as interconnections of smaller structures, optical flow, and motion estimation.
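Edge extraction, one of the 1970s-era foundations just mentioned, is simple to illustrate. The following sketch is an illustrative example only, not code from any source cited here: it convolves a grayscale array with the classic Sobel kernels and returns a per-pixel edge-strength map; it assumes NumPy and SciPy are available, and the test image is synthetic.

import numpy as np
from scipy.ndimage import convolve

def sobel_edges(image):
    """Return the gradient magnitude (edge strength) of a grayscale image."""
    kx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)  # responds to horizontal intensity changes
    ky = kx.T                                  # transposed kernel: vertical changes
    gx = convolve(image.astype(float), kx)
    gy = convolve(image.astype(float), ky)
    return np.hypot(gx, gy)                    # combined edge strength per pixel

# Example: strong responses appear along the boundary of a bright square.
img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0
print(sobel_edges(img).round(1))

The two kernels approximate the image's partial derivatives in each direction; taking the magnitude of the resulting gradient vector is what makes the output large exactly where intensity changes abruptly, i.e. at edges.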
The next decade saw studies based on more rigorous mathematical analysis and quantitative aspects of computer vision. These include the concept of scale-space, the inference of shape from various cues such as shading, texture and focus, and contour models known as snakes. Researchers also realized that many of these mathematical concepts could be treated within the same optimization framework as regularization and Markov random fields.
By the 1990s, some of the previous research topics became more active than others. Research in projective 3-D reconstructions led to a better understanding of camera calibration. With the advent of optimization methods for camera calibration, it was realized that a lot of the ideas had already been explored in bundle adjustment theory from the field of photogrammetry. This led to methods for sparse 3-D reconstructions of scenes from multiple images. Progress was made on the dense stereo correspondence problem and further multi-view stereo techniques. At the same time, variations of graph cut were used to solve image segmentation. This decade also marked the first time statistical learning techniques were used in practice to recognize faces in images (see Eigenface). Toward the end of the 1990s, a significant change came about with the increased interaction between the fields of computer graphics and computer vision. This included image-based rendering, image morphing, view interpolation, panoramic image stitching and early light-field rendering.
Recent work has seen the resurgence of feature-based methods, used in conjunction with machine learning techniques and complex optimization frameworks. The advancement of deep learning techniques has brought further life to the field of computer vision. The accuracy of deep learning algorithms on several benchmark computer vision data sets, for tasks ranging from classification to segmentation and optical flow, has surpassed that of prior methods.
Related fields
Solid-state physics
Solid-state physics is another field that is closely related to computer vision. Most computer vision systems rely on image sensors, which detect electromagnetic radiation, typically in the form of either visible or infrared light. The sensors are designed using quantum physics. The process by which light interacts with surfaces is explained using physics. Physics explains the behavior of optics, which are a core part of most imaging systems. Sophisticated image sensors even require quantum mechanics to provide a complete understanding of the image formation process. Also, various measurement problems in physics can be addressed using computer vision, for example motion in fluids.
Neurobiology
Neurobiology has greatly influenced the development of computer vision algorithms. Over the last century, there has been an extensive study of eyes, neurons, and brain structures devoted to the processing of visual stimuli in both humans and various animals. This has led to a coarse, yet convoluted, description of how natural vision systems operate in order to solve certain vision-related tasks. These results have led to a sub-field within computer vision where artificial systems are designed to mimic the processing and behavior of biological systems at different levels of complexity. Also, some of the learning-based methods developed within computer vision (e.g. neural net and deep learning based image and feature analysis and classification) have their background in neurobiology.
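The scale-space concept mentioned above admits a very small illustration: an image is embedded in a one-parameter family of progressively Gaussian-smoothed versions, so that structures of different sizes can be analyzed at matching scales. The sketch below is illustrative only; the sigma values and the synthetic test image are arbitrary assumptions, and it relies on NumPy and SciPy.

import numpy as np
from scipy.ndimage import gaussian_filter

def scale_space(image, sigmas=(1.0, 2.0, 4.0, 8.0)):
    """Return {sigma: smoothed image} for increasing Gaussian scales."""
    image = image.astype(float)
    return {s: gaussian_filter(image, sigma=s) for s in sigmas}

rng = np.random.default_rng(0)
img = rng.random((64, 64))          # stand-in for a real grayscale image
for s, level in scale_space(img).items():
    # Fine detail (measured here as standard deviation) is progressively
    # suppressed as the scale parameter grows.
    print(f"sigma={s}: std={level.std():.4f}")

The key property is that increasing sigma removes ever larger structures while creating no new ones, which is what allows features detected at coarse scales to be traced back to fine scales.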
The Neocognitron, a neural network developed in the 1970s by Kunihiko Fukushima, is an early example of computer vision taking direct inspiration from neurobiology, specifically the primary visual cortex.
Some strands of computer vision research are closely related to the study of biological vision—indeed, just as many strands of AI research are closely tied with research into human intelligence, and the use of stored knowledge to interpret, integrate and utilize visual information. The field of biological vision studies and models the physiological processes behind visual perception in humans and other animals. Computer vision, on the other hand, develops and describes the algorithms implemented in software and hardware behind artificial vision systems. An interdisciplinary exchange between biological and computer vision has proven fruitful for both fields.
Signal processing
Yet another field related to computer vision is signal processing. Many methods for processing one-variable signals, typically temporal signals, can be extended in a natural way to the processing of two-variable signals or multi-variable signals in computer vision (a minimal example of this extension is sketched at the end of this section). However, because of the specific nature of images, there are many methods developed within computer vision that have no counterpart in the processing of one-variable signals. Together with the multi-dimensionality of the signal, this defines a subfield in signal processing as a part of computer vision.
Robotic navigation
Robot navigation sometimes deals with autonomous path planning or deliberation for robotic systems to navigate through an environment. A detailed understanding of these environments is required to navigate through them. Information about the environment could be provided by a computer vision system, acting as a vision sensor and providing high-level information about the environment and the robot.
Other fields
Besides the above-mentioned views on computer vision, many of the related research topics can also be studied from a purely mathematical point of view. For example, many methods in computer vision are based on statistics, optimization or geometry. Finally, a significant part of the field is devoted to the implementation aspect of computer vision: how existing methods can be realized in various combinations of software and hardware, or how these methods can be modified in order to gain processing speed without losing too much performance. Computer vision is also used in fashion eCommerce, inventory management, patent search, furniture, and the beauty industry.
Distinctions
The fields most closely related to computer vision are image processing, image analysis and machine vision. There is a significant overlap in the range of techniques and applications that these cover. This implies that the basic techniques that are used and developed in these fields are similar, which can be interpreted as meaning that there is only one field with different names. On the other hand, it appears to be necessary for research groups, scientific journals, conferences, and companies to present or market themselves as belonging specifically to one of these fields, and, hence, various characterizations which distinguish each of the fields from the others have been presented.
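Before turning to those characterizations, the one-variable-to-two-variable extension mentioned under signal processing can be made concrete. The sketch below is an illustrative example under stated assumptions (an arbitrary binomial smoothing kernel, NumPy and SciPy available): it applies a 1-D filter along each image axis in turn and verifies that this equals a single 2-D convolution with the separable outer-product kernel.

import numpy as np
from scipy.ndimage import convolve, convolve1d

k1 = np.array([1.0, 2.0, 1.0]) / 4.0   # 1-D binomial smoothing kernel
k2 = np.outer(k1, k1)                  # equivalent separable 2-D kernel

rng = np.random.default_rng(1)
img = rng.random((32, 32))             # stand-in for a real grayscale image

# Extend the 1-D method to 2-D: filter along rows, then along columns.
smoothed_separable = convolve1d(convolve1d(img, k1, axis=0), k1, axis=1)
# Direct 2-D convolution with the outer-product kernel.
smoothed_2d = convolve(img, k2)

print(np.allclose(smoothed_separable, smoothed_2d))  # True (up to FP error)

Separability is also why such extensions are cheap in practice: two 1-D passes cost far fewer operations per pixel than one dense 2-D convolution of the same support.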
In image processing, the input is an image and the output is an image as well, whereas in computer vision, an image or a video is taken as an input and the output could be an enhanced image, an understanding of the content of an image or even behavior of a computer system based on such understanding. Computer graphics produces image data from 3D models, and computer vision often produces 3D models from image data. There is also a trend towards a combination of the two disciplines, e.g., as explored in augmented reality. The following characterizations appear relevant but should not be taken as universally accepted: Image processing and image analysis tend to focus on 2D images, how to transform one image to another, e.g., by pixel-wise operations such as contrast enhancement, local operations such as edge extraction or noise removal, or geometrical transformations such as rotating the image. This characterization implies that image processing/analysis neither requires assumptions nor produces interpretations about the image content. Computer vision includes 3D analysis from 2D images. This analyzes the 3D scene projected onto one or several images, e.g., how to reconstruct structure or other information about the 3D scene from one or several images. Computer vision often relies on more or less complex assumptions about the scene depicted in an image. Machine vision is the process of applying a range of technologies and methods to provide imaging-based automatic inspection, process control, and robot guidance in industrial applications. Machine vision tends to focus on applications, mainly in manufacturing, e.g., vision-based robots and systems for vision-based inspection, measurement, or picking (such as bin picking). This implies that image sensor technologies and control theory often are integrated with the processing of image data to control a robot and that real-time processing is emphasized by means of efficient implementations in hardware and software. It also implies that external conditions such as lighting can be and are often more controlled in machine vision than they are in general computer vision, which can enable the use of different algorithms. There is also a field called imaging which primarily focuses on the process of producing images, but sometimes also deals with the processing and analysis of images. For example, medical imaging includes substantial work on the analysis of image data in medical applications. Finally, pattern recognition is a field that uses various methods to extract information from signals in general, mainly based on statistical approaches and artificial neural networks. A significant part of this field is devoted to applying these methods to image data. Photogrammetry also overlaps with computer vision, e.g., stereophotogrammetry vs. computer stereo vision. Applications Applications range from tasks such as industrial machine vision systems which, say, inspect bottles speeding by on a production line, to research into artificial intelligence and computers or robots that can comprehend the world around them. The computer vision and machine vision fields have significant overlap. Computer vision covers the core technology of automated image analysis which is used in many fields. Machine vision usually refers to a process of combining automated image analysis with other methods and technologies to provide automated inspection and robot guidance in industrial applications. 
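As a concrete illustration of the image-processing characterization above (pixel-wise and local operations that map one image to another, with no interpretation of content), the following sketch applies contrast stretching and Sobel edge extraction. It assumes NumPy and SciPy are available and uses a random array in place of a real photograph:

```python
import numpy as np
from scipy import ndimage

# Stand-in for a grayscale image (values in [0, 1]).
rng = np.random.default_rng(1)
image = rng.random((128, 128))

# Pixel-wise operation: linear contrast stretching to the full range.
lo, hi = image.min(), image.max()
stretched = (image - lo) / (hi - lo)

# Local operation: edge extraction with Sobel derivatives.
gx = ndimage.sobel(stretched, axis=1)  # horizontal gradient
gy = ndimage.sobel(stretched, axis=0)  # vertical gradient
edges = np.hypot(gx, gy)               # gradient magnitude

# Both results are themselves images - no interpretation of the
# content has been made, which is the hallmark of image processing.
print(stretched.shape, edges.shape)
```

A computer vision system would typically build on such outputs, for example by making assumptions about the 3D scene that produced the edges.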
In many computer-vision applications, computers are pre-programmed to solve a particular task, but methods based on learning are now becoming increasingly common. Examples of applications of computer vision include systems for: Automatic inspection, e.g., in manufacturing applications; Assisting humans in identification tasks, e.g., a species identification system; Controlling processes, e.g., an industrial robot; Detecting events, e.g., for visual surveillance or for people counting in the restaurant industry; Interaction, e.g., as the input to a device for computer-human interaction; Modeling objects or environments, e.g., medical image analysis or topographical modeling; Navigation, e.g., by an autonomous vehicle or mobile robot; Organizing information, e.g., for indexing databases of images and image sequences; Tracking surfaces or planes in 3D coordinates to enable augmented reality experiences. Medicine One of the most prominent application fields is medical computer vision, or medical image processing, characterized by the extraction of information from image data to diagnose a patient. An example of this is the detection of tumours, arteriosclerosis or other malign changes, and a variety of dental pathologies; measurements of organ dimensions, blood flow, etc. are another example. It also supports medical research by providing new information: e.g., about the structure of the brain, or the quality of medical treatments. Applications of computer vision in the medical area also include enhancement of images interpreted by humans, for example ultrasonic images or X-ray images, to reduce the influence of noise. Machine vision A second application area in computer vision is in industry, sometimes called machine vision, where information is extracted for the purpose of supporting a production process. One example is quality control, where parts or final products are automatically inspected in order to find defects. One of the most prevalent fields for such inspection is the semiconductor wafer industry, in which every single wafer is measured and inspected for inaccuracies or defects so that unusable computer chips do not reach the market. Another example is the measurement of the position and orientation of parts to be picked up by a robot arm. Machine vision is also heavily used in agricultural processes to remove undesirable foodstuff from bulk material, a process called optical sorting. Military Military applications are probably one of the largest areas of computer vision. The obvious examples are the detection of enemy soldiers or vehicles and missile guidance. More advanced systems for missile guidance send the missile to an area rather than a specific target, and target selection is made when the missile reaches the area based on locally acquired image data. Modern military concepts, such as "battlefield awareness", imply that various sensors, including image sensors, provide a rich set of information about a combat scene that can be used to support strategic decisions. In this case, automatic processing of the data is used to reduce complexity and to fuse information from multiple sensors to increase reliability. Autonomous vehicles One of the newer application areas is autonomous vehicles, which include submersibles, land-based vehicles (small robots with wheels, cars, or trucks), aerial vehicles, and unmanned aerial vehicles (UAVs).
The level of autonomy ranges from fully autonomous (unmanned) vehicles to vehicles where computer-vision-based systems support a driver or a pilot in various situations. Fully autonomous vehicles typically use computer vision for navigation, e.g., for knowing where they are or mapping their environment (SLAM), for detecting obstacles, and for automatically ensuring navigational safety. It can also be used for detecting certain task-specific events, e.g., a UAV looking for forest fires. Examples of supporting systems are obstacle warning systems in cars and systems for autonomous landing of aircraft. Several car manufacturers have demonstrated systems for autonomous driving of cars, but this technology has still not reached a level where it can be put on the market. There are ample examples of military autonomous vehicles ranging from advanced missiles to UAVs for reconnaissance missions or missile guidance. Space exploration is already being carried out with autonomous vehicles using computer vision, e.g., NASA's Curiosity rover and CNSA's Yutu-2 rover. Tactile feedback Materials such as rubber and silicone are being used to create sensors that allow for applications such as detecting micro undulations and calibrating robotic hands. Rubber can be used to create a mold that can be placed over a finger; inside this mold are multiple strain gauges. The finger mold and sensors could then be placed on top of a small sheet of rubber containing an array of rubber pins. A user can then wear the finger mold and trace a surface. A computer can then read the data from the strain gauges and measure if one or more of the pins are being pushed upward. If a pin is being pushed upward, then the computer can recognize this as an imperfection in the surface. This sort of technology is useful in order to receive accurate data on imperfections on a very large surface. Another variation of this finger mold sensor is a sensor that contains a camera suspended in silicone. The silicone forms a dome around the outside of the camera, and embedded in the silicone are point markers that are equally spaced. These cameras can then be placed on devices such as robotic hands in order to allow the computer to receive highly accurate tactile data. Other application areas include: Support of visual effects creation for cinema and broadcast, e.g., camera tracking (match moving); Surveillance; Driver drowsiness detection; Tracking and counting organisms in the biological sciences. Typical tasks Each of the application areas described above employs a range of computer vision tasks; more or less well-defined measurement problems or processing problems, which can be solved using a variety of methods. Some examples of typical computer vision tasks are presented below. Computer vision tasks include methods for acquiring, processing, analyzing and understanding digital images, and extraction of high-dimensional data from the real world in order to produce numerical or symbolic information, e.g., in the form of decisions. Understanding in this context means the transformation of visual images (the input of the retina) into descriptions of the world that can interface with other thought processes and elicit appropriate action. This image understanding can be seen as the disentangling of symbolic information from image data using models constructed with the aid of geometry, physics, statistics, and learning theory.
Recognition The classical problem in computer vision, image processing, and machine vision is that of determining whether or not the image data contains some specific object, feature, or activity. Different varieties of the recognition problem are described in the literature. Object recognition (also called object classification) – one or several pre-specified or learned objects or object classes can be recognized, usually together with their 2D positions in the image or 3D poses in the scene. Blippar, Google Goggles, and LikeThat provide stand-alone programs that illustrate this functionality. Identification – an individual instance of an object is recognized. Examples include identification of a specific person's face or fingerprint, identification of handwritten digits, or identification of a specific vehicle. Detection – the image data are scanned for a specific condition. Examples include the detection of possible abnormal cells or tissues in medical images or the detection of a vehicle in an automatic road toll system. Detection based on relatively simple and fast computations is sometimes used for finding smaller regions of interesting image data which can be further analyzed by more computationally demanding techniques to produce a correct interpretation. Currently, the best algorithms for such tasks are based on convolutional neural networks. An illustration of their capabilities is given by the ImageNet Large Scale Visual Recognition Challenge; this is a benchmark in object classification and detection, with millions of images and 1000 object classes used in the competition. Performance of convolutional neural networks on the ImageNet tests is now close to that of humans. The best algorithms still struggle with objects that are small or thin, such as a small ant on a stem of a flower or a person holding a quill in their hand. They also have trouble with images that have been distorted with filters (an increasingly common phenomenon with modern digital cameras). By contrast, those kinds of images rarely trouble humans. Humans, however, tend to have trouble with other issues. For example, they are not good at classifying objects into fine-grained classes, such as the particular breed of dog or species of bird, whereas convolutional neural networks handle this with ease. Several specialized tasks based on recognition exist, such as: Content-based image retrieval – finding all images in a larger set of images which have a specific content. The content can be specified in different ways, for example in terms of similarity relative to a target image (give me all images similar to image X) by utilizing reverse image search techniques, or in terms of high-level search criteria given as text input (give me all images which contain many houses, are taken during winter and have no cars in them). Pose estimation – estimating the position or orientation of a specific object relative to the camera. An example application for this technique would be assisting a robot arm in retrieving objects from a conveyor belt in an assembly line situation or picking parts from a bin. Optical character recognition (OCR) – identifying characters in images of printed or handwritten text, usually with a view to encoding the text in a format more amenable to editing or indexing (e.g. ASCII). 2D code reading – reading of 2D codes such as data matrix and QR codes.
Facial recognition – a technology that enables the matching of faces in digital images or video frames to a face database, now widely used for mobile phone face unlocking, smart door locking, etc. Shape Recognition Technology (SRT) – used in people-counter systems to differentiate human beings (head and shoulder patterns) from objects. Motion analysis Several tasks relate to motion estimation, where an image sequence is processed to produce an estimate of the velocity either at each point in the image or in the 3D scene, or even of the camera that produces the images. Examples of such tasks are: Egomotion – determining the 3D rigid motion (rotation and translation) of the camera from an image sequence produced by the camera. Tracking – following the movements of a (usually) smaller set of interest points or objects (e.g., vehicles, objects, humans or other organisms) in the image sequence. This has vast industry applications, as much continuously running machinery can be monitored in this way. Optical flow – determining, for each point in the image, how that point is moving relative to the image plane, i.e., its apparent motion. This motion is a result both of how the corresponding 3D point is moving in the scene and how the camera is moving relative to the scene. Scene reconstruction Given one or (typically) more images of a scene, or a video, scene reconstruction aims at computing a 3D model of the scene. In the simplest case, the model can be a set of 3D points. More sophisticated methods produce a complete 3D surface model. The advent of 3D imaging that does not require motion or scanning, together with related processing algorithms, is enabling rapid advances in this field. Grid-based 3D sensing can be used to acquire 3D images from multiple angles. Algorithms are now available to stitch multiple 3D images together into point clouds and 3D models. Image restoration Image restoration comes into play when the original image has been degraded or damaged by external factors such as incorrect lens positioning, transmission interference, low lighting, or motion blur, collectively referred to as noise. When images are degraded or damaged, the information to be extracted from them is damaged as well, so the image needs to be recovered or restored to its intended form. The aim of image restoration is the removal of noise (sensor noise, motion blur, etc.) from images. The simplest possible approach for noise removal is to apply various types of filters, such as low-pass filters or median filters. More sophisticated methods assume a model of how the local image structures look, to distinguish them from noise. By first analyzing the image data in terms of the local image structures, such as lines or edges, and then controlling the filtering based on local information from the analysis step, a better level of noise removal is usually obtained compared to the simpler approaches. An example in this field is inpainting. System methods The organization of a computer vision system is highly application-dependent. Some systems are stand-alone applications that solve a specific measurement or detection problem, while others constitute a sub-system of a larger design which, for example, also contains sub-systems for control of mechanical actuators, planning, information databases, man-machine interfaces, etc. The specific implementation of a computer vision system also depends on whether its functionality is pre-specified or whether some part of it can be learned or modified during operation. Many functions are unique to the application.
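As a minimal sketch of the simple filtering approach to restoration described above, the following fragment corrupts a synthetic image with impulse ("salt-and-pepper") noise and removes it with a median filter. The image contents, noise level, and filter size are illustrative assumptions (SciPy assumed available):

```python
import numpy as np
from scipy import ndimage

# Clean synthetic image: a bright square on a dark background.
image = np.zeros((100, 100))
image[30:70, 30:70] = 1.0

# Corrupt ~5% of the pixels with salt-and-pepper noise.
rng = np.random.default_rng(2)
mask = rng.random(image.shape) < 0.05
noisy = image.copy()
noisy[mask] = rng.integers(0, 2, mask.sum()).astype(float)

# Median filtering: each pixel is replaced by the median of its
# 3x3 neighbourhood, which discards isolated outliers while
# preserving edges better than a plain low-pass filter.
restored = ndimage.median_filter(noisy, size=3)

# The median filter should undo most of the impulse noise.
print("noisy error:   ", np.abs(noisy - image).mean())
print("restored error:", np.abs(restored - image).mean())
```

The more sophisticated model-based methods mentioned above would instead adapt the filtering to detected local structures such as lines and edges.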
There are, however, typical functions that are found in many computer vision systems. Image acquisition – A digital image is produced by one or several image sensors, which, besides various types of light-sensitive cameras, include range sensors, tomography devices, radar, ultra-sonic cameras, etc. Depending on the type of sensor, the resulting image data is an ordinary 2D image, a 3D volume, or an image sequence. The pixel values typically correspond to light intensity in one or several spectral bands (gray images or colour images), but can also be related to various physical measures, such as depth, absorption or reflectance of sonic or electromagnetic waves, or nuclear magnetic resonance. Pre-processing – Before a computer vision method can be applied to image data in order to extract some specific piece of information, it is usually necessary to process the data in order to ensure that it satisfies certain assumptions implied by the method. Examples are: Re-sampling to ensure that the image coordinate system is correct. Noise reduction to ensure that sensor noise does not introduce false information. Contrast enhancement to ensure that relevant information can be detected. Scale space representation to enhance image structures at locally appropriate scales. Feature extraction – Image features at various levels of complexity are extracted from the image data. Typical examples of such features are: Lines, edges and ridges. Localized interest points such as corners, blobs or points. More complex features may be related to texture, shape or motion. Detection/segmentation – At some point in the processing a decision is made about which image points or regions of the image are relevant for further processing. Examples are: Selection of a specific set of interest points. Segmentation of one or multiple image regions that contain a specific object of interest. Segmentation of the image into a nested scene architecture comprising foreground, object groups, single objects or salient object parts (also referred to as a spatial-taxon scene hierarchy), where visual salience is often implemented as spatial and temporal attention. Segmentation or co-segmentation of one or multiple videos into a series of per-frame foreground masks, while maintaining their temporal semantic continuity. High-level processing – At this step the input is typically a small set of data, for example a set of points or an image region which is assumed to contain a specific object. The remaining processing deals with, for example: Verification that the data satisfy model-based and application-specific assumptions. Estimation of application-specific parameters, such as object pose or object size. Image recognition – classifying a detected object into different categories. Image registration – comparing and combining two different views of the same object. Decision making – Making the final decision required for the application, for example: Pass/fail on automatic inspection applications. Match/no-match in recognition applications. Flag for further human review in medical, military, security and recognition applications. Image-understanding systems Image-understanding systems (IUS) include three levels of abstraction as follows: low level includes image primitives such as edges, texture elements, or regions; intermediate level includes boundaries, surfaces and volumes; and high level includes objects, scenes, or events. Many of these requirements are entirely topics for further research.
The representational requirements in the designing of IUS for these levels are: representation of prototypical concepts, concept organization, spatial knowledge, temporal knowledge, scaling, and description by comparison and differentiation. While inference refers to the process of deriving new, not explicitly represented facts from currently known facts, control refers to the process that selects which of the many inference, search, and matching techniques should be applied at a particular stage of processing. Inference and control requirements for IUS are: search and hypothesis activation, matching and hypothesis testing, generation and use of expectations, change and focus of attention, certainty and strength of belief, and inference and goal satisfaction. Hardware There are many kinds of computer vision systems; however, all of them contain these basic elements: a power source, at least one image acquisition device (camera, CCD, etc.), a processor, and control and communication cables or some kind of wireless interconnection mechanism. In addition, a practical vision system contains software, as well as a display in order to monitor the system. Vision systems for indoor spaces, like most industrial ones, contain an illumination system and may be placed in a controlled environment. Furthermore, a completed system includes many accessories, such as camera supports, cables, and connectors. Most computer vision systems use visible-light cameras passively viewing a scene at frame rates of at most 60 frames per second (usually far slower). A few computer vision systems use image-acquisition hardware with active illumination or something other than visible light or both, such as structured-light 3D scanners, thermographic cameras, hyperspectral imagers, radar imaging, lidar scanners, magnetic resonance imagers, side-scan sonar, synthetic aperture sonar, etc. Such hardware captures "images" that are then processed often using the same computer vision algorithms used to process visible-light images. While traditional broadcast and consumer video systems operate at a rate of 30 frames per second, advances in digital signal processing and consumer graphics hardware have made high-speed image acquisition, processing, and display possible for real-time systems on the order of hundreds to thousands of frames per second. For applications in robotics, fast, real-time video systems are critically important and often can simplify the processing needed for certain algorithms. When combined with a high-speed projector, fast image acquisition allows 3D measurement and feature tracking to be realized. Egocentric vision systems are composed of a wearable camera that automatically takes pictures from a first-person perspective. As of 2016, vision processing units are emerging as a new class of processor, to complement CPUs and graphics processing units (GPUs) in this role. See also Computational imaging Computational photography Computer audition Egocentric vision Machine vision glossary Space mapping Teknomo–Fernandez algorithm Vision science Visual agnosia Visual perception Visual system Lists Outline of computer vision List of emerging technologies Outline of artificial intelligence References Further reading External links USC Iris computer vision conference list Computer vision papers on the web A complete list of papers of the most relevant computer vision conferences. Computer Vision Online News, source code, datasets and job offers related to computer vision. CVonline Bob Fisher's Compendium of Computer Vision.
British Machine Vision Association Supporting computer vision research within the UK via the BMVC and MIUA conferences, Annals of the BMVA (open-source journal), BMVA Summer School and one-day meetings Computer Vision Container, Joe Hoeller GitHub: Widely adopted open-source container for GPU accelerated computer vision applications. Used by researchers, universities, private companies as well as the U.S. Gov't. Image processing Packaging machinery Articles containing video clips
https://en.wikipedia.org/wiki/Chaldea
Chaldea
Chaldea was a small country that existed between the late 10th or early 9th and mid-6th centuries BC, after which the country and its people were absorbed and assimilated into the indigenous population of Babylonia. Semitic-speaking, it was located in the marshy land of the far southeastern corner of Mesopotamia and briefly came to rule Babylon. The Hebrew Bible uses the term Kaśdim, and this is translated as Chaldaeans in the Greek Old Testament, although there is some dispute as to whether Kasdim in fact means Chaldean or refers to the south Mesopotamian Kaldu. During a period of weakness in the East Semitic-speaking kingdom of Babylonia, new tribes of West Semitic-speaking migrants arrived in the region from the Levant between the 11th and 9th centuries BC. The earliest waves consisted of Suteans and Arameans, followed a century or so later by the Kaldu, a group who became known later as the Chaldeans or the Chaldees. These migrations did not affect the powerful kingdom and empire of Assyria in Upper Mesopotamia, which repelled these incursions. These nomadic Chaldeans settled in the far southeastern portion of Babylonia, chiefly on the left bank of the Euphrates. Though for a short time the name commonly referred to the whole of southern Mesopotamia in Hebraic literature, this was a geographical and historical misnomer, as Chaldea proper was in fact only the plain in the far southeast formed by the deposits of the Euphrates and the Tigris, extending a relatively short distance along the course of these rivers and of modest average width. There were several kings of Chaldean origin who ruled Babylonia. From 626 BC to 539 BC, a ruling family referred to as the Chaldean dynasty, named after their possible Chaldean origin, ruled the kingdom at its height under the Neo-Babylonian Empire, although the final ruler of this empire, Nabonidus (556–539 BC) (and his son and regent Belshazzar), was a usurper of Assyrian ancestry. Name The name Chaldaea is a latinization of the Greek Khaldaía, a hellenization of the Akkadian name of the Kaldu. The name appears in Hebrew in the Bible as Kaśdim, with a corresponding Aramaic form. The Hebrew word possibly appears in the Bible (Book of Genesis 22:22) in the name "Kesed" (כשד), the singular form of "Kasdim" (כַּשְׂדִּים), meaning Chaldeans. Kesed is identified as a son of Abraham's brother Nahor (and brother of Kemuel, the father of Aram), residing in Aram Naharaim. The Jewish historian Flavius Josephus (37 – c. 100) also links Arphaxad and Chaldaea in his Antiquities of the Jews, stating, "Arphaxad named the Arphaxadites, who are now called Chaldeans." Land In the early period, between the early 9th century and late 7th century BC, mat Kaldi was the name of a small, sporadically independent, migrant-founded territory under the domination of the Neo-Assyrian Empire (911–605 BC) in southeastern Babylonia, extending to the western shores of the Persian Gulf. The expression mat Bit Yâkin is also used, apparently synonymously. Bit Yâkin was the name of the largest and most powerful of the five tribes of the Chaldeans, or equivalently, their territory. The original extension of Bit Yâkin is not known precisely, but it extended from the lower Tigris into the Arabian Peninsula. Sargon II mentions it as extending as far as Dilmun or "sea-land" (littoral Eastern Arabia). "Chaldea" or mat Kaldi generally referred to the low, marshy, alluvial land around the estuaries of the Tigris and Euphrates, which at the time discharged their waters through separate mouths into the sea.
The tribal capital Dur Yâkin was the original seat of Merodach-Baladan. The king of Chaldea was also called the king of Bit Yakin, just as the kings of Babylonia and Assyria were regularly styled simply king of Babylon or Assur, the capital city in each case. In the same way, what is now known as the Persian Gulf was sometimes called "the Sea of Bit Yakin", and sometimes "the Sea of the Land of Chaldea". "Chaldea" came to be used in a wider sense, referring to southern Mesopotamia in general, following the brief ascendancy of the Chaldeans during 608–557 BC. This is especially the case in the Hebrew Bible, which was substantially composed during this period (roughly corresponding to the period of the Babylonian captivity). The Book of Jeremiah makes frequent reference to the Chaldeans (King James Version "Chaldees", following the LXX; in Biblical Hebrew, Kasdîm). The Book of Habakkuk 1:6 calls them "that bitter and hasty nation". The Book of Isaiah 23:13 (DRB) states, "Behold the land of the Chaldeans, there was not such a people, the Assyrians founded it: they have led away the strong ones thereof into captivity, they have destroyed the houses thereof, they have brought it to ruin." Ancient Chaldeans Unlike the East Semitic Akkadian-speaking Akkadians, Assyrians and Babylonians, whose ancestors had been established in Mesopotamia since at least the 30th century BC, the Chaldeans were not a native Mesopotamian people, but were late 10th or early 9th century BC West Semitic Levantine migrants to the southeastern corner of the region, who had played no part in the previous 3,000 years or so of Sumero-Akkadian and Assyro-Babylonian Mesopotamian civilization and history. The ancient Chaldeans seem to have migrated into Mesopotamia sometime between c. 940–860 BC, a century or so after other new Semitic arrivals, the Arameans and the Suteans, appeared in Babylonia, c. 1100 BC. They first appear in the written record in the annals of the Assyrian king Shalmaneser III during the 850s BC. This was a period of weakness in Babylonia, and its ineffectual native kings were unable to prevent new waves of semi-nomadic foreign peoples from invading and settling in the land. Though belonging to the same West Semitic-speaking ethnic group and migrating from the same Levantine regions as the earlier-arriving Aramaeans, they are to be differentiated; the Assyrian king Sennacherib, for example, carefully distinguishes them in his inscriptions. The Chaldeans were for a time able to keep their identity despite the dominant native Assyro-Babylonian (Sumero-Akkadian-derived) culture, although, as was the case for the earlier Amorites, Kassites and Suteans before them, by the time Babylon fell in 539 BC, and perhaps before, the Chaldeans had ceased to exist as a specific race of people. In the Hebrew Bible, "Ur of the Chaldees" (Ur Kaśdim) is cited as the starting point of the patriarch Abraham's journey to Canaan. Language The ancient Chaldeans originally spoke a West Semitic language similar to Aramaic. During the Neo-Assyrian Empire, the Assyrian king Tiglath-Pileser III introduced an Eastern Aramaic dialect as the lingua franca of his empire in the mid-8th century BC. As a result of this innovation, in late periods both the Babylonian and Assyrian dialects of Akkadian became marginalized, and Akkadian-influenced Mesopotamian Aramaic took its place across Mesopotamia, including among the Chaldeans, and later also in the Levant.
One form of this once widespread Aramaic language was used in some books of the Hebrew Bible (the Book of Daniel and the Book of Ezra). The use of the name "Chaldean" (Chaldaic, Chaldee) to describe it, first introduced by Jerome of Stridon (d. 420), became common in early Aramaic studies, but that misnomer was later corrected when modern scholars concluded that the Aramaic dialect used in the Hebrew Bible was not related to the ancient Chaldeans and their language. History The region that the Chaldeans eventually made their homeland was in relatively poor southeastern Mesopotamia, at the head of the Persian Gulf. They appear to have migrated into southern Babylonia from the Levant at some unknown point between the end of the reign of Ninurta-kudurri-usur II (a contemporary of Tiglath-Pileser II) circa 940 BC, and the start of the reign of Marduk-zakir-shumi I in 855 BC, although there is no historical proof of their existence prior to the late 850s BC. For perhaps a century or so after settling in the area, these semi-nomadic migrant Chaldean tribes had no impact on the pages of history, seemingly remaining subjugated by the native Akkadian-speaking kings of Babylon or perhaps by regionally influential Aramean tribes. The main players in southern Mesopotamia during this period were Babylonia and Assyria, together with Elam to the east and the Aramaeans, who had already settled in the region a century or so prior to the arrival of the Chaldeans. The very first written historical attestation of the existence of Chaldeans occurs in 852 BC, in the annals of the Assyrian king Shalmaneser III, who mentions invading the southeastern extremes of Babylonia and subjugating one Mushallim-Marduk, the chief of the Amukani tribe and overall leader of the Kaldu tribes, together with capturing the town of Baqani and extracting tribute from Adini, chief of the Bet-Dakkuri, another Chaldean tribe. Shalmaneser III had invaded Babylonia at the request of its own king, Marduk-zakir-shumi I, who, being threatened by his own rebellious relations together with powerful Aramean tribes, pleaded with the more powerful Assyrian king for help. The subjugation of the Chaldean tribes by the Assyrian king appears to have been an aside, as they were not at that time a powerful force or a threat to the native Babylonian king. Important Kaldu tribes and their regions in southeastern Babylonia were Bit-Yâkin (the original area the Chaldeans settled in, on the Persian Gulf), Bet-Dakuri, Bet-Adini, Bet-Amukkani, and Bet-Shilani. Chaldean leaders had by this time already adopted Assyro-Babylonian names, religion, language, and customs, indicating that they had become Akkadianized to a great degree. The Chaldeans remained quietly ruled by the native Babylonians (who were in turn subjugated by their Assyrian relations) for the next seventy-two years, only coming to historical prominence for the first time in Babylonia in 780 BC, when a previously unknown Chaldean named Marduk-apla-usur usurped the throne from the native Babylonian king Marduk-bel-zeri (790–780 BC). The latter was a vassal of the Assyrian king Shalmaneser IV (783–773 BC), who was otherwise occupied quelling a civil war in Assyria at the time.
This was to set a precedent for all future Chaldean aspirations on Babylon during the Neo-Assyrian Empire: always too weak to confront a strong Assyria alone and directly, the Chaldeans awaited periods when Assyrian kings were distracted elsewhere in their vast empire, or engaged in internal conflicts, and then, in alliance with other powers stronger than themselves (usually Elam), made a bid for control over Babylonia. Shalmaneser IV attacked and defeated Marduk-apla-usur, retaking northern Babylonia and forcing on him a border treaty in Assyria's favour. The Assyrians allowed him to remain on the throne, although subject to Assyria. Eriba-Marduk, another Chaldean, succeeded him in 769 BC, and his son, Nabu-shuma-ishkun, in 761 BC, with both being dominated by the new Assyrian king Ashur-Dan III (772–755 BC). Babylonia appears to have been in a state of chaos during this time, with the north occupied by Assyria, its throne occupied by foreign Chaldeans, and continual civil unrest throughout the land. The Chaldean rule proved short-lived. A native Babylonian king named Nabonassar (748–734 BC) defeated and overthrew the Chaldean usurpers in 748 BC, restored indigenous rule, and successfully stabilised Babylonia. The Chaldeans once more faded into obscurity for the next three decades. During this time both the Babylonians and the Chaldean and Aramean migrant groups who had settled in the land once more fell completely under the yoke of the powerful Assyrian king Tiglath-Pileser III (745–727 BC), a ruler who introduced Imperial Aramaic as the lingua franca of the empire. The Assyrian king at first made Nabonassar and his successor native Babylonian kings Nabu-nadin-zeri, Nabu-suma-ukin II and Nabu-mukin-zeri his subjects, but decided to rule Babylonia directly from 729 BC. He was followed by Shalmaneser V (727–722 BC), who also ruled Babylon in person. When Sargon II (722–705 BC) ascended the throne of the Assyrian Empire in 722 BC after the death of Shalmaneser V, he was forced to launch a major campaign in his subject states of Persia, Mannea and Media in Ancient Iran to defend his territories there. He defeated and drove out the Scythians and Cimmerians who had attacked Assyria's Persian and Median vassal colonies in the region. At the same time, Egypt began encouraging and supporting rebellion against Assyria in Israel and Canaan, forcing the Assyrians to send troops to deal with the Egyptians. These events allowed the Chaldeans to once more attempt to assert themselves. While the Assyrian king was otherwise occupied defending his Iranian colonies from the Scythians and Cimmerians and driving the Egyptians from Canaan, Marduk-apla-iddina II (the Biblical Merodach-Baladan) of Bit-Yâkin allied himself with the powerful Elamite kingdom and the native Babylonians, briefly seizing control of Babylon between 721 and 710 BC. With the Scythians and Cimmerians vanquished, the Medes and Persians pledging loyalty, and the Egyptians defeated and ejected from southern Canaan, Sargon II was free at last to deal with the Chaldeans, Babylonians, and Elamites. He attacked and deposed Marduk-apla-iddina II in 710 BC, also defeating his Elamite allies in the process. After this defeat by the Assyrians, Merodach-Baladan fled to his protectors in Elam. In 703 BC, Merodach-Baladan very briefly regained the throne from a native Akkadian-Babylonian ruler, Marduk-zakir-shumi II, who was a puppet of the new Assyrian king, Sennacherib (705–681 BC).
He was once more soundly defeated at Kish, and once again fled to Elam, where he died in exile after one final failed attempt to raise a revolt against Assyria in 700 BC, this time not in Babylon, but in the Chaldean tribal land of Bit-Yâkin. A native Babylonian king named Bel-ibni (703–701 BC) was placed on the throne as a puppet of Assyria. The next challenge to Assyrian domination came from the Elamites in 694 BC, with Nergal-ushezib deposing and murdering Ashur-nadin-shumi (700–694 BC), the Assyrian prince who was king of Babylon and son of Sennacherib. The Chaldeans and Babylonians again allied with their more powerful Elamite neighbors in this endeavour. This prompted the enraged Assyrian king Sennacherib to invade and subjugate Elam and Chaldea and to sack Babylon, laying waste to and largely destroying the city. Babylon was regarded as a sacred city by all Mesopotamians, including the Assyrians, and this act eventually resulted in Sennacherib's being murdered by his own sons while he was praying to the god Nisroch in Nineveh. Esarhaddon (681–669 BC) succeeded Sennacherib as ruler of the Assyrian Empire. He completely rebuilt Babylon and brought peace to the region. He conquered Egypt, Nubia and Libya and entrenched his mastery over the Persians, Medes, Parthians, Scythians, Cimmerians, Arameans, Israelites, Phoenicians, Canaanites, Urartians, Pontic Greeks, Cilicians, Phrygians, Lydians, Manneans and Arabs. For the next 60 or so years, Babylon and Chaldea remained peacefully under direct Assyrian control. The Chaldeans remained subjugated and quiet during this period, and the next major revolt in Babylon against the Assyrian empire was fomented not by a Chaldean, Babylonian or Elamite, but by Shamash-shum-ukin, who was an Assyrian king of Babylon and elder brother of Ashurbanipal (668–627 BC), the new ruler of the Neo-Assyrian Empire. Shamash-shum-ukin (668–648 BC) had become infused with Babylonian nationalism after sixteen years of peaceful subjection to his brother, and despite being Assyrian himself, declared that the city of Babylon and not Nineveh or Assur should be the seat of the empire. In 652 BC, he raised a powerful coalition of peoples resentful of their subjugation to Assyria against his own brother Ashurbanipal. The alliance included the Babylonians, Persians, Chaldeans, Medes, Elamites, Suteans, Arameans, Israelites, Arabs and Canaanites, together with some disaffected elements among the Assyrians themselves. After a bitter struggle lasting five years, the Assyrian king triumphed over his rebellious brother in 648 BC; Elam was utterly destroyed, and the Babylonians, Persians, Medes, Chaldeans, Arabs, and others were savagely punished. An Assyrian governor named Kandalanu was then placed on the throne of Babylon to rule on behalf of Ashurbanipal. The next 22 years were peaceful, and neither the Babylonians nor the Chaldeans posed a threat to the dominance of Ashurbanipal. However, after the death of the mighty Ashurbanipal (and Kandalanu) in 627 BC, the Neo-Assyrian Empire descended into a series of bitter internal dynastic civil wars that were to be the cause of its downfall. Ashur-etil-ilani (626–623 BC) ascended to the throne of the empire in 626 BC but was immediately engulfed in a torrent of fierce rebellions instigated by rival claimants. He was deposed in 623 BC by an Assyrian general (turtanu) named Sin-shumu-lishir (623–622 BC), who was also declared king of Babylon.
Sin-shar-ishkun (622–612 BC), the brother of Ashur-etil-ilani, took back the throne of the empire from Sin-shumu-lishir in 622 BC, but was then himself faced with unremitting rebellion against his rule by his own people. Continual conflict among the Assyrians led to a myriad of subject peoples, from Cyprus to Persia and from the Caucasus to Egypt, quietly reasserting their independence and ceasing to pay tribute to Assyria. Nabopolassar, a previously obscure and unknown Chaldean chieftain, followed the opportunistic tactics laid down by previous Chaldean leaders to take advantage of the chaos and anarchy gripping Assyria and Babylonia and seized the city of Babylon in 620 BC with the help of its native Babylonian inhabitants. Sin-shar-ishkun amassed a powerful army and marched into Babylon to regain control of the region. Nabopolassar was saved from likely destruction because yet another massive Assyrian rebellion broke out in Assyria proper, including the capital Nineveh, which forced the Assyrian king to turn back in order to quell the revolt. Nabopolassar took advantage of this situation, seizing the ancient city of Nippur in 619 BC, a mainstay of pro-Assyrianism in Babylonia, and thus Babylonia as a whole. However, his position was still far from secure, and bitter fighting continued in the Babylonian heartlands from 620 to 615 BC, with Assyrian forces encamped in Babylonia in an attempt to eject Nabopolassar. Nabopolassar attempted a counterattack, marched his army into Assyria proper in 616 BC, and tried to besiege Assur and Arrapha (modern Kirkuk), but was defeated by Sin-shar-ishkun and chased back into Babylonia after being driven from Idiqlat (modern Tikrit) at the southernmost end of Assyria. A stalemate seemed to have ensued, with Nabopolassar unable to make any inroads into Assyria despite its greatly weakened state, and Sin-shar-ishkun unable to eject Nabopolassar from Babylonia due to constant rebellions and civil war among his own people. Nabopolassar's position, and the fate of the Assyrian empire, was sealed when he entered into an alliance with another of Assyria's former vassals, the Medes, the now dominant people of what was to become Persia. The Median king Cyaxares had also recently taken advantage of the anarchy in the Assyrian Empire; while officially still a vassal of Assyria, he took the opportunity to meld the Iranian peoples (the Medes, Persians, Sagartians and Parthians) into a large and powerful Median-dominated force. The Medes, Persians, Parthians, Chaldeans and Babylonians formed an alliance that also included the Scythians and Cimmerians to the north. While Sin-shar-ishkun was fighting both the rebels in Assyria and the Chaldeans and Babylonians in southern Mesopotamia, Cyaxares, in alliance with the Scythians and Cimmerians, launched a surprise attack on civil-war-beleaguered Assyria in 615 BC, sacking Kalhu (the Biblical Calah/Nimrud) and taking Arrapha (modern Kirkuk). Nabopolassar, still pinned down in southern Mesopotamia, was not involved in this major breakthrough against Assyria. From this point, however, the alliance of Medes, Persians, Chaldeans, Babylonians, Sagartians, Scythians and Cimmerians fought in unison against Assyria. Despite the sorely depleted state of Assyria, bitter fighting ensued.
Throughout 614 BC the alliance of powers continued to make inroads into Assyria itself, although in 613 BC the Assyrians somehow rallied to score a number of counterattacking victories over the Medes-Persians, Babylonians-Chaldeans and Scythians-Cimmerians. This led the coalition of forces ranged against it to unite and launch a massive combined attack in 612 BC, finally besieging and sacking Nineveh in late 612 BC and killing Sin-shar-ishkun in the process. A new Assyrian king, Ashur-uballit II (612–605 BC), took the crown amidst the house-to-house fighting in Nineveh and refused a request to bow in vassalage to the rulers of the alliance. He managed to fight his way out of Nineveh and reach the northern Assyrian city of Harran, where he founded a new capital. Assyria resisted for another seven years until 605 BC, when the remnants of the Assyrian army and the army of the Egyptians, whose 26th Dynasty had formed a brief allied coalition with the Assyrians, were defeated at Carchemish. Nabopolassar and his Median, Scythian and Cimmerian allies were now in possession of much of the huge Neo-Assyrian Empire. The Egyptians had belatedly come to the aid of Assyria, which they had hoped to maintain as a secure buffer between Egypt and the new powers of Babylonia, the Medes and the Persians, having already been raided by the Scythians. The Chaldean king of Babylon now ruled all of southern Mesopotamia (Assyria in the north was ruled by the Medes), and the former Assyrian possessions of Aram (Syria), Phoenicia, Israel, Cyprus, Edom, Philistia, and parts of Arabia, while the Medes took control of the former Assyrian colonies in Ancient Iran, Asia Minor and the Caucasus. Nabopolassar was not able to enjoy his success for long, dying in 604 BC, only one year after the victory at Carchemish. He was succeeded by his son, who took the name Nebuchadnezzar II, after the unrelated 12th century BC native Akkadian-Babylonian king Nebuchadnezzar I, indicating the extent to which the migrant Chaldeans had become infused with native Mesopotamian culture. Nebuchadnezzar II and his allies may well have been forced to deal with remnants of Assyrian resistance based in and around Dur-Katlimmu, as Assyrian imperial records continue to be dated in this region between 604 and 599 BC. In addition, the Egyptians remained in the region in an attempt to revive the Asian colonies of the ancient Egyptian Empire. Nebuchadnezzar II was to prove himself the greatest of the Chaldean rulers, rivaling another non-native ruler, the 18th century BC Amorite king Hammurabi, as the greatest king of Babylon. He was a patron of the cities and a spectacular builder, rebuilding all of Babylonia's major cities on a lavish scale. His building activity at Babylon, expanding on the earlier major and impressive rebuilding of the Assyrian king Esarhaddon, helped to turn it into the immense and beautiful city of legend. Babylon covered a vast area, surrounded by moats and ringed by a double circuit of walls. The Euphrates flowed through the center of the city, spanned by a beautiful stone bridge. At the center of the city rose the giant ziggurat called Etemenanki, "House of the Frontier Between Heaven and Earth," which lay next to the Temple of Marduk. He is also believed by many historians to have built the Hanging Gardens of Babylon (although others believe these gardens were built much earlier by an Assyrian king in Nineveh) for his wife, a Median princess from the green mountains, so that she would feel at home.
A capable leader, Nebuchadnezzar II conducted successful military campaigns; cities like Tyre, Sidon and Damascus were subjugated. He also conducted numerous campaigns in Asia Minor against the Scythians, Cimmerians, and Lydians. Like their Assyrian relations, the Babylonians had to campaign yearly in order to control their colonies. In 601 BC, Nebuchadnezzar II was involved in a major but inconclusive battle against the Egyptians. In 599 BC, he invaded Arabia and routed the Arabs at Qedar. In 597 BC, he invaded Judah, captured Jerusalem after the siege of Jerusalem (597 BC), and deposed its king Jehoiachin, carrying the Israelites into captivity in Babylon. Egyptian and Babylonian armies fought each other for control of the Near East throughout much of Nebuchadnezzar's reign, and this encouraged king Zedekiah of Judah to revolt. After an eighteen-month siege, Jerusalem was captured in 587 BC, thousands of Jews were deported to Babylon, and Solomon's Temple was razed to the ground. Nebuchadnezzar successfully fought the Pharaohs Psammetichus II and Apries throughout his reign, and during the reign of Pharaoh Amasis in 568 BC it is rumoured that he may have briefly invaded Egypt itself. By 572 BC, Nebuchadnezzar was in full control of Babylonia, Chaldea, Aramea (Syria), Phoenicia, Israel, Judah, Philistia, Samaria, Jordan, northern Arabia, and parts of Asia Minor. Nebuchadnezzar died of illness in 562 BC after a one-year co-reign with his son, Amel-Marduk, who was deposed in 560 BC after a reign of only two years. End of the Chaldean dynasty Neriglissar succeeded Amel-Marduk. It is unclear whether he was in fact an ethnic Chaldean or a native Babylonian nobleman, as he was not related by blood to Nabopolassar's descendants, having married into the ruling family. He conducted successful military campaigns against the Hellenic inhabitants of Cilicia, which had threatened Babylonian interests. Neriglissar reigned for only four years and was succeeded by the youthful Labashi-Marduk in 556 BC. Again, it is unclear whether he was a Chaldean or a native Babylonian. Labashi-Marduk reigned only a matter of months before being deposed by Nabonidus in late 556 BC. Nabonidus was certainly not a Chaldean, but an Assyrian from Harran, the last capital of Assyria, and he proved to be the final native Mesopotamian king of Babylon. He and his son, the regent Belshazzar, were deposed by the Persians under Cyrus the Great in 539 BC. When the Babylonian Empire was absorbed into the Persian Achaemenid Empire, the name "Chaldean" lost its meaning in reference to a particular ethnicity or land, but lingered for a while as a term solely and explicitly used to describe a societal class of astrologers and astronomers in southern Mesopotamia. The original Chaldean tribe had long ago become Akkadianized, adopting Akkadian culture, religion, language and customs, blending into the majority native population, and eventually wholly disappearing as a distinct race of people, as had been the case with other preceding migrant peoples, such as the Amorites, Kassites, Suteans and Arameans of Babylonia. The Persians considered this Chaldean societal class to be masters of reading and writing, and especially versed in all forms of incantation, sorcery, witchcraft, and the magical arts. They spoke of astrologers and astronomers as Chaldeans, and the word is used with this specific meaning in the Book of Daniel (Dan. i. 4, ii. 2 et seq.) and by classical writers, such as Strabo.
The disappearance of the Chaldeans as an ethnicity and of Chaldea as a land is evidenced by the fact that the Persian rulers of the Achaemenid Empire (539–330 BC) did not retain a province called "Chaldea", nor did they refer to "Chaldeans" as a race of people in their written annals. This is in contrast to Assyria, and for a time Babylonia also, where the Persians retained the names Assyria and Babylonia as designations for distinct geo-political entities within the Achaemenid Empire. In the case of the Assyrians in particular, Achaemenid records show Assyrians holding important positions within the empire, particularly with regard to military and civil administration. Legacy The term Chaldean was still in use at the time of Cicero (106–43 BC), long after the Chaldeans themselves had disappeared; in one of his speeches Cicero mentions "Chaldean astrologers", and he speaks of them more than once in his De Divinatione. Other classical Latin writers who speak of them as distinguished for their knowledge of astronomy and astrology are Pliny the Elder, Valerius Maximus, Aulus Gellius, Cato the Elder, Lucretius, and Juvenal. Horace, in his Carpe diem ode, speaks of the "Babylonian calculations" (Babylonii numeri), the horoscopes of astrologers consulted regarding the future. In late antiquity, a variant of the Aramaic language used in some books of the Bible was misnamed Chaldean by Jerome of Stridon. That inaccurate usage continued down the centuries in Western Europe, and it was still customary during the nineteenth century, until the misnomer was corrected by scholars. In West Asian, Greek and Hebraic sources, however, the term for the language spoken in Mesopotamia was commonly "Assyrian" and later also "Syriac". Accordingly, in the earliest recorded "Western" mentions of the Christians of what is now Iraq and nearby countries, "Chaldean" is used with reference to their language. In 1220/1, Jacques de Vitry wrote that "they denied that Mary was the Mother of God and claimed that Christ existed in two persons. They consecrated leavened bread and used the 'Chaldean' (Syriac) language". In the fifteenth century the term "Chaldeans" was first applied specifically to Assyrians living in Cyprus who entered a union with Rome, no longer merely with reference to their language but as the name of a new church. The common ethnic term for the Aramaic-speaking inhabitants of Northern Mesopotamia used by the people themselves and their Persian, Armenian, Arab, Greek, Georgian and Kurdish neighbours, both before and after the advent of Christianity in Iraq, Northeast Syria, Southeast Turkey and Northwest Iran, was always Assyrian, and also Syrian (a later derivation of Assyrian), the Assyrian continuity in these regions being well documented. References Sources External links States and territories established in the 10th century BC States and territories disestablished in the 6th century BC Ancient peoples Babylonia Ancient Mesopotamia Ur of the Chaldees Former kingdoms
https://en.wikipedia.org/wiki/Rendering%20%28computer%20graphics%29
Rendering (computer graphics)
Rendering or image synthesis is the process of generating a photorealistic or non-photorealistic image from a 2D or 3D model by means of a computer program. The resulting image is referred to as the render. Multiple models can be defined in a scene file containing objects in a strictly defined language or data structure. The scene file contains geometry, viewpoint, texture, lighting, and shading information describing the virtual scene. The data contained in the scene file is then passed to a rendering program to be processed and output to a digital image or raster graphics image file. The term "rendering" is analogous to the concept of an artist's impression of a scene. The term "rendering" is also used to describe the process of calculating effects in a video editing program to produce the final video output. Rendering is one of the major sub-topics of 3D computer graphics, and in practice it is always connected to the others. It is the last major step in the graphics pipeline, giving models and animation their final appearance. With the increasing sophistication of computer graphics since the 1970s, it has become a more distinct subject. Rendering has uses in architecture, video games, simulators, movie and TV visual effects, and design visualization, each employing a different balance of features and techniques. A wide variety of renderers are available for use. Some are integrated into larger modeling and animation packages, some are stand-alone, and some are free open-source projects. On the inside, a renderer is a carefully engineered program based on multiple disciplines, including light physics, visual perception, mathematics, and software development. Though the technical details of rendering methods vary, the general challenges to overcome in producing a 2D image on a screen from a 3D representation stored in a scene file are handled by the graphics pipeline in a rendering device such as a GPU. A GPU is a purpose-built device that assists a CPU in performing complex rendering calculations. If a scene is to look relatively realistic and predictable under virtual lighting, the rendering software must solve the rendering equation. The rendering equation does not account for all lighting phenomena, but instead acts as a general lighting model for computer-generated imagery. In the case of 3D graphics, scenes can be pre-rendered or generated in real time. Pre-rendering is a slow, computationally intensive process that is typically used for movie creation, where scenes can be generated ahead of time, while real-time rendering is often done for 3D video games and other applications that must dynamically create scenes. 3D hardware accelerators can improve real-time rendering performance. Usage When the pre-image (usually a wireframe sketch) is complete, rendering is used, which adds in bitmap textures or procedural textures, lights, bump mapping, and relative position to other objects. The result is a completed image the consumer or intended viewer sees. For movie animations, several images (frames) must be rendered and stitched together in a program capable of making an animation of this sort. Most 3D image editing programs can do this. Features A rendered image can be understood in terms of a number of visible features. Rendering research and development has been largely motivated by finding ways to simulate these efficiently. Some relate directly to particular algorithms and techniques, while others are produced together.
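Several of the features listed next correspond to terms of the rendering equation mentioned earlier. In the standard formulation (notation varies between sources), it reads:

\[
L_o(x, \omega_o) = L_e(x, \omega_o) + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\, (\omega_i \cdot n)\, \mathrm{d}\omega_i
\]

where \(L_o\) is the radiance leaving surface point \(x\) in direction \(\omega_o\), \(L_e\) is emitted radiance, \(f_r\) is the bidirectional reflectance distribution function (BRDF), \(L_i\) is incoming radiance from direction \(\omega_i\), and the integral is taken over the hemisphere \(\Omega\) about the surface normal \(n\). Because the unknown radiance appears on both sides of the equation, it can only be solved approximately, and the features below correspond broadly to how faithfully its different terms are simulated.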
Shading: how the color and brightness of a surface varies with lighting
Texture-mapping: a method of applying detail to surfaces
Bump-mapping: a method of simulating small-scale bumpiness on surfaces
Fogging/participating medium: how light dims when passing through non-clear atmosphere or air
Shadows: the effect of obstructing light
Soft shadows: varying darkness caused by partially obscured light sources
Reflection: mirror-like or highly glossy reflection
Transparency (optics), transparency (graphic) or opacity: sharp transmission of light through solid objects
Translucency: highly scattered transmission of light through solid objects
Refraction: bending of light associated with transparency
Diffraction: bending, spreading, and interference of light passing by an object or aperture that disrupts the ray
Indirect illumination: surfaces illuminated by light reflected off other surfaces, rather than directly from a light source (also known as global illumination)
Caustics (a form of indirect illumination): reflection of light off a shiny object, or focusing of light through a transparent object, to produce bright highlights on another object
Depth of field: objects appear blurry or out of focus when too far in front of or behind the object in focus
Motion blur: objects appear blurry due to high-speed motion, or the motion of the camera
Non-photorealistic rendering: rendering of scenes in an artistic style, intended to look like a painting or drawing

Techniques

Many rendering algorithms have been researched, and software used for rendering may employ a number of different techniques to obtain a final image. Tracing every particle of light in a scene is nearly always completely impractical and would take a stupendous amount of time. Even tracing a portion large enough to produce an image takes an inordinate amount of time if the sampling is not intelligently restricted. Therefore, a few loose families of more-efficient light transport modeling techniques have emerged:

rasterization, including scanline rendering, geometrically projects objects in the scene to an image plane, without advanced optical effects;
ray casting considers the scene as observed from a specific point of view, calculating the observed image based only on geometry and very basic optical laws of reflection intensity, and perhaps using Monte Carlo techniques to reduce artifacts;
ray tracing is similar to ray casting, but employs more advanced optical simulation, and usually uses Monte Carlo techniques to obtain more realistic results at a speed that is often orders of magnitude slower.

The fourth type of light transport technique, radiosity, is not usually implemented as a rendering technique but instead calculates the passage of light as it leaves the light source and illuminates surfaces. These surfaces are usually rendered to the display using one of the other three techniques.

Most advanced software combines two or more of the techniques to obtain good-enough results at reasonable cost. Another distinction is between image order algorithms, which iterate over pixels of the image plane, and object order algorithms, which iterate over objects in the scene. Generally object order is more efficient, as there are usually fewer objects in a scene than pixels.
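The two loop orders can be contrasted with a small, self-contained sketch. The toy "scene" below (axis-aligned rectangles on a text canvas) is invented purely for illustration and is not part of any real renderer; note how the object-order version never visits empty regions of the image:

# Toy scene: primitives are axis-aligned rectangles with a one-character
# "color", drawn on a small text-mode canvas (all names are invented).
RECTS = [  # (x0, y0, x1, y1, color), inclusive bounds
    (2, 1, 10, 4, "#"),
    (8, 3, 14, 6, "o"),
]
WIDTH, HEIGHT = 20, 8

def render_image_order():
    """Image order: for every pixel, ask which primitive covers it."""
    image = [["." for _ in range(WIDTH)] for _ in range(HEIGHT)]
    for y in range(HEIGHT):
        for x in range(WIDTH):
            for (x0, y0, x1, y1, color) in RECTS:  # each pixel scans the scene
                if x0 <= x <= x1 and y0 <= y <= y1:
                    image[y][x] = color            # later primitives overwrite
    return image

def render_object_order():
    """Object order (rasterization): for every primitive, touch only
    the pixels it covers; empty regions are never visited."""
    image = [["." for _ in range(WIDTH)] for _ in range(HEIGHT)]
    for (x0, y0, x1, y1, color) in RECTS:
        for y in range(y0, y1 + 1):
            for x in range(x0, x1 + 1):
                image[y][x] = color
    return image

assert render_image_order() == render_object_order()
print("\n".join("".join(row) for row in render_object_order()))

Both loops produce the same picture; they differ only in how much work they do, which is the efficiency argument made above.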
Scanline rendering and rasterization

A high-level representation of an image necessarily contains elements in a different domain from pixels. These elements are referred to as primitives. In a schematic drawing, for instance, line segments and curves might be primitives. In a graphical user interface, windows and buttons might be the primitives. In rendering of 3D models, triangles and polygons in space might be primitives.

If a pixel-by-pixel (image order) approach to rendering is impractical or too slow for some task, then a primitive-by-primitive (object order) approach to rendering may prove useful. Here, one loops through each of the primitives, determines which pixels in the image it affects, and modifies those pixels accordingly. This is called rasterization, and is the rendering method used by all current graphics cards.

Rasterization is frequently faster than pixel-by-pixel rendering. First, large areas of the image may be empty of primitives; rasterization will ignore these areas, but pixel-by-pixel rendering must pass through them. Second, rasterization can improve cache coherency and reduce redundant work by taking advantage of the fact that the pixels occupied by a single primitive tend to be contiguous in the image. For these reasons, rasterization is usually the approach of choice when interactive rendering is required; however, the pixel-by-pixel approach can often produce higher-quality images and is more versatile because it does not depend on as many assumptions about the image as rasterization.

The older form of rasterization is characterized by rendering an entire face (primitive) as a single color. Alternatively, rasterization can be done in a more complicated manner by first rendering the vertices of a face and then rendering the pixels of that face as a blending of the vertex colors. This version of rasterization has overtaken the old method as it allows the graphics to flow without complicated textures (a rasterized image when used face by face tends to have a very block-like effect if not covered in complex textures; the faces are not smooth because there is no gradual color change from one primitive to the next). This newer method of rasterization utilizes the graphics card's more taxing shading functions and still achieves better performance because the simpler textures stored in memory use less space. Sometimes designers will use one rasterization method on some faces and the other method on others based on the angle at which that face meets other joined faces, thus increasing speed and not hurting the overall effect.

Ray casting

In ray casting the geometry which has been modeled is parsed pixel by pixel, line by line, from the point of view outward, as if casting rays out from the point of view. Where an object is intersected, the color value at the point may be evaluated using several methods. In the simplest, the color value of the object at the point of intersection becomes the value of that pixel. The color may be determined from a texture-map. A more sophisticated method is to modify the color value by an illumination factor, but without calculating the relationship to a simulated light source. To reduce artifacts, a number of rays in slightly different directions may be averaged.

Ray casting involves calculating the "view direction" (from camera position), and incrementally following along that "ray cast" through "solid 3d objects" in the scene, while accumulating the resulting value from each point in 3D space. This is related and similar to "ray tracing", except that the ray cast is usually not "bounced" off surfaces (where "ray tracing" indicates that it is tracing out the light's path, including bounces). "Ray casting" implies that the light ray is following a straight path (which may include traveling through semi-transparent objects). The ray cast is a vector that can originate from the camera or from the scene endpoint ("back to front", or "front to back"). Sometimes the final light value is derived from a "transfer function" and sometimes it is used directly.

Rough simulations of optical properties may be additionally employed: a simple calculation of the ray from the object to the point of view is made. Another calculation is made of the angle of incidence of light rays from the light source(s), and from these as well as the specified intensities of the light sources, the value of the pixel is calculated. Another simulation uses illumination plotted from a radiosity algorithm, or a combination of these two.
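The per-pixel loop described above can be illustrated with a minimal, self-contained sketch. The scene here (a single sphere, a fixed camera, text output) is invented for the example; a real ray caster would add texturing and illumination factors as discussed:

import math

def cast_ray(origin, direction, center, radius):
    """Return distance to the first ray-sphere intersection, or None.
    Solves |o + t*d - c|^2 = r^2 for the smallest positive t
    (direction is assumed normalized, so the quadratic's a = 1)."""
    oc = [p - q for p, q in zip(origin, center)]
    b = 2.0 * sum(d * v for d, v in zip(direction, oc))
    c = sum(v * v for v in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

def render(width=40, height=20):
    """One straight ray per pixel, from the viewpoint outward; no bounces."""
    rows = []
    for j in range(height):
        row = ""
        for i in range(width):
            # map the pixel to a view direction and normalize it
            x = (i - width / 2) / width
            y = (j - height / 2) / height
            length = math.sqrt(x * x + y * y + 1.0)
            d = (x / length, y / length, 1.0 / length)
            hit = cast_ray((0.0, 0.0, 0.0), d, (0.0, 0.0, 3.0), 1.0)
            row += "#" if hit is not None else "."
        rows.append(row)
    return "\n".join(rows)

print(render())

In the simplest variant shown here, a hit simply sets the pixel; shading by an illumination factor or a texture lookup would replace the "#".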
"Ray casting" implies that the light ray is following a straight path (which may include traveling through semi-transparent objects). The ray cast is a vector that can originate from the camera or from the scene endpoint ("back to front", or "front to back"). Sometimes the final light value is derived from a "transfer function" and sometimes it's used directly. Rough simulations of optical properties may be additionally employed: a simple calculation of the ray from the object to the point of view is made. Another calculation is made of the angle of incidence of light rays from the light source(s), and from these as well as the specified intensities of the light sources, the value of the pixel is calculated. Another simulation uses illumination plotted from a radiosity algorithm, or a combination of these two. Ray tracing Ray tracing aims to simulate the natural flow of light, interpreted as particles. Often, ray tracing methods are utilized to approximate the solution to the rendering equation by applying Monte Carlo methods to it. Some of the most used methods are path tracing, bidirectional path tracing, or Metropolis light transport, but also semi realistic methods are in use, like Whitted Style Ray Tracing, or hybrids. While most implementations let light propagate on straight lines, applications exist to simulate relativistic spacetime effects. In a final, production quality rendering of a ray traced work, multiple rays are generally shot for each pixel, and traced not just to the first object of intersection, but rather, through a number of sequential 'bounces', using the known laws of optics such as "angle of incidence equals angle of reflection" and more advanced laws that deal with refraction and surface roughness. Once the ray either encounters a light source, or more probably once a set limiting number of bounces has been evaluated, then the surface illumination at that final point is evaluated using techniques described above, and the changes along the way through the various bounces evaluated to estimate a value observed at the point of view. This is all repeated for each sample, for each pixel. In distribution ray tracing, at each point of intersection, multiple rays may be spawned. In path tracing, however, only a single ray or none is fired at each intersection, utilizing the statistical nature of Monte Carlo experiments. As a brute-force method, ray tracing has been too slow to consider for real-time, and until recently too slow even to consider for short films of any degree of quality, although it has been used for special effects sequences, and in advertising, where a short portion of high quality (perhaps even photorealistic) footage is required. However, efforts at optimizing to reduce the number of calculations needed in portions of a work where detail is not high or does not depend on ray tracing features have led to a realistic possibility of wider use of ray tracing. There is now some hardware accelerated ray tracing equipment, at least in prototype phase, and some game demos which show use of real-time software or hardware ray tracing. Neural rendering Neural rendering is a rendering method using artificial neural networks. Neural rendering includes image-based rendering methods that are used to reconstruct 3D models from 2-dimensional images.One of these methods are photogrammetry, which is a method in which a collection of images from multiple angles of an object are turned into a 3D model. 
Neural rendering

Neural rendering is a rendering method using artificial neural networks. Neural rendering includes image-based rendering methods that are used to reconstruct 3D models from 2-dimensional images. One of these methods is photogrammetry, in which a collection of images from multiple angles of an object is turned into a 3D model. There have also been recent developments in generating and rendering 3D models from text and coarse paintings, notably by NVIDIA, Google and various other companies.

Radiosity

Radiosity is a method which attempts to simulate the way in which directly illuminated surfaces act as indirect light sources that illuminate other surfaces. This produces more realistic shading and seems to better capture the 'ambience' of an indoor scene. A classic example is the way that shadows 'hug' the corners of rooms.

The optical basis of the simulation is that some diffused light from a given point on a given surface is reflected in a large spectrum of directions and illuminates the area around it.

The simulation technique may vary in complexity. Many renderings have a very rough estimate of radiosity, simply illuminating an entire scene very slightly with a factor known as ambiance. However, when advanced radiosity estimation is coupled with a high-quality ray tracing algorithm, images may exhibit convincing realism, particularly for indoor scenes.

In advanced radiosity simulation, recursive, finite-element algorithms 'bounce' light back and forth between surfaces in the model, until some recursion limit is reached. The colouring of one surface in this way influences the colouring of a neighbouring surface, and vice versa. The resulting values of illumination throughout the model (sometimes including for empty spaces) are stored and used as additional inputs when performing calculations in a ray-casting or ray-tracing model.

Due to the iterative/recursive nature of the technique, complex objects are particularly slow to emulate. Prior to the standardization of rapid radiosity calculation, some digital artists used a technique referred to loosely as false radiosity by darkening areas of texture maps corresponding to corners, joints and recesses, and applying them via self-illumination or diffuse mapping for scanline rendering. Even now, advanced radiosity calculations may be reserved for calculating the ambiance of the room, from the light reflecting off walls, floor and ceiling, without examining the contribution that complex objects make to the radiosity; or complex objects may be replaced in the radiosity calculation with simpler objects of similar size and texture.

Radiosity calculations are viewpoint independent, which increases the computations involved, but makes them useful for all viewpoints. If there is little rearrangement of radiosity objects in the scene, the same radiosity data may be reused for a number of frames, making radiosity an effective way to improve on the flatness of ray casting, without seriously impacting the overall rendering time-per-frame. Because of this, radiosity is a prime component of leading real-time rendering methods, and has been used from beginning to end to create a large number of well-known recent feature-length animated 3D-cartoon films.
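The "bounce light between surfaces until a limit is reached" idea can be sketched as a simple iterative gather over surface patches. In the hypothetical setup below, the form factors (how much of one patch's light reaches another) are simply given as input; computing them from geometry is the hard part of a real radiosity system and is omitted here:

def solve_radiosity(emission, reflectance, form_factors, iterations=16):
    """Iteratively bounce light between patches, one bounce per pass:
    B_i = E_i + rho_i * sum_j F_ij * B_j   (a Jacobi-style iteration)."""
    n = len(emission)
    radiosity = list(emission)  # first guess: patches show only emitted light
    for _ in range(iterations):
        gathered = []
        for i in range(n):
            incoming = sum(form_factors[i][j] * radiosity[j] for j in range(n))
            gathered.append(emission[i] + reflectance[i] * incoming)
        radiosity = gathered    # one more bounce accounted for
    return radiosity

# Two facing patches: patch 0 emits light; patch 1 is lit only indirectly.
print(solve_radiosity(emission=[1.0, 0.0],
                      reflectance=[0.5, 0.5],
                      form_factors=[[0.0, 0.2], [0.2, 0.0]]))

Because the output is a per-patch illumination value with no viewpoint anywhere in the computation, this also makes concrete why radiosity results are viewpoint independent and reusable across frames.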
Sampling and filtering

One problem that any rendering system must deal with, no matter which approach it takes, is the sampling problem. Essentially, the rendering process tries to depict a continuous function from image space to colors by using a finite number of pixels. As a consequence of the Nyquist–Shannon sampling theorem (or Kotelnikov theorem), any spatial waveform that can be displayed must consist of at least two pixels, which is proportional to image resolution. In simpler terms, this expresses the idea that an image cannot display details, peaks or troughs in color or intensity, that are smaller than one pixel.

If a naive rendering algorithm is used without any filtering, high frequencies in the image function will cause ugly aliasing to be present in the final image. Aliasing typically manifests itself as jaggies, or jagged edges on objects where the pixel grid is visible. In order to remove aliasing, all rendering algorithms (if they are to produce good-looking images) must use some kind of low-pass filter on the image function to remove high frequencies, a process called antialiasing.

Optimization

Due to the large number of calculations, a work in progress is usually only rendered in detail appropriate to the portion of the work being developed at a given time, so in the initial stages of modeling, wireframe and ray casting may be used, even where the target output is ray tracing with radiosity. It is also common to render only parts of the scene at high detail, and to remove objects that are not important to what is currently being developed.

For real-time rendering, it is appropriate to simplify one or more common approximations, and tune to the exact parameters of the scenery in question, which is also tuned to the agreed parameters to get the most 'bang for the buck'.

Academic core

The implementation of a realistic renderer always has some basic element of physical simulation or emulation, some computation which resembles or abstracts a real physical process. The term "physically based" indicates the use of physical models and approximations that are more general and widely accepted outside rendering. A particular set of related techniques have gradually become established in the rendering community.

The basic concepts are moderately straightforward, but intractable to calculate; and a single elegant algorithm or approach has been elusive for more general purpose renderers. In order to meet demands of robustness, accuracy and practicality, an implementation will be a complex combination of different techniques.

Rendering research is concerned with both the adaptation of scientific models and their efficient application.

The rendering equation

This is the key academic/theoretical concept in rendering. It serves as the most abstract formal expression of the non-perceptual aspect of rendering. All more complete algorithms can be seen as solutions to particular formulations of this equation:

Lo(x, ω) = Le(x, ω) + ∫Ω fr(x, ω′, ω) Li(x, ω′) (ω′ · n) dω′

Meaning: at a particular position and direction, the outgoing light (Lo) is the sum of the emitted light (Le) and the reflected light. The reflected light is the sum of the incoming light (Li) from all directions, multiplied by the surface reflection and incoming angle. By connecting outward light to inward light, via an interaction point, this equation stands for the whole 'light transport', all the movement of light, in a scene. (A Monte Carlo estimator for this integral is sketched at the end of this section.)

The bidirectional reflectance distribution function

The bidirectional reflectance distribution function (BRDF) expresses a simple model of light interaction with a surface as follows:

fr(x, ω′, ω)

Light interaction is often approximated by the even simpler models diffuse reflection and specular reflection, although both can also be BRDFs.

Geometric optics

Rendering is practically exclusively concerned with the particle aspect of light physics, known as geometrical optics. Treating light, at its basic level, as particles bouncing around is a simplification, but appropriate: the wave aspects of light are negligible in most scenes, and are significantly more difficult to simulate. Notable wave aspect phenomena include diffraction (as seen in the colours of CDs and DVDs) and polarisation (as seen in LCDs). Both types of effect, if needed, are made by appearance-oriented adjustment of the reflection model.
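The Monte Carlo methods mentioned earlier make the rendering equation computable by replacing the integral with an average over N random sample directions ω_k drawn with probability density p. This is the textbook estimator, stated here for illustration in the same notation as above:

\[
L_o(x, \omega) \;\approx\; L_e(x, \omega) \;+\; \frac{1}{N} \sum_{k=1}^{N} \frac{f_r(x, \omega_k, \omega)\, L_i(x, \omega_k)\,(\omega_k \cdot n)}{p(\omega_k)}
\]

The estimator is unbiased for any p that is nonzero wherever the integrand is nonzero; choosing p to resemble the integrand (importance sampling) is the main lever for reducing noise.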
Visual perception

Though it receives less attention, an understanding of human visual perception is valuable to rendering. This is mainly because image displays and human perception have restricted ranges. A renderer can simulate a wide range of light brightness and color, but current displays (movie screen, computer monitor, etc.) cannot handle so much, and something must be discarded or compressed. Human perception also has limits, and so does not need to be given large-range images to create realism. This can help solve the problem of fitting images into displays, and, furthermore, suggest what short-cuts could be used in the rendering simulation, since certain subtleties won't be noticeable. A related subject is tone mapping; a small numeric sketch of it appears at the end of this section.

Mathematics used in rendering includes: linear algebra, calculus, numerical mathematics, signal processing, and Monte Carlo methods.

Rendering for movies often takes place on a network of tightly connected computers known as a render farm.

The current state of the art in 3-D image description for movie creation is the Mental Ray scene description language designed at Mental Images and the RenderMan Shading Language designed at Pixar (compare with simpler 3D file formats such as VRML, or APIs such as OpenGL and DirectX tailored for 3D hardware accelerators).

Other renderers (including proprietary ones) can be and sometimes are used, but most other renderers tend to miss one or more of the often-needed features like good texture filtering, texture caching, programmable shaders, high-end geometry types like hair, subdivision or NURBS surfaces with tessellation on demand, geometry caching, ray tracing with geometry caching, high-quality shadow mapping, speed, or patent-free implementations. Other highly sought features these days may include interactive photorealistic rendering (IPR) and hardware rendering/shading.
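As a concrete instance of the tone mapping mentioned above, the Reinhard-style global operator below compresses an unbounded scene luminance into the display range [0, 1); it is one common choice among many, shown here only as an illustration:

def tone_map(luminance):
    """Reinhard-style global operator: L/(1+L) maps [0, inf) into [0, 1)."""
    return luminance / (1.0 + luminance)

# Bright values are compressed far more aggressively than dark ones:
for L in (0.1, 1.0, 10.0, 1000.0):
    print(L, "->", round(tone_map(L), 4))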
Chronology of important published ideas

1968 Ray casting
1970 Scanline rendering
1971 Gouraud shading
1973 Phong shading
1973 Phong reflection
1973 Diffuse reflection
1973 Specular highlight
1973 Specular reflection
1974 Sprites
1974 Scrolling
1974 Texture mapping
1974 Z-buffering
1976 Environment mapping
1977 Blinn shading
1977 Side-scrolling
1977 Shadow volumes
1978 Shadow mapping
1978 Bump mapping
1979 Tile map
1980 BSP trees
1980 Ray tracing
1981 Parallax scrolling
1981 Sprite zooming
1981 Cook shader
1983 MIP maps
1984 Octree ray tracing
1984 Alpha compositing
1984 Distributed ray tracing
1984 Radiosity
1985 Row/column scrolling
1985 Hemicube radiosity
1986 Light source tracing
1986 Rendering equation
1987 Reyes rendering
1988 Depth cue
1988 Distance fog
1988 Tiled rendering
1991 Xiaolin Wu line anti-aliasing
1991 Hierarchical radiosity
1993 Texture filtering
1993 Perspective correction
1993 Transform, clipping, and lighting
1993 Directional lighting
1993 Trilinear interpolation
1993 Z-culling
1993 Oren–Nayar reflectance
1993 Tone mapping
1993 Subsurface scattering
1994 Ambient occlusion
1995 Hidden-surface determination
1995 Photon mapping
1996 Multisample anti-aliasing
1997 Metropolis light transport
1997 Instant Radiosity
1998 Hidden-surface removal
2000 Pose space deformation
2002 Precomputed Radiance Transfer

See also

Per-pixel lighting
3D rendering

References

Further reading

External links

GPU Rendering Magazine, online CGI magazine about advantages of GPU rendering
SIGGRAPH, the ACM's special interest group in graphics, the largest academic and professional association and conference
List of links to (recent, as of 2004) SIGGRAPH papers (and some others) on the web
https://en.wikipedia.org/wiki/Compactification%20%28mathematics%29
Compactification (mathematics)
In mathematics, in general topology, compactification is the process or result of making a topological space into a compact space. A compact space is a space in which every open cover of the space contains a finite subcover. The methods of compactification are various, but each is a way of controlling points from "going off to infinity" by in some way adding "points at infinity" or preventing such an "escape".

An example

Consider the real line with its ordinary topology. This space is not compact; in a sense, points can go off to infinity to the left or to the right. It is possible to turn the real line into a compact space by adding a single "point at infinity" which we will denote by ∞. The resulting compactification can be thought of as a circle (which is compact as a closed and bounded subset of the Euclidean plane). Every sequence that ran off to infinity in the real line will then converge to ∞ in this compactification.

Intuitively, the process can be pictured as follows: first shrink the real line to the open interval (−π, π) on the x-axis; then bend the ends of this interval upwards (in the positive y-direction) and move them towards each other, until you get a circle with one point (the topmost one) missing. This point is our new point ∞ "at infinity"; adding it in completes the compact circle.

A bit more formally: we represent a point on the unit circle by its angle, in radians, going from −π to π for simplicity. Identify each such point θ on the circle with the corresponding point on the real line, tan(θ/2). This function is undefined at the point π, since tan(π/2) is undefined; we will identify this point with our point ∞. Since tangents and inverse tangents are both continuous, our identification function is a homeomorphism between the real line and the unit circle without ∞. What we have constructed is called the Alexandroff one-point compactification of the real line, discussed in more generality below. It is also possible to compactify the real line by adding two points, +∞ and −∞; this results in the extended real line.

Definition

An embedding of a topological space X as a dense subset of a compact space is called a compactification of X. It is often useful to embed topological spaces in compact spaces, because of the special properties compact spaces have.

Embeddings into compact Hausdorff spaces may be of particular interest. Since every compact Hausdorff space is a Tychonoff space, and every subspace of a Tychonoff space is Tychonoff, we conclude that any space possessing a Hausdorff compactification must be a Tychonoff space. In fact, the converse is also true; being a Tychonoff space is both necessary and sufficient for possessing a Hausdorff compactification.

The fact that large and interesting classes of non-compact spaces do in fact have compactifications of particular sorts makes compactification a common technique in topology.

Alexandroff one-point compactification

For any noncompact topological space X the (Alexandroff) one-point compactification αX of X is obtained by adding one extra point ∞ (often called a point at infinity) and defining the open sets of the new space to be the open sets of X together with the sets of the form G ∪ {∞}, where G is an open subset of X such that X \ G is closed and compact. The one-point compactification of X is Hausdorff if and only if X is Hausdorff, noncompact and locally compact.
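In symbols, the construction just described can be restated compactly. Writing τ for the topology of X and K = X \ G for the closed compact complement of an open set G, the topology on αX is

\[
\tau_{\alpha X} \;=\; \tau \;\cup\; \bigl\{\, (X \setminus K) \cup \{\infty\} \;:\; K \subseteq X \text{ closed and compact} \,\bigr\}
\]

This is exactly the family of open sets given in the definition above, merely rewritten in set-builder form.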
Stone–Čech compactification

Of particular interest are Hausdorff compactifications, i.e., compactifications in which the compact space is Hausdorff. A topological space has a Hausdorff compactification if and only if it is Tychonoff. In this case, there is a unique (up to homeomorphism) "most general" Hausdorff compactification, the Stone–Čech compactification of X, denoted by βX; formally, this exhibits the category of compact Hausdorff spaces and continuous maps as a reflective subcategory of the category of Tychonoff spaces and continuous maps.

"Most general" or formally "reflective" means that the space βX is characterized by the universal property that any continuous function from X to a compact Hausdorff space K can be extended to a continuous function from βX to K in a unique way. More explicitly, βX is a compact Hausdorff space containing X such that the induced topology on X by βX is the same as the given topology on X, and for any continuous map f : X → K, where K is a compact Hausdorff space, there is a unique continuous map g : βX → K for which g restricted to X is identically f.

The Stone–Čech compactification can be constructed explicitly as follows: let C be the set of continuous functions from X to the closed interval [0,1]. Then each point in X can be identified with an evaluation function on C. Thus X can be identified with a subset of [0,1]^C, the space of all functions from C to [0,1]. Since the latter is compact by Tychonoff's theorem, the closure of X as a subset of that space will also be compact. This is the Stone–Čech compactification.

Spacetime compactification

Walter Benz and Isaak Yaglom have shown how stereographic projection onto a single-sheet hyperboloid can be used to provide a compactification for split-complex numbers. In fact, the hyperboloid is part of a quadric in real projective four-space. The method is similar to that used to provide a base manifold for the group action of the conformal group of spacetime.

Projective space

Real projective space RPn is a compactification of Euclidean space Rn. For each possible "direction" in which points in Rn can "escape", one new point at infinity is added (but each direction is identified with its opposite). The Alexandroff one-point compactification of R we constructed in the example above is in fact homeomorphic to RP1. Note however that the projective plane RP2 is not the one-point compactification of the plane R2, since more than one point is added.

Complex projective space CPn is also a compactification of Cn; the Alexandroff one-point compactification of the plane C is (homeomorphic to) the complex projective line CP1, which in turn can be identified with a sphere, the Riemann sphere.

Passing to projective space is a common tool in algebraic geometry because the added points at infinity lead to simpler formulations of many theorems. For example, any two different lines in RP2 intersect in precisely one point, a statement that is not true in R2. More generally, Bézout's theorem, which is fundamental in intersection theory, holds in projective space but not affine space. This distinct behavior of intersections in affine space and projective space is reflected in algebraic topology in the cohomology rings: the cohomology of affine space is trivial, while the cohomology of projective space is non-trivial and reflects the key features of intersection theory (dimension and degree of a subvariety, with intersection being Poincaré dual to the cup product).

Compactification of moduli spaces generally requires allowing certain degeneracies, for example, allowing certain singularities or reducible varieties. This is notably used in the Deligne–Mumford compactification of the moduli space of algebraic curves.
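In homogeneous coordinates, the embedding of Rn into RPn described earlier in this section can be written explicitly (a standard presentation, added here for concreteness):

\[
\mathbf{R}^n \hookrightarrow \mathbf{RP}^n, \qquad (x_1, \ldots, x_n) \;\mapsto\; [\,1 : x_1 : \cdots : x_n\,]
\]

The added points at infinity are those of the form [0 : v_1 : ⋯ : v_n], and since [0 : v] = [0 : −v] in projective space, each direction of escape is identified with its opposite, as stated above.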
Compactification and discrete subgroups of Lie groups

In the study of discrete subgroups of Lie groups, the quotient space of cosets is often a candidate for more subtle compactification to preserve structure at a richer level than just topological.

For example, modular curves are compactified by the addition of single points for each cusp, making them Riemann surfaces (and so, since they are compact, algebraic curves). Here the cusps are there for a good reason: the curves parametrize a space of lattices, and those lattices can degenerate ('go off to infinity'), often in a number of ways (taking into account some auxiliary structure of level). The cusps stand in for those different 'directions to infinity'.

That is all for lattices in the plane. In n-dimensional Euclidean space the same questions can be posed, for example about SO(n)\SLn(R)/SLn(Z). This is harder to compactify. There are a variety of compactifications, such as the Borel–Serre compactification, the reductive Borel–Serre compactification, and the Satake compactifications, that can be formed.

Other compactification theories

The theories of ends of a space and prime ends.
Some 'boundary' theories such as the collaring of an open manifold, Martin boundary, Shilov boundary and Furstenberg boundary.
The Bohr compactification of a topological group arises from the consideration of almost periodic functions.
The projective line over a ring for a topological ring may compactify it.
The Baily–Borel compactification of a quotient of a Hermitian symmetric space.
The wonderful compactification of a quotient of algebraic groups.
The compactifications that are simultaneously convex subsets in a locally convex space are called convex compactifications, their additional linear structure allowing, e.g., for developing a differential calculus and more advanced considerations, e.g., in relaxation in variational calculus or optimization theory.

See also

References
https://en.wikipedia.org/wiki/Cleveland%20Guardians
Cleveland Guardians
The Cleveland Guardians are an American professional baseball team based in Cleveland. The Guardians compete in Major League Baseball (MLB) as a member club of the American League (AL) Central division. Since 1994, they have played at Progressive Field. Since their establishment as a Major League franchise in 1901, the team has won 11 Central division titles, six American League pennants, and two World Series championships (in 1920 and 1948). The team's World Series championship drought since 1948 is the longest active among all 30 current Major League teams.

The team's name references the Guardians of Traffic, eight monolithic 1932 Art Deco sculptures by Henry Hering on the city's Hope Memorial Bridge, which is adjacent to Progressive Field. The team's mascot is named "Slider." The team's spring training facility is at Goodyear Ballpark in Goodyear, Arizona.

The franchise originated in 1894 as the Grand Rapids Rippers, a minor league team based in Grand Rapids, Michigan, that played in the Western League. The team relocated to Cleveland in 1900 and was called the Cleveland Lake Shores. The Western League itself was renamed the American League prior to the 1900 season while continuing its minor league status. When the American League declared itself a major league in 1901, Cleveland was one of its eight charter franchises. Originally called the Cleveland Bluebirds or Blues, the team was also unofficially called the Cleveland Bronchos in 1902. Beginning in 1903, the team was named the Cleveland Napoleons or Naps, after team captain Nap Lajoie.

Following Lajoie's departure after the 1914 season, club owner Charles Somers requested that baseball writers choose a new name. They chose the name Cleveland Indians, allegedly a revival of the nickname that fans gave to the Cleveland Spiders while Louis Sockalexis, a Native American, was playing for the team. That name stuck and remained in use for more than a century. Common nicknames for the Indians were the "Tribe" and the "Wahoos", the latter referencing their longtime logo, Chief Wahoo. After it came under criticism as part of the Native American mascot controversy, the team ceased using the name "Indians" following the 2021 season, and was renamed the "Guardians" for 2022.

From August 24 to September 14, 2017, the team won 22 consecutive games, the longest winning streak in American League history and the second-longest winning streak in Major League Baseball history.

Early Cleveland baseball teams

"In 1857 baseball games were a daily spectacle in Cleveland's Public Squares. City authorities tried to find an ordinance forbidding it, to the joy of the crowd, they were unsuccessful." – Harold Seymour

1865–1868 Forest Citys of Cleveland (Amateur)
1869–1872 Forest Citys of Cleveland

From 1865 to 1868 Forest Citys was an amateur ball club. During the 1869 season, Cleveland was among several cities that established professional baseball teams following the success of the 1869 Cincinnati Red Stockings, the first fully professional team. In the newspapers before and after 1870, the team was often called the Forest Citys, in the same generic way that the team from Chicago was sometimes called The Chicagos.

In 1871 the Forest Citys joined the new National Association of Professional Base Ball Players (NA), the first professional league.
Ultimately, two of the league's western clubs went out of business during the first season, and the Chicago Fire left that city's White Stockings impoverished, unable to field a team again until 1874. Cleveland was thus the NA's westernmost outpost in 1872, the year the club folded. Cleveland played its full schedule to July 19, followed by two games versus Boston in mid-August, and disbanded at the end of the season.

1879–1881 Cleveland Forest Citys
1882–1884 Cleveland Blues

In 1876, the National League (NL) supplanted the NA as the major professional league. Cleveland was not among its charter members, but by 1879 the league was looking for new entries and the city gained an NL team. The Cleveland Forest Citys were recreated, but rebranded in 1882 as the Cleveland Blues, because the National League required distinct colors for that season. The Blues had mediocre records for six seasons and were ruined by a trade war with the Union Association (UA) in 1884, when its three best players (Fred Dunlap, Jack Glasscock, and Jim McCormick) jumped to the UA after being offered higher salaries. The Cleveland Blues merged with the St. Louis Maroons UA team in 1885.

1887–1899 Cleveland Spiders – nickname "Blues"

Cleveland went without major league baseball for two seasons until gaining a team in the American Association (AA) in 1887. After the AA's Allegheny club jumped to the NL, Cleveland followed suit in 1889, as the AA began to crumble. The Cleveland ball club, named the Spiders (supposedly inspired by their "skinny and spindly" players), slowly became a power in the league. In 1891, the Spiders moved into League Park, which would serve as the home of Cleveland professional baseball for the next 55 years. Led by native Ohioan Cy Young, the Spiders became a contender in the mid-1890s, playing in the Temple Cup Series (that era's World Series) twice and winning it in 1895.

The team began to fade after this success, and was dealt a severe blow under the ownership of the Robison brothers. Prior to the 1899 season, Frank Robison, the Spiders' owner, bought the St. Louis Browns, thus owning two clubs at the same time. The Browns were renamed the "Perfectos", and restocked with Cleveland talent. Just weeks before the season opener, most of the better Spiders were transferred to St. Louis, including three future Hall of Famers: Cy Young, Jesse Burkett and Bobby Wallace. The roster maneuvers failed to create a powerhouse Perfectos team, as St. Louis finished fifth in both 1899 and 1900. The Spiders were left with essentially a minor league lineup, and began to lose games at a record pace. Drawing almost no fans at home, they ended up playing most of their season on the road, and became known as "The Wanderers." The team ended the season in 12th place, 84 games out of first place, with an all-time worst record of 20–134 (.130 winning percentage). Following the 1899 season, the National League disbanded four teams, including the Spiders franchise. The disastrous 1899 season would actually be a step toward a new future for Cleveland fans the next year.

1890, Cleveland Infants – nickname "Babes"

The Cleveland Infants competed in the Players' League, which was well-attended in some cities, but club owners lacked the confidence to continue beyond the one season. The Cleveland Infants finished with 55 wins and 75 losses, playing their home games at Brotherhood Park.
Franchise history

1894–1935: Beginning to middle

The Grand Rapids Rippers (also known as the Rustlers) were founded in Michigan in 1894 and were part of the Western League. In 1900 the team moved to Cleveland and was named the Cleveland Lake Shores. Around the same time Ban Johnson changed the name of his minor league (Western League) to the American League. In 1900 the American League was still considered a minor league. In 1901 the team was called the Cleveland Bluebirds or Blues when the American League broke with the National Agreement and declared itself a competing Major League. The Cleveland franchise was among its eight charter members, and is one of four teams that remain in its original city, along with Boston, Chicago, and Detroit.

The new team was owned by coal magnate Charles Somers and tailor Jack Kilfoyl. Somers, a wealthy industrialist and also co-owner of the Boston Americans, lent money to other team owners, including Connie Mack's Philadelphia Athletics, to keep them and the new league afloat. Players did not think the name "Bluebirds" was suitable for a baseball team. Writers frequently shortened it to Cleveland Blues due to the players' all-blue uniforms, but the players did not like this unofficial name either. The players themselves tried to change the name to Cleveland Broncos in 1902, but this name never caught on.

Cleveland suffered from financial problems in their first two seasons. This led Somers to seriously consider moving to either Pittsburgh or Cincinnati. Relief came in 1902 as a result of the conflict between the National and American Leagues. In 1901, Napoleon "Nap" Lajoie, the Philadelphia Phillies' star second baseman, jumped to the A's after his contract was capped at $2,400 per year, one of the highest-profile players to jump to the upstart AL. The Phillies subsequently filed an injunction to force Lajoie's return, which was granted by the Pennsylvania Supreme Court. The injunction appeared to doom any hopes of an early settlement between the warring leagues. However, a lawyer discovered that the injunction was only enforceable in the state of Pennsylvania. Mack, partly to thank Somers for his past financial support, agreed to trade Lajoie to the then-moribund Blues, who offered a $25,000 salary over three years. Due to the injunction, however, Lajoie had to sit out any games played against the A's in Philadelphia. Lajoie arrived in Cleveland on June 4 and was an immediate hit, drawing 10,000 fans to League Park. Soon afterward, he was named team captain, and in 1903 the team was called the Cleveland Napoleons or Naps, after a newspaper conducted a write-in contest.

Lajoie was named manager in 1905, and the team's fortunes improved somewhat. They finished half a game short of the pennant in 1908. However, the success did not last, and Lajoie resigned as manager during the 1909 season but remained on as a player. After that, the team began to unravel, leading Kilfoyl to sell his share of the team to Somers. Cy Young, who returned to Cleveland in 1909, was ineffective for most of his three remaining years, and Addie Joss died from tubercular meningitis prior to the 1911 season. Despite a strong lineup anchored by the potent Lajoie and Shoeless Joe Jackson, poor pitching kept the team below third place for most of the next decade. One reporter referred to the team as the Napkins, "because they fold up so easily". The team hit bottom in 1914 and 1915, finishing in the cellar both years.

1915 brought significant changes to the team.
Lajoie, nearly 40 years old, was no longer a top hitter in the league, batting only .258 in 1914. With Lajoie engaged in a feud with manager Joe Birmingham, the team sold Lajoie back to the A's.

With Lajoie gone, the club needed a new name. Somers asked the local baseball writers to come up with a new name, and based on their input, the team was renamed the Cleveland Indians. The name referred to the nickname "Indians" that was applied to the Cleveland Spiders baseball club during the time when Louis Sockalexis, a Native American, played in Cleveland (1897–1899).

At the same time, Somers' business ventures began to fail, leaving him deeply in debt. With the Indians playing poorly, attendance and revenue suffered. Somers decided to trade Jackson midway through the 1915 season for two players and $31,500, one of the largest sums paid for a player at the time. By 1916, Somers was at the end of his tether, and sold the team to a syndicate headed by Chicago railroad contractor James C. "Jack" Dunn. Manager Lee Fohl, who had taken over in early 1915, acquired two minor league pitchers, Stan Coveleski and Jim Bagby, and traded for center fielder Tris Speaker, who was engaged in a salary dispute with the Red Sox. All three would ultimately become key players in bringing a championship to Cleveland. Speaker took over the reins as player-manager in 1919, and led the team to a championship in 1920.

On August 16, 1920, the Indians were playing the Yankees at the Polo Grounds in New York. Shortstop Ray Chapman, who often crowded the plate, was batting against Carl Mays, who had an unusual underhand delivery. It was late in the afternoon, and the infield was completely shaded, with the center field area (the batters' background) bathed in sunlight. In addition, at the time, "part of every pitcher's job was to dirty up a new ball the moment it was thrown onto the field. By turns, they smeared it with dirt, licorice, tobacco juice; it was deliberately scuffed, sandpapered, scarred, cut, even spiked. The result was a misshapen, earth-colored ball that traveled through the air erratically, tended to soften in the later innings, and as it came over the plate, was very hard to see." In any case, Chapman did not move reflexively when Mays' pitch came his way. The pitch hit Chapman in the head, fracturing his skull. Chapman died the next day, becoming the only player to sustain a fatal injury from a pitched ball.

The Indians, who at the time were locked in a tight three-way pennant race with the Yankees and White Sox, were not slowed down by the death of their teammate. Rookie Joe Sewell hit .329 after replacing Chapman in the lineup.

In September 1920, the Black Sox Scandal came to a boil. With just a few games left in the season, and Cleveland and Chicago neck-and-neck for first place at 94–54 and 95–56 respectively, the Chicago owner suspended eight players. The White Sox lost two of three in their final series, while Cleveland won four and lost two in their final two series. Cleveland finished two games ahead of Chicago and three games ahead of the Yankees to win its first pennant, led by Speaker's .388 hitting, Jim Bagby's 30 victories, and solid performances from Steve O'Neill and Stan Coveleski. Cleveland went on to defeat the Brooklyn Robins 5–2 in the World Series for their first title, winning four games in a row after the Robins took a 2–1 Series lead. The Series included three memorable "firsts", all of them in Game 5 at Cleveland, and all by the home team.
In the first inning, right fielder Elmer Smith hit the first Series grand slam. In the fourth inning, Jim Bagby hit the first Series home run by a pitcher. In the top of the fifth inning, second baseman Bill Wambsganss executed the first (and only, so far) unassisted triple play in World Series history; in fact, it remains the only Series triple play of any kind.

The team would not reach the heights of 1920 again for 28 years. Speaker and Coveleski were aging, and the Yankees were rising with a new weapon: Babe Ruth and the home run. The Indians managed two second-place finishes but spent much of the decade in the cellar. In 1927 Dunn's widow, Mrs. George Pross (Dunn had died in 1922), sold the team to a syndicate headed by Alva Bradley.

1936–1946: Bob Feller enters the show

The Indians were a middling team by the 1930s, finishing third or fourth most years. 1936 brought Cleveland a new superstar in 17-year-old pitcher Bob Feller, who came from Iowa with a dominating fastball. That season, Feller set a record with 17 strikeouts in a single game and went on to lead the league in strikeouts from 1938 to 1941. On August 20, 1938, Indians catchers Hank Helf and Frank Pytlak set the "all-time altitude mark" by catching baseballs dropped from the Terminal Tower.

By 1940, Feller, along with Ken Keltner, Mel Harder and Lou Boudreau, led the Indians to within one game of the pennant. However, the team was wracked with dissension, with some players (including Feller and Mel Harder) going so far as to request that Bradley fire manager Ossie Vitt. Reporters lampooned them as the Cleveland Crybabies. Feller, who had pitched a no-hitter to open the season and won 27 games, lost the final game of the season to unknown pitcher Floyd Giebell of the Detroit Tigers. The Tigers won the pennant, and Giebell never won another major league game.

Cleveland entered 1941 with a young team and a new manager, Roger Peckinpaugh, who had replaced the despised Vitt; but the team regressed, finishing in fourth. Cleveland would soon be depleted of two stars. Hal Trosky retired in 1941 due to migraine headaches, and Bob Feller enlisted in the Navy two days after the attack on Pearl Harbor. Starting third baseman Ken Keltner and outfielder Ray Mack were both drafted in 1945, taking two more starters out of the lineup.

1946–1949: The Bill Veeck years

In 1946, Bill Veeck formed an investment group that purchased the Cleveland Indians from Bradley's group for a reported $1.6 million. Among the investors were Bob Hope, who had grown up in Cleveland, and former Tigers slugger Hank Greenberg. A former owner of a minor league franchise in Milwaukee, Veeck brought to Cleveland a gift for promotion. At one point, Veeck hired rubber-faced Max Patkin, the "Clown Prince of Baseball", as a coach. Patkin's appearance in the coaching box was the sort of promotional stunt that delighted fans but infuriated the American League front office.

Recognizing that he had acquired a solid team, Veeck soon abandoned the aging, small and lightless League Park to take up full-time residence in massive Cleveland Municipal Stadium. The Indians had briefly moved from League Park to Municipal Stadium in mid-1932, but moved back to League Park due to complaints about the cavernous environment. From 1937 onward, however, the Indians began playing an increasing number of games at Municipal, until by 1940 they played most of their home slate there. League Park was mostly demolished in 1951, but has since been rebuilt as a recreational park.
Making the most of the cavernous stadium, Veeck had a portable center field fence installed, which he could move in or out depending on how the distance favored the Indians against their opponents in a given series. The fence could be moved between playing series, depending on the opponent. Following the 1947 season, the American League countered with a rule change that fixed the distance of an outfield wall for the duration of a season. The massive stadium did, however, permit the Indians to set the then-record for the largest crowd to see a Major League baseball game. On October 10, 1948, Game 5 of the World Series against the Boston Braves drew over 84,000. The record stood until the Los Angeles Dodgers drew a crowd in excess of 92,500 to watch Game 5 of the 1959 World Series at the Los Angeles Memorial Coliseum against the Chicago White Sox.

Under Veeck's leadership, one of Cleveland's most significant achievements was breaking the color barrier in the American League by signing Larry Doby, formerly a player for the Negro league Newark Eagles, in 1947, 11 weeks after Jackie Robinson signed with the Dodgers. Like Robinson, Doby battled racism on and off the field, but posted a .301 batting average in 1948, his first full season. A power-hitting center fielder, Doby led the American League twice in homers.

In 1948, needing pitching for the stretch run of the pennant race, Veeck turned to the Negro leagues again and signed pitching great Satchel Paige amid much controversy. Paige had been barred from Major League Baseball during his prime, and Veeck's signing of the aging star in 1948 was viewed by many as another publicity stunt. At an official age of 42, Paige became the oldest rookie in Major League baseball history, and the first black pitcher in the American League. Paige ended the year with a 6–1 record, a 2.48 ERA, 45 strikeouts and two shutouts.

In 1948, veterans Boudreau, Keltner, and Joe Gordon had career offensive seasons, while newcomers Doby and Gene Bearden also had standout seasons. The team went down to the wire with the Boston Red Sox, winning a one-game playoff, the first in American League history, to go to the World Series. In the Series, the Indians defeated the Boston Braves four games to two for their first championship in 28 years. Boudreau won the American League MVP Award.

The Indians appeared in a film the following year titled The Kid From Cleveland, in which Veeck had an interest. The film portrayed the team helping out a "troubled teenaged fan" and featured many members of the Indians organization. However, filming during the season cost the players valuable rest days, leading to fatigue towards the end of the season. That season, Cleveland again contended before falling to third place. On September 23, 1949, Bill Veeck and the Indians buried their 1948 pennant in center field, the day after they were mathematically eliminated from the pennant race.

Later in 1949, Veeck's first wife (who had a half-stake in Veeck's share of the team) divorced him. With most of his money tied up in the Indians, Veeck was forced to sell the team to a syndicate headed by insurance magnate Ellis Ryan.

1950–1959: Near misses

In 1953, Al Rosen was an All-Star for the second year in a row, was named The Sporting News Major League Player of the Year, and won the American League Most Valuable Player Award in a unanimous vote playing for the Indians, after leading the AL in runs, home runs, RBIs (for the second year in a row), and slugging percentage, and coming in second by one point in batting average.
Ryan was forced out in 1953 in favor of Myron Wilson, who in turn gave way to William Daley in 1956. Despite this turnover in ownership, a powerhouse team composed of Feller, Doby, Minnie Miñoso, Luke Easter, Bobby Ávila, Al Rosen, Early Wynn, Bob Lemon, and Mike Garcia continued to contend through the early 1950s. However, Cleveland won only a single pennant in the decade, in 1954, finishing second to the New York Yankees five times.

The winningest season in franchise history came in 1954, when the Indians finished the season with a record of 111–43 (.721). That mark set an American League record for wins that stood for 44 years, until the Yankees won 114 games in 1998 (in a 162-game regular season). The Indians' 1954 winning percentage of .721 is still an American League record.

The Indians returned to the World Series to face the New York Giants. The team could not bring home the title, however, ultimately being upset by the Giants in a sweep. The series was notable for Willie Mays' over-the-shoulder catch off the bat of Vic Wertz in Game 1. Cleveland remained a talented team throughout the remainder of the decade, finishing in second place in 1959, George Strickland's last full year in the majors.

1960–1993: The 33-year slump

From 1960 to 1993, the Indians managed one third-place finish (in 1968) and six fourth-place finishes (in 1960, 1974, 1975, 1976, 1990, and 1992) but spent the rest of the time at or near the bottom of the standings, including four seasons with over 100 losses (1971, 1985, 1987, 1991).

Frank Lane becomes general manager

The Indians hired general manager Frank Lane, known as "Trader" Lane, away from the St. Louis Cardinals in 1957. Lane over the years had gained a reputation as a GM who loved to make deals. With the White Sox, Lane had made over 100 trades involving over 400 players in seven years. In a short stint in St. Louis, he traded away Red Schoendienst and Harvey Haddix. Lane summed up his philosophy when he said that the only deals he regretted were the ones that he didn't make.

One of Lane's early trades in Cleveland was to send Roger Maris to the Kansas City Athletics in the middle of 1958. Indians executive Hank Greenberg was not happy about the trade, and neither was Maris, who said that he could not stand Lane. After Maris broke Babe Ruth's home run record, Lane defended himself by saying he still would have done the deal because Maris was unknown and he received good ballplayers in exchange.

After the Maris trade, Lane acquired 25-year-old Norm Cash from the White Sox for Minnie Miñoso and then traded him to Detroit before he ever played a game for the Indians; Cash went on to hit over 350 home runs for the Tigers. The Indians received Steve Demeter in the deal, who had only five at-bats for Cleveland.

Curse of Rocky Colavito

In 1960, Lane made the trade that would define his tenure in Cleveland when he dealt slugging right fielder and fan favorite Rocky Colavito to the Detroit Tigers for Harvey Kuenn just before Opening Day. It was a blockbuster trade that swapped the AL home run co-champion (Colavito) for the AL batting champion (Kuenn). After the trade, however, Colavito hit over 30 home runs four times and made three All-Star teams for Detroit and Kansas City before returning to Cleveland in 1965. Kuenn, on the other hand, played only one season for the Indians before departing for San Francisco in a trade for an aging Johnny Antonelli and Willie Kirkland.
Akron Beacon Journal columnist Terry Pluto documented the decades of woe that followed the trade in his book The Curse of Rocky Colavito. Despite being attached to the curse, Colavito said that he never placed a curse on the Indians, and that the trade was prompted by a salary dispute with Lane.

Lane also engineered a unique trade of managers in mid-season 1960, sending Joe Gordon to the Tigers in exchange for Jimmy Dykes. Lane left the team in 1961, but ill-advised trades continued. In 1965, the Indians traded pitcher Tommy John, who would go on to win 288 games in his career, and 1966 Rookie of the Year Tommie Agee to the White Sox to get Colavito back.

Indians pitchers also set numerous strikeout records. They led the league in K's every year from 1963 to 1968, and narrowly missed in 1969. The 1964 staff was the first to amass 1,100 strikeouts, and in 1968, they were the first to collect more strikeouts than hits allowed.

Move to the AL East division

The 1970s were not much better, with the Indians trading away several future stars, including Graig Nettles, Dennis Eckersley, Buddy Bell and 1971 Rookie of the Year Chris Chambliss, for a number of players who made no impact.

Constant ownership changes did not help the Indians. In 1963, Daley's syndicate sold the team to a group headed by general manager Gabe Paul. Three years later, Paul sold the Indians to Vernon Stouffer, of the Stouffer's frozen-food empire. Prior to Stouffer's purchase, the team had been rumored to be relocating due to poor attendance. Despite the potential for a financially strong owner, Stouffer had some non-baseball-related financial setbacks and, consequently, the team was cash-poor. In order to solve some financial problems, Stouffer had made an agreement to play a minimum of 30 home games in New Orleans with a view to a possible move there. After rejecting an offer from George Steinbrenner and former Indian Al Rosen, Stouffer sold the team in 1972 to a group led by Cleveland Cavaliers and Cleveland Barons owner Nick Mileti. Steinbrenner went on to buy the New York Yankees in 1973.

Only five years later, Mileti's group sold the team for $11 million to a syndicate headed by trucking magnate Steve O'Neill and including former general manager and owner Gabe Paul. O'Neill's death in 1983 led to the team going on the market once more. O'Neill's nephew Patrick O'Neill did not find a buyer until real estate magnates Richard and David Jacobs purchased the team in 1986.

The team was unable to move out of the cellar, with losing seasons between 1969 and 1975. One highlight was the acquisition of Gaylord Perry in 1972. The Indians traded fireballer "Sudden Sam" McDowell for Perry, who became the first Indians pitcher to win the Cy Young Award. In 1975, Cleveland broke another color barrier with the hiring of Frank Robinson as Major League Baseball's first African American manager. Robinson served as player-manager and provided a franchise highlight when he hit a pinch-hit home run on Opening Day. But the high-profile signing of Wayne Garland, a 20-game winner in Baltimore, proved to be a disaster after Garland suffered from shoulder problems and went 28–48 over five years. The team failed to improve with Robinson as manager, and he was fired in 1977.

In 1977, pitcher Dennis Eckersley threw a no-hitter against the California Angels. The next season, he was traded to the Boston Red Sox, where he won 20 games in 1978 and another 17 in 1979.

The 1970s also featured the infamous Ten Cent Beer Night at Cleveland Municipal Stadium.
The ill-conceived promotion at a 1974 game against the Texas Rangers ended in a riot by fans and a forfeit by the Indians. There were more bright spots in the 1980s. In May 1981, Len Barker threw a perfect game against the Toronto Blue Jays, becoming only the second Indians pitcher to do so, after Addie Joss. "Super Joe" Charboneau won the 1980 American League Rookie of the Year award. Unfortunately, Charboneau was out of baseball by 1983 after falling victim to back injuries, and Barker, who was also hampered by injuries, never became a consistently dominant starting pitcher. Eventually, the Indians traded Barker to the Atlanta Braves for Brett Butler and Brook Jacoby, who became mainstays of the team for the remainder of the decade. Butler and Jacoby were joined by Joe Carter, Mel Hall, Julio Franco and Cory Snyder, bringing new hope to fans in the late 1980s. Cleveland's struggles over the 30-year span were highlighted in the 1989 film Major League, which comically depicted a hapless Cleveland ball club going from worst to first by the end of the film. Throughout the 1980s, the Indians' owners had pushed for a new stadium. Cleveland Stadium had been a symbol of the Indians' glory years in the 1940s and 1950s, but during the lean years even crowds of 40,000 were swallowed up by the cavernous environment. The old stadium was not aging gracefully; chunks of concrete were falling off in sections and the old wooden pilings were petrifying. In 1984, a proposal for a $150 million domed stadium was defeated in a referendum by a 2–1 margin. Finally, in May 1990, Cuyahoga County voters passed an excise tax on sales of alcohol and cigarettes in the county. The tax proceeds were used to finance the construction of the Gateway Sports and Entertainment Complex, which would include Jacobs Field for the Indians and Gund Arena for the Cleveland Cavaliers basketball team. The team's fortunes started to turn in late 1989, ironically with a very unpopular trade. The team sent power-hitting outfielder Joe Carter to the San Diego Padres for two unproven players, Sandy Alomar Jr. and Carlos Baerga. Alomar made an immediate impact, not only being elected to the All-Star team but also winning Cleveland's fourth Rookie of the Year award and a Gold Glove. Baerga became a three-time All-Star with consistent offensive production. Indians general manager John Hart made a number of moves that finally brought success to the team. In 1991, he hired former Indian Mike Hargrove to manage and traded catcher Eddie Taubensee to the Houston Astros, who, with a surplus of outfielders, were willing to part with Kenny Lofton. Lofton finished second in AL Rookie of the Year balloting with a .285 average and 66 stolen bases. The Indians were named "Organization of the Year" by Baseball America in 1992, in response to the appearance of offensive bright spots and an improving farm system. The team suffered a tragedy during spring training of 1993, when a boat carrying pitchers Steve Olin, Tim Crews, and Bob Ojeda crashed into a pier. Olin and Crews were killed, and Ojeda was seriously injured. (Ojeda missed most of the season and retired the following year.) By the end of the 1993 season, the team was in transition, leaving Cleveland Stadium and fielding a talented nucleus of young players. Many of those players came from the Indians' new AAA farm team, the Charlotte Knights, who won the International League title that year.
1994–2001: New beginnings 1994: Jacobs Field opens Indians general manager John Hart and team owner Richard Jacobs managed to turn the team's fortunes around. The Indians opened Jacobs Field in 1994 with the aim of improving on the prior season's sixth-place finish. The Indians were only one game behind the division-leading Chicago White Sox on August 12 when a players' strike wiped out the rest of the season. 1995–1996: First AL pennant since 1954 Having contended for the division in the aborted 1994 season, Cleveland sprinted to a 100–44 record (the season was shortened by 18 games due to player/owner negotiations) in 1995, winning its first-ever divisional title. Veterans Dennis Martínez, Orel Hershiser and Eddie Murray combined with a young core of players including Omar Vizquel, Albert Belle, Jim Thome, Manny Ramírez, Kenny Lofton and Charles Nagy to lead the league in both team batting average and team ERA. After defeating the Boston Red Sox in the Division Series and the Seattle Mariners in the ALCS, Cleveland clinched its first American League pennant and World Series berth since 1954. The World Series ended in disappointment, however: the Indians fell in six games to the Atlanta Braves. Tickets for every Indians home game sold out several months before opening day in 1996. The Indians repeated as AL Central champions but lost to the wild-card Baltimore Orioles in the Division Series. 1997: One inning away In 1997, Cleveland started slowly but finished with an 86–75 record. Taking their third consecutive AL Central title, the Indians defeated the New York Yankees in the Division Series, 3–2. After defeating the Baltimore Orioles in the ALCS, Cleveland went on to face the Florida Marlins in a World Series that featured the coldest game in World Series history. With the series tied after Game 6, the Indians went into the ninth inning of Game 7 with a 2–1 lead, but closer José Mesa allowed the Marlins to tie the game. In the eleventh inning, Édgar Rentería drove in the winning run, giving the Marlins their first championship. Cleveland became the first team to lose the World Series after carrying a lead into the ninth inning of the seventh game. 1998–2001 In 1998, the Indians made the postseason for the fourth straight year. After defeating the wild-card Boston Red Sox 3–1 in the Division Series, Cleveland lost the 1998 ALCS in six games to the New York Yankees, who had come into the postseason with a then-AL record 114 wins in the regular season. For the 1999 season, Cleveland added relief pitcher Ricardo Rincón and second baseman Roberto Alomar, brother of catcher Sandy Alomar Jr., and won the Central Division title for the fifth consecutive year. The team scored 1,009 runs, becoming the first (and to date only) team since the 1950 Boston Red Sox to score more than 1,000 runs in a season. This time, Cleveland did not make it past the first round, losing the Division Series to the Red Sox despite taking a 2–0 lead in the series. In Game 3, Indians starter Dave Burba went down with an injury in the fourth inning. Four pitchers, including presumed Game 4 starter Jaret Wright, surrendered nine runs in relief. Without a long reliever or emergency starter on the playoff roster, Hargrove started both Bartolo Colón and Charles Nagy in Games 4 and 5 on only three days' rest. The Indians lost Game 4 23–7 and Game 5 12–8. Four days later, Hargrove was dismissed as manager.
In 2000, the Indians had a 44–42 start but caught fire after the All-Star break, going 46–30 the rest of the way to finish 90–72. The team had one of the league's best offenses that year and a defense that yielded three Gold Gloves. However, they ended up five games behind the Chicago White Sox in the Central Division and missed the wild card by one game to the Seattle Mariners. Mid-season trades brought Bob Wickman and Jake Westbrook to Cleveland. After the season, free-agent outfielder Manny Ramírez departed for the Boston Red Sox. In 2000, Larry Dolan bought the Indians for $320 million from Richard Jacobs, who, along with his late brother David, had paid $45 million for the club in 1986. The sale set a record at the time for the sale of a baseball franchise. 2001 saw a return to the postseason. After the departures of Ramírez and Sandy Alomar Jr., the Indians signed Ellis Burks and former MVP Juan González, who helped the team win the Central Division with a 91–71 record. One of the highlights came on August 5, when the Indians completed the biggest comeback in MLB history, rallying from a 14–2 deficit in the seventh inning to defeat the Seattle Mariners 15–14 in 11 innings. The Mariners, who won an MLB record-tying 116 games that season, had a strong bullpen, and Indians manager Charlie Manuel had already pulled many of his starters with the game seemingly out of reach. Seattle and Cleveland met in the first round of the postseason; however, the Mariners won the series 3–2. In the 2001–02 offseason, GM John Hart resigned and his assistant, Mark Shapiro, took the reins. 2002–2010: The Shapiro/Wedge years First "rebuilding of the team" Shapiro moved to rebuild by dealing aging veterans for younger talent. He traded Roberto Alomar to the New York Mets for a package that included outfielder Matt Lawton and prospects Alex Escobar and Billy Traber. When the team fell out of contention in mid-2002, Shapiro fired manager Charlie Manuel and traded pitching ace Bartolo Colón for prospects Brandon Phillips, Cliff Lee, and Grady Sizemore; acquired Travis Hafner from the Rangers for Ryan Drese and Einar Díaz; and picked up Coco Crisp from the St. Louis Cardinals for aging starter Chuck Finley. Jim Thome left after the season, going to the Phillies for a larger contract. Young Indians teams finished far out of contention in 2002, and again in 2003 under new manager Eric Wedge. They posted strong offensive numbers in 2004 but continued to struggle with a bullpen that blew more than 20 saves. A highlight of the season was a 22–0 victory over the New York Yankees on August 31, one of the worst defeats suffered by the Yankees in team history. In early 2005, the offense got off to a poor start. After a brief July slump, the Indians caught fire in August and cut a 15.5-game deficit in the Central Division down to 1.5 games. However, the season came to an end as the Indians went on to lose six of their last seven games, five of them by one run, missing the playoffs by only two games. Shapiro was named Executive of the Year in 2005. The next season, the club made several roster changes, while retaining its nucleus of young players. The off-season was highlighted by the acquisition of top prospect Andy Marte from the Boston Red Sox. The Indians had a solid offensive season, led by career years from Travis Hafner and Grady Sizemore. Hafner, despite missing the last month of the season, tied the single-season grand slam record of six, set in 1987 by Don Mattingly.
Despite the solid offensive performance, the bullpen struggled with 23 blown saves (a Major League worst), and the Indians finished a disappointing fourth. Before the 2007 season, Shapiro signed veteran help for the bullpen and outfield. Veterans Aaron Fultz and Joe Borowski joined Rafael Betancourt in the Indians' bullpen. The Indians improved significantly over the prior year and went into the All-Star break in second place. The team brought back Kenny Lofton for his third stint with the team in late July. The Indians finished with a 96–66 record, tied with the Red Sox for the best in baseball, earning their seventh Central Division title in 13 years and their first postseason trip since 2001. The Indians began their playoff run by defeating the Yankees in the ALDS, three games to one. The series is perhaps best remembered for the swarm of midges that overtook the field in the later innings of Game 2. The Indians then jumped out to a three-games-to-one lead over the Red Sox in the ALCS. The season ended in disappointment, however, when Boston swept the final three games to advance to the 2007 World Series. Despite the loss, Cleveland players took home a number of awards. Grady Sizemore, who had a .995 fielding percentage and only two errors in 405 chances, won the Gold Glove Award, Cleveland's first since 2001. Indians pitcher CC Sabathia won the second Cy Young Award in team history with a 19–7 record, a 3.21 ERA and an MLB-leading 241 innings pitched. Eric Wedge was awarded the first Manager of the Year Award in team history. Shapiro was named Executive of the Year for the second time in 2007. Second "rebuilding of the team" The Indians struggled during the 2008 season. Injuries to sluggers Travis Hafner and Victor Martinez, as well as starting pitchers Jake Westbrook and Fausto Carmona, led to a poor start. The Indians, falling to last place for a short time in June and July, traded CC Sabathia to the Milwaukee Brewers for prospects Matt LaPorta, Rob Bryson, and Michael Brantley, and traded starting third baseman Casey Blake for catching prospect Carlos Santana. Pitcher Cliff Lee went 22–3 with an ERA of 2.54 and earned the AL Cy Young Award. Grady Sizemore had a career year, winning a Gold Glove Award and a Silver Slugger Award, and the Indians finished with a record of 81–81. Prospects for the 2009 season dimmed early when the Indians ended May with a record of 22–30. Shapiro made multiple trades: Cliff Lee and Ben Francisco to the Philadelphia Phillies for prospects Jason Knapp, Carlos Carrasco, Jason Donald and Lou Marson; Victor Martinez to the Boston Red Sox for prospects Bryan Price, Nick Hagadone and Justin Masterson; Ryan Garko to the San Francisco Giants for Scott Barnes; and Kelly Shoppach to the Tampa Bay Rays for Mitch Talbot. The Indians finished the season tied for fourth in their division, with a record of 65–97. The team announced on September 30, 2009, that Eric Wedge and all of the team's coaching staff were released at the end of the 2009 season. Manny Acta was hired as the team's 40th manager on October 25, 2009. On February 18, 2010, it was announced that Shapiro (following the end of the 2010 season) would be promoted to team President, with current President Paul Dolan becoming the new Chairman/CEO, and longtime Shapiro assistant Chris Antonetti filling the GM role. 2011–present: Antonetti/Chernoff/Francona era On January 18, 2011, longtime popular former first baseman and manager Mike Hargrove was brought in as a special adviser.
The Indians started the 2011 season strong, going 30–15 in their first 45 games to lead the Detroit Tigers by seven games for first place. Injuries then led to a slump in which the Indians fell out of first place. Many minor leaguers, such as Jason Kipnis and Lonnie Chisenhall, got opportunities to fill in for the injured players. The biggest news of the season came on July 30, when the Indians traded four prospects for Colorado Rockies star pitcher Ubaldo Jiménez. The Indians sent their top two pitching prospects, Alex White and Drew Pomeranz, along with Joe Gardner and Matt McBride. On August 25, the Indians signed Jim Thome, the franchise's career home run leader, off waivers. He made his first appearance in an Indians uniform since leaving Cleveland after the 2002 season. To honor Thome, the Indians placed him at his original position, third base, for one pitch against the Minnesota Twins on September 25. It was his first appearance at third base since 1996, and his last for Cleveland. The Indians finished the season in second place, 15 games behind the division champion Tigers. The Indians broke Progressive Field's Opening Day attendance record with 43,190 against the Toronto Blue Jays on April 5, 2012. The game went 16 innings, setting an MLB Opening Day record, and lasted 5 hours and 14 minutes. On September 27, 2012, with six games left in the Indians' 2012 season, Manny Acta was fired; Sandy Alomar Jr. was named interim manager for the remainder of the season. On October 6, the Indians announced that Terry Francona, who managed the Boston Red Sox to five playoff appearances and two World Series titles between 2004 and 2011, would take over as manager for 2013. The Indians entered the 2013 season following an active offseason of dramatic roster turnover. Key acquisitions included free agents 1B/OF Nick Swisher and CF Michael Bourn. The team added prized right-handed pitching prospect Trevor Bauer, OF Drew Stubbs, and relief pitchers Bryan Shaw and Matt Albers in a three-way trade with the Arizona Diamondbacks and Cincinnati Reds that sent RF Shin-Soo Choo to the Reds and Tony Sipp to the Diamondbacks. Other notable additions included utility man Mike Avilés, catcher Yan Gomes, designated hitter Jason Giambi, and starting pitcher Scott Kazmir. The 2013 Indians increased their win total by 24 over 2012 (from 68 to 92), finishing in second place, one game behind Detroit in the Central Division, but securing the top seed in the American League wild card standings. In their first postseason appearance since 2007, Cleveland lost the 2013 American League Wild Card Game 4–0 at home to Tampa Bay. Francona was recognized for the turnaround with the 2013 American League Manager of the Year Award. With an 85–77 record, the 2014 Indians posted consecutive winning seasons for the first time since 1999–2001, but they were eliminated from playoff contention during the last week of the season and finished third in the AL Central. In 2015, after struggling through the first half of the season, the Indians finished 81–80 for their third consecutive winning season, something the team had not achieved since 1999–2001. For the second straight year, the Tribe finished third in the Central and was eliminated from the wild card race during the last week of the season. Following the departure of longtime team executive Mark Shapiro on October 6, the Indians promoted GM Chris Antonetti to President of Baseball Operations, assistant general manager Mike Chernoff to GM, and named Derek Falvey as assistant GM.
Falvey was later hired by the Minnesota Twins in 2016, becoming their President of Baseball Operations. The Indians set what was then a franchise record for longest winning streak when they won their 14th consecutive game, a 2–1 win over the Toronto Blue Jays in 19 innings on July 1, 2016, at Rogers Centre. The team clinched the Central Division title on September 26, their eighth division title overall and first since 2007, returning to the playoffs for the first time since 2013. They finished the regular season at 94–67, marking their fourth straight winning season, a feat not accomplished since the 1990s and early 2000s. The Indians began the 2016 postseason by sweeping the Boston Red Sox in the best-of-five American League Division Series, then defeated the Blue Jays in five games in the 2016 American League Championship Series to claim their sixth American League pennant and advance to the World Series against the Chicago Cubs. It marked the first World Series appearance for the Indians since 1997 and the first for the Cubs since 1945. The Indians took a 3–1 series lead following a victory in Game 4 at Wrigley Field, but the Cubs rallied to take the final three games and won the series 4 games to 3. The Indians' 2016 success led to Francona winning his second AL Manager of the Year Award with the club. From August 24 through September 15 of the 2017 season, the Indians set a new American League record by winning 22 games in a row. On September 28, the Indians won their 100th game of the season, marking only the third time in franchise history that the team had reached that milestone. They finished the regular season with 102 wins, the second-most in team history (behind the 111-win team of 1954). The Indians earned the AL Central title for the second consecutive year, along with home-field advantage throughout the American League playoffs, but they lost the 2017 ALDS to the Yankees 3–2 after being up 2–0. In 2018, the Indians won their third consecutive AL Central crown with a 91–71 record, but were swept in the 2018 American League Division Series by the Houston Astros, who outscored Cleveland 21–6. In 2019, despite a two-game improvement, the Indians missed the playoffs, finishing three games behind the Tampa Bay Rays for the second AL wild card berth. During the 2020 season (shortened to 60 games because of the COVID-19 pandemic), the Indians went 35–25, finishing second behind the Minnesota Twins in the AL Central but qualifying for the expanded playoffs. In the best-of-three AL Wild Card Series, the Indians were swept by the New York Yankees, ending their season. On December 18, 2020, the team confirmed that the Indians name would be dropped after the 2021 season, and then announced on July 23, 2021, that the new name would be the Cleveland Guardians. They played their last game under the Indians name on October 3, 2021, and officially became the Guardians on November 19, 2021. In their first season under the Guardians name, the team won the 2022 AL Central Division crown, the 11th division title in franchise history. In the best-of-three AL Wild Card Series, the Guardians defeated the Tampa Bay Rays 2–0 to advance to the AL Division Series, which they lost to the New York Yankees 3–2, ending their season. Season-by-season results Rivalries Interleague The rivalry with fellow Ohio team the Cincinnati Reds is known as the Battle of Ohio or Buckeye Series and features the Ohio Cup trophy for the winner.
Prior to 1997, the winner of the cup was determined by an annual pre-season baseball game, played each year at minor-league Cooper Stadium in the state capital of Columbus and staged just days before the start of each new Major League Baseball season. A total of eight Ohio Cup games were played, with the Indians winning six of them; the series ended with the start of interleague play in 1997. The winner of the game each year was awarded the Ohio Cup in postgame ceremonies. The Ohio Cup was a favorite among baseball fans in Columbus, with attendance regularly topping 15,000. Since 1997, the two teams have played each other as part of the regular season, with the exception of 2002. The Ohio Cup was reintroduced in 2008 and is now presented to the team that wins the most games in the series each season. Initially, the teams played one three-game series per season, meeting in Cleveland in 1997 and Cincinnati the following year. The teams have played two series per season against each other since 1999, with the exception of 2002, one at each ballpark. A format change in 2013 made each series two games, except in years when the AL and NL Central divisions meet in interleague play, when it is usually extended to three games per series. Through the 2020 meetings, the Guardians lead the series 66–51. An on-and-off rivalry with the Pittsburgh Pirates stems from the close proximity of the two cities and features some carryover elements from the longstanding rivalry in the National Football League between the Cleveland Browns and Pittsburgh Steelers. Because the Guardians' designated interleague rival is the Reds and the Pirates' designated rival is the Tigers, the teams have met only periodically. The teams played one three-game series each year from 1997 to 2001 and met periodically between 2002 and 2022, generally only in years in which the AL Central played the NL Central in the former interleague play rotation. The teams played six games in 2020 as MLB instituted an abbreviated schedule focusing on regional match-ups. Beginning in 2023, the teams will play a three-game series each season as a result of the new "balanced" schedule. The Pirates lead the series 21–18. Divisional As the Guardians play a large portion of their games every year against their AL Central competitors (formerly 19 games against each team, until 2023), several rivalries have developed. The Guardians have a geographic rivalry with the Detroit Tigers, highlighted in recent years by intense battles for the AL Central title. The matchup has some carryover elements from the Ohio State-Michigan rivalry, as well as the general historic rivalry between Michigan and Ohio dating back to the Toledo War. The Chicago White Sox are another rival, dating back to the 1959 season, when the Sox slipped past the Indians to win the AL pennant. The rivalry intensified when both clubs were moved to the new AL Central in 1994. During that season, the two teams battled for the division title, with the Indians one game back of Chicago when the strike began in August. During a game in Chicago, the White Sox confiscated Albert Belle's corked bat, followed by an attempt by Indians pitcher Jason Grimsley to crawl through the Comiskey Park clubhouse ceiling to retrieve it. Belle later signed with the White Sox in 1997, adding additional intensity to the rivalry. Logos and uniforms The official team colors are navy blue, red, and white. Home The primary home uniform is white with navy blue piping around each sleeve, and the "winged G" logo on the right sleeve.
Across the front of the jersey in script font is the word "Guardians" in red with a navy blue outline, with navy blue undershirts, belts, and socks. The alternate home jersey is red with a navy blue script "Guardians" trimmed in white on the front, navy blue piping on both sleeves, and the "winged G" logo on the right sleeve, with navy blue undershirts, belts, and socks. The home cap is navy blue with a red bill and features a red "diamond C" on the front. Road The primary road uniform is gray, with "Cleveland" in navy blue "diamond C" letters trimmed in red across the front of the jersey, the "winged G" logo on the right sleeve, navy blue piping around the sleeves, and navy blue undershirts, belts, and socks. The alternate road jersey is navy blue with "Cleveland" in red "diamond C" letters trimmed in white on the front of the jersey, the "winged G" logo on the right sleeve, and navy blue undershirts, belts, and socks. The road cap is similar to the home cap, the only difference being that the bill is navy blue. Universal For all games, the team uses a navy blue batting helmet with a red "diamond C" on the front. Name and logo controversy The club's former name and its cartoon logo, Chief Wahoo, were long criticized for perpetuating Native American stereotypes. In 1997 and 1998, protesters were arrested after effigies were burned. Charges were dismissed in the 1997 case and were not filed in the 1998 case. Protesters arrested in the 1998 incident subsequently fought and lost a lawsuit alleging that their First Amendment rights had been violated. Bud Selig (then-Commissioner of Baseball) said in 2014 that he had never received a complaint about the logo, and that while he was aware of protests against such mascots, he believed that individual teams such as the Indians and Atlanta Braves, whose name was also criticized for similar reasons, should make their own decisions. An organized group of Native Americans, which had protested for many years, protested Chief Wahoo on Opening Day 2015, noting that it was the 100th anniversary of the team becoming the Indians. Owner Paul Dolan, while stating his respect for the critics, said he mainly heard from fans who wanted to keep Chief Wahoo, and that he had no plans to change it. On January 29, 2018, Major League Baseball announced that Chief Wahoo would be removed from the Indians' uniforms as of the 2019 season, stating that the logo was no longer appropriate for on-field use. The block "C" was promoted to the primary logo; at the time, there were no plans to change the team's name. In 2020, protests over the murder of George Floyd, a black man, by a Minneapolis police officer led Dolan to reconsider use of the Indians name. On July 3, 2020, on the heels of the Washington Redskins announcing that they would "undergo a thorough review" of that team's name, the Indians announced that they would "determine the best path forward" regarding the team's name and emphasized the need to "keep improving as an organization on issues of social justice". On December 13, 2020, it was reported that the Indians name would be dropped after the 2021 season. Although the team had hinted that it might move forward without a replacement name (in a manner similar to the Washington Football Team), it was announced via Twitter on July 23, 2021, that the team would be named the Guardians, after the Guardians of Traffic, eight large Art Deco statues on the Hope Memorial Bridge, located close to Progressive Field.
The club, however, found itself amid a trademark dispute with a men's roller derby team called the Cleveland Guardians. The Cleveland Guardians roller derby team has competed in the Men's Roller Derby Association since 2016. In addition, two other entities had attempted to preempt the team's use of the trademark by filing their own registrations with the U.S. Patent and Trademark Office. The roller derby team filed a federal lawsuit in the U.S. District Court for the Northern District of Ohio on October 27, 2021, seeking to block the baseball team's name change. On November 16, 2021, the lawsuit was resolved, and both teams were allowed to continue using the Guardians name. The name change from Indians to Guardians became official on November 19, 2021. Media Radio Cleveland stations WTAM (1100 AM/106.9 FM) and WMMS (100.7 FM) serve as flagship stations for the Cleveland Guardians Radio Network, with lead announcer Tom Hamilton and Jim Rosenhaus calling the games. TV The television rights are held by Bally Sports Great Lakes. Lead announcer Matt Underwood, analyst and former Indians Gold Glove-winning center fielder Rick Manning, and field reporter Andre Knott form the broadcast team. Al Pawlowski and former Indians pitcher Jensen Lewis serve as pregame/postgame hosts. Select games are simulcast over-the-air on WKYC channel 3. Past announcers Notable former broadcasters include Tom Manning, Jack Graney (the first ex-baseball player to become a play-by-play announcer), Ken Coleman, Joe Castiglione, Van Patrick, Nev Chandler, Bruce Drennan, Jim "Mudcat" Grant, Rocky Colavito, Dan Coughlin, and Jim Donovan. Previous broadcasters who have had lengthy tenures with the team include Joe Tait (15 seasons between TV and radio), Jack Corrigan (18 seasons on TV), Ford C. Frick Award winner Jimmy Dudley (19 seasons on radio), Mike Hegan (23 seasons between TV and radio), and Herb Score (34 seasons between TV and radio). Popular culture Under the Cleveland Indians name, the team has been featured in several films, including: The Kid from Cleveland – a 1949 film featuring then-owner Bill Veeck and numerous players from the team (coming off winning the 1948 World Series). Major League – a 1989 film centered on a fictionalized version of the Indians. Major League II – a 1994 sequel to the 1989 original. Awards and honors Baseball Hall of Famers Ford C. Frick Award recipients Retired numbers Jackie Robinson's number 42 is retired throughout Major League Baseball. The number 455 was retired in honor of the Indians' fans after the team sold out 455 consecutive games between 1995 and 2001, which was an MLB record until it was surpassed by the Boston Red Sox on September 8, 2008.
Guardians Hall of Fame Statues Numerous Naps/Indians players have had statues made in their honor: In and around Progressive Field Bob Feller (team all-time leader in wins and strikeouts by a pitcher, 1948 World Series champion, eight-time All-Star) – since 1994* Jim Thome (team all-time leader in home runs and walks by a hitter, three-time All-Star with the Indians) – since 2014* Larry Doby (first black player in the American League, 1948 World Series champion, seven-time All-Star) – since 2015* Frank Robinson (became the first black manager in MLB history when he served as player/manager from 1975 to 1977) – since 2017 Lou Boudreau (1948 AL MVP, 1948 World Series champion as player/manager, eight-time All-Star) – since 2017* In and around Cleveland Hall of Fame outfielder Elmer Flick has a statue in his hometown of Bedford, Ohio, a nearby suburb of Cleveland – since 2013* Former outfielder Luke Easter has a statue outside of his namesake park on the east side of Cleveland – since 1980 (when the park was renamed in Easter's honor following his murder) Five-time All-Star (with the Indians) outfielder Rocky Colavito has a statue in Cleveland's Little Italy neighborhood – since August 10, 2021. (*) – Inducted into the Baseball Hall of Fame as an Indian/Nap. Murals In July 2022, in honor of the 75th anniversary of Larry Doby becoming the AL's first black player, a mural was added to the exterior of Progressive Field honoring players who were viewed as barrier breakers for the Indians/Guardians. The mural features Doby, Frank Robinson, and Satchel Paige. Streets A portion of Eagle Avenue near Progressive Field was renamed "Larry Doby Way" in 2012. Parks and fields A number of parks and newly built or renovated youth baseball fields in Cleveland have been named after former and current Indians/Guardians players, including: Luke Easter Park – named for Easter in 1980 following his murder Jim Thome All-Star Complex – 2019 CC Sabathia Field at Luke Easter Park – 2021 José Ramírez Field – opening in 2023 Franchise records Season records Highest batting average: .408, Joe Jackson (1911) Most games: 163, Leon Wagner (1964) Most runs: 140, Earl Averill (1930) Highest slugging %: .714, Albert Belle (1994) Most doubles: 64, George Burns (1926) Most triples: 26, Joe Jackson (1912) Most home runs: 52, Jim Thome (2002) Most RBIs: 165, Manny Ramírez (1999) Most stolen bases: 75, Kenny Lofton (1996) Most wins: 31, Jim Bagby, Sr. (1920) Lowest ERA: 1.16, Addie Joss (1908) Most strikeouts: 348, Bob Feller (1946) Most complete games: 36, Bob Feller (1946) Most saves: 46, José Mesa (1995) Longest win streak: 22 games (2017) Roster Minor league affiliations The Cleveland Guardians farm system consists of seven minor league affiliates. Regular season home attendance (*) – There were no fans allowed in any MLB stadium in 2020 due to the COVID-19 pandemic. (**) – At the beginning of the 2021 season, there was a limit of 30% capacity due to COVID-19 restrictions implemented by Ohio Governor Mike DeWine. On June 2, DeWine lifted the restrictions, and the team immediately allowed full capacity at Progressive Field. See also Cleveland Guardians all-time roster List of Cleveland Guardians managers List of Cleveland Guardians seasons List of Cleveland Guardians team records List of World Series champions Notes References External links Cleveland Indians 1998 Annual Report, the last filed with the SEC Sports E-Cyclopedia
https://en.wikipedia.org/wiki/Christopher%20B%C3%A1thory
Christopher Báthory
Christopher Báthory (1530 – 27 May 1581) was voivode of Transylvania from 1576 to 1581. He was a younger son of Stephen Báthory of Somlyó. Christopher's career began during the reign of Queen Isabella Jagiellon, who administered the eastern territories of the Kingdom of Hungary on behalf of her son, John Sigismund Zápolya, from 1556 to 1559. He was one of the commanders of John Sigismund's army in the early 1560s. Christopher's brother, Stephen Báthory, who succeeded John Sigismund in 1571, made Christopher captain of Várad (now Oradea in Romania). After being elected King of Poland, Stephen Báthory adopted the title of Prince of Transylvania and made Christopher voivode in 1576. Christopher cooperated with Márton Berzeviczy, whom his brother appointed to supervise the administration of the Principality of Transylvania as the head of the Transylvanian chancellery at Kraków. Christopher ordered the imprisonment of Ferenc Dávid, a leading theologian of the Unitarian Church of Transylvania, who had begun to condemn the adoration of Jesus. He supported his brother's efforts to settle the Jesuits in Transylvania. Early life Christopher was the third of the four sons of Stephen Báthory of Somlyó and Catherine Telegdi. His father was a supporter of John Zápolya, King of Hungary, who made him voivode of Transylvania in February 1530. Christopher was born in the Báthorys' castle at Szilágysomlyó (now Șimleu Silvaniei in Romania) in the same year. His father died in 1534. His brother, Andrew, and their kinsman, Tamás Nádasdy, took charge of Christopher's education. Christopher visited England, France, Italy, Spain, and the Holy Roman Empire in his youth. He also served as a page in Emperor Charles V's court. Career Christopher entered the service of John Zápolya's widow, Isabella Jagiellon, in the late 1550s. At the time, Isabella administered the eastern territories of the Kingdom of Hungary on behalf of her son, John Sigismund Zápolya. She wanted to persuade Henry II of France to withdraw his troops from three fortresses that the Ottomans had captured in Banat, so she sent Christopher to France to open negotiations in 1557. John Sigismund took charge of the administration of his realm after his mother died on 15 November 1559. He retained his mother's advisors, including Christopher, who became one of his most influential officials. After the rebellion of Melchior Balassa, Christopher persuaded John Sigismund to fight for his realm instead of fleeing to Poland in 1562. Christopher was one of the commanders of John Sigismund's troops during the ensuing war against the Habsburg rulers of the western territories of the Kingdom of Hungary, Ferdinand and Maximilian, who tried to reunite the kingdom under their rule. Christopher defeated Maximilian's commander, Lazarus von Schwendi, forcing him to lift the siege of Huszt (now Khust in Ukraine) in 1565. After the death of John Sigismund, the Diet of Transylvania elected Christopher's younger brother, Stephen Báthory, voivode (or ruler) on 25 May 1571. Stephen made Christopher captain of Várad. The following year, the Ottoman Sultan, Selim II (who was the overlord of Transylvania), acknowledged the hereditary right of the Báthory family to rule the province. Reign Stephen Báthory was elected King of Poland on 15 December 1575. He adopted the title of Prince of Transylvania and made Christopher voivode on 14 January 1576.
An Ottoman delegation confirmed Christopher's appointment at the Diet in Gyulafehérvár (now Alba Iulia in Romania) in July. The sultan's charter (or ahidnâme) sent to Christopher emphasized that he should keep the peace along the frontiers. Stephen set up a separate chancellery in Kraków to keep an eye on the administration of Transylvania. The head of the new chancellery, Márton Berzeviczy, and Christopher cooperated closely. Anti-Trinitarian preachers began to condemn the worship of Jesus in Partium and Székely Land in 1576, although the Diet had already forbidden all doctrinal innovations. Ferenc Dávid, the most influential leader of the Unitarian Church of Transylvania, openly joined the dissenters in the autumn of 1578. Christopher invited Fausto Sozzini, a leading Anti-Trinitarian theologian, to Transylvania to convince Dávid that the new teaching was erroneous. When Dávid refused to yield, Christopher convened a Diet, and the "Three Nations" (including the Unitarian delegates) ordered Dávid's imprisonment. Christopher also supported his brother's attempts to strengthen the position of the Roman Catholic Church in Transylvania. He granted estates to the Jesuits to promote the establishment of a college in Kolozsvár (now Cluj-Napoca in Romania) on 5 May 1579. Christopher fell seriously ill after his second wife, Elisabeth Bocskai, died in early 1581. After a false rumor of Christopher's death reached Istanbul, Koca Sinan Pasha offered Transylvania to Pál Márkházy, whom Christopher had forced into exile. Although Christopher's only surviving son, Sigismund, was still a minor, the Diet elected him voivode before Christopher's death, because they wanted to prevent the appointment of Márkházy. Christopher died in Gyulafehérvár on 27 May 1581. He was buried in the Jesuits' church in Gyulafehérvár almost two years later, on 14 March 1583. Family Christopher's first wife, Catherina Danicska, was a Polish noblewoman, but only the Hungarian form of her name is known. Their eldest son, Balthasar Báthory, moved to Kraków shortly after Stephen Báthory was crowned King of Poland; he drowned in the Vistula River in May 1577 at the age of 22. Christopher and Catherina's second son, Nicholas, was born in 1567 and died in 1576. Christopher's second wife, Elisabeth Bocskai, was a Calvinist noblewoman. Their first child, Cristina (or Griselda), was born in 1569. She was given in marriage to Jan Zamoyski, Chancellor of Poland, in 1583. Christopher's youngest son, Sigismund, was born in 1573. References Sources
https://en.wikipedia.org/wiki/Cincinnati%20Reds
Cincinnati Reds
The Cincinnati Reds are an American professional baseball team based in Cincinnati. They compete in Major League Baseball (MLB) as a member club of the National League (NL) Central division and were a charter member of the American Association in 1881 before joining the NL in 1890. The Reds played in the NL West division from 1969 to 1993, before joining the Central division in 1994. For several years in the 1970s, they were considered the most dominant team in baseball, most notably winning the 1975 and 1976 World Series; the team was colloquially known as the "Big Red Machine" during this time, and it included Hall of Fame members Johnny Bench, Joe Morgan and Tony Pérez. Overall, the Reds have won five World Series championships, nine NL pennants, one AA pennant and 10 division titles. The team plays its home games at Great American Ball Park, which opened in 2003. Bob Castellini has been the CEO of the Reds since 2006. From 1882 to 2021, the Reds' overall win–loss record is 10,713–10,501 (a .505 winning percentage). Franchise history The birth of the Reds and the American Association (1881–1889) The origins of the modern Cincinnati Reds baseball team can be traced back to the expulsion from the National League of an earlier team bearing the same name. In 1876, Cincinnati became one of the charter members of the new National League (NL), but the club ran afoul of league organizer and longtime president William Hulbert for selling beer during games and renting out its ballpark on Sundays. Both practices were important in enticing the city's large German population to support the team. While Hulbert made clear his distaste for both beer and Sunday baseball at the founding of the league, neither practice was against league rules at the time. On October 6, 1880, however, seven of the eight team owners adopted a pledge to ban both beer and Sunday baseball at the regular league meeting in December. Only Cincinnati president W. H. Kennett refused to sign the pledge, so the other owners preemptively expelled Cincinnati from the league for violating the new rules, even though they were not yet in effect. Cincinnati's expulsion incensed Cincinnati Enquirer sports editor O. P. Caylor, who made two attempts to form a new league on behalf of the receivers for the now-bankrupt Reds franchise. When these attempts failed, he formed a new independent ball club known as the Red Stockings in the spring of 1881 and brought the team to St. Louis for a weekend exhibition. The Reds' first game was a 12–3 victory over the St. Louis club. After the 1881 series proved successful, Caylor and former Reds president Justus Thorner received an invitation from Philadelphia businessman Horace Phillips to attend a meeting of several clubs in Pittsburgh, planning to establish a new league to compete with the NL. Upon arriving, however, Caylor and Thorner found that no other owners had accepted the invitation, and even Phillips declined to attend his own meeting. By chance, the duo met former pitcher Al Pratt, who paired them with former Pittsburgh Alleghenys president H. Denny McKnight. Together, the three hatched a scheme to form a new league by sending each invited owner a telegram stating that he had been the only absentee and that everyone else was enthusiastic about the new venture and eager to attend a second meeting in Cincinnati. The ploy worked, and the American Association (AA) was officially formed at the Hotel Gibson in Cincinnati.
The new Reds – with Thorner now serving as president – became a charter member of the AA. Led by the hitting of third baseman Hick Carpenter, the defense of future Hall of Fame second baseman Bid McPhee and the pitching of 40-game-winner Will White, the Reds won the inaugural AA pennant in 1882. With the establishment of the Union Association in 1884, Thorner left the club to finance the Cincinnati Outlaw Reds and managed to acquire the lease on the Reds' Bank Street Grounds playing field, forcing new president Aaron Stern to relocate three blocks away to the hastily built League Park. The club never placed higher than second or lower than fifth for the rest of its tenure in the American Association. The National League returns to Cincinnati (1890–1911) The Cincinnati Red Stockings left the American Association on November 14, 1889, and joined the National League along with the Brooklyn Bridegrooms after a dispute with St. Louis Browns owner Chris Von Der Ahe over the selection of a new league president. The National League was happy to accept the teams, in part due to the emergence of the new Players' League, an early failed attempt to break the reserve clause in baseball that threatened both existing leagues. With the National League looking to expand while the American Association weakened, the club accepted the invitation to join. After shortening their name to the Reds, the team wandered through the 1890s, signing local stars and aging veterans. During this time, the team never finished above third place (1897) and never closer than 10 games to first (1890). At the start of the 20th century, the Reds had hitting stars Sam Crawford and Cy Seymour. Seymour's .377 average in 1905 was the first individual batting crown won by a Red. In 1911, Bob Bescher stole 81 bases, which is still a team record. Like the previous decade, the 1900s were not kind to the Reds, as much of the decade was spent in the league's second division. Redland Field to the Great Depression (1912–1932) In 1912, the club opened Redland Field (renamed Crosley Field in 1934), a new steel-and-concrete ballpark. The Reds had been playing baseball on that same site – the corner of Findlay and Western Avenues on the city's west side – for 28 years in wooden structures that had been occasionally damaged by fires. By the late 1910s, the Reds began to come out of the second division. The 1918 team finished fourth, and new manager Pat Moran led the Reds to an NL pennant in 1919, in what the club advertised as its "Golden Anniversary." The 1919 team had hitting stars Edd Roush and Heinie Groh, while the pitching staff was led by Hod Eller and left-hander Harry "Slim" Sallee. The Reds finished ahead of John McGraw's New York Giants and then won the world championship in eight games over the Chicago White Sox. By 1920, the "Black Sox" scandal had brought a taint to the Reds' first championship. After 1926 and well into the 1930s, the Reds were second-division dwellers. Eppa Rixey, Dolf Luque and Pete Donohue were pitching stars, but the offense never lived up to the pitching. By 1931, the team was bankrupt, the Great Depression was in full swing and Redland Field was in a state of disrepair. Championship baseball and revival (1933–1940) Powel Crosley, Jr., an electronics magnate who, with his brother Lewis M. Crosley, produced radios, refrigerators and other household items, bought the Reds out of bankruptcy in 1933 and hired Larry MacPhail to be the general manager.
Crosley had started WLW radio, the Reds' flagship radio station, and the Crosley Broadcasting Corporation in Cincinnati, where he was also a prominent civic leader. MacPhail began to develop the Reds' minor league system and expanded the Reds' fan base. Throughout the rest of the decade, the Reds became a team of "firsts." The now-renamed Crosley Field hosted the first night game in major league history in 1935, which was also the first baseball fireworks night. (The fireworks at the game were shot by Joe Rozzi of Rozzi's Famous Fireworks.) Johnny Vander Meer became the only pitcher in major league history to throw back-to-back no-hitters in 1938. Thanks to Vander Meer, Paul Derringer and second baseman/third baseman-turned-pitcher Bucky Walters, the Reds had a solid pitching staff. The offense came around in the late 1930s. By 1938, the Reds, now led by manager Bill McKechnie, were out of the second division, finishing fourth. Ernie Lombardi was named the National League's Most Valuable Player in 1938. By 1939, the Reds were National League champions, only to be swept in the World Series by the New York Yankees. In 1940, the Reds repeated as NL champions, and for the first time in 21 years, they captured a world championship, beating the Detroit Tigers 4 games to 3. Frank McCormick was the 1940 NL MVP; other position players included Harry Craft, Lonny Frey, Ival Goodman, Lew Riggs and Bill Werber. 1941–1969 World War II and age finally caught up with the Reds, as the team finished mostly in the second division throughout the 1940s and early 1950s. In 1944, Joe Nuxhall (who was later to become part of the radio broadcasting team), at age 15, pitched for the Reds on loan from Wilson Junior High School in Hamilton, Ohio. He became the youngest player ever to appear in a major league game, a record that still stands today. Ewell "The Whip" Blackwell was the main pitching stalwart before arm problems cut short his career. Ted Kluszewski was the NL home run leader in 1954. The rest of the offense was a collection of over-the-hill players and not-ready-for-prime-time youngsters. In April 1953, the Reds announced a preference to be called the "Redlegs," saying that the name of the club had been "Red Stockings" and then "Redlegs." A newspaper speculated that the change was due to the word "red's" growing political association with Communism. From 1956 to 1960, the club's logo was altered to remove the term "REDS" from the inside of the "wishbone C" symbol. The word "REDS" reappeared on the 1961 uniforms, but the point of the "C" was removed. The traditional home uniform logo was reinstated in 1967. In 1956, the Redlegs, led by National League Rookie of the Year Frank Robinson, hit 221 home runs to tie the NL record. By 1961, Robinson was joined by Vada Pinson, Wally Post, Gordy Coleman and Gene Freese. Pitchers Joey Jay, Jim O'Toole and Bob Purkey led the staff. The Reds captured the 1961 National League pennant, holding off the Los Angeles Dodgers and San Francisco Giants, only to be defeated by the perennially powerful New York Yankees in the World Series. The Reds had winning teams during the rest of the 1960s, but did not produce any championships. They won 98 games in 1962, paced by Purkey's 23 wins, but finished third. In 1964, they lost the pennant by one game to the St. Louis Cardinals after having taken first place when the Philadelphia Phillies collapsed in September. Their beloved manager Fred Hutchinson died of cancer just weeks after the end of the 1964 season.
The failure of the Reds to win the 1964 pennant led to owner Bill DeWitt selling off key components of the team in anticipation of relocating the franchise. In response to DeWitt's threatened move, women of Cincinnati banded together to form the Rosie Reds to urge DeWitt to keep the franchise in Cincinnati. The Rosie Reds are still in existence, and are currently the oldest fan club in Major League Baseball. After the 1965 season, DeWitt executed what is remembered as the most lopsided trade in baseball history, sending former MVP Frank Robinson to the Baltimore Orioles for pitchers Milt Pappas and Jack Baldschun, and outfielder Dick Simpson. Robinson went on to win the MVP and Triple Crown in the American League in 1966, and led Baltimore to its first-ever World Series title in a sweep of the Los Angeles Dodgers. The Reds did not recover from this trade until the rise of the "Big Red Machine" in the 1970s. Starting in the early 1960s, the Reds' farm system began producing a series of stars, including Jim Maloney (the Reds' pitching ace of the 1960s), Pete Rose, Tony Pérez, Johnny Bench, Lee May, Tommy Helms, Bernie Carbo, Hal McRae, Dave Concepción and Gary Nolan. The tipping point came in 1967, with the appointment of Bob Howsam as general manager. That same year, the Reds avoided a move to San Diego when the city of Cincinnati and Hamilton County agreed to build a state-of-the-art downtown stadium on the edge of the Ohio River. The Reds entered into a 30-year lease in exchange for the stadium commitment, keeping the franchise in Cincinnati. In a series of strategic moves, Howsam brought in key personnel to complement the homegrown talent. The Reds' final game at Crosley Field, where they had played since 1912, was played on June 24, 1970, with a 5–4 victory over the San Francisco Giants. Under a policy Howsam instituted in the late 1960s, and which persisted for the next three decades, all players coming to the Reds were required to shave and keep their hair cut short in order to present the team as wholesome in an era of turmoil. The rule was controversial, but persisted well into the ownership of Marge Schott. On at least one occasion, in the early 1980s, enforcement of this rule lost the Reds the services of star reliever and Ohio native Rollie Fingers, who would not shave his trademark handlebar mustache in order to join the team. The rule was not officially rescinded until 1999, when the Reds traded for slugger Greg Vaughn, who had a goatee. The New York Yankees continue to enforce a similar rule today, although Yankees players are permitted to have mustaches. Much as with players who leave the Yankees today, players who left the Reds took advantage of the relaxed grooming rules with their new teams; Pete Rose, for instance, grew his hair out much longer than the Reds would have allowed once he signed with the Philadelphia Phillies in 1979. The Reds' rules also extended to conservative uniforms. In Major League Baseball, a club generally provides most of the equipment and clothing needed for play. However, players are required to supply their gloves and shoes themselves. Many players enter into sponsorship arrangements with shoe manufacturers, but until the mid-1980s, the Reds had a strict rule requiring players to wear only plain black shoes with no prominent logo. Reds players decried what they considered to be the boring color choice, as well as the denial of the opportunity to earn more money through shoe contracts.
In 1985, a compromise was struck in which players could paint red marks on their black shoes, and they were allowed to wear all-red shoes the following year. The Big Red Machine (1970–1976) In 1970, little-known George "Sparky" Anderson was hired as manager of the Reds, and the team embarked upon a decade of excellence, with a lineup that came to be known as "the Big Red Machine." Playing at Crosley Field until June 30, 1970, when they moved into Riverfront Stadium, a new 52,000-seat multi-purpose venue on the shores of the Ohio River, the Reds began the 1970s with a bang by winning 70 of their first 100 games. Johnny Bench, Tony Pérez, Pete Rose, Lee May and Bobby Tolan were the early offensive leaders of this era. Gary Nolan, Jim Merritt, Wayne Simpson and Jim McGlothlin led a pitching staff that also included veterans Tony Cloninger and Clay Carroll, as well as youngsters Pedro Borbón and Don Gullett. The Reds breezed through the 1970 season, winning the NL West and capturing the NL pennant by sweeping the Pittsburgh Pirates in three games. By the time the club got to the World Series, however, the pitching staff had run out of gas, and the veteran Baltimore Orioles, led by Hall of Fame third baseman and World Series MVP Brooks Robinson, beat the Reds in five games. After the disastrous 1971 season – the only year in the decade in which the team finished with a losing record – the Reds reloaded by trading veterans Jimmy Stewart, May and Tommy Helms to the Houston Astros for Joe Morgan, César Gerónimo, Jack Billingham, Ed Armbrister and Denis Menke. Meanwhile, Dave Concepción blossomed at shortstop. 1971 was also the year a key component of future world championships was acquired, when George Foster was traded to the Reds from the San Francisco Giants in exchange for shortstop Frank Duffy. The Reds won the NL West in 1972, baseball's first-ever strike-shortened season, and defeated the Pittsburgh Pirates in a five-game playoff series. They then faced the Oakland Athletics in the World Series, where six of the seven games were decided by one run. With powerful slugger Reggie Jackson sidelined by an injury incurred during Oakland's playoff series, Ohio native Gene Tenace got a chance to play in the series, delivering four home runs that tied the World Series record for homers, propelling Oakland to a dramatic seven-game series win. This was one of the few World Series in which no starting pitcher for either side pitched a complete game. The Reds won a third NL West crown in 1973 after a dramatic second-half comeback that saw them overtake the Los Angeles Dodgers after the All-Star break. However, they lost the NL pennant to the New York Mets in five games in the NLCS. In Game 1, Tom Seaver faced Jack Billingham in a classic pitching duel, with all three runs of the 2–1 margin being scored on home runs. John Milner provided New York's run off Billingham, while Pete Rose tied the game in the seventh inning off Seaver, setting the stage for a dramatic game-ending home run by Johnny Bench in the bottom of the ninth. The New York series provided plenty of controversy surrounding the riotous behavior of Shea Stadium fans toward Pete Rose when he and Bud Harrelson scuffled after a hard slide by Rose into Harrelson at second base during the fifth inning of Game 3. A full bench-clearing fight resulted after Harrelson responded to Rose's aggressive move to prevent him from completing a double play by calling him a name. This also led to two more incidents in which play was stopped.
The Reds trailed 9–3, and New York's manager Yogi Berra and legendary outfielder Willie Mays, at the request of National League president Warren Giles, appealed to fans in left field to restrain themselves. The next day the series was extended to a fifth game when Rose homered in the 12th inning to tie the series at two games each. The Reds won 98 games in 1974 but finished second to the 102-win Los Angeles Dodgers. The 1974 season started off with much excitement, as the Atlanta Braves were in town to open the season with the Reds. Hank Aaron entered Opening Day with 713 home runs, one shy of tying Babe Ruth's record of 714. The first pitch Aaron swung at in the 1974 season was the record-tying home run off Jack Billingham. The next day, the Braves benched Aaron, hoping to save him for his record-breaking home run on their season-opening homestand. Then-commissioner Bowie Kuhn ordered Braves management to play Aaron the next day, and he narrowly missed a historic home run in the fifth inning. Aaron went on to set the record in Atlanta two nights later. The 1974 season also saw the debut of Hall of Fame radio announcer Marty Brennaman after Al Michaels left the Reds to broadcast for the San Francisco Giants. In 1975, the Big Red Machine lineup solidified with the "Great Eight" starting team of Johnny Bench (catcher), Tony Pérez (first base), Joe Morgan (second base), Dave Concepción (shortstop), Pete Rose (third base), Ken Griffey (right field), César Gerónimo (center field) and George Foster (left field). The starting pitchers included Don Gullett, Fred Norman, Gary Nolan, Jack Billingham, Pat Darcy and Clay Kirby. The bullpen featured Rawly Eastwick and Will McEnaney, who combined for 37 saves, and veterans Pedro Borbón and Clay Carroll. On Opening Day, Rose still played in left field and Foster was not a starter, while John Vukovich, an off-season acquisition, was the starting third baseman. While Vukovich was a superb fielder, he was a weak hitter. In May, with the team off to a slow start and trailing the Dodgers, Sparky Anderson made a bold move by moving Rose to third base, a position where he had very little experience, and inserting Foster in left field. This was the jolt that the Reds needed to propel them into first place, with Rose proving to be reliable on defense and the addition of Foster to the outfield giving the offense some added punch. During the season, the Reds compiled two notable streaks: winning 41 of 50 games in one stretch, and going an entire month without committing an error on defense. In the 1975 season, Cincinnati clinched the NL West with 108 victories before sweeping the Pittsburgh Pirates in three games to win the NL pennant. They went on to face the Boston Red Sox in the World Series, splitting the first four games and taking Game 5. After a three-day rain delay, the two teams met in Game 6, considered by many to be the best World Series game ever. The Reds were ahead 6–3 with five outs left when the Red Sox tied the game on former Red Bernie Carbo's three-run home run, his second pinch-hit, three-run homer in the series. After a few close calls both ways, Carlton Fisk hit a dramatic 12th-inning home run off the foul pole in left field to give the Red Sox a 7–6 win and force a decisive Game 7. Cincinnati prevailed the next day when Morgan's RBI single won Game 7 and gave the Reds their first championship in 35 years. The Reds have not lost a World Series game since Carlton Fisk's home run, a span of nine straight wins.
1976 saw a return of the same starting eight in the field. The starting rotation was again led by Nolan, Gullett, Billingham and Norman, while rookies Pat Zachry and Santo Alcalá rounded out an underrated staff in which four of the six starters had ERAs below 3.10. Eastwick, Borbón and McEnaney shared closer duties, recording 26, eight and seven saves, respectively. The Reds won the NL West by 10 games and went undefeated in the postseason, sweeping the Philadelphia Phillies (winning Game 3 in their final at-bat) to return to the World Series, where they beat the Yankees at the newly renovated Yankee Stadium in the first Series held there since 1964. This was only the second-ever sweep of the Yankees in the World Series, and the Reds became the first NL team since the 1921–22 New York Giants to win consecutive World Series championships. To date, the 1975 and 1976 Reds remain the last NL team to repeat as champions. Beginning with the 1970 National League pennant, the Reds beat one of the two Pennsylvania-based clubs – the Philadelphia Phillies and the Pittsburgh Pirates – to win each of their pennants (they beat the Pirates in 1970, 1972, 1975 and 1990, and the Phillies in 1976), making the Big Red Machine part of the rivalry between the two Pennsylvania teams. In 1979, Pete Rose added further fuel to the rivalry when he signed with the Phillies and helped them win their first World Series in 1980. The Machine dismantled (1977–1989) The late 1970s brought turmoil and change to the Reds. Popular Tony Pérez was sent to the Montreal Expos after the 1976 season, breaking up the Big Red Machine's starting lineup. Manager Sparky Anderson and general manager Bob Howsam later considered this trade to be the biggest mistake of their careers. Starting pitcher Don Gullett left via free agency and signed with the New York Yankees. In an effort to fill that gap, a trade with the Oakland Athletics for starting ace Vida Blue was arranged during the 1976–77 offseason. However, then-commissioner Bowie Kuhn vetoed the trade in order to maintain competitive balance in baseball; some have suggested that the actual reason had more to do with Kuhn's continued feud with Athletics owner Charlie Finley. On June 15, 1977, the Reds acquired pitcher Tom Seaver from the New York Mets for Pat Zachry, Doug Flynn, Steve Henderson and Dan Norman. In other deals that proved to be less successful, the Reds traded Gary Nolan to the California Angels for Craig Hendrickson; Rawly Eastwick to the St. Louis Cardinals for Doug Capilla; and Mike Caldwell to the Milwaukee Brewers for Rick O'Keeffe and Garry Pyka, as well as Rick Auerbach from Texas. The end of the Big Red Machine era was heralded by the replacement of general manager Bob Howsam with Dick Wagner. In his last season as a Red, Rose gave baseball a thrill as he challenged Joe DiMaggio's 56-game hitting streak, tying for the second-longest streak ever at 44 games. The streak came to an end in Atlanta when he struck out in his fifth at-bat of the game against Gene Garber. Rose also earned his 3,000th hit that season, on his way to becoming baseball's all-time hits leader when he rejoined the Reds in the mid-1980s. The year also witnessed the only no-hitter of Hall of Fame pitcher Tom Seaver's career, coming against the St. Louis Cardinals on June 16, 1978. After the 1978 season and two straight second-place finishes, Wagner fired manager Anderson in a move that proved to be unpopular.
Pete Rose, who had played almost every position for the team except pitcher, shortstop and catcher since 1963, signed with Philadelphia as a free agent. By 1979, the starters were Bench (catcher), Dan Driessen (first base), Morgan (second base), Concepción (shortstop) and Ray Knight (third base), with Griffey, Foster and Gerónimo again in the outfield. The pitching staff had experienced a complete turnover since 1976, except for Fred Norman. In addition to ace starter Tom Seaver, the remaining starters were Mike LaCoss, Bill Bonham and Paul Moskau. In the bullpen, only Borbón had remained. Dave Tomlin and Mario Soto worked middle relief, with Tom Hume and Doug Bair closing. The Reds won the 1979 NL West behind the pitching of Seaver, but were dispatched in the NL playoffs by the Pittsburgh Pirates. Game 2 featured a controversial play in which a ball hit by Pittsburgh's Phil Garner was caught by Reds outfielder Dave Collins but was ruled a trap, setting the Pirates up to take a 2–1 lead. The Pirates swept the series 3 games to 0 and went on to win the World Series against the Baltimore Orioles. The 1981 team fielded a strong lineup, with only Concepción, Foster and Griffey retaining their spots from the 1975–76 heyday. After Johnny Bench was able to play only a few games as catcher each year after 1980 due to ongoing injuries, Joe Nolan took over as starting catcher. Driessen and Bench shared first base, and Knight starred at third. Morgan and Gerónimo had been replaced at second base and center field by Ron Oester and Dave Collins, respectively. Mario Soto posted a banner year on the mound, surpassed only by Seaver's outstanding Cy Young runner-up season. LaCoss, Bruce Berenyi and Frank Pastore rounded out the starting rotation. Hume again led the bullpen as closer, joined by Bair and Joe Price. In 1981, the Reds had the best overall record in baseball, but finished second in the division in both of the half-seasons that resulted from a mid-season players' strike, and missed the playoffs. To commemorate this, a team photo was taken, accompanied by a banner that read "Baseball's Best Record 1981." By 1982, the Reds were a shell of the original Red Machine, having lost 101 games that year. Johnny Bench, after an unsuccessful transition to third base, retired a year later. After the heartbreak of 1981, general manager Dick Wagner pursued the strategy of ridding the team of veterans, including third baseman Knight and the entire starting outfield of Griffey, Foster and Collins. Bench, after being able to catch only seven games in 1981, was moved from platooning at first base to be the starting third baseman; Alex Treviño became the regular starting catcher. The outfield was staffed with Paul Householder, César Cedeño and future Colorado Rockies and Pittsburgh Pirates manager Clint Hurdle on Opening Day. Hurdle was an immediate bust, and rookie Eddie Milner took his place in the starting outfield early in the year. The highly touted Householder struggled throughout the year despite extensive playing time. Cedeño, while providing steady veteran play, was a disappointment, unable to recapture his glory days with the Houston Astros. The starting rotation featured the emergence of a dominant Mario Soto and strong years by Pastore and Bruce Berenyi, but Seaver was injured all year, and their efforts were wasted without a strong offensive lineup.
Tom Hume still led the bullpen along with Joe Price, but the colorful Brad "The Animal" Lesley was unable to consistently excel, and former All-Star Jim Kern was also a disappointment. Kern was also publicly upset over having to shave off his prominent beard to join the Reds, and helped force the issue of getting traded during mid-season by growing it back. The season also saw the midseason firing of manager John McNamara, who was replaced as skipper by Russ Nixon. The Reds fell to the bottom of the Western Division for the next few years. After the 1982 season, Seaver was traded back to the Mets. 1983 found Dann Bilardello behind the plate, Bench returning to part-time duty at first base, rookie Nick Esasky taking over at third base and Gary Redus taking over from Cedeño. Tom Hume's effectiveness as a closer had diminished, and no other consistent relievers emerged. Dave Concepción was the sole remaining starter from the Big Red Machine era. Wagner's tenure ended in 1983, when Howsam, the architect of the Big Red Machine, was brought back. The popular Howsam began his second term as the Reds' general manager by signing Cincinnati native Dave Parker as a free agent from Pittsburgh. In 1984, the Reds began to move up, depending on trades and some minor leaguers. In that season, Dave Parker, Dave Concepción and Tony Pérez were in Cincinnati uniforms. In August of the same year, Pete Rose was reacquired and hired to be the Reds player-manager. After raising the franchise from the grave, Howsam gave way to the administration of Bill Bergesch, who attempted to build the team around a core of highly regarded young players in addition to veterans like Parker. However, he was unable to capitalize on an excess of young and highly touted position players including Kurt Stillwell, Tracy Jones and Kal Daniels by trading them for pitching. Despite the emergence of Tom Browning, the 1985 Rookie of the Year runner-up who won 20 games that season, the rotation was devastated when arm injuries brought an early end to Mario Soto's career. Under Bergesch, the Reds finished second four times from 1985 to 1988. Among the highlights, Rose became the all-time hits leader, Tom Browning threw a perfect game, Eric Davis became the first player in baseball history to hit at least 35 home runs and steal 50 bases, and Chris Sabo was the 1988 National League Rookie of the Year. The Reds also had a bullpen star in John Franco, who was with the team from 1984 to 1989. Rose once had Concepción pitch late in a game at Dodger Stadium. In 1989, following the release of the Dowd Report, which accused Rose of betting on baseball games, Rose was banned from baseball by Commissioner Bart Giamatti, who declared him guilty of "conduct detrimental to baseball." Controversy also swirled around Reds owner Marge Schott, who was accused several times of ethnic and racial slurs. World championship and the end of an era (1990–2002) In 1987, general manager Bergesch was replaced by Murray Cook, who initiated a series of deals that would finally bring the Reds back to the championship, starting with acquisitions of Danny Jackson and José Rijo. An aging Dave Parker was let go after a revival of his career in Cincinnati following the Pittsburgh drug trials. Barry Larkin emerged as the starting shortstop over Kurt Stillwell, who, along with reliever Ted Power, was traded for Jackson. In 1989, Cook was succeeded by Bob Quinn, who put the final pieces of the championship puzzle together, with the acquisitions of Hal Morris, Billy Hatcher and Randy Myers.
In 1990, the Reds, under new manager Lou Piniella, shocked baseball by leading the NL West from wire to wire, the only NL team ever to do so. Winning their first nine games, they started 33–12 and maintained their lead throughout the year. Led by Chris Sabo, Barry Larkin, Eric Davis, Paul O'Neill and Billy Hatcher on the field, and by José Rijo, Tom Browning and the "Nasty Boys" – Rob Dibble, Norm Charlton and Randy Myers – on the mound, the Reds took out the Pirates in the NLCS. The Reds swept the heavily favored Oakland Athletics in four straight and extended a winning streak in the World Series to nine consecutive games. This Series, however, saw Eric Davis severely bruise a kidney diving for a fly ball in Game 4, and his play was greatly limited the next year. In 1992, Quinn was replaced in the front office by Jim Bowden. On the field, manager Lou Piniella wanted outfielder Paul O'Neill to be a power hitter to fill the void Eric Davis left when he was traded to the Los Angeles Dodgers in exchange for Tim Belcher. However, O'Neill only hit .246 with 14 home runs. After a losing season in 1991, the Reds returned to winning, but 90 wins was only enough for second place behind the division-winning Atlanta Braves. Before the season ended, Piniella got into an altercation with reliever Rob Dibble. In the offseason, Paul O'Neill was traded to the New York Yankees for outfielder Roberto Kelly, who was a disappointment for the Reds over the next couple of years, while O'Neill led a downtrodden Yankees franchise to a return to glory. Around this time, the Reds replaced their Big Red Machine–era uniforms with sleeveless pinstriped uniforms. For the 1993 season, Piniella was replaced by fan favorite Tony Pérez, but he lasted only 44 games at the helm before being replaced by Davey Johnson. With Johnson steering the team, the Reds made steady progress. In 1994, the Reds were in the newly created National League Central Division with the Chicago Cubs, St. Louis Cardinals, and rivals Pittsburgh Pirates and Houston Astros. By the time the strike hit, the Reds were a half-game ahead of the Houston Astros for first place in the NL Central. In 1995, the Reds won the division thanks to MVP Barry Larkin. After defeating the NL West champion Dodgers in the first NLDS since 1981, however, they lost to the Atlanta Braves. Team owner Marge Schott announced mid-season that Johnson would be gone by the end of the year, regardless of outcome, to be replaced by former Reds third baseman Ray Knight. Johnson and Schott had never gotten along, and she did not approve of Johnson living with his fiancée before they were married. In contrast, Knight and his wife, professional golfer Nancy Lopez, were friends of Schott. The team took a dive under Knight, who was unable to complete two full seasons as manager and was subjected to complaints in the press about his strict managerial style. In 1999, the Reds won 96 games, led by manager Jack McKeon, but lost to the New York Mets in a one-game playoff. Earlier that year, Schott sold controlling interest in the Reds to Cincinnati businessman Carl Lindner. Despite an 85–77 finish in 2000 and being named 1999 NL Manager of the Year, McKeon was fired after the 2000 season. The Reds did not have another winning season until 2010. Contemporary era (2003–present) Riverfront Stadium, by then known as Cinergy Field, was demolished in 2002.
Great American Ball Park opened in 2003, with high expectations for a team led by local favorites, including outfielder Ken Griffey Jr., shortstop Barry Larkin and first baseman Sean Casey. Although attendance improved considerably with the new ballpark, the Reds continued to lose. Schott had not invested much in the farm system since the early 1990s, leaving the team relatively thin on talent. After years of promises that the club was rebuilding toward the opening of the new ballpark, general manager Jim Bowden and manager Bob Boone were fired on July 28. This broke up the father-son combo of manager Bob Boone and third baseman Aaron Boone, and the latter was soon traded to the New York Yankees. Tragedy struck in November when Dernell Stenson, a promising young outfielder, was shot and killed during a carjacking. Following the season, Dan O'Brien was hired as the Reds' 16th general manager on October 27, 2003, succeeding Jim Bowden. The 2004 and 2005 seasons continued the trend of big hitting, poor pitching and poor records. Griffey Jr. joined the 500 home run club in 2004, but was again hampered by injuries. Adam Dunn emerged as a consistent home run hitter, his highlights including a home run against José Lima. He also broke the major league record for strikeouts in 2004. Although a number of free agents were signed before 2005, the Reds were quickly in last place, and manager Dave Miley was forced out midway through the 2005 season and replaced by Jerry Narron. Like many other small-market clubs, the Reds dispatched some of their veteran players and began entrusting their future to a young nucleus that included Adam Dunn and Austin Kearns. 2004 saw the opening of the Cincinnati Reds Hall of Fame (HOF), which had been in existence in name only since the 1950s, with player plaques, photos and other memorabilia scattered throughout their front offices. Ownership and management desired a standalone facility where the public could walk through interactive displays, see locker room recreations, watch videos of classic Reds moments and peruse historical items, such as the history of Reds uniforms dating back to the 1920s or a baseball marking every hit Pete Rose had during his career. Robert Castellini took over as controlling owner from Lindner in 2006. Castellini promptly fired general manager Dan O'Brien and hired Wayne Krivsky. The Reds made a run at the playoffs, but ultimately fell short. The 2007 season was again mired in mediocrity. Midway through the season, Jerry Narron was fired as manager and replaced by Pete Mackanin. The Reds ended up posting a winning record under Mackanin, but finished the season in fifth place in the Central Division. Mackanin was manager in an interim capacity only, and the Reds, seeking a big name to fill the spot, ultimately brought in Dusty Baker. Early in the 2008 season, Krivsky was fired and replaced by Walt Jocketty. Although the Reds did not win under Krivsky, he is credited with revamping the farm system and signing young talent that could potentially lead the team to success in the future. The Reds failed to post winning records in both 2008 and 2009. In 2010, with NL MVP Joey Votto and Gold Glovers Brandon Phillips and Scott Rolen, the Reds posted a 91–71 record and were NL Central champions. The following week, the Reds became only the second team in MLB history to be no-hit in a postseason game when Philadelphia's Roy Halladay shut down the National League's No. 1 offense in Game 1 of the NLDS. The Reds were eventually swept by Philadelphia in three games in the NLDS.
Coming off their surprising 2010 NL Central Division title, the Reds fell short of many expectations for the 2011 season. Multiple injuries and inconsistent starting pitching played a big role in their mid-season collapse, along with a less productive offense as compared to the previous year. The Reds ended the season at 79–83. They rebounded the following year to win the 2012 NL Central Division title. On September 28, 2012, Homer Bailey threw a 1–0 no-hitter against the Pittsburgh Pirates, marking the first Reds no-hitter since Tom Browning's perfect game in 1988. Finishing with a 97–65 record, the Reds earned the second seed in the Division Series and a matchup with the eventual World Series champion, the San Francisco Giants. After taking a 2–0 lead with road victories at AT&T Park, they headed home looking to win the series. However, they lost three straight at their home ballpark, becoming the first National League team since the Chicago Cubs in 1984 to lose a division series after leading 2–0. In the offseason, the team traded outfielder Drew Stubbs – as part of a three-team deal with the Arizona Diamondbacks and Cleveland Indians – to the Indians, and in turn received right fielder Shin-Soo Choo. On July 2, 2013, Homer Bailey pitched a no-hitter against the San Francisco Giants for a 4–0 Reds victory, making him the third pitcher in Reds history with two complete-game no-hitters in his career. Following six consecutive losses to close out the 2013 season, including a loss to the Pittsburgh Pirates at PNC Park in the National League wild-card playoff game, the Reds decided to fire Dusty Baker. During his six years as manager, Baker led the Reds to the playoffs three times; however, they never advanced beyond the first round. On October 22, 2013, the Reds hired pitching coach Bryan Price to replace Baker as manager. Under Price, the Reds were led by pitchers Johnny Cueto and the hard-throwing Aroldis Chapman. The offense was led by All-Star third baseman Todd Frazier, Joey Votto and Brandon Phillips, but although they had plenty of star power, the Reds never got off to a good start and finished the season in a lowly fourth place in the division with a 76–86 record. During the offseason, the Reds traded pitchers Alfredo Simón to the Tigers and Mat Latos to the Marlins. In return, they acquired young talents such as Eugenio Suárez and Anthony DeSclafani. They also acquired veteran slugger Marlon Byrd from the Phillies to play left field. The Reds' 2015 season was not much better, as they finished with the second-worst record in the league at 64–98, their worst finish since 1982. The Reds were forced to trade star pitchers Johnny Cueto and Mike Leake to the Kansas City Royals and San Francisco Giants, respectively, receiving minor league pitching prospects for both. Shortly after the season's end, the Reds traded Home Run Derby champion Todd Frazier to the Chicago White Sox and closing pitcher Aroldis Chapman to the New York Yankees. In 2016, the Reds broke the then-record for home runs allowed during a single season. They held this record until 2019, when it was broken by the Baltimore Orioles. The previous record holder was the 1996 Detroit Tigers, with 241 home runs yielded to opposing teams. The Reds went 68–94 and again were one of the worst teams in MLB. The Reds traded outfielder Jay Bruce to the Mets just before the July 31 non-waiver trade deadline in exchange for two prospects: infielder Dilson Herrera and pitcher Max Wotell.
During the offseason, the Reds traded Brandon Phillips to the Atlanta Braves in exchange for two minor league pitchers. On September 25, 2020, the Reds earned their first postseason berth since 2013, ultimately earning the seventh seed in the expanded 2020 playoffs. The 2020 season had been shortened to 60 games as a result of the COVID-19 pandemic. The Reds lost their first-round series against the Atlanta Braves two games to none. The Reds finished the 2021 season with a record of 83–79, good for third in the NL Central. In 2022, the Reds started out the regular season with a ghastly 3–22 record. Their three-game win total in 25 games had not been seen since the 2003 Detroit Tigers and was tied for second-worst overall behind the 1988 Baltimore Orioles, who started 2–23 in their first 25 games. They would finish the season with a record of 62–100. Ballpark The Cincinnati Reds play their home games at Great American Ball Park, located at 100 Joe Nuxhall Way, in downtown Cincinnati. Great American Ball Park opened in 2003 at a cost of $290 million and has a capacity of 42,271. Along with serving as the home field for the Reds, the stadium also houses the Cincinnati Reds Hall of Fame, which allows fans to walk through the history of the franchise and take part in many interactive baseball features. Great American Ball Park is the seventh home of the Cincinnati Reds, built immediately to the east of the site on which Riverfront Stadium, later named Cinergy Field, once stood. The first ballpark the Reds occupied was Bank Street Grounds, from 1882 to 1883, before they moved to League Park I in 1884, where they remained until 1893. Through the late 1890s and early 1900s, the Reds moved to two different parks, where they stayed for less than 10 years each: League Park II was the third home field for the Reds from 1894 to 1901, and then they moved to the Palace of the Fans, which served as the home of the Reds until 1911. It was in 1912 that the Reds moved to Crosley Field, which they called home for 58 years. Crosley served as the home field for the Reds for two World Series titles and five National League pennants. Beginning June 30, 1970, and during the dynasty of the Big Red Machine, the Reds played in Riverfront Stadium, appropriately named due to its location right by the Ohio River. Riverfront saw three World Series titles and five National League pennants. It was in the late 1990s that the city agreed to build two separate stadiums on the riverfront for the Reds and the Cincinnati Bengals. Thus, in 2003, the Reds began a new era with the opening of the current stadium. The Reds hold their spring training in Goodyear, Arizona, at Goodyear Ballpark. The Reds moved into this stadium and the Cactus League in 2010 after staying in the Grapefruit League for most of their history. The Reds share Goodyear Ballpark with their in-state rivals, the Cleveland Guardians. Logos and uniforms Logo Throughout the team's history, many different variations of the classic wishbone "C" logo have been introduced. In the team's early history, the Reds logo was simply the wishbone "C" with the word "REDS" inside, the only colors used being red and white. However, during the 1950s, when the team was renamed and re-branded as the Cincinnati Redlegs because of the association of the word "Reds" with communism, the color blue was introduced as part of the Reds color combination.
During the 1960s and 1970s, the Reds moved back toward their more traditional colors, abandoning the navy blue. A new logo also appeared with the new era of baseball in 1972, when the team moved away from the script "REDS" inside the "C," instead putting its mascot, Mr. Redlegs, in its place and placing the name of the team inside the wishbone "C." In the 1990s, the Reds returned to their more traditional early logos, and the current logo closely resembles the one the team used when it was founded. Uniforms Along with the logo, the Reds' uniforms have been changed many different times throughout their history. Following their departure from the "Redlegs" name in 1956, the Reds made a groundbreaking change to their uniforms with the use of sleeveless jerseys, seen only once before in the major leagues, with the Chicago Cubs. At home and away, the cap was all-red with a white wishbone "C" insignia. The long-sleeved undershirts were red. The uniform was plain white with a red wishbone "C" logo on the left and the uniform number on the right. On the road, the wishbone "C" was replaced by the mustachioed "Mr. Redlegs" logo, the pillbox-hat-wearing man with a baseball for a head. The home stockings were red with six white stripes. The away stockings had only three white stripes. The Reds changed uniforms again in 1961, when they replaced the traditional wishbone "C" insignia with an oval-shaped "C" logo, but continued to use the sleeveless jerseys. At home, the Reds wore white caps with a red bill and the oval "C" in red, and white sleeveless jerseys with red pinstripes, the oval "C-REDS" logo in black with red lettering on the left breast, and the number in red on the right. The gray away uniform included a gray cap with the red oval "C" and a red bill; its sleeveless jersey bore "CINCINNATI" in an arched block style across the front with the number below on the left. In 1964, players' last names were placed on the back of each set of uniforms, below the numbers. Those uniforms were scrapped after the 1966 season. However, the Cincinnati uniform design most familiar to baseball enthusiasts is the one whose basic form, with minor variations, held sway for 26 seasons from 1967 to 1992. Most significantly, the point was restored to the "C" insignia, making it a wishbone again. During this era, the Reds wore all-red caps both at home and on the road. The caps bore the simple wishbone "C" insignia in white. The uniforms were standard short-sleeved jerseys and standard trousers – white at home and gray on the road. The home uniform featured the wishbone "C-REDS" logo in red with white type on the left breast and the uniform number in red on the right. The away uniform bore "CINCINNATI" in an arched block style across the front with the uniform number below on the left. Red, long-sleeved undershirts and plain red stirrups over white sanitary stockings completed the basic design. The Reds wore pinstriped home uniforms in 1967 only, and the uniforms were flannel through 1971, changing to double-knits with pullover jerseys and belt-less pants in 1972. Those uniforms lasted 21 seasons, and the 1992 Reds were the last MLB team to date whose primary uniforms featured pullover jerseys and belt-less pants. The 1993 uniforms, which did away with the pullovers and brought back button-down jerseys, kept white and gray as the base colors for the home and away uniforms, but added red pinstripes.
The home jerseys were sleeveless, showing more of the red undershirts. The color scheme of the "C-REDS" logo on the home uniform was reversed, now red lettering on a white background. A new home cap was created that had a red bill and a white crown with red pinstripes and a red wishbone "C" insignia. The away uniform kept the all-red cap, but moved the uniform number to the left to more closely match the home uniform. The only additional change to these uniforms was the introduction of black as a primary color of the Reds in 1999, especially on their road uniforms. The Reds' latest uniform change came in December 2006, with a design that differed significantly from the uniforms worn during the previous eight seasons. The home caps returned to an all-red design with a white wishbone "C," lightly outlined in black. Caps with red crowns and a black bill became the new road caps. Additionally, the sleeveless jersey was abandoned for a more traditional design. The numbers and lettering for the names on the backs of the jerseys were changed to an early 1900s–style typeface, and a handlebar-mustached "Mr. Redlegs" – reminiscent of the logo used by the Reds in the 1950s and 1960s – was placed on the left sleeve. Awards and accolades Team captains Tommy Corcoran – 1900–1905 Joe Kelley – 1906 John Ganzel – 1907 Hans Lobert – 1909 Mike Mitchell – 1910–1912 Ivey Wingo – 1916 Heinie Groh – 1918–1921 Jake Daubert – 1922–1924 Edd Roush – 1925–1926 Bubbles Hargrave – 1927–1928 14 Pete Rose – 1970–1978 13 Dave Concepción – 1983–1988 11 Barry Larkin – 1997–2004 Retired numbers The Cincinnati Reds have retired 10 numbers in franchise history and also honor Jackie Robinson, whose number is retired league-wide across Major League Baseball. All of the retired numbers are located at Great American Ball Park behind home plate on the outside of the press box. Along with the retired players' and managers' numbers, the following broadcasters are honored with microphones by the broadcast booth: Marty Brennaman, Waite Hoyt and Joe Nuxhall. On April 15, 1997, No. 42 was retired throughout Major League Baseball in honor of Jackie Robinson. Baseball Hall of Famers Ford C. Frick Award recipients MLB All-Star Games The Reds have hosted the Major League Baseball All-Star Game five times: twice at Crosley Field (1938, 1953), twice at Riverfront Stadium (1970, 1988) and once at Great American Ball Park (2015). Ohio Cup The Ohio Cup was an annual pre-season baseball game, which pitted the Ohio rivals, the Cleveland Indians and the Cincinnati Reds, against each other. In its first incarnation, it was a single-game cup, played each year at minor-league Cooper Stadium in Columbus, and was staged just days before the start of each new Major League Baseball season. A total of eight Ohio Cup games were played, between 1989 and 1996, with the Indians winning six of them. The winner of the game each year was awarded the Ohio Cup in postgame ceremonies. The Ohio Cup was a favorite among baseball fans in Columbus, with attendances regularly topping 15,000. The Ohio Cup games ended with the introduction of regular-season interleague play in 1997. Thereafter, the two teams competed annually in the regular-season Battle of Ohio or Buckeye Series. The Ohio Cup was revived in 2008 as a reward for the team with the better overall record in the Reds–Indians series each year. Media Radio The Reds' flagship radio station has been WLW (700 AM) since 1969. Prior to that, the Reds were heard over WKRC, WCPO, WSAI and WCKY.
WLW, a 50,000-watt station, is "clear channel" in more than one way, as iHeartMedia owns the "blowtorch" outlet, which is also known as "The Nation's Station." Reds games can be heard on over 100 local radio stations through the Reds on Radio Network. Since 2020, the Reds broadcast team has been former Pensacola Blue Wahoos radio play-by-play announcer Tommy Thrall and retired relief pitcher Jeff Brantley on color commentary. Marty Brennaman called Reds games from 1974 to 2019, most famously alongside former Reds pitcher and color commentator Joe Nuxhall through 2007. Brennaman has won the Ford C. Frick Award for his work, which includes his famous call of "... and this one belongs to the Reds!" after a win. Nuxhall preceded Brennaman in the Reds' booth, serving from 1967 (the year after his retirement as an active player) until his death in 2007. (From 2004 to 2007, Nuxhall only called select home games.) In 2007, Thom Brennaman, a veteran announcer seen nationwide on Fox Sports, joined his father Marty in the radio booth. Brantley, formerly of ESPN, also joined the network in 2007. Three years later, in 2010, Brantley and Thom Brennaman's increased TV schedule led to more appearances for Jim Kelch, who had filled in on the network since 2008. Kelch's contract expired after the 2017 season. In 2019, Thrall was brought in to provide in-game and post-game coverage, as well as act as a fill-in play-by-play announcer. He succeeded Brennaman when the latter retired at the end of the 2019 season. Television Televised games are seen exclusively on Bally Sports Ohio and Bally Sports Indiana. In addition, Bally Sports South televises Bally Sports Ohio broadcasts of Reds games to Tennessee and western North Carolina. George Grande, who hosted the first SportsCenter on ESPN in 1979, was the play-by-play announcer, usually alongside Chris Welsh, from 1993 until his retirement during the final game of the 2009 season. Since 2009, Grande has worked part-time for the Reds as play-by-play announcer in September when Thom Brennaman is covering the NFL for Fox Sports. He has also made guest appearances throughout each season. Brennaman had been the head play-by-play commentator since 2010, with Welsh and Brantley sharing time as the color commentators. Paul Keels, who left in 2011 to become the play-by-play announcer for the Ohio State Buckeyes Radio Network, was the Reds' backup play-by-play television announcer during the 2010 season. Jim Kelch served as Keels' replacement. The Reds also added former Reds first baseman Sean Casey – known as "The Mayor" by Reds fans – to do color commentary for approximately 15 games in 2011. NBC affiliate WLWT carried Reds games from 1948 to 1995. Those who called games for WLWT include Waite Hoyt, Ray Lane, Steve Physioc, Johnny Bench, Joe Morgan and Ken Wilson. Al Michaels, who established a long career with ABC and NBC, spent three years in Cincinnati early in his career. The last regularly scheduled, over-the-air broadcasts of Reds games were on WSTR-TV from 1996 to 1998. Since 2010, WKRC-TV has simulcast Opening Day games with Fox/Bally Sports Ohio, which it came into common ownership with in 2019. On August 19, 2020, Thom Brennaman was caught uttering a homophobic slur during a game against the Kansas City Royals. Brennaman eventually apologized for the incident and was suspended, but on September 26, he resigned from his duties as the Reds' TV play-by-play announcer.
This ended the Brennamans' 46-year association with the Reds franchise, dating back to Marty's first season in 1974. Sideline reporter Jim Day served as the interim play-by-play voice for the remainder of the 2020 season, after which the Reds hired John Sadak to serve as their television play-by-play announcer. Community involvement The Reds Community Fund, founded in 2001, is focused on the youth of the Greater Cincinnati area with the goal of improving the lives of participants by leveraging the traditions of the Reds. The fund sponsors the Reviving Baseball in Inner Cities (RBI) program with a goal of 30–50 young people graduating high school and attending college annually. It also holds an annual telethon, raising in excess of $120,000. An example of the fund's community involvement is its renovation of Hoffman Fields in the Evanston neighborhood of the city, upgrading the entire recreation complex; in all, the fund has renovated over 400 baseball diamonds at 200 locations throughout the region. During the COVID-19 pandemic in 2020, since no spectators were allowed at MLB games, the Reds offered fans the opportunity to purchase paper cutouts of their own photographs in the stands at Great American Ball Park. The promotion raised over $300,000 for the fund, more than the fund's traditional events such as Redsfest, the Redlegs Run, an annual golf outing and the Fox Sports Ohio Telethon. Roster Minor league affiliations The Cincinnati Reds farm system consists of six minor league affiliates. References Further reading Gitlin, Marty. Cincinnati Reds (ABDO, 2015). External links Reds Minor Leagues News SCSR / 19th Century Cincinnati Base Ball Voices of Oklahoma interview with Johnny Bench. First-person interview conducted on March 28, 2012, with Johnny Bench, Hall of Fame Catcher for the Cincinnati Reds. Major League Baseball teams Cactus League Sports teams in Cincinnati Baseball in Cincinnati Baseball teams established in 1882 1882 establishments in Ohio
Central Powers
The Central Powers, also known as the Central Empires, was one of the two main coalitions that fought in World War I (1914–1918). It consisted of the German Empire, Austria-Hungary, the Ottoman Empire, and the Kingdom of Bulgaria, and was also known as the Quadruple Alliance. The Central Powers' origin was the alliance of Germany and Austria-Hungary in 1879. Despite having nominally joined the Triple Alliance before, Italy did not take part in World War I on the side of the Central Powers. The Ottoman Empire and Bulgaria did not join until after World War I had begun. The Central Powers faced and were defeated by the Allied Powers that had formed around the Triple Entente. Member states At the start of the war, the Central Powers consisted of the German Empire and the Austro-Hungarian Empire. The Ottoman Empire joined later in 1914, followed by the Kingdom of Bulgaria in 1915. The name "Central Powers" is derived from the location of these countries; all four were located between the Russian Empire in the east and France and the United Kingdom in the west. Finland and Lithuania joined in 1918, after they became independent from the collapsed Russian Empire. Combatants Germany War justifications In early July 1914, in the aftermath of the assassination of Austro-Hungarian Archduke Franz Ferdinand and faced with the prospect of war between Austria-Hungary and Serbia, Kaiser Wilhelm II and the German government informed the Austro-Hungarian government that Germany would uphold its alliance with Austria-Hungary and defend it from possible Russian intervention if a war between Austria-Hungary and Serbia took place. When Russia enacted a general mobilization, Germany viewed the act as provocative. The Russian government promised Germany that its general mobilization did not mean preparation for war with Germany but was a reaction to the tensions between Austria-Hungary and Serbia. The German government regarded the Russian promise of no war with Germany as nonsense in light of its general mobilization, and Germany, in turn, mobilized for war. On 1 August, Germany sent an ultimatum to Russia stating that since both Germany and Russia were in a state of military mobilization, an effective state of war existed between the two countries. Later that day, France, an ally of Russia, declared a state of general mobilization. In August 1914, Germany attacked Russia, citing Russian aggression as demonstrated by the mobilization of the Russian army, which had resulted in Germany mobilizing in response. After Germany declared war on Russia, France, under its alliance with Russia, prepared a general mobilization in expectation of war. On 3 August 1914, Germany responded to this action by declaring war on France. Germany, facing a two-front war, enacted what was known as the Schlieffen Plan, which involved German armed forces moving through Belgium and swinging south into France and towards the French capital of Paris. The plan was intended to produce a quick victory over the French and allow German forces to concentrate on the Eastern Front. Belgium was a neutral country and would not accept German forces crossing its territory. Germany disregarded Belgian neutrality and invaded the country to launch an offensive towards Paris. This caused Great Britain to declare war against the German Empire, as the action violated the Treaty of London of 1839, which both nations had signed, guaranteeing Belgian neutrality.
Subsequently, several states declared war on Germany in late August 1914, with Italy declaring war on Germany in August 1916, the United States in April 1917, and Greece in July 1917. Colonies and dependencies Europe Upon its founding in 1871, the German Empire controlled Alsace-Lorraine as an "imperial territory" incorporated from France after the Franco-Prussian War. It was held as part of Germany's sovereign territory. Africa Germany held multiple African colonies at the time of World War I: Kamerun, German East Africa, Togoland, and German Southwest Africa. Three of Germany's four African colonies were invaded and occupied by Allied forces during the war; only Paul von Lettow-Vorbeck's German force in German East Africa successfully held out against the Allies until it accepted an armistice. Asia The Kiautschou Bay concession was a German dependency in East Asia leased from China in 1898. Japanese forces occupied it following the Siege of Tsingtao. Pacific German New Guinea was a German protectorate in the Pacific. It was occupied by Australian forces in 1914. German Samoa was a German protectorate following the Tripartite Convention. It was occupied by the New Zealand Expeditionary Force in 1914. Austria-Hungary War justifications Austria-Hungary regarded the assassination of Archduke Franz Ferdinand as being orchestrated with the assistance of Serbia. It viewed the assassination as setting a dangerous precedent, encouraging its South Slav population to rebel and threatening to tear apart the multinational country. Austria-Hungary formally sent an ultimatum to Serbia demanding a full-scale investigation of Serbian government complicity in the assassination and complete compliance by Serbia in agreeing to the terms demanded by Austria-Hungary. Serbia agreed to most of the demands. However, Austria-Hungary viewed this as insufficient and used this lack of full compliance to justify military intervention. These demands have been viewed as a diplomatic cover for what was going to be an inevitable Austro-Hungarian declaration of war on Serbia. Russia had warned Austria-Hungary that the Russian government would not tolerate Austria-Hungary invading Serbia. However, with Germany supporting Austria-Hungary's actions, the Austro-Hungarian government hoped that Russia would not intervene and that the conflict with Serbia would remain a regional conflict. Austria-Hungary's invasion of Serbia resulted in Russia declaring war on the country, and Germany, in turn, declared war on Russia, setting off the clash of alliances that resulted in the World War. Territory Austria-Hungary was internally divided into two states with their own governments, joined in union under the Habsburg throne. Austrian Cisleithania contained various duchies and principalities but also the Kingdom of Bohemia, the Kingdom of Dalmatia, the Kingdom of Galicia and Lodomeria. Hungarian Transleithania comprised the Kingdom of Hungary and the Kingdom of Croatia-Slavonia. In Bosnia and Herzegovina, sovereign authority was shared by both Austria and Hungary. Ottoman Empire War justifications The Ottoman Empire joined the war on the side of the Central Powers in November 1914. The Ottoman Empire had gained strong economic connections with Germany through the Berlin-to-Baghdad railway project, which was still incomplete at the time. The Ottoman Empire made a formal alliance with Germany signed on 2 August 1914.
The alliance treaty expected that the Ottoman Empire would become involved in the conflict in a short amount of time. However, for the first several months of the war, the Ottoman Empire maintained neutrality, though it allowed a German naval squadron to enter and remain near the Bosphorus strait. Ottoman officials informed the German government that the country needed time to prepare for conflict. Germany provided financial aid and weapons shipments to the Ottoman Empire. After the German government escalated its pressure, demanding that the Ottoman Empire fulfill its treaty obligations or else be expelled from the alliance and cut off from economic and military assistance, the Ottoman government entered the war: its recently acquired cruisers from Germany, the Yavuz Sultan Selim (formerly SMS Goeben) and the Midilli (formerly SMS Breslau), launched a naval raid on the Russian port of Odessa, thus engaging in military action in accordance with its alliance obligations with Germany. Russia and the Triple Entente declared war on the Ottoman Empire. Bulgaria War justifications Bulgaria was still resentful after its defeat in July 1913 at the hands of Serbia, Greece and Romania. It signed a treaty of defensive alliance with the Ottoman Empire on 19 August 1914. Bulgaria was the last country to join the Central Powers, doing so in October 1915 by declaring war on Serbia. It invaded Serbia in conjunction with German and Austro-Hungarian forces. Bulgaria held claims on the region of Vardar Macedonia, then held by Serbia following the Balkan Wars of 1912–1913 and the Treaty of Bucharest (1913). As a condition of entering World War I on the side of the Central Powers, Bulgaria was granted the right to reclaim that territory. Declarations of war Co-belligerents South African Republic In opposition to offensive operations by the Union of South Africa, which had joined the war, Boer army officers in what is now known as the Maritz Rebellion "refounded" the South African Republic in September 1914. Germany assisted the rebels, some of whom operated in and out of the German colony of German South-West Africa. The rebels were all defeated or captured by South African government forces by 4 February 1915. Senussi Order The Senussi Order was a Muslim political-religious tariqa (Sufi order) and clan in Libya, previously under Ottoman control, which had been lost to Italy in 1912. In 1915, they were courted by the Ottoman Empire and Germany, and Grand Senussi Ahmed Sharif as-Senussi declared jihad and attacked the Italians in Libya and British-controlled Egypt in the Senussi Campaign. Sultanate of Darfur In 1915, the Sultanate of Darfur renounced allegiance to the Sudan government and aligned with the Ottomans. The Anglo-Egyptian Darfur Expedition acted preemptively in March 1916 to prevent an attack on Sudan and took control of the Sultanate by November 1916. Zaian Confederation The Zaian Confederation began fighting France in the Zaian War to prevent French expansion into Morocco. The fighting lasted from 1914 to 1921, continuing after the First World War had ended. The Central Powers (mainly the Germans) attempted to incite unrest in hopes of diverting French resources from Europe. Client states With the Bolshevik attack of late 1917, the General Secretariat of Ukraine sought military protection first from the Central Powers and later from the armed forces of the Entente. The Ottoman Empire also had its own allies in Azerbaijan and the Northern Caucasus.
The three nations fought alongside each other under the Army of Islam in the Battle of Baku. German client states Poland The Kingdom of Poland was a client state of Germany proclaimed in 1916 and established on 14 January 1917. This government was recognized by the emperors of Germany and Austria-Hungary in November 1916, and it adopted a constitution in 1917. Germany decided to create a Polish state in an attempt to legitimize its military occupation among the Polish inhabitants, following up on its 1915 propaganda telling them that German soldiers were arriving as liberators to free Poland from subjugation by Russia. The German government used the state, alongside punitive threats, to induce Polish landowners living in the German-occupied Baltic territories to sell their Baltic property to Germans and move to Poland. Efforts were made to induce similar emigration of Poles from Prussia to the state. Lithuania The Kingdom of Lithuania was a client state of Germany created on 16 February 1918. Belarus The Belarusian People's Republic was a client state of Germany created on 9 March 1918. Ukraine The Ukrainian State was a client state of Germany led by Hetman Pavlo Skoropadskyi from 29 April 1918, after the government of the Ukrainian People's Republic was overthrown. The Crimean Regional Government was a client state of Germany created on 25 June 1918. It was officially part of the Ukrainian State but acted separately from the central government. The Kuban People's Republic eventually voted to join the Ukrainian State. Courland and Semigallia The Duchy of Courland and Semigallia was a client state of Germany created on 8 March 1918. Baltic State The Baltic State, also known as the "United Baltic Duchy", was proclaimed on 22 September 1918 by the Baltic German ruling class. It was to encompass the former Estonian governorates and incorporate the recently established Courland and Semigallia into a unified state. An armed force in the form of the Baltische Landeswehr was created in November 1918, just before the surrender of Germany, and went on to participate in the Russian Civil War in the Baltics. Finland Finland had been an autonomous Grand Duchy within the Russian Empire since 1809, and the collapse of the Russian Empire in 1917 gave it its independence. Following the end of the Finnish Civil War, in which Germany supported the "Whites" against the Soviet-backed labour movement, in May 1918, there were moves to create a Kingdom of Finland. A German prince was elected, but the Armistice intervened. Georgia The Democratic Republic of Georgia declared independence in 1918, which led to border conflicts between the newly formed republic and the Ottoman Empire. Soon after, the Ottoman Empire invaded the republic and quickly reached Borjomi. This forced Georgia to ask for help from Germany, which was granted. Germany forced the Ottomans to withdraw from Georgian territories and recognize Georgian sovereignty. Germany, Georgia and the Ottomans signed a peace treaty, the Treaty of Batum, which ended the conflict between Georgia and the latter two. In return, Georgia became a German "ally". This period of Georgian-German friendship was known as the German Caucasus expedition. Don The Don Republic was founded on 18 May 1918. Its ataman, Pyotr Krasnov, portrayed himself as willing to serve as a pro-German warlord.
Ottoman client states Jabal Shammar Jabal Shammar was an Arab state in the Middle East that was closely associated with the Ottoman Empire. Azerbaijan In 1918, the Azerbaijan Democratic Republic, facing Bolshevik revolution and opposition from the Muslim Musavat Party, was occupied by the Ottoman Empire, which expelled the Bolsheviks while supporting the Musavat Party. The Ottoman Empire maintained a presence in Azerbaijan until the end of the war in November 1918. Mountain Republic The Mountainous Republic of the Northern Caucasus was associated with the Central Powers. Controversial cases States listed in this section were not officially members of the Central Powers. Still, during the war, they cooperated with one or more Central Powers members on a level that makes their neutrality disputable. Ethiopia The Ethiopian Empire was officially neutral throughout World War I but widely suspected of sympathy for the Central Powers between 1915 and 1916. At the time, Ethiopia was one of only two fully independent states in Africa (the other being Liberia) and a major power in the Horn of Africa. Its ruler, Lij Iyasu, was widely suspected of harbouring pro-Islamic sentiments and being sympathetic to the Ottoman Empire. The German Empire also attempted to reach out to Iyasu, dispatching several unsuccessful expeditions to the region to encourage Ethiopia to collaborate in an Arab Revolt-style uprising in East Africa. One of the unsuccessful expeditions was led by Leo Frobenius, a celebrated ethnographer and personal friend of Kaiser Wilhelm II. Under Iyasu's direction, Ethiopia probably supplied weapons to the Muslim Dervish rebels during the Somaliland Campaign of 1915 to 1916, indirectly helping the Central Powers' cause. Fearing the rising influence of Iyasu and the Ottoman Empire, the Christian nobles of Ethiopia conspired against Iyasu during 1915. Iyasu was first excommunicated by the Ethiopian Orthodox Patriarch and eventually deposed in a coup d'état on 27 September 1916. A less pro-Ottoman regent, Ras Tafari Makonnen, was installed on the throne. Non-state combatants Other movements supported the efforts of the Central Powers for their own reasons, such as the radical Irish Nationalists who launched the Easter Rising in Dublin in April 1916; they referred to their "gallant allies in Europe". However, most Irish Nationalists supported the British and allied war effort until 1916, by which point the Irish political landscape was changing. In 1914, Józef Piłsudski was permitted by Germany and Austria-Hungary to form independent Polish legions. Piłsudski wanted his legions to help the Central Powers defeat Russia and then side with France and the UK and win the war with them. Armistice and treaties Bulgaria signed an armistice with the Allies on 29 September 1918, following a successful Allied advance in Macedonia. The Ottoman Empire followed suit on 30 October 1918 in the face of British and Arab gains in Palestine and Syria. Austria and Hungary concluded ceasefires separately during the first week of November following the disintegration of the Habsburg Empire and the Italian offensive at Vittorio Veneto; Germany signed the armistice ending the war on the morning of 11 November 1918 after the Hundred Days Offensive, and a succession of advances by New Zealand, Australian, Canadian, Belgian, British, French and US forces in north-eastern France and Belgium. There was no unified treaty ending the war; the Central Powers were dealt with in separate treaties.
Leaders See also Central Powers intervention in the Russian Civil War Color books, transcripts of official documents released by each nation early in the war Diplomatic history of World War I Home front during World War I, covering all major countries International relations of the Great Powers (1814–1919) Axis powers Kaiserreich
https://en.wikipedia.org/wiki/Classical%20liberalism
Classical liberalism
Classical liberalism is a political tradition and a branch of liberalism that advocates free markets and laissez-faire economics and civil liberties under the rule of law, with special emphasis on individual autonomy, limited government, economic freedom, political freedom and freedom of speech. It came to full flower in the early 18th century, building on ideas stemming at least as far back as the 13th century within the Iberian, Anglo-Saxon, and central European contexts, and was foundational to the American Revolution and the "American Project" more broadly. Notable liberal individuals whose ideas contributed to classical liberalism include John Locke, Jean-Baptiste Say, Thomas Malthus, and David Ricardo. It drew on classical economics, especially the economic ideas espoused by Adam Smith in Book One of The Wealth of Nations, and on a belief in natural law, social progress, and utilitarianism. In contemporary times, Friedrich Hayek, Milton Friedman, Ludwig von Mises, Thomas Sowell, George Stigler and Larry Arnhart are seen as the most prominent advocates of classical liberalism. Classical liberalism, contrary to liberal branches like social liberalism, looks more negatively on social policies, taxation and state involvement in the lives of individuals, and it advocates deregulation. Until the Great Depression and the rise of social liberalism, it went under the name of economic liberalism. The term classical liberalism was applied as a retronym to distinguish earlier 19th-century liberalism from social liberalism. In modern usage, the unqualified term liberalism often means social liberalism in the United States but classical liberalism in Europe and Australia. In the context of American politics, "classical liberalism" may be described as "fiscally conservative" and "socially liberal". Despite this, classical liberals tend to reject the right's higher tolerance for economic protectionism and the left's inclination toward collective group rights and political correctness, owing to classical liberalism's central principle of individualism. Classical liberalism is also considered closely tied to right-libertarianism in the United States. In Europe, liberalism, whether social (especially radical) or conservative, is itself classical liberalism, so the term classical liberalism there mainly refers to centre-right economic liberalism. Evolution of core beliefs Core beliefs of classical liberals included new ideas, which departed from both the older conservative idea of society as a family and from the later sociological concept of society as a complex set of social networks. Classical liberals believed that individuals are "egoistic, coldly calculating, essentially inert and atomistic" and that society is no more than the sum of its individual members. Classical liberals agreed with Thomas Hobbes that government had been created by individuals to protect themselves from each other and that the purpose of government should be to minimize conflict between individuals that would otherwise arise in a state of nature. These beliefs were complemented by a belief that labourers could be best motivated by financial incentive. This belief led to the passage of the Poor Law Amendment Act 1834, which limited the provision of social assistance, based on the idea that markets are the mechanism that most efficiently leads to wealth. 
Adopting Thomas Robert Malthus's population theory, they saw poor urban conditions as inevitable, believed population growth would outstrip food production, and regarded that consequence as desirable because starvation would help limit population growth. They opposed any income or wealth redistribution, believing it would be dissipated by the lowest orders. Drawing on ideas of Adam Smith, classical liberals believed that it is in the common interest that all individuals be able to secure their own economic self-interest. They were critical of what would come to be the idea of the welfare state as interfering in a free market. Despite Smith's resolute recognition of the importance and value of labour and of labourers, classical liberals criticized the pursuit of labour's group rights at the expense of individual rights while accepting corporations' rights, which led to inequality of bargaining power. Classical liberals argued that individuals should be free to obtain work from the highest-paying employers, while the profit motive would ensure that products that people desired were produced at prices they would pay. In a free market, both labour and capital would receive the greatest possible reward, while production would be organized efficiently to meet consumer demand. Classical liberals argued for what they called a minimal state, limited to the following functions: A government to protect individual rights and to provide services that cannot be provided in a free market. A common national defence to provide protection against foreign invaders. Laws to provide protection for citizens from wrongs committed against them by other citizens, which included protection of private property, enforcement of contracts and common law. Building and maintaining public institutions. Public works that included a stable currency, standard weights and measures and building and upkeep of roads, canals, harbours, railways, communications and postal services. Classical liberals asserted that rights are of a negative nature and therefore stipulate that other individuals and governments are to refrain from interfering with the free market, opposing social liberals who assert that individuals have positive rights, such as the right to vote, the right to an education, the right to health care, and the right to a living wage. For society to guarantee positive rights, it requires taxation over and above the minimum needed to enforce negative rights. Core beliefs of classical liberals did not necessarily include democracy or government by majority vote of citizens, because "there is nothing in the bare idea of majority rule to show that majorities will always respect the rights of property or maintain rule of law". For example, James Madison argued for a constitutional republic with protections for individual liberty over a pure democracy, reasoning that in a pure democracy a "common passion or interest will, in almost every case, be felt by a majority of the whole ... and there is nothing to check the inducements to sacrifice the weaker party". In the late 19th century, classical liberalism developed into neoclassical liberalism, which argued for government to be as small as possible to allow the exercise of individual freedom. In its most extreme form, neoclassical liberalism advocated social Darwinism. Right-libertarianism is a modern form of neoclassical liberalism. 
However, Edwin Van de Haar states that although libertarianism is influenced by classical liberal thought, there are significant differences between them. Classical liberalism refuses to give priority to liberty over order and therefore does not exhibit the hostility to the state which is the defining feature of libertarianism. As such, right-libertarians believe classical liberals favor too much state involvement, arguing that they do not have enough respect for individual property rights and lack sufficient trust in the workings of the free market and its spontaneous order, leading to support of a much larger state. Right-libertarians also part with classical liberals over what they see as excessive support for central banks and monetarist policies. Typology of beliefs Friedrich Hayek identified two different traditions within classical liberalism, namely the British tradition and the French tradition. Hayek saw the British philosophers Bernard Mandeville, David Hume, Adam Smith, Adam Ferguson, Josiah Tucker and William Paley as representative of a tradition that articulated beliefs in empiricism, the common law, and in traditions and institutions which had spontaneously evolved but were imperfectly understood. The French tradition included Jean-Jacques Rousseau, Marquis de Condorcet, the Encyclopedists and the Physiocrats. This tradition believed in rationalism and sometimes showed hostility to tradition and religion. Hayek conceded that the national labels did not exactly correspond to those belonging to each tradition, since he saw the Frenchmen Montesquieu, Benjamin Constant and Alexis de Tocqueville as belonging to the British tradition and the Britons Thomas Hobbes, Joseph Priestley, Richard Price and Thomas Paine as belonging to the French tradition. Hayek also rejected the label laissez-faire as originating from the French tradition and alien to the beliefs of Hume and Smith. Guido De Ruggiero also identified differences between "Montesquieu and Rousseau, the English and the democratic types of liberalism" and argued that there was a "profound contrast between the two Liberal systems". He claimed that the spirit of "authentic English Liberalism" had "built up its work piece by piece without ever destroying what had once been built, but basing upon it every new departure". This liberalism had "insensibly adapted ancient institutions to modern needs" and "instinctively recoiled from all abstract proclamations of principles and rights". Ruggiero claimed that this liberalism was challenged by what he called the "new Liberalism of France" that was characterised by egalitarianism and a "rationalistic consciousness". In 1848, Francis Lieber distinguished between what he called "Anglican and Gallican Liberty". Lieber asserted that "independence in the highest degree, compatible with safety and broad national guarantees of liberty, is the great aim of Anglican liberty, and self-reliance is the chief source from which it draws its strength". On the other hand, Gallican liberty "is sought in government ... . [T]he French look for the highest degree of political civilisation in organisation, that is, in the highest degree of interference by public power". History Great Britain Classical liberalism in Britain traces its roots to the Whigs and radicals, and was heavily influenced by French physiocracy. Whiggery had become a dominant ideology following the Glorious Revolution of 1688 and was associated with supporting the British Parliament, upholding the rule of law, and defending landed property. 
The origins of rights were seen as being in an ancient constitution, which had existed from time immemorial. These rights, which some Whigs considered to include freedom of the press and freedom of speech, were justified by custom rather than as natural rights. These Whigs believed that the power of the executive had to be constrained. While they supported limited suffrage, they saw voting as a privilege rather than as a right. However, there was no consistency in Whig ideology, and diverse writers including John Locke, David Hume, Adam Smith and Edmund Burke were all influential among Whigs, although none of them was universally accepted. From the 1790s to the 1820s, British radicals concentrated on parliamentary and electoral reform, emphasising natural rights and popular sovereignty. Richard Price and Joseph Priestley adapted the language of Locke to the ideology of radicalism. The radicals saw parliamentary reform as a first step toward dealing with their many grievances, including the treatment of Protestant Dissenters, the slave trade, high prices, and high taxes. There was greater unity among classical liberals than there had been among Whigs. Classical liberals were committed to individualism, liberty, and equal rights. They believed these goals required a free economy with minimal government interference. Some elements of Whiggery were uncomfortable with the commercial nature of classical liberalism; these elements became associated with conservatism. Classical liberalism was the dominant political theory in Britain from the early 19th century until the First World War. Its notable victories were the Catholic Emancipation Act of 1829, the Reform Act of 1832 and the repeal of the Corn Laws in 1846. The Anti-Corn Law League brought together a coalition of liberal and radical groups in support of free trade under the leadership of Richard Cobden and John Bright, who opposed aristocratic privilege, militarism, and public expenditure and believed that the backbone of Great Britain was the yeoman farmer. Their policies of low public expenditure and low taxation were adopted by William Gladstone when he became Chancellor of the Exchequer and later Prime Minister. Classical liberalism was often associated with religious dissent and nonconformism. Although classical liberals aspired to a minimum of state activity, they accepted the principle of government intervention in the economy from the early 19th century on, with passage of the Factory Acts. From around 1840 to 1860, laissez-faire advocates of the Manchester School and writers in The Economist were confident that their early victories would lead to a period of expanding economic and personal liberty and world peace, but they would face reversals as government intervention and activity continued to expand from the 1850s. Jeremy Bentham and James Mill, although advocates of laissez-faire, non-intervention in foreign affairs, and individual liberty, believed that social institutions could be rationally redesigned through the principles of utilitarianism. The Conservative Prime Minister Benjamin Disraeli rejected classical liberalism altogether and advocated Tory democracy. By the 1870s, Herbert Spencer and other classical liberals concluded that historical development was turning against them. By the First World War, the Liberal Party had largely abandoned classical liberal principles. 
The changing economic and social conditions of the 19th century led to a division between neo-classical and social (or welfare) liberals, who, while agreeing on the importance of individual liberty, differed on the role of the state. Neo-classical liberals, who called themselves "true liberals", saw Locke's Second Treatise as the best guide and emphasised "limited government", while social liberals supported government regulation and the welfare state. Herbert Spencer in Britain and William Graham Sumner in the United States were the leading neo-classical liberal theorists of the 19th century. The evolution from classical to social/welfare liberalism is reflected, for example, in Britain in the evolution of the thought of John Maynard Keynes. United States In the United States, liberalism took strong root because it had little opposition to its ideals, whereas in Europe liberalism was opposed by many reactionary or feudal interests such as the nobility; the aristocracy, including army officers; the landed gentry; and the established church. Thomas Jefferson adopted many of the ideals of liberalism, but in the Declaration of Independence changed Locke's "life, liberty and property" to the more socially liberal "Life, Liberty and the pursuit of Happiness". As the United States grew, industry became a larger and larger part of American life, and during the term of its first populist President, Andrew Jackson, economic questions came to the forefront. The economic ideas of the Jacksonian era were almost universally the ideas of classical liberalism. Freedom, according to classical liberals, was maximised when the government took a "hands off" attitude toward the economy. Historian Kathleen G. Donohue argues: [A]t the center of classical liberal theory [in Europe] was the idea of laissez-faire. To the vast majority of American classical liberals, however, laissez-faire did not mean no government intervention at all. On the contrary, they were more than willing to see government provide tariffs, railroad subsidies, and internal improvements, all of which benefited producers. What they condemned was intervention on behalf of consumers. The leading magazine The Nation espoused liberalism every week starting in 1865 under the influential editor Edwin Lawrence Godkin (1831–1902). The ideas of classical liberalism remained essentially unchallenged until a series of depressions, thought to be impossible according to the tenets of classical economics, led to economic hardship from which the voters demanded relief. In the words of William Jennings Bryan, "You shall not crucify this nation on a cross of gold". Classical liberalism remained the orthodox belief among American businessmen until the Great Depression. The Great Depression in the United States saw a sea change in liberalism, with priority shifting from the producers to consumers, and Franklin D. Roosevelt's New Deal represented the dominance of modern liberalism in politics for decades. Alan Wolfe summarizes the viewpoint that there is a continuous liberal understanding that includes both Adam Smith and John Maynard Keynes. However, the view that modern liberalism is a continuation of classical liberalism is not universally shared. James Kurth, Robert E. Lerner, John Micklethwait, Adrian Wooldridge and several other political scholars have argued that classical liberalism still exists today, but in the form of American conservatism. 
According to Deepak Lal, only in the United States does classical liberalism continue to be a significant political force through American conservatism. American libertarians also claim to be the true continuation of the classical liberal tradition. Intellectual sources John Locke Central to classical liberal ideology was its interpretation of John Locke's Second Treatise of Government and A Letter Concerning Toleration, which had been written as a defence of the Glorious Revolution of 1688. Although these writings were considered too radical at the time for Britain's new rulers, they later came to be cited by Whigs, radicals and supporters of the American Revolution. However, much of later liberal thought was absent in Locke's writings or scarcely mentioned, and his writings have been subject to various interpretations. For example, there is little mention of constitutionalism, the separation of powers and limited government. James L. Richardson identified five central themes in Locke's writing: individualism, consent, the concepts of the rule of law and government as trustee, the significance of property, and religious toleration. Although Locke did not develop a theory of natural rights, he envisioned individuals in the state of nature as being free and equal. The individual, rather than the community or institutions, was the point of reference. Locke believed that individuals had given consent to government and therefore authority derived from the people rather than from above. This belief would influence later revolutionary movements. As a trustee, government was expected to serve the interests of the people, not the rulers, and rulers were expected to follow the laws enacted by legislatures. Locke also held that the main purpose of men uniting into commonwealths and governments was for the preservation of their property. Despite the ambiguity of Locke's definition of property, which limited property to "as much land as a man tills, plants, improves, cultivates, and can use the product of", this principle held great appeal to individuals possessed of great wealth. Locke held that the individual had the right to follow his own religious beliefs and that the state should not impose a religion against Dissenters, but there were limitations. No tolerance should be shown for atheists, who were seen as amoral, or for Catholics, who were seen as owing allegiance to the Pope over their own national government. Adam Smith Adam Smith's The Wealth of Nations, published in 1776, was to provide most of the ideas of economics, at least until the publication of John Stuart Mill's Principles of Political Economy in 1848. Smith addressed the motivation for economic activity, the causes of prices and the distribution of wealth, and the policies the state should follow to maximise wealth. Smith wrote that as long as supply, demand, prices and competition were left free of government regulation, the pursuit of material self-interest, rather than altruism, would maximise the wealth of a society through profit-driven production of goods and services. An "invisible hand" directed individuals and firms to work toward the public good as an unintended consequence of efforts to maximise their own gain. This provided a moral justification for the accumulation of wealth, which had previously been viewed by some as sinful. He assumed that workers could be paid wages as low as was necessary for their survival, which was later transformed by David Ricardo and Thomas Robert Malthus into the "iron law of wages". 
His main emphasis was on the benefit of free internal and international trade, which he thought could increase wealth through specialisation in production. He also opposed restrictive trade preferences, state grants of monopolies, and employers' organisations and trade unions. Government should be limited to defence, public works and the administration of justice, financed by taxes based on income. Smith's economics was carried into practice in the nineteenth century with the lowering of tariffs in the 1820s, the repeal in 1834 of the Poor Relief Act that had restricted the mobility of labour, and the end of the rule of the East India Company over India in 1858. Classical economics In addition to Smith's legacy, Say's law, Thomas Robert Malthus' theories of population and David Ricardo's iron law of wages became central doctrines of classical economics. The pessimistic nature of these theories provided a basis for criticism of capitalism by its opponents and helped perpetuate the tradition of calling economics the "dismal science". Jean-Baptiste Say was a French economist who introduced Smith's economic theories into France and whose commentaries on Smith were read in both France and Britain. Say challenged Smith's labour theory of value, believing that prices were determined by utility, and also emphasised the critical role of the entrepreneur in the economy. However, neither of those observations became accepted by British economists at the time. His most important contribution to economic thinking was Say's law, which was interpreted by classical economists to mean that there could be no overproduction in a market and that there would always be a balance between supply and demand. This general belief influenced government policies until the 1930s. Following this law, since the economic cycle was seen as self-correcting, government did not intervene during periods of economic hardship because intervention was seen as futile. Malthus wrote two books, An Essay on the Principle of Population (published in 1798) and Principles of Political Economy (published in 1820). The second book, a rebuttal of Say's law, had little influence on contemporary economists. However, his first book became a major influence on classical liberalism. In that book, Malthus claimed that population growth would outstrip food production because population grew geometrically while food production grew arithmetically. As people were provided with food, they would reproduce until their growth outstripped the food supply. Nature would then provide a check to growth in the forms of vice and misery. No gains in income could prevent this, and any welfare for the poor would be self-defeating. The poor were in fact responsible for their own problems, which could have been avoided through self-restraint. Ricardo, who was an admirer of Smith, covered many of the same topics, but while Smith drew conclusions from broadly empirical observations, he used deduction, drawing conclusions by reasoning from basic assumptions. While Ricardo accepted Smith's labour theory of value, he acknowledged that utility could influence the price of some rare items. Rents on agricultural land were seen as the production that was surplus to the subsistence required by the tenants. Wages were seen as the amount required for workers' subsistence and to maintain current population levels. According to his iron law of wages, wages could never rise beyond subsistence levels. 
Ricardo explained profits as a return on capital, which itself was the product of labour, but a conclusion many drew from his theory was that profit was a surplus appropriated by capitalists to which they were not entitled. Utilitarianism Utilitarianism provided the political justification for the implementation of economic liberalism by British governments, which was to dominate economic policy from the 1830s. Although utilitarianism prompted legislative and administrative reform and John Stuart Mill's later writings on the subject foreshadowed the welfare state, it was mainly used as a justification for laissez-faire. The central concept of utilitarianism, which was developed by Jeremy Bentham, was that public policy should seek to provide "the greatest happiness of the greatest number". While this could be interpreted as a justification for state action to reduce poverty, it was used by classical liberals to justify inaction with the argument that the net benefit to all individuals would be higher. Political economy Classical liberals following Mill saw utility as the foundation for public policies. This broke both with conservative "tradition" and Lockean "natural rights", which were seen as irrational. Utility, which emphasises the happiness of individuals, became the central ethical value of all Mill-style liberalism. Although utilitarianism inspired wide-ranging reforms, it became primarily a justification for laissez-faire economics. However, Mill's adherents rejected Smith's belief that the "invisible hand" would lead to general benefits and embraced Malthus' view that population expansion would prevent any general benefit and Ricardo's view of the inevitability of class conflict. Laissez-faire was seen as the only possible economic approach, and any government intervention was seen as useless and harmful. The Poor Law Amendment Act 1834 was defended on "scientific or economic principles", while the authors of the Elizabethan Poor Law of 1601 were seen as not having had the benefit of reading Malthus. However, commitment to laissez-faire was not uniform, and some economists advocated state support of public works and education. Classical liberals were also divided on free trade, as Ricardo expressed doubt that the removal of grain tariffs advocated by Richard Cobden and the Anti-Corn Law League would have any general benefits. Most classical liberals also supported legislation to regulate the number of hours that children were allowed to work and usually did not oppose factory reform legislation. Despite the pragmatism of classical economists, their views were expressed in dogmatic terms by such popular writers as Jane Marcet and Harriet Martineau. The strongest defender of laissez-faire was The Economist, founded by James Wilson in 1843. The Economist criticised Ricardo for his lack of support for free trade and expressed hostility to welfare, believing that the lower orders were responsible for their economic circumstances. The Economist took the position that regulation of factory hours was harmful to workers and also strongly opposed state support for education, health, the provision of water, and granting of patents and copyrights. The Economist also campaigned against the Corn Laws that protected landlords in the United Kingdom of Great Britain and Ireland against competition from less expensive foreign imports of cereal products. A rigid belief in laissez-faire guided the government response in 1846–1849 to the Great Famine in Ireland, during which an estimated 1.5 million people died. 
The minister responsible for economic and financial affairs, Charles Wood, expected that private enterprise and free trade, rather than government intervention, would alleviate the famine. The Corn Laws were finally repealed in 1846 by the removal of tariffs on grain which had kept the price of bread artificially high, but the repeal came too late to stop the Irish famine, partly because it was done in stages over three years. Free trade and world peace Several liberals, including Smith and Cobden, argued that the free exchange of goods between nations could lead to world peace. Erik Gartzke states: "Scholars like Montesquieu, Adam Smith, Richard Cobden, Norman Angell, and Richard Rosecrance have long speculated that free markets have the potential to free states from the looming prospect of recurrent warfare". The American political scientists John R. Oneal and Bruce M. Russett, well known for their work on the democratic peace theory, have argued along similar lines. In The Wealth of Nations, Smith argued that as societies progressed from hunter-gatherers to industrial societies, the spoils of war would rise, but the costs of war would rise further, making war difficult and costly for industrialised nations. Cobden believed that military expenditures worsened the welfare of the state and benefited a small but concentrated elite minority; for him, British imperialism was the result of the economic restrictions of mercantilist policies. To Cobden and many classical liberals, those who advocated peace must also advocate free markets. The belief that free trade would promote peace was widely shared by English liberals of the 19th and early 20th century, leading the economist John Maynard Keynes (1883–1946), who was a classical liberal in his early life, to say that this was a doctrine on which he was "brought up" and which he held unquestioned only until the 1920s. In his review of a book on Keynes, Michael S. Lawlor argues that it may be in large part due to Keynes' contributions in economics and politics, as in the implementation of the Marshall Plan and the way economies have been managed since his work, "that we have the luxury of not facing his unpalatable choice between free trade and full employment". A related manifestation of this idea was the argument of Norman Angell (1872–1967), most famously before World War I in The Great Illusion (1909), that the interdependence of the economies of the major powers was now so great that war between them was futile and irrational, and therefore unlikely. 
Notable thinkers Thomas Hobbes (1588–1679) James Harrington (1611–1677) John Locke (1632–1704) Montesquieu (1689–1755) Voltaire (1694–1778) Jean-Jacques Rousseau (1712–1778) Adam Smith (1723–1790) Immanuel Kant (1724–1804) Anders Chydenius (1729–1803) Thomas Paine (1737–1809) Cesare Beccaria (1738–1794) Marquis de Condorcet (1743–1794) Thomas Jefferson (1743–1826) Jeremy Bentham (1748–1832) Gaetano Filangieri (1753–1788) Benjamin Constant (1767–1830) David Ricardo (1772–1823) Alexis de Tocqueville (1805–1859) Giuseppe Mazzini (1805–1872) John Stuart Mill (1806–1873) William Ewart Gladstone (1809–1898) Horace Greeley (1811–1872) Fukuzawa Yukichi (1835–1901) Henry George (1839–1897) Friedrich Naumann (1860–1919) Friedrich Hayek (1899–1992) Karl Popper (1902–1994) Ayn Rand (1905–1982) Raymond Aron (1905–1983) Milton Friedman (1912–2006) Robert Nozick (1938–2002) Classical liberal parties worldwide In a broad sense, libertarian, liberal-conservative and some right-wing populist parties can also be counted as classical liberal, but the lists below are limited to parties that are classical liberal in the narrower sense, such as Germany's FDP, Denmark's Liberal Alliance and Thailand's Democrat Party. Classical liberal parties or parties with classical liberal factions Australia : Liberal Party of Australia, Liberal Democratic Party Austria : NEOS – The New Austria and Liberal Forum, Freedom Party of Austria (factions) Belgium : Open Flemish Liberals and Democrats, Reformist Movement Brazil : New Party Canada : People's Party Chile : Evópoli, Amplitude Denmark : Venstre, Liberal Alliance Estonia : Estonian Reform Party France : Renaissance Germany : Free Democratic Party Iceland : Reform Party India : Lok Satta Party Lithuania : Liberals' Movement Luxembourg : Democratic Party Netherlands : People's Party for Freedom and Democracy, Belang van Nederland New Zealand : New Zealand National Party, ACT New Zealand Norway : Venstre, Progress Party Poland : Modern, Civic Platform Portugal : Liberal Initiative Russia : PARNAS Serbia : Liberal Democratic Party of Serbia Slovakia : Freedom and Solidarity South Africa : Democratic Alliance Sweden : Liberals, Classical Liberal Party Switzerland : FDP.The Liberals Thailand : Democrat Party Turkey : Liberal Democratic Party United Kingdom : Liberal Party, Reform UK Historical classical liberal parties or parties with classical liberal factions (since the 1900s) Chile : Liberal Party Germany : German Democratic Party India : Swatantra Party Japan : Liberal Party (1998), Liberal League South Korea : New Democratic Party Switzerland : Free Democratic Party of Switzerland, Liberal Party of Switzerland United Kingdom : Liberal Party See also Age of Enlightenment Austrian School Bourbon Democrat National Democratic Party Classical economics Cultural liberalism Classical radicalism Modern liberalism Classical republicanism Constitutionalism Constitutional liberalism Conservative liberalism Economic liberalism Fiscal conservatism Friedrich Naumann Foundation Georgism Gladstonian liberalism Jeffersonian democracy Liberal conservatism Liberal democracy Liberalism in Europe Libertarianism Left-libertarianism Right-libertarianism List of liberal theorists Neoclassical liberalism Neoliberalism Night-watchman state Opportunist Republicans Orléanist Physiocracy Political individualism Rule of law Separation of powers Whig history Notes References Sources
https://en.wikipedia.org/wiki/Chain%20mail
Chain mail
Chain mail (properly called mail or maille but usually called chain mail or chainmail) is a type of armour consisting of small metal rings linked together in a pattern to form a mesh. It was in common military use between the 3rd century BC and the 16th century AD in Europe, and longer in Asia and North Africa. A coat of this armour is often called a hauberk, and sometimes a byrnie. History The earliest examples of surviving mail were found in the Carpathian Basin at a burial in Horný Jatov, Slovakia, dated to the 3rd century BC, and in a chieftain's burial located in Ciumești, Romania. Its invention is commonly credited to the Celts, but there are examples of Etruscan pattern mail dating from at least the 4th century BC. Mail may have been inspired by the much earlier scale armour. Mail spread to North Africa, West Africa, the Middle East, Central Asia, India, Tibet, South East Asia, and Japan. Herodotus wrote that the ancient Persians wore scale armour, but mail is also distinctly mentioned in the Avesta, the ancient holy scripture of the Persian religion of Zoroastrianism, founded by the prophet Zoroaster in the 5th century BC. Mail continues to be used in the 21st century as a component of stab-resistant body armour, cut-resistant gloves for butchers and woodworkers, shark-resistant wetsuits for defense against shark bites, and a number of other applications. Etymology The origins of the word mail are not fully known. One theory is that it originally derives from the Latin word macula, meaning spot or opacity (as in the macula of the retina). Another theory relates the word to the old French maillier, meaning to hammer (related to the modern English word malleable). In modern French, maille refers to a loop or stitch. The Arabic words "burnus" (a burnoose: a hooded cloak, also a chasuble worn by Coptic priests) and "barnaza" (to bronze) suggest an Arabic influence for the Carolingian armour known as the "byrnie" (see below). The first attestations of the word mail are in Old French and Anglo-Norman: maille, maile, or male or other variants, which became mailye, maille, maile, male, or meile in Middle English. Civilizations that used mail invented specific terms for each garment made from it. The standard terms for European mail armour derive from French: leggings are called chausses, a hood is a mail coif, and mittens are mitons. A mail collar hanging from a helmet is a camail or aventail. A shirt made from mail is a hauberk if knee-length and a haubergeon if mid-thigh length. A layer (or layers) of mail sandwiched between layers of fabric is called a jazerant. A waist-length coat in medieval Europe was called a byrnie, although the exact construction of a byrnie is unclear, including whether it was constructed of mail or other armour types. Noting that the byrnie was the "most highly valued piece of armour" to the Carolingian soldier, Bennet, Bradbury, DeVries, Dickie, and Jestice indicate that: There is some dispute among historians as to what exactly constituted the Carolingian byrnie. Relying... only on artistic and some literary sources because of the lack of archaeological examples, some believe that it was a heavy leather jacket with metal scales sewn onto it. It was also quite long, reaching below the hips and covering most of the arms. Other historians claim instead that the Carolingian byrnie was nothing more than a coat of mail, but longer and perhaps heavier than traditional early medieval mail. Without more certain evidence, this dispute will continue. 
In Europe The use of mail as battlefield armour was common during the Iron Age and the Middle Ages, becoming less common over the course of the 16th and 17th centuries when plate armour and more advanced firearms were developed. It is believed that the Roman Republic first came into contact with mail fighting the Gauls in Cisalpine Gaul, now Northern Italy. The Roman army adopted the technology for their troops in the form of the lorica hamata, which was used as a primary form of armour through the Imperial period. After the fall of the Western Empire, much of the infrastructure needed to create plate armour diminished, and eventually the word "mail" came to be synonymous with armour. Mail was typically an extremely prized commodity, as it was expensive and time-consuming to produce and could mean the difference between life and death in a battle. Mail from dead combatants was frequently looted and was used by the new owner or sold for a lucrative price. As time went on and infrastructure improved, it came to be used by more soldiers. The oldest intact mail hauberk still in existence is thought to have been worn by Leopold III, Duke of Austria, who died in 1386 during the Battle of Sempach. Eventually, with the rise of the lanced cavalry charge, impact warfare, and high-powered crossbows, mail came to be used as a secondary armour to plate for the mounted nobility. By the 14th century, articulated plate armour was commonly used to supplement mail. Mail was eventually supplanted by plate for the most part, as plate provided greater protection against windlass crossbows, bludgeoning weapons, and lance charges while maintaining most of the mobility of mail. However, mail was still widely used by many soldiers, along with brigandines and padded jacks. These three types of armour made up the bulk of the equipment used by soldiers, with mail being the most expensive; it was sometimes more expensive than plate armour. Mail typically persisted longer in less technologically advanced areas such as Eastern Europe but was in use throughout Europe into the 16th century. During the late 19th and early 20th century, mail was used as a material for bulletproof vests, most notably by the Wilkinson Sword Company. Results were unsatisfactory; Wilkinson mail worn by the Khedive of Egypt's regiment of "Iron Men" was manufactured from split rings which proved to be too brittle, and the rings would fragment when struck by bullets and aggravate the injury. The riveted mail armour worn by the opposing Sudanese Mahdists did not have the same problem but also proved to be relatively useless against the firearms of British forces at the Battle of Omdurman. During World War I, Wilkinson Sword transitioned from mail to a lamellar design which was the precursor to the flak jacket. Also during World War I, a mail fringe, designed by Captain Cruise of the British Infantry, was added to helmets to protect the face. This proved unpopular with soldiers, in spite of being proven to defend against a three-ounce (100 g) shrapnel round fired from a distance. A protective face mask or splatter mask had a mail veil and was used by early tank crews as a measure against flying steel fragments (spalling) inside the vehicle. In Asia Mail armour was introduced to the Middle East and Asia through the Romans and was adopted by the Sassanid Persians starting in the 3rd century AD, where it supplemented the scale and lamellar armour already in use. 
Mail was also commonly used as horse armour for cataphracts and heavy cavalry, as well as armour for the soldiers themselves. Asian mail could be just as heavy as the European variety and sometimes had prayer symbols stamped on the rings as a sign of their craftsmanship, as well as for divine protection. Mail armour is mentioned in the Quran as being a gift revealed by Allah to David: 21:80 It was We Who taught him the making of coats of mail for your benefit, to guard you from each other's violence: will ye then be grateful? (Yusuf Ali's translation) From the Abbasid Caliphate, mail was quickly adopted in Central Asia by Timur (Tamerlane) and the Sogdians and by India's Delhi Sultanate. Mail armour was introduced by the Turks in the late 12th century and was commonly used by Turkic, Mughal, and Suri armies; it eventually became the armour of choice in India. Indian mail was constructed with alternating rows of solid links and round riveted links, and it was often integrated with plate protection (mail and plate armour). China China first encountered the armour in 384, when its allies in the nation of Kuchi arrived wearing "armour similar to chains"; mail was later introduced to China when its allies in Central Asia paid tribute to the Tang Emperor in 718 by giving him a coat of "link armour" assumed to be mail. Once in China, mail was imported but was not produced widely. Due to its flexibility, comfort, and rarity, it was typically the armour of high-ranking guards and those who could afford the exotic import (to show off their social status) rather than the armour of the rank and file, who used more common brigandine, scale, and lamellar types. However, it was one of the few military products that China imported from foreigners. Mail spread to Korea slightly later, where it was imported as the armour of imperial guards and generals. Japan In Japan, mail is called kusari, which means chain. When the word kusari is used in conjunction with an armoured item, it usually means that mail makes up the majority of the armour composition. An example of this is kusari gusoku, which means chain armour. Kusari jackets, hoods, gloves, vests, shin guards, shoulder guards, thigh guards, and other armoured clothing were produced, even kusari tabi socks. Kusari was used in samurai armour at least from the time of the Mongol invasion (1270s) but particularly from the Nambokucho Period (1336–1392). The Japanese used many different weave methods, including a square 4-in-1 pattern (so gusari), a hexagonal 6-in-1 pattern (hana gusari) and a European 4-in-1 (nanban gusari). The rings of Japanese mail were much smaller than their European counterparts; they would be used in patches to link together plates and to drape over vulnerable areas such as the armpits. Riveted kusari was known and used in Japan. On page 58 of the book Japanese Arms & Armor: Introduction by H. Russell Robinson, there is a picture of Japanese riveted kusari, and this quote from the translated 1800 reference The Manufacture of Armour and Helmets in Sixteenth-Century Japan shows that the Japanese not only knew of and used riveted kusari but manufactured it as well: ... karakuri-namban (riveted namban), with stout links each closed by a rivet. Its invention is credited to Fukushima Dembei Kunitaka, pupil of Hojo Awa no Kami Ujifusa, but it is also said to be derived directly from foreign models. 
It is heavy because the links are tinned (biakuro-nagashi), and these are also sharp-edged because they are punched out of iron plate. Butted or split (twisted) links made up the majority of kusari links used by the Japanese. Links were either butted together, meaning that the ends touched each other and were not riveted, or the kusari was constructed with links where the wire was turned or twisted two or more times; these split links are similar to the modern split ring commonly used on keychains. The rings were lacquered black to prevent rusting and were always stitched onto a backing of cloth or leather. The kusari was sometimes concealed entirely between layers of cloth. Kusari gusoku, or chain armour, was commonly used during the Edo period (1603 to 1868) as a stand-alone defense. According to George Cameron Stone, "entire suits of mail kusari gusoku were worn on occasions, sometimes under the ordinary clothing". Ian Bottomley, in his book Arms and Armor of the Samurai: The History of Weaponry in Ancient Japan, shows a picture of a kusari armour and mentions kusari katabira (chain jackets) with detachable arms being worn by samurai police officials during the Edo period. The end of the samurai era in the 1860s, along with the 1876 ban on wearing swords in public, marked the end of any practical use for mail and other armour in Japan. Japan turned to a conscription army, and uniforms replaced armour. Effectiveness Mail armour provided an effective defense against slashing blows by edged weapons and some forms of penetration by many thrusting and piercing weapons; in fact, a study conducted at the Royal Armouries at Leeds concluded that "it is almost impossible to penetrate using any conventional medieval weapon". Generally speaking, mail's resistance to weapons is determined by four factors: linkage type (riveted, butted, or welded), material used (iron versus bronze or steel), weave density (a tighter weave needs a thinner weapon to get through), and ring thickness (generally ranging from 18 to 14 gauge (1.02–1.63 mm diameter) wire in most examples). Mail, if a warrior could afford it, provided a significant advantage when combined with competent fighting techniques. When the mail was not riveted, a thrust from most sharp weapons could penetrate it. However, when mail was riveted, only a strong well-placed thrust from certain spears, or thin or dedicated mail-piercing swords like the estoc, could penetrate, and a pollaxe or halberd blow could break through the armour. Strong projectile weapons such as stronger self bows, recurve bows, and crossbows could also penetrate riveted mail. Some evidence indicates that during armoured combat, the intention was to actually get around the armour rather than through it: according to a study of skeletons found in Visby, Sweden, a majority of the skeletons showed wounds on the less well-protected legs. Although mail was formidable protection, technological advances meant that, as time progressed, mail worn under plate armour (and stand-alone mail as well) could be penetrated by the conventional weaponry of another knight. The flexibility of mail meant that a blow would often injure the wearer, potentially causing serious bruising or fractures, and it was a poor defence against head trauma; mail-clad warriors typically wore separate rigid helms over their mail coifs for head protection. Likewise, blunt weapons such as maces and warhammers could harm the wearer by their impact without penetrating the armour; usually a soft armour, such as gambeson, was worn under the hauberk. Medieval surgeons were well capable of setting and caring for bone fractures resulting from blunt weapons; given the poor understanding of hygiene, however, cuts that could become infected were a much greater problem. Mail armour thus proved to be sufficient protection in most situations. 
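The gauge figures quoted above line up with the American Wire Gauge (AWG) scale, under which 18 gauge is about 1.02 mm and 14 gauge about 1.63 mm; the choice of AWG here is an inference from the millimetre values the text gives, since other gauge systems yield different diameters. A minimal Python sketch of the standard AWG diameter formula as a cross-check:

```python
def awg_to_mm(gauge: int) -> float:
    """Wire diameter in millimetres for an AWG gauge number,
    using the standard AWG formula d = 0.127 * 92^((36 - n) / 39)."""
    return 0.127 * 92 ** ((36 - gauge) / 39)

if __name__ == "__main__":
    for g in (18, 16, 14):
        print(f"AWG {g}: {awg_to_mm(g):.2f} mm")
    # Prints 1.02 mm for AWG 18 and 1.63 mm for AWG 14,
    # matching the range quoted for historical mail wire.
```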
Manufacture Several patterns of linking the rings together have been known since ancient times, with the most common being the 4-to-1 pattern (where each ring is linked with four others). In Europe, the 4-to-1 pattern was completely dominant. Mail was also common in East Asia, primarily Japan, with several more patterns being utilised and an entire nomenclature developing around them. Historically, in Europe, from the pre-Roman period on, the rings composing a piece of mail would be riveted closed to reduce the chance of the rings splitting open when subjected to a thrusting attack or a hit by an arrow. Up until the 14th century, European mail was made of alternating rows of round riveted rings and solid rings. Sometime during the 14th century, European mail makers started to transition from round rivets to wedge-shaped rivets but continued using alternating rows of solid rings. Eventually European mail makers stopped using solid rings, and almost all European mail was made from wedge-riveted rings only, with no solid rings. Both were commonly made of wrought iron, but some later pieces were made of heat-treated steel. Wire for the riveted rings was formed by either of two methods. One was to hammer out wrought iron into plates and cut or slit the plates; these thin pieces were then pulled through a draw plate repeatedly until the desired diameter was achieved. Waterwheel-powered drawing mills are pictured in several period manuscripts. Another method was simply to forge down an iron billet into a rod and then draw it out into wire. The solid links would have been made by punching from a sheet. Guild marks were often stamped on the rings to show their origin and craftsmanship. Forge welding was also used to create solid links, but there are few possible examples known; the only well-documented example from Europe is that of the camail (mail neck-defence) of the 7th-century Coppergate helmet. Outside Europe this practice was more common, such as the "theta" links from India. Very few examples of historic butted mail have been found, and it is generally accepted that butted mail was never in wide use historically except in Japan, where mail (kusari) was commonly made from butted links. Butted link mail was also used by the Moros of the Philippines in their mail and plate armours. 
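The 4-to-1 geometry described above can be made concrete with a toy index model: treat the weave as rows of rings, with each interior ring linked to two rings in the row above and two in the row below. The sketch below (Python) uses one of several equivalent offset conventions for indexing a European 4-in-1 weave; edge rings of a real garment would have fewer than four links.

```python
def four_in_one_neighbours(row: int, col: int) -> list[tuple[int, int]]:
    """Positions of the four rings linked to ring (row, col) in an
    idealized European 4-in-1 grid: two in the row above, two below."""
    return [
        (row - 1, col), (row - 1, col + 1),  # rings in the row above
        (row + 1, col - 1), (row + 1, col),  # rings in the row below
    ]

# Sanity check: linkage is symmetric under this convention; if ring a
# is linked to ring b, then ring b is linked back to ring a.
a, b = (2, 3), (1, 3)
assert b in four_in_one_neighbours(*a)
assert a in four_in_one_neighbours(*b)
```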
Modern uses Practical uses Mail is used as protective clothing for butchers against meat-packing equipment; workers may wear several kilograms of mail under their white coats. Butchers also commonly wear a single mail glove to protect themselves from self-inflicted injury while cutting meat, as do many oyster shuckers. Scuba divers sometimes use mail to protect them from shark bite, as do animal control officers for protection against the animals they handle. In 1980, marine biologist Jeremiah Sullivan patented his design for Neptunic full-coverage chain mail shark-resistant suits, which he had developed for close encounters with sharks. Shark expert and underwater filmmaker Valerie Taylor was among the first to develop and test shark suits, in 1979, while diving with sharks. Mail is widely used in industrial settings as shrapnel guards and splash guards in metalworking operations. Electrical applications for mail include RF leakage testing and being worn as a Faraday cage suit by Tesla coil enthusiasts and high-voltage electrical workers. Stab-proof vests Conventional textile-based ballistic vests are designed to stop soft-nosed bullets but offer little defense from knife attacks. Knife-resistant armour is designed to defend against knife attacks; some of these use layers of metal plates, mail and metallic wires. Historical re-enactment Many historical reenactment groups, especially those whose focus is Antiquity or the Middle Ages, commonly use mail both as practical armour and for costuming. Mail is especially popular amongst those groups which use steel weapons. A modern hauberk made from 1.5 mm diameter wire with 10 mm inner diameter rings weighs roughly 10 kg and contains 15,000–45,000 rings. One of the drawbacks of mail is the uneven weight distribution; the stress falls mainly on the shoulders. Weight can be better distributed by wearing a belt over the mail, which provides another point of support. Mail worn today for re-enactment and recreational use can be made in a variety of styles and materials. Most recreational mail today is made of butted links which are galvanised or stainless steel. This is historically inaccurate but is much less expensive to procure, and especially to maintain, than historically accurate reproductions. Mail can also be made of titanium, aluminium, bronze, or copper. Riveted mail offers significantly better protection, as well as greater historical accuracy, than mail constructed with butted links. Japanese mail (kusari) is one of the few historically correct examples of mail constructed with butted links. 
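The ring count and weight quoted for a modern hauberk can be sanity-checked with simple geometry: approximate each ring as a torus of round wire, compute its mass, and multiply by the ring count. A back-of-the-envelope Python sketch, assuming mild-steel density (about 7.85 g/cm³) and the 1.5 mm wire / 10 mm inner diameter figures from the text:

```python
import math

STEEL_DENSITY_G_PER_MM3 = 7.85e-3  # assumed mild steel

def ring_mass_g(wire_d_mm: float, inner_d_mm: float) -> float:
    """Approximate mass of one ring, modelled as a torus of round wire."""
    centerline_d = inner_d_mm + wire_d_mm            # mean ring diameter
    wire_length = math.pi * centerline_d             # centerline circumference
    cross_section = math.pi * (wire_d_mm / 2) ** 2   # wire cross-section area
    return wire_length * cross_section * STEEL_DENSITY_G_PER_MM3

per_ring = ring_mass_g(wire_d_mm=1.5, inner_d_mm=10.0)  # ~0.5 g per ring
for rings in (15_000, 30_000, 45_000):
    print(f"{rings:>6} rings -> {rings * per_ring / 1000:.1f} kg")
# 15,000 rings come out near 7.5 kg and 45,000 near 22.6 kg, so the
# roughly 10 kg figure corresponds to a hauberk at the lower end of
# the stated ring-count range (rivet overlap and trim are ignored).
```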
For the film Mad Max Beyond Thunderdome, Tina Turner is said to have worn actual mail, and she complained about how heavy it was. Game of Thrones makes use of mail, notably during the "Red Wedding" scene. Gallery See also Mail-based armour Banded mail Hauberk Mail and plate armour Kusari (Japanese mail armour) Lorica hamata Lorica plumata with scales attached to a backing of mail Tatami (Japanese armour) Armour supplementary to mail Typically worn under mail armour if thin or over mail armour if thick: Gambeson (also known as quilted armour or a padded jack) Can be worn over mail armour: Brigandine Coat of plates Lamellar armour Mirror armour (supplementary plates worn over mail) Scale armour Splint armour Transitional armour Others: Cataphract Proofing (armour) Ring armour References External links Erik D. Schmid/The Mail Research Society The Treatment of Mail on an Arm Guard from the Armoury of the Shah Shuja: Ethical Repair and in situ Documentation in Miniature Excavated lorica hamata Maillers Worldwide – weaves/tutorials/articles, and gallery photos The Maille Artisans International League (MAIL) – Hundreds of weaves/tutorials/articles, and gallery pictures "Mail: Unchained", an article taking an in-depth look at the construction and usage of European chain mail Construction tips Butted mail: A Mailmaker's Guide The Ringinator – Tool for making jump rings The Apprentice Armorer's Illustrated Handbook For Making Mail The Ring Lord Chainmail Discussion Forum Phong's Chainmaille Tutorials Ring Guide – Sizing Specialty Square Rings to Round Weaves Ancient Roman originals can be seen on the pages of the Roman Military Equipment Web museum, Romancoins.info http://artofchainmail.com/patterns/european/index.html http://www.iranicaonline.org/articles/armor-ii Chain Mail 101: Learn all about making Chain Mail Body armor Medieval armour Military equipment of antiquity
3,039
6,697
https://en.wikipedia.org/wiki/Cerberus
Cerberus
In Greek mythology, Cerberus (Kérberos), often referred to as the hound of Hades, is a multi-headed dog that guards the gates of the Underworld to prevent the dead from leaving. He was the offspring of the monsters Echidna and Typhon, and was usually described as having three heads, a serpent for a tail, and snakes protruding from multiple parts of his body. Cerberus is primarily known for his capture by Heracles, the last of Heracles' twelve labours. Etymology The etymology of Cerberus' name is uncertain. Ogden refers to attempts to establish an Indo-European etymology as "not yet successful". It has been claimed to be related to the Sanskrit word सर्वरा sarvarā, used as an epithet of one of the dogs of Yama, from a Proto-Indo-European word *k̑érberos, meaning "spotted". Lincoln (1991), among others, critiques this etymology. It was also rejected by Manfred Mayrhofer, who proposed an Austro-Asiatic origin for the word, and by Beekes. Lincoln notes a similarity between Cerberus and the Norse mythological dog Garmr, relating both names to a Proto-Indo-European root *ger- "to growl" (perhaps with the suffixes -*m/*b and -*r). However, as Ogden observes, this analysis actually requires Kerberos and Garmr to be derived from two different Indo-European roots (*ker- and *gher- respectively), and so does not actually establish a relationship between the two names. Though probably not Greek, Greek etymologies for Cerberus have been offered. An etymology given by Servius (the late-fourth-century commentator on Virgil)—but rejected by Ogden—derives Cerberus from the Greek word creoboros meaning "flesh-devouring". Another suggested etymology derives Cerberus from "Ker berethrou", meaning "evil of the pit". Descriptions Descriptions of Cerberus vary, including the number of his heads. Cerberus was usually three-headed, though not always. Cerberus had several multi-headed relatives. His father was the multi-snake-headed Typhon, and Cerberus was the brother of three other multi-headed monsters: the multi-snake-headed Lernaean Hydra; Orthrus, the two-headed dog who guarded the Cattle of Geryon; and the Chimera, who had the heads of a lion, a goat, and a snake. And, like these close relatives, Cerberus was, with only the rare iconographic exception, multi-headed. In the earliest description of Cerberus, Hesiod's Theogony (c. 8th – 7th century BC), Cerberus has fifty heads, while Pindar (c. 522 – c. 443 BC) gave him one hundred heads. However, later writers almost universally give Cerberus three heads. An exception is the Latin poet Horace's Cerberus, which has a single dog head and one hundred snake heads. Perhaps trying to reconcile these competing traditions, Apollodorus's Cerberus has three dog heads and the heads of "all sorts of snakes" along his back, while the Byzantine poet John Tzetzes (who probably based his account on Apollodorus) gives Cerberus fifty heads, three of which were dog heads, the rest being the "heads of other beasts of all sorts". In art Cerberus is most commonly depicted with two dog heads (visible), never more than three, but occasionally with only one. On one of the two earliest depictions (c. 590–580 BC), a Corinthian cup from Argos (see below), now lost, Cerberus was shown as a normal single-headed dog. The first appearance of a three-headed Cerberus occurs on a mid-sixth-century BC Laconian cup (see below). Horace's many-snake-headed Cerberus followed a long tradition of Cerberus being part snake.
This is perhaps already implied as early as in Hesiod's Theogony, where Cerberus' mother is the half-snake Echidna, and his father the snake-headed Typhon. In art, Cerberus is often shown as being part snake; for example, the lost Corinthian cup showed snakes protruding from Cerberus' body, while the mid-sixth-century BC Laconian cup gives Cerberus a snake for a tail. In the literary record, the first certain indication of Cerberus' serpentine nature comes from the rationalized account of Hecataeus of Miletus (fl. 500–494 BC), who makes Cerberus a large poisonous snake. Plato refers to Cerberus' composite nature, and Euphorion of Chalcis (3rd century BC) describes Cerberus as having multiple snake tails, and, presumably in connection with his serpentine nature, associates Cerberus with the creation of the poisonous aconite plant. Virgil has snakes writhe around Cerberus' neck, Ovid's Cerberus has a venomous mouth, necks "vile with snakes", and "hair inwoven with the threatening snake", while Seneca gives Cerberus a mane consisting of snakes, and a single snake tail. Cerberus was given various other traits. According to Euripides, Cerberus not only had three heads but three bodies, and according to Virgil he had multiple backs. Cerberus ate raw flesh (according to Hesiod), had eyes which flashed fire (according to Euphorion), a three-tongued mouth (according to Horace), and acute hearing (according to Seneca). The Twelfth Labour of Heracles Cerberus' only mythology concerns his capture by Heracles. As early as Homer we learn that Heracles was sent by Eurystheus, the king of Tiryns, to bring back Cerberus from Hades, the king of the underworld. According to Apollodorus, this was the twelfth and final labour imposed on Heracles. In a fragment from a lost play, Pirithous (attributed to either Euripides or Critias), Heracles says that, although Eurystheus commanded him to bring back Cerberus, it was not from any desire to see Cerberus, but only because Eurystheus thought the task was impossible. Heracles was aided in his mission by his being an initiate of the Eleusinian Mysteries. Euripides has Heracles' initiation prove "lucky" for him in capturing Cerberus. And both Diodorus Siculus and Apollodorus say that Heracles was initiated into the Mysteries in preparation for his descent into the underworld. According to Diodorus, Heracles went to Athens, where Musaeus, the son of Orpheus, was in charge of the initiation rites, while according to Apollodorus, he went to Eumolpus at Eleusis. Heracles also had the help of Hermes, the usual guide of the underworld, as well as Athena. In the Odyssey, Homer has Hermes and Athena as his guides. And Hermes and Athena are often shown with Heracles on vase paintings depicting Cerberus' capture. By most accounts, Heracles made his descent into the underworld through an entrance at Tainaron, the most famous of the various Greek entrances to the underworld. The place is first mentioned in connection with the Cerberus story in the rationalized account of Hecataeus of Miletus (fl. 500–494 BC), and Euripides, Seneca, and Apollodorus all have Heracles descend into the underworld there. However, Xenophon reports that Heracles was said to have descended at the Acherusian Chersonese near Heraclea Pontica, on the Black Sea, a place more usually associated with Heracles' exit from the underworld (see below). Heraclea, founded c. 560 BC, perhaps took its name from the association of its site with Heracles' Cerberian exploit.
Theseus and Pirithous While in the underworld, Heracles met the heroes Theseus and Pirithous, who were being held prisoner by Hades for attempting to carry off Hades' wife Persephone. Along with bringing back Cerberus, Heracles also managed (usually) to rescue Theseus, and in some versions Pirithous as well. According to Apollodorus, Heracles found Theseus and Pirithous near the gates of Hades, bound to the "Chair of Forgetfulness, to which they grew and were held fast by coils of serpents", and when they saw Heracles, "they stretched out their hands as if they should be raised from the dead by his might", and Heracles was able to free Theseus, but when he tried to raise up Pirithous, "the earth quaked and he let go." The earliest evidence for the involvement of Theseus and Pirithous in the Cerberus story is found on a shield-band relief (c. 560 BC) from Olympia, where Theseus and Pirithous (named) are seated together on a chair, arms held out in supplication, while Heracles approaches, about to draw his sword. The earliest literary mention of the rescue occurs in Euripides, where Heracles saves Theseus (with no mention of Pirithous). In the lost play Pirithous, both heroes are rescued, while in the rationalized account of Philochorus, Heracles was able to rescue Theseus, but not Pirithous. In one place Diodorus says Heracles brought back both Theseus and Pirithous, by the favor of Persephone, while in another he says that Pirithous remained in Hades, or, according to "some writers of myth", that neither Theseus nor Pirithous returned. Both are rescued in Hyginus. Capture There are various versions of how Heracles accomplished Cerberus' capture. According to Apollodorus, Heracles asked Hades for Cerberus, and Hades told Heracles he would allow him to take Cerberus only if he "mastered him without the use of the weapons which he carried", and so, using his lion-skin as a shield, Heracles squeezed Cerberus around the head until he submitted. In some early sources Cerberus' capture seems to involve Heracles fighting Hades. Homer (Iliad 5.395–397) has Hades injured by an arrow shot by Heracles. A scholium to the Iliad passage explains that Hades had commanded that Heracles "master Cerberus without shield or iron". Heracles did this by (as in Apollodorus) using his lion-skin instead of his shield, and making stone points for his arrows, but when Hades still opposed him, Heracles shot Hades in anger. Consistent with the no-iron requirement, on an early-sixth-century BC lost Corinthian cup, Heracles is shown attacking Hades with a stone, while the iconographic tradition, from c. 560 BC, often shows Heracles using his wooden club against Cerberus. Euripides has Amphitryon ask Heracles: "Did you conquer him in fight, or receive him from the goddess [i.e. Persephone]?" To which Heracles answers: "In fight", and the Pirithous fragment says that Heracles "overcame the beast by force". However, according to Diodorus, Persephone welcomed Heracles "like a brother" and gave Cerberus "in chains" to Heracles. Aristophanes has Heracles seize Cerberus in a stranglehold and run off, while Seneca has Heracles again use his lion-skin as shield, and his wooden club, to subdue Cerberus, after which a quailing Hades and Persephone allow Heracles to lead a chained and submissive Cerberus away. Cerberus is often shown being chained, and Ovid tells that Heracles dragged the three-headed Cerberus with chains of adamant.
Exit from the underworld There were several locations which were said to be the place where Heracles brought up Cerberus from the underworld. The geographer Strabo (63/64 BC – c. AD 24) reports that "according to the myth writers" Cerberus was brought up at Tainaron, the same place where Euripides has Heracles enter the underworld. Seneca has Heracles enter and exit at Tainaron. Apollodorus, although he has Heracles enter at Tainaron, has him exit at Troezen. The geographer Pausanias tells us that there was a temple at Troezen with "altars to the gods said to rule under the earth", where it was said that, in addition to Cerberus being "dragged" up by Heracles, Semele was supposed to have been brought up out of the underworld by Dionysus. Another tradition had Cerberus brought up at Heraclea Pontica (the same place which Xenophon had earlier associated with Heracles' descent) and made the cause of the poisonous plant aconite which grew there in abundance. Herodorus of Heraclea and Euphorion said that when Heracles brought Cerberus up from the underworld at Heraclea, Cerberus "vomited bile" from which the aconite plant grew up. Ovid also makes Cerberus the cause of the poisonous aconite, saying that on the "shores of Scythia", upon leaving the underworld, as Cerberus was being dragged by Heracles from a cave, dazzled by the unaccustomed daylight, Cerberus spewed out a "poison-foam", which made the aconite plants growing there poisonous. Seneca's Cerberus too, like Ovid's, reacts violently to his first sight of daylight. Enraged, the previously submissive Cerberus struggles furiously, and Heracles and Theseus must together drag Cerberus into the light. Pausanias reports that according to local legend Cerberus was brought up through a chasm in the earth dedicated to Clymenus (Hades) next to the sanctuary of Chthonia at Hermione, and in Euripides' Heracles, though Euripides does not say that Cerberus was brought out there, he has Cerberus kept for a while in the "grove of Chthonia" at Hermione. Pausanias also mentions that at Mount Laphystion in Boeotia there was a statue of Heracles Charops ("with bright eyes"), where the Boeotians said Heracles brought up Cerberus. Other locations which perhaps were also associated with Cerberus being brought out of the underworld include Hierapolis, Thesprotia, and Emeia near Mycenae. Presented to Eurystheus, returned to Hades In some accounts, after bringing Cerberus up from the underworld, Heracles paraded the captured Cerberus through Greece. Euphorion has Heracles lead Cerberus through Midea in Argolis, as women and children watch in fear, and Diodorus Siculus says of Cerberus that Heracles "carried him away to the amazement of all and exhibited him to men." Seneca has Juno complain of Heracles "highhandedly parading the black hound through Argive cities" and Heracles greeted by laurel-wreathed crowds "singing" his praises. Then, according to Apollodorus, Heracles showed Cerberus to Eurystheus, as commanded, after which he returned Cerberus to the underworld. However, according to Hesychius of Alexandria, Cerberus escaped, presumably returning to the underworld on his own. Principal sources The earliest mentions of Cerberus (c. 8th – 7th century BC) occur in Homer's Iliad and Odyssey, and Hesiod's Theogony.
Homer does not name or describe Cerberus, but simply refers to Heracles being sent by Eurystheus to fetch the "hound of Hades", with Hermes and Athena as his guides, and, in a possible reference to Cerberus' capture, to Heracles shooting Hades with an arrow. According to Hesiod, Cerberus was the offspring of the monsters Echidna and Typhon, was fifty-headed, ate raw flesh, and was the "brazen-voiced hound of Hades", who fawns on those that enter the house of Hades, but eats those who try to leave. Stesichorus (c. 630 – 555 BC) apparently wrote a poem called Cerberus, of which virtually nothing remains. However, the lost early-sixth-century BC Corinthian cup from Argos, which showed Cerberus with a single head and snakes growing out from many places on his body, was possibly influenced by Stesichorus' poem. The mid-sixth-century BC cup from Laconia gives Cerberus three heads and a snake tail, which eventually becomes the standard representation. Pindar (c. 522 – c. 443 BC) apparently gave Cerberus one hundred heads. Bacchylides (5th century BC) also mentions Heracles bringing Cerberus up from the underworld, with no further details. Sophocles (c. 495 – c. 405 BC), in his Women of Trachis, makes Cerberus three-headed, and in his Oedipus at Colonus, the Chorus asks that Oedipus be allowed to pass the gates of the underworld undisturbed by Cerberus, called here the "untamable Watcher of Hades". Euripides (c. 480 – 406 BC) describes Cerberus as three-headed and three-bodied, says that Heracles entered the underworld at Tainaron, has Heracles say that Cerberus was not given to him by Persephone, but rather that he fought and conquered Cerberus, "for I had been lucky enough to witness the rites of the initiated", an apparent reference to his initiation into the Eleusinian Mysteries, and says that the capture of Cerberus was the last of Heracles' labors. The lost play Pirithous (attributed to either Euripides or his late contemporary Critias) has Heracles say that he came to the underworld at the command of Eurystheus, who had ordered him to bring back Cerberus alive, not because he wanted to see Cerberus, but only because Eurystheus thought Heracles would not be able to accomplish the task, and that Heracles "overcame the beast" and "received favour from the gods". Plato (c. 425 – 348 BC) refers to Cerberus' composite nature, citing Cerberus, along with Scylla and the Chimera, as an example from "ancient fables" of a creature composed of many animal forms "grown together in one". Euphorion of Chalcis (3rd century BC) describes Cerberus as having multiple snake tails, and eyes that flashed like sparks from a blacksmith's forge, or the volcanic Mount Etna. From Euphorion also comes the first mention of a story which told that at Heraclea Pontica, where Cerberus was brought out of the underworld by Heracles, Cerberus "vomited bile" from which the poisonous aconite plant grew up. According to Diodorus Siculus (1st century BC), the capture of Cerberus was the eleventh of Heracles' labors, the twelfth and last being stealing the Apples of the Hesperides. Diodorus says that Heracles thought it best first to go to Athens to take part in the Eleusinian Mysteries, "Musaeus, the son of Orpheus, being at that time in charge of the initiatory rites", after which he entered into the underworld "welcomed like a brother by Persephone", and "receiving the dog Cerberus in chains he carried him away to the amazement of all and exhibited him to men."
In Virgil's Aeneid (1st century BC), Aeneas and the Sibyl encounter Cerberus in a cave, where he "lay at vast length", filling the cave "from end to end", blocking the entrance to the underworld. Cerberus is described as "triple-throated", with "three fierce mouths", multiple "large backs", and serpents writhing around his neck. The Sibyl throws Cerberus a loaf laced with honey and herbs to induce sleep, enabling Aeneas to enter the underworld, and so apparently for Virgil—contradicting Hesiod—Cerberus guarded the underworld against entrance. Later Virgil describes Cerberus, in his bloody cave, crouching over half-gnawed bones. In his Georgics, Virgil refers to Cerberus, his "triple jaws agape", being tamed by Orpheus' playing his lyre. Horace (65 – 8 BC) also refers to Cerberus yielding to Orpheus' lyre; here Cerberus has a single dog head, which "like a Fury's is fortified by a hundred snakes", with a "triple-tongued mouth" oozing "fetid breath and gore". Ovid (43 BC – AD 17/18) has Cerberus' mouth produce venom, and like Euphorion, makes Cerberus the cause of the poisonous plant aconite. According to Ovid, Heracles dragged Cerberus from the underworld, emerging from a cave "where 'tis fabled, the plant grew / on soil infected by Cerberian teeth", and dazzled by the daylight, Cerberus spewed out a "poison-foam", which made the aconite plants growing there poisonous. Seneca, in his tragedy Hercules Furens, gives a detailed description of Cerberus and his capture. Seneca's Cerberus has three heads, a mane of snakes, and a snake tail, with his three heads being covered in gore and licked by the many snakes which surround them, and with hearing so acute that he can hear "even ghosts". Seneca has Heracles use his lion-skin as shield, and his wooden club, to beat Cerberus into submission, after which Hades and Persephone, quailing on their thrones, let Heracles lead a chained and submissive Cerberus away. But upon leaving the underworld, at his first sight of daylight, a frightened Cerberus struggles furiously, and Heracles, with the help of Theseus (who had been held captive by Hades but released at Heracles' request), drags Cerberus into the light. Seneca, like Diodorus, has Heracles parade the captured Cerberus through Greece. Apollodorus' Cerberus has three dog-heads, a serpent for a tail, and the heads of many snakes on his back. According to Apollodorus, Heracles' twelfth and final labor was to bring back Cerberus from Hades. Heracles first went to Eumolpus to be initiated into the Eleusinian Mysteries. Upon his entering the underworld, all the dead flee Heracles except for Meleager and the Gorgon Medusa. Heracles drew his sword against Medusa, but Hermes told Heracles that the dead are mere "empty phantoms". Heracles asked Hades (here called Pluto) for Cerberus, and Hades said that Heracles could take Cerberus provided he was able to subdue him without using weapons. Heracles found Cerberus at the gates of Acheron, and with his arms around Cerberus, though being bitten by Cerberus' serpent tail, Heracles squeezed until Cerberus submitted. Heracles carried Cerberus away, showed him to Eurystheus, then returned Cerberus to the underworld. In an apparently unique version of the story, related by the sixth-century AD Pseudo-Nonnus, Heracles descended into Hades to abduct Persephone, and killed Cerberus on his way back up. Iconography The capture of Cerberus was a popular theme in ancient Greek and Roman art. The earliest depictions date from the beginning of the sixth century BC.
One of the two earliest depictions, a Corinthian cup (c. 590–580 BC) from Argos (now lost), shows a naked Heracles, with quiver on his back and bow in his right hand, striding left, accompanied by Hermes. Heracles threatens Hades with a stone, who flees left, while a goddess, perhaps Persephone or possibly Athena, standing in front of Hades' throne, prevents the attack. Cerberus, with a single canine head and snakes rising from his head and body, flees right. On the far right a column indicates the entrance to Hades' palace. Many of the elements of this scene—Hermes, Athena, Hades, Persephone, and a column or portico—are common occurrences in later works. The other earliest depiction, a relief pithos fragment from Crete (c. 590–570 BC), is thought to show a single lion-headed Cerberus, with a snake (open-mouthed) over his back, being led to the right. A mid-sixth-century BC Laconian cup by the Hunt Painter adds several new features to the scene which also become common in later works: three heads, a snake tail, Cerberus' chain, and Heracles' club. Here Cerberus has three canine heads, is covered by a shaggy coat of snakes, and has a tail which ends in a snake head. He is being held on a chain leash by Heracles, who holds his club raised overhead. In Greek art, the vast majority of depictions of Heracles and Cerberus occur on Attic vases. Although the lost Corinthian cup shows Cerberus with a single dog head, and the relief pithos fragment (c. 590–570 BC) apparently shows a single lion-headed Cerberus, in Attic vase painting Cerberus usually has two dog heads. In other art, as in the Laconian cup, Cerberus is usually three-headed. Occasionally in Roman art Cerberus is shown with a large central lion head and two smaller dog heads on either side. As in the Corinthian and Laconian cups (and possibly the relief pithos fragment), Cerberus is often depicted as part snake. In Attic vase painting, Cerberus is usually shown with a snake for a tail, or a tail which ends in the head of a snake. Snakes are also often shown rising from various parts of his body, including snout, head, neck, back, ankles, and paws. Two Attic amphoras from Vulci, one (c. 530–515 BC) by the Bucci Painter (Munich 1493), the other (c. 525–510 BC) by the Andokides Painter (Louvre F204), in addition to the usual two heads and snake tail, show Cerberus with a mane down his necks and back, another typical Cerberian feature of Attic vase painting. Andokides' amphora also has a small snake curling up from each of Cerberus' two heads. Besides this lion-like mane and the occasional lion-head mentioned above, Cerberus was sometimes shown with other leonine features. A pitcher (c. 530–500 BC) shows Cerberus with mane and claws, while a first-century BC sardonyx cameo shows Cerberus with leonine body and paws. In addition, a limestone relief fragment from Taranto (c. 320–300 BC) shows Cerberus with three lion-like heads. During the second quarter of the 5th century BC, the capture of Cerberus disappears from Attic vase painting. After the early third century BC, the subject becomes rare everywhere until the Roman period. In Roman art the capture of Cerberus is usually shown together with other labors. Heracles and Cerberus are usually alone, with Heracles leading Cerberus. Cerberus rationalized At least as early as the 6th century BC, some ancient writers attempted to explain away various fantastical features of Greek mythology; included in these are various rationalized accounts of the Cerberus story.
The earliest such account (late 6th century BC) is that of Hecataeus of Miletus. In his account Cerberus was not a dog at all, but rather simply a large venomous snake which lived on Tainaron. The serpent was called the "hound of Hades" only because anyone bitten by it died immediately, and it was this snake that Heracles brought to Eurystheus. The geographer Pausanias (who preserves for us Hecataeus' version of the story) points out that, since Homer does not describe Cerberus, Hecataeus' account does not necessarily conflict with Homer, since Homer's "Hound of Hades" may not in fact refer to an actual dog. Other rationalized accounts make Cerberus out to be a normal dog. According to Palaephatus (4th century BC), Cerberus was one of the two dogs who guarded the cattle of Geryon, the other being Orthrus. Geryon lived in a city named Tricranium (in Greek Tricarenia, "Three-Heads"), from which name both Cerberus and Geryon came to be called "three-headed". Heracles killed Orthrus and drove away Geryon's cattle, with Cerberus following along behind. Molossus, a Mycenaean, offered to buy Cerberus from Eurystheus (presumably having received the dog, along with the cattle, from Heracles). But when Eurystheus refused, Molossus stole the dog and penned him up in a cave in Tainaron. Eurystheus commanded Heracles to find Cerberus and bring him back. After searching the entire Peloponnesus, Heracles found where it was said Cerberus was being held, went down into the cave, and brought up Cerberus, after which it was said: "Heracles descended through the cave into Hades and brought up Cerberus." In the rationalized account of Philochorus, in which Heracles rescues Theseus, Pirithous is eaten by Cerberus. In this version of the story, Aidoneus (i.e., "Hades") is the mortal king of the Molossians, with a wife named Persephone, a daughter named Kore (another name for the goddess Persephone), and a large mortal dog named Cerberus, with whom all suitors of his daughter were required to fight. After stealing Helen to be Theseus' wife, Theseus and Pirithous attempt to abduct Kore for Pirithous, but Aidoneus catches the two heroes, imprisons Theseus, and feeds Pirithous to Cerberus. Later, while a guest of Aidoneus, Heracles asks Aidoneus to release Theseus, as a favor, which Aidoneus grants. A 2nd-century AD Greek known as Heraclitus the paradoxographer (not to be confused with the 5th-century BC Greek philosopher Heraclitus) claimed that Cerberus had two pups that were never away from their father, which made Cerberus appear to be three-headed. Cerberus allegorized Servius, the late-fourth-century commentator on Virgil's Aeneid, derived Cerberus' name from the Greek word creoboros meaning "flesh-devouring" (see above), and held that Cerberus symbolized the corpse-consuming earth, with Heracles' triumph over Cerberus representing his victory over earthly desires. Later, the mythographer Fulgentius allegorizes Cerberus' three heads as representing the three origins of human strife: "nature, cause, and accident", and (drawing on the same flesh-devouring etymology as Servius) as symbolizing "the three ages—infancy, youth, old age, at which death enters the world." The early Christian historian and bishop Eusebius wrote that Cerberus was represented with three heads, because the positions of the sun above the earth are three—rising, midday, and setting. The later Vatican Mythographers repeat and expand upon the traditions of Servius and Fulgentius.
All three Vatican Mythographers repeat Servius' derivation of Cerberus' name from creoboros. The Second Vatican Mythographer repeats (nearly word for word) what Fulgentius had to say about Cerberus, while the Third Vatican Mythographer, in another passage very similar to Fulgentius', says (more specifically than Fulgentius) that for "the philosophers" Cerberus represented hatred, his three heads symbolizing the three kinds of human hatred: natural, causal, and casual (i.e. accidental). The Second and Third Vatican Mythographers note that the three brothers Zeus, Poseidon and Hades each have tripartite insignia, associating Hades' three-headed Cerberus with Zeus' three-forked thunderbolt and Poseidon's three-pronged trident, while the Third Vatican Mythographer adds that "some philosophers think of Cerberus as the tripartite earth: Asia, Africa, and Europe. This earth, swallowing up bodies, sends souls to Tartarus." Virgil described Cerberus as "ravenous" (fame rabida), and a rapacious Cerberus became proverbial. Thus Cerberus came to symbolize avarice, and so, for example, in Dante's Inferno, Cerberus is placed in the Third Circle of Hell, guarding over the gluttons, where he "rends the spirits, flays and quarters them," and Dante (perhaps echoing Servius' association of Cerberus with earth) has his guide Virgil take up handfuls of earth and throw them into Cerberus' "rapacious gullets." Constellation In the constellation Cerberus introduced by Johannes Hevelius in 1687, Cerberus is drawn as a three-headed snake, held in Hercules' hand (previously these stars had been depicted as a branch of the tree on which grew the Apples of the Hesperides). Snake genus In 1829 French naturalist Georges Cuvier gave the name Cerberus to a genus of Asian snakes, which are commonly called "dog-faced water snakes" in English. See also List of Greek mythological creatures Dormarch – part of the Cŵn Annwn Hellhound Ammit, a chthonic creature in Egyptian mythology Cadejo Notes References Apollodorus, Apollodorus, The Library, with an English Translation by Sir James George Frazer, F.B.A., F.R.S. in 2 Volumes. Cambridge, Massachusetts, Harvard University Press; London, William Heinemann Ltd. 1921. Online version at the Perseus Digital Library. Apuleius, Metamorphoses (The Golden Ass), Volume I: Books 1–6. Edited and translated by J. Arthur Hanson. Loeb Classical Library No. 44. Cambridge, Massachusetts: Harvard University Press, 1996. Online version at Harvard University Press. Aristophanes, Frogs, Matthew Dillon, Ed., Perseus Digital Library, Tufts University, 1995. Online version at the Perseus Digital Library. Bacchylides, Odes, translated by Diane Arnson Svarlien. 1991. Online version at the Perseus Digital Library. Bloomfield, Maurice, Cerberus, the Dog of Hades: The History of an Idea, Open Court Publishing Company, 1905. Online version at Internet Archive Bowra, C. M., Greek Lyric Poetry: From Alcman to Simonides, Clarendon Press, 2001. . Diodorus Siculus, Diodorus Siculus: The Library of History. Translated by C. H. Oldfather. Twelve volumes. Loeb Classical Library. Cambridge, Massachusetts: Harvard University Press; London: William Heinemann, Ltd. 1989. Euripides. Fragments: Oedipus-Chrysippus. Other Fragments. Edited and translated by Christopher Collard, Martin Cropp. Loeb Classical Library No. 506. Cambridge, Massachusetts: Harvard University Press, 2009. Euripides, Heracles, translated by E. P. Coleridge in The Complete Greek Drama, edited by Whitney J. Oates and Eugene O'Neill, Jr. Volume 1.
New York. Random House. 1938. Online version at the Perseus Digital Library. Fowler, R. L. (2000), Early Greek Mythography: Volume 1: Text and Introduction, Oxford University Press, 2000. . Fowler, R. L. (2013), Early Greek Mythography: Volume 2: Commentary, Oxford University Press, 2013. . Freeman, Kathleen, Ancilla to the Pre-Socratic Philosophers: A Complete Translation of the Fragments in Diels, Fragmente Der Vorsokratiker, Harvard University Press, 1983. . Gantz, Timothy, Early Greek Myth: A Guide to Literary and Artistic Sources, Johns Hopkins University Press, 1996, Two volumes: (Vol. 1), (Vol. 2). Hard, Robin, The Routledge Handbook of Greek Mythology: Based on H.J. Rose's "Handbook of Greek Mythology", Psychology Press, 2004, . Google Books. Harding, Phillip, The Story of Athens: The Fragments of the Local Chronicles of Attika, Routledge, 2007. . Hawes, Greta, Rationalizing Myth in Antiquity, OUP Oxford, 2014. . Hesiod, Theogony, in The Homeric Hymns and Homerica with an English Translation by Hugh G. Evelyn-White, Cambridge, Massachusetts, Harvard University Press; London, William Heinemann Ltd. 1914. Online version at the Perseus Digital Library. Homer, The Iliad with an English Translation by A.T. Murray, PhD in two volumes. Cambridge, Massachusetts, Harvard University Press; London, William Heinemann, Ltd. 1924. Online version at the Perseus Digital Library. Homer; The Odyssey with an English Translation by A.T. Murray, PH.D. in two volumes. Cambridge, Massachusetts, Harvard University Press; London, William Heinemann, Ltd. 1919. Online version at the Perseus Digital Library. Hopman, Marianne Govers, Scylla: Myth, Metaphor, Paradox, Cambridge University Press, 2013. . Horace, The Odes and Carmen Saeculare of Horace. John Conington. trans. London. George Bell and Sons. 1882. Online version at the Perseus Digital Library. Hyginus, Gaius Julius, The Myths of Hyginus. Edited and translated by Mary A. Grant, Lawrence: University of Kansas Press, 1960. Kirk, G. S. 1990 The Iliad: A Commentary: Volume 2, Books 5–8, . Lansing, Richard (editor), The Dante Encyclopedia, Routledge, 2010. . Lightfoot, J. L. Hellenistic Collection: Philitas. Alexander of Aetolia. Hermesianax. Euphorion. Parthenius. Edited and translated by J. L. Lightfoot. Loeb Classical Library No. 508. Cambridge, Massachusetts: Harvard University Press, 2010. . Online version at Harvard University Press. Lucan, Pharsalia, Sir Edward Ridley. London. Longmans, Green, and Co. 1905. Online version at the Perseus Digital Library. Markantonatos, Andreas, Tragic Narrative: A Narratological Study of Sophocles' Oedipus at Colonus, Walter de Gruyter, 2002. . Nimmo Smith, Jennifer, A Christian's Guide to Greek Culture: The Pseudo-nonnus Commentaries on Sermons 4, 5, 39 and 43. Liverpool University Press, 2001. . Ogden, Daniel (2013a), Drakōn: Dragon Myth and Serpent Cult in the Greek and Roman Worlds, Oxford University Press, 2013. . Ogden, Daniel (2013b), Dragons, Serpents, and Slayers in the Classical and early Christian Worlds: A sourcebook, Oxford University Press. . Ovid. Heroides. Amores. Translated by Grant Showerman. Revised by G. P. Goold. Loeb Classical Library 41. Cambridge, Massachusetts: Harvard University Press, 1977. . Online version at Harvard University Press. Ovid, Metamorphoses, Brookes More. Boston. Cornhill Publishing Co. 1922. Online version at the Perseus Digital Library. Papadopoulou, Thalia, Heracles and Euripidean Tragedy, Cambridge University Press, 2005. . 
Pausanias, Pausanias Description of Greece with an English Translation by W.H.S. Jones, Litt.D., and H.A. Ormerod, M.A., in 4 Volumes. Cambridge, Massachusetts, Harvard University Press; London, William Heinemann Ltd. 1918. Online version at the Perseus Digital Library. Pepin, Ronald E., The Vatican Mythographers, Fordham University Press, 2008. Pindar, Odes, Diane Arnson Svarlien. 1990. Online version at the Perseus Digital Library. Pipili, Maria, Laconian Iconography of the Sixth Century B.C., Oxford University, 1987. Plato, Republic Books 6–10, Translated by Paul Shorey, Cambridge, Massachusetts, Harvard University Press; London, William Heinemann Ltd. 1969. Online version at the Perseus Digital Library Plutarch. Lives, Volume I: Theseus and Romulus. Lycurgus and Numa. Solon and Publicola. Translated by Bernadotte Perrin. Loeb Classical Library No. 46. Cambridge, Massachusetts: Harvard University Press, 1914. . Theseus at the Perseus Digital Library. Propertius, Elegies, Edited and translated by G. P. Goold. Loeb Classical Library 18. Cambridge, Massachusetts: Harvard University Press, 1990. Online version at Harvard University Press. Quintus Smyrnaeus, Quintus Smyrnaeus: The Fall of Troy, Translator: A.S. Way; Harvard University Press, Cambridge MA, 1913. Internet Archive Room, Adrian, Who's Who in Classical Mythology, Gramercy Books, 2003. . Schefold, Karl (1966), Myth and Legend in Early Greek Art, London, Thames and Hudson. Schefold, Karl (1992), Gods and Heroes in Late Archaic Greek Art, assisted by Luca Giuliani, Cambridge University Press, 1992. . Seneca, Tragedies, Volume I: Hercules. Trojan Women. Phoenician Women. Medea. Phaedra. Edited and translated by John G. Fitch. Loeb Classical Library No. 62. Cambridge, Massachusetts: Harvard University Press, 2002. . Online version at Harvard University Press. Seneca, Tragedies, Volume II: Oedipus. Agamemnon. Thyestes. Hercules on Oeta. Octavia. Edited and translated by John G. Fitch. Loeb Classical Library No. 78. Cambridge, Massachusetts: Harvard University Press, 2004. . Online version at Harvard University Press. Smallwood, Valerie, "M. Herakles and Kerberos (Labour XI)" in Lexicon Iconographicum Mythologiae Classicae (LIMC) V.1 Artemis Verlag, Zürich and Munich, 1990. . pp. 85–100. Sophocles, Women of Trachis, Translated by Robert Torrance. Houghton Mifflin. 1966. Online version at the Perseus Digital Library. Statius, Statius with an English Translation by J. H. Mozley, Volume I, Silvae, Thebaid, Books I–IV, Loeb Classical Library No. 206, London: William Heinemann, Ltd., New York: G. P. Putnam's Sons, 1928. . Internet Archive Statius, Statius with an English Translation by J. H. Mozley, Volume II, Thebaid, Books V–XII, Achilleid, Loeb Classical Library No. 207, London: William Heinemann, Ltd., New York: G. P. Putnam's Sons, 1928. . Internet Archive Stern, Jacob, Palaephatus Περὶ Ἀπίστων, On Unbelievable Tales, Bolchazy-Carducci Publishers, 1996. . Trypanis, C. A., Gelzer, Thomas; Whitman, Cedric, CALLIMACHUS, MUSAEUS, Aetia, Iambi, Hecale and Other Fragments. Hero and Leander, Harvard University Press, 1975. . Tzetzes, Chiliades, editor Gottlieb Kiessling, F.C.G. Vogel, 1826. (English translation, Books II–IV, by Gary Berkowitz. Internet Archive). Virgil, Aeneid, Theodore C. Williams. trans. Boston. Houghton Mifflin Co. 1910. Online version at the Perseus Digital Library Virgil, Bucolics, Aeneid, and Georgics of Vergil. J. B. Greenough. Boston. Ginn & Co. 1900.
Online version at the Perseus Digital Library West, David, Horace, Odes 3, Oxford University Press, 2002. . West, M. L. (2003), Greek Epic Fragments: From the Seventh to the Fifth Centuries BC. Edited and translated by Martin L. West. Loeb Classical Library No. 497. Cambridge, Massachusetts: Harvard University Press, 2003. Online version at Harvard University Press. Whitbread, Leslie George, Fulgentius the Mythographer. Columbus: Ohio State University Press, 1971. Woodford, Susan, Spier, Jeffrey, "Kerberos", in Lexicon Iconographicum Mythologiae Classicae (LIMC) VI.1 Artemis Verlag, Zürich and Munich, 1992. . pp. 24–32. Xenophon, Anabasis in Xenophon in Seven Volumes, 3. Carleton L. Brownson. Harvard University Press, Cambridge, Massachusetts; William Heinemann, Ltd., London. 1922. Online version at the Perseus Digital Library. External links Characters in Book VI of the Aeneid Greek underworld Mythological dogs Mythological hybrids Symbols of Hades Labours of Hercules Mythological canines Deeds of Hermes Monsters in Greek mythology Mythical many-headed creatures Greek legendary creatures Dogs in religion Metamorphoses into flowers in Greek mythology
3,040
6,719
https://en.wikipedia.org/wiki/Columbia%2C%20Missouri
Columbia, Missouri
Columbia is a city in the U.S. state of Missouri. It is the county seat of Boone County and home to the University of Missouri. Founded in 1821, it is the principal city of the five-county Columbia metropolitan area. It is Missouri's fourth most populous and fastest-growing city, with an estimated 126,254 residents in 2020. As a Midwestern college town, Columbia has a reputation for progressive politics, persuasive journalism, and public art. The tripartite establishment of Stephens College (1833), the University of Missouri (1839), and Columbia College (1851), which surround the city's downtown to the east, south, and north, has made the city a center of learning. At its center is 8th Street (also known as the Avenue of the Columns), which connects Francis Quadrangle and Jesse Hall to the Boone County Courthouse and City Hall. Though originally an agricultural town, Columbia now has education as its primary economic concern, with secondary interests in the healthcare, insurance, and technology sectors; it has never been a manufacturing center. Companies like Shelter Insurance, Carfax, Veterans United Home Loans, and Slackers CDs and Games were founded in the city. Cultural institutions include the State Historical Society of Missouri, the Museum of Art and Archaeology, and the annual True/False Film Festival and Roots N Blues Festival. The Missouri Tigers, the state's only major college athletic program, play football at Faurot Field and basketball at Mizzou Arena as members of the Southeastern Conference. The city rests upon the forested hills and rolling prairies of Mid-Missouri, near the Missouri River valley, where the Ozark Mountains begin to transform into plains and savanna. Limestone forms bluffs and glades, while rain dissolves the bedrock, creating caves and springs which water the Hinkson, Roche Perche, and Bonne Femme creeks. Surrounding the city, Rock Bridge Memorial State Park, Mark Twain National Forest, and Big Muddy National Fish and Wildlife Refuge form a greenbelt preserving sensitive and rare environments. The Columbia Agriculture Park is home to the Columbia Farmers Market. The first humans who entered the area, at least 12,000 years ago, were nomadic hunters. Later, woodland tribes lived in villages along waterways and built mounds in high places. The Osage and Missouria nations were displaced by the exploration of French traders and the rapid settlement of American pioneers. The latter arrived by the Boone's Lick Road and hailed from the culture of the Upland South, especially Virginia, Kentucky, and Tennessee. From 1812, the Boonslick area played a pivotal role in Missouri's early history and the nation's westward expansion. German, Irish, and other European immigrants soon joined. The modern populace is unusually diverse, with over 8% of residents foreign-born. White and black people are the largest racial groups, and people of Asian descent are the third-largest. The city has been called the "Athens of Missouri" for its classic beauty and educational emphasis, but is more commonly called "CoMo". History Columbia's origins begin with the settlement of American pioneers from Kentucky and Virginia in an early-1800s region known as the Boonslick. Before 1815, settlement in the region was confined to small log forts due to the threat of Native American attack during the War of 1812. When the war ended, settlers came on foot, horseback, and wagon, often moving entire households along the Boone's Lick Road and sometimes bringing enslaved African Americans.
By 1818 it was clear that the increased population would require a new county to be created from territorial Howard County. The Moniteau Creek on the west and Cedar Creek on the east were obvious natural boundaries. Believing it was only a matter of time before a county seat was chosen, the Smithton Land Company was formed to purchase land to establish the village of Smithton (near the present-day intersection of Walnut and Garth). In 1819 Smithton was a small cluster of log cabins in an ancient forest of oak and hickory; chief among them was the cabin of Richard Gentry, a trustee of the Smithton Company who would become the first mayor of Columbia. In 1820, Boone County was formed and named after the recently deceased explorer Daniel Boone. The Missouri Legislature appointed John Gray, Jefferson Fulcher, Absalom Hicks, Lawrence Bass, and David Jackson as commissioners to select and establish a permanent county seat. Smithton never had more than twenty people, and it was quickly realized that well digging was difficult because of the bedrock. Springs were discovered across the Flat Branch Creek, so in the spring of 1821 Columbia was laid out, and the inhabitants of Smithton moved their cabins to the new town. The first house in Columbia was built by Thomas Duly in 1820 at what became Fifth and Broadway. Columbia's permanence was ensured when it was chosen as county seat in 1821 and the Boone's Lick Road was rerouted down Broadway. The roots of Columbia's three economic foundations—education, medicine, and insurance—can be traced to the city's incorporation in 1821. Original plans for the town set aside land for a state university. In 1833, Columbia Baptist Female College opened, which later became Stephens College. Columbia College, distinct from today's institution of the same name and later to become the University of Missouri, was founded in 1839. When the state legislature decided to establish a state university, Columbia raised three times as much money as any competing city, and James S. Rollins donated the land that is today the Francis Quadrangle. Soon other educational institutions were founded in Columbia, such as Christian Female College, the first college for women west of the Mississippi, which later became Columbia College. The city benefited from being a stagecoach stop on the Santa Fe and Oregon trails, and later from the Missouri–Kansas–Texas Railroad. In 1822, William Jewell set up the first hospital. In 1830, the first newspaper began; in 1832, the first theater in the state was opened; and in 1835, the state's first agricultural fair was held. By 1839, Boone County's population of 13,000 and its wealth were exceeded in Missouri only by St. Louis County, which, at that time, included the City of St. Louis. Columbia's infrastructure was relatively untouched by the Civil War. As a slave state, Missouri had many residents with Southern sympathies, but it stayed in the Union. The majority of the city was pro-Union; however, the surrounding agricultural areas of Boone County and the rest of central Missouri were decidedly pro-Confederate. Because of this, the University of Missouri became a base from which Union troops operated. No battles were fought within the city, because the presence of Union troops dissuaded Confederate guerrillas from attacking, though several major battles occurred at nearby Boonville and Centralia.
After Reconstruction, race relations in Columbia followed the Southern pattern of increasing violence by whites against blacks, in efforts to suppress voting and free movement: George Burke, a black man who worked at the university, was lynched in 1889. In the spring of 1923, James T. Scott, an African-American janitor at the University of Missouri, was arrested on allegations of raping a university professor's daughter. He was taken from the county jail and lynched on April 29 before a white mob of several hundred, hanged from the Old Stewart Road Bridge. In the 21st century, a number of efforts have been undertaken to recognize Scott's death. In 2010 his death certificate was changed to reflect that he was never tried or convicted of the charges, and that he had been lynched. In 2011 a headstone was put at his grave at Columbia Cemetery; it includes his wife's and parents' names and dates, to provide a fuller account of his life. In 2016, a marker was erected at the lynching site to memorialize Scott. In 1901, Rufus Logan established The Columbia Professional newspaper to serve Columbia's large African American population. In 1963, the University of Missouri System and the Columbia College system established their headquarters in Columbia. The insurance industry also became important to the local economy, as several companies established headquarters in Columbia, including Shelter Insurance, Missouri Employers Mutual, and Columbia Insurance Group. State Farm Insurance has a regional office in Columbia. In addition, the now-defunct Silvey Insurance was a large local employer. Columbia became a transportation crossroads when U.S. Route 63 and U.S. Route 40 (which was improved as present-day Interstate 70) were routed through the city. Soon after, the city opened the Columbia Regional Airport. By 2000, the city's population was nearly 85,000. Columbia was in the path of totality for the solar eclipse of August 21, 2017, and expected upwards of 400,000 visitors coming to view the eclipse. Geography Columbia, in northern mid-Missouri, lies roughly midway between St. Louis and Kansas City, north of the state capital of Jefferson City. The city is near the Missouri River, between the Ozark Plateau and the Northern Plains. According to the United States Census Bureau, the city's total area is almost entirely land, with only a small area of water. Topography The city generally slopes from its highest point in the northeast to its lowest point in the southwest, towards the Missouri River. Prominent tributaries of the river are Perche Creek, Hinkson Creek, and Flat Branch Creek. Along these and other creeks in the area can be found large valleys, cliffs, and cave systems, such as that in Rock Bridge State Park just south of the city. These creeks have carved numerous stream valleys, giving Columbia hilly terrain similar to the Ozarks while also featuring prairie flatland typical of northern Missouri. Columbia also operates several greenbelts with trails and parks throughout town. Animal life Large mammals found in the city include urbanized coyotes, red foxes, and numerous whitetail deer. Eastern gray squirrels and other rodents are abundant, as are cottontail rabbits and the nocturnal opossum and raccoon. Large bird species are abundant in parks and include the Canada goose and mallard duck, as well as shorebirds, including the great egret and great blue heron. Turkeys are also common in wooded areas and can occasionally be seen on the MKT recreation trail.
Populations of bald eagles are found by the Missouri River. The city is on the Mississippi Flyway, used by migrating birds, and has a large variety of small bird species common to the eastern U.S. The Eurasian tree sparrow, an introduced species, is limited in North America to the counties surrounding St. Louis. Columbia has large areas of forested and open land, and many of these areas are home to wildlife. Climate Columbia has a humid continental climate (Köppen Dfa) marked by sharp seasonal contrasts in temperature, and is in USDA Plant Hardiness Zone 6a. The monthly daily average temperature ranges from near freezing in January to the upper 70s °F in July; the high reaches or exceeds 90 °F on an average of 35 days per year and 100 °F on two days, while two nights of sub-0 °F lows can be expected. Precipitation tends to be greatest and most frequent in the latter half of spring, when severe weather is also most common. Snow falls mostly from December to March, with occasional November accumulation and rarer falls in April; seasonal snow accumulation has varied widely, from a record low in 2005–06 to a record high in 1977–78. The record low temperature was set on February 12, 1899, and the record high on July 12 and 14, 1954. Readings near either extreme are uncommon, the last occurrences being January 7, 2014 and July 31, 2012. Cityscape Columbia's most significant and well-known architecture is found in buildings located in its downtown area and on the university campuses. The University of Missouri's Jesse Hall and the neo-gothic Memorial Union have become icons of the city. The David R. Francis Quadrangle is an example of Thomas Jefferson's academic village concept. Four historic districts located within the city are listed on the National Register of Historic Places: Downtown Columbia, the East Campus Neighborhood, Francis Quadrangle, and the North Ninth Street Historic District. The downtown skyline is relatively low and is dominated by the 10-story Tiger Hotel and the 15-story Paquin Tower. Downtown Columbia is an area of approximately one square mile surrounded by the University of Missouri on the south, Stephens College to the east, and Columbia College on the north. The area serves as Columbia's financial and business district. Since the early 21st century, a large number of high-rise apartment complexes have been built in downtown Columbia. Many of these buildings also offer mixed-use business and retail space on the lower levels. These developments have not been without criticism, with some expressing concern that the buildings hurt the historic feel of the area, or that the city does not yet have the infrastructure to support them. The city's historic residential core lies in a ring around downtown, extending especially to the west along Broadway, and south into the East Campus Neighborhood. The city government recognizes 63 neighborhood associations. The city's most dense commercial areas are primarily along Interstate 70, U.S. Route 63, Stadium Boulevard, Grindstone Parkway, and Downtown. Demographics 2010 census As of the census of 2010, 108,500 people, 43,065 households, and 21,418 families resided in the city, and there were 46,758 housing units. The racial makeup of the city was 79.0% White, 11.3% African American, 0.3% Native American, 5.2% Asian, 0.1% Pacific Islander, 1.1% from other races, and 3.1% from two or more races. Hispanic or Latino of any race were 3.4% of the population.
There were 43,065 households, of which 26.1% had children under the age of 18 living with them, 35.6% were married couples living together, 10.6% had a female householder with no husband present, 3.5% had a male householder with no wife present, and 50.3% were non-families. 32.0% of all households were made up of individuals, and 6.6% had someone living alone who was 65 years of age or older. The average household size was 2.32 and the average family size was 2.94. In the city the population was spread out, with 18.8% of residents under the age of 18; 27.3% between the ages of 18 and 24; 26.7% from 25 to 44; 18.6% from 45 to 64; and 8.5% who were 65 years of age or older. The median age in the city was 26.8 years. The gender makeup of the city was 48.3% male and 51.7% female. 2000 census As of the census of 2000, there were 84,531 people, 33,689 households, and 17,282 families residing in the city. The population density was 1,592.8 people per square mile (615.0/km²). There were 35,916 housing units at an average density of 676.8 per square mile (261.3/km²). The racial makeup of the city was 81.54% White, 10.85% Black or African American, 0.39% Native American, 4.30% Asian, 0.04% Pacific Islander, 0.81% from other races, and 2.07% from two or more races. Hispanic or Latino of any race were 2.05% of the population. There were 33,689 households, out of which 26.1% had children under the age of 18 living with them, 38.2% were married couples living together, 10.3% had a female householder with no husband present, and 48.7% were non-families. 33.1% of all households were made up of individuals, and 6.5% had someone living alone who was 65 years of age or older. The average household size was 2.26 and the average family size was 2.92. In the city, the population was spread out, with 19.7% under the age of 18, 26.7% from 18 to 24, 28.7% from 25 to 44, 16.2% from 45 to 64, and 8.6% who were 65 years of age or older. The median age was 27 years. For every 100 females, there were 91.8 males. For every 100 females age 18 and over, there were 89.1 males. The median income for a household in the city was $33,729, and the median income for a family was $52,288. Males had a median income of $34,710 versus $26,694 for females. The per capita income for the city was $19,507. About 9.4% of families and 19.2% of the population were below the poverty line, including 14.8% of those under age 18 and 5.2% of those age 65 or over. However, traditional statistics of income and poverty can be misleading when applied to cities with high student populations, such as Columbia. Economy Columbia's economy is historically dominated by education, healthcare, and insurance. Jobs in government are also common, either in Columbia or a half-hour south in Jefferson City. The Columbia Regional Airport and the Missouri River Port of Rocheport connect the region with trade and transportation. With a Gross Metropolitan Product of $9.6 billion in 2018, Columbia's economy makes up 3% of the Gross State Product of Missouri. Columbia's metro area economy is slightly larger than the economy of Rwanda. Insurance corporations headquartered in Columbia include Shelter Insurance and the Columbia Insurance Group. Other organizations include StorageMart, Veterans United Home Loans, MFA Incorporated, the Missouri State High School Activities Association, and MFA Oil. Companies such as Socket, Datastorm Technologies, Inc. (now defunct), Slackers CDs and Games, Carfax, and MBS Textbook Exchange were all founded in Columbia.
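As a quick sanity check on two sets of figures quoted above, the following illustrative Python snippet recomputes the size of Missouri's gross state product implied by the quoted metro share, and verifies the census density conversions; the conversion factor is a standard constant, and nothing here comes from the city's own reports.

    GMP_BILLION = 9.6    # Columbia's 2018 gross metropolitan product, $bn (quoted above)
    STATE_SHARE = 0.03   # quoted share of Missouri's gross state product
    print(f"Implied Missouri GSP: about ${GMP_BILLION / STATE_SHARE:.0f} billion")

    SQ_KM_PER_SQ_MILE = 2.589988  # standard unit conversion
    for per_sq_mile in (1592.8, 676.8):  # 2000 census densities quoted above
        print(f"{per_sq_mile}/sq mi = {per_sq_mile / SQ_KM_PER_SQ_MILE:.1f}/km^2")
    # -> 615.0/km^2 and 261.3/km^2, matching the parenthetical census figures

The implied state product of roughly $320 billion is an inference from the quoted share, not a figure reported in the document.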
Top employers According to Columbia's 2018 Comprehensive Annual Financial Report, the top employers in the city are: Culture The Missouri Theatre Center for the Arts and Jesse Auditorium are Columbia's largest fine arts venues. Ragtag Cinema annually hosts the True/False Film Festival. In 2008, filmmaker Todd Sklar completed the film Box Elder, which was filmed entirely in and around Columbia and the University of Missouri. The North Village Arts District, located on the north side of downtown, is home to galleries, restaurants, theaters, bars, music venues, and the Missouri Contemporary Ballet. The University of Missouri's Museum of Art and Archaeology displays 14,000 works of art and archaeological objects in five galleries at no charge to the public. Libraries include the Columbia Public Library, the University of Missouri Libraries, with over three million volumes in Ellis Library, and the State Historical Society of Missouri. Music The "We Always Swing" Jazz Series and the Roots N Blues Festival are held in Columbia. "9th Street Summerfest" (now hosted in Rose Park at Rose Music Hall) closes part of that street several nights each summer to hold outdoor performances and has featured Willie Nelson (2009), Snoop Dogg (2010), The Flaming Lips (2010), "Weird Al" Yankovic (2013), and others. The "University Concert Series" regularly includes musicians and dancers from various genres, typically in Jesse Hall. Other musical venues in town include the Missouri Theatre, the university's multipurpose Hearnes Center, the university's Mizzou Arena, The Blue Note, and Rose Music Hall. Shelter Gardens, a park on the campus of Shelter Insurance headquarters, also hosts outdoor performances during the summer. The University of Missouri School of Music attracts hundreds of musicians to Columbia; student performances are held in Whitmore Recital Hall. Non-profit classical music organizations include the "Odyssey Chamber Music Series", the "Missouri Symphony", the "Columbia Community Band", and the "Columbia Civic Orchestra". Founded in 2006, the "Plowman Chamber Music Competition" is a biennial competition held in March or April of odd-numbered years, considered one of the top five chamber music competitions in the nation. Theater Columbia offers many opportunities to watch and perform in theatrical productions. Ragtag Cinema is one of the best-known theaters in Columbia. The city is home to Stephens College, a private institution known for the performing arts. Its season includes multiple plays and musicals. The University of Missouri and Columbia College also present multiple productions a year. The city's three public high schools are also known for their productions. Rock Bridge High School performs a musical in November and two plays in the spring. Hickman High School performs a similar season, with two musical performances (one in the fall and one in the spring) and two plays (one in the winter and one at the end of the school year). The newest high school, Battle High, opened in 2013 and is also known for its productions. Battle presents a musical in the fall and a play in the spring, along with improv nights and more productions throughout the year. The city is also home to the indoor/outdoor Maplewood Barn Theatre in Nifong Park and other community theatre programs such as Columbia Entertainment Company, Talking Horse Productions, Pace Youth Theatre and TRYPS.
Sports The University of Missouri's sports teams, the Missouri Tigers, play a significant role in the city's sports culture. Faurot Field at Memorial Stadium, which has a capacity of 71,168, hosts home football games. The Hearnes Center and Mizzou Arena are two other large sport and event venues, the latter being the home arena for Mizzou's basketball team. Taylor Stadium is host to their baseball team and was the regional host for the 2007 NCAA Baseball Championship. Columbia College fields several men's and women's collegiate sports teams as well. In 2007, Columbia hosted the National Association of Intercollegiate Athletics Volleyball National Championship, in which the Lady Cougars participated. Columbia also hosts the Show-Me State Games, a non-profit program of the Missouri Governor's Council on Physical Fitness and Health. They are the largest state games in the United States. Because Columbia is situated midway between St. Louis and Kansas City, residents often hold allegiances to the professional sports teams based in those cities, such as the St. Louis Cardinals, the Kansas City Royals, the Kansas City Chiefs, the St. Louis Blues, Sporting Kansas City, and St. Louis City SC. Cuisine Columbia has many bars and restaurants that provide diverse styles of cuisine, due in part to having three colleges. One such establishment is the historic Booches bar, restaurant, and pool hall, which was established in 1884 and is frequented by college students. Shakespeare's Pizza is known across the nation for its college town pizza. Parks and recreation Throughout the city are many parks and trails for public use. Among the most frequented is the MKT, a spur that connects to the Katy Trail just south of Columbia proper. The MKT ranked second in the nation for "Best Urban Trail" in the 2015 USA Today's 10 Best Readers' Choice Awards. This 10-foot-wide trail, built on the old railbed of the MKT railroad, begins in downtown Columbia in Flat Branch Park at 4th and Cherry Streets. The all-weather crushed limestone surface provides opportunities for walking, jogging, running, and bicycling. Stephens Lake Park is the highlight of Columbia's park system and is known for its 11-acre fishing/swimming lake, mature trees, and historical significance in the community. It serves as the center for outdoor winter sports, a variety of community festivals such as the Roots N Blues Festival, and outdoor concert series at the amphitheater. Stephens Lake has reservable shelters, playgrounds, a swimming beach and spraygrounds, art sculptures, waterfalls, and walking trails. Rock Bridge State Park is open year-round, giving visitors the chance to scramble, hike, and bicycle through a scenic environment. Rock Bridge State Park contains some of the most popular hiking trails in the state, including the Gans Creek Wild Area. Columbia is home to the Harmony Bends Championship Disc Golf Course in Strawn Park, which was named the 2017 Disc Golf Course of the Year by DGCourseReview.com. As of June 2022, Harmony Bends continued to rank on DGCourseReview.com as the No. 1 public course and the No. 2 overall course in the United States. Media The city has two daily morning newspapers: the Columbia Missourian and the Columbia Daily Tribune. The Missourian is directed by professional editors and staffed by Missouri School of Journalism students who do reporting, design, copy editing, information graphics, photography, and multimedia.
The Missourian publishes the weekly city magazine, Vox. With a daily circulation of nearly 20,000, the Daily Tribune is the most widely read newspaper in central Missouri. The University of Missouri has an official independent bi-weekly student newspaper, The Maneater, and a quarterly literary magazine, The Missouri Review. The now-defunct Prysms Weekly was also published in Columbia. In late 2009, KCOU News launched full operations out of KCOU 88.1 FM on the MU Campus. The entirely student-run news organization airs a weekday newscast, The Pulse. The city has four television channels. Columbia Access Television (CAT or CAT-TV) is the public access channel. CPSTV is the education access channel, managed by Columbia Public Schools as a function of the Columbia Public Schools Community Relations Department. The Government Access channel broadcasts City Council, Planning and Zoning Commission, and Board of Adjustment meetings. Television Radio Columbia has 19 radio stations, as well as stations licensed from Jefferson City, Macon, and Lake of the Ozarks. AM KFAL 900 kHz • Country KWOS 950 kHz • News/Talk KFRU 1400 kHz • News/Talk KTGR 1580 kHz • Sports (ESPN Radio) FM KCOU 88.1 MHz • College KOPN 89.5 MHz • Public KMUC 90.5 MHz • Classical KBIA 91.3 MHz • News (NPR) KMFC 92.1 MHz • Christian (K-Love) KWJK 93.1 MHz • Variety (JACK FM) KSSZ 93.9 MHz • News/Talk KWWR 95.7 MHz • Country KCMQ 96.7 MHz • Classic Rock KDVC 98.3 MHz • Classic Hits KCLR 99.3 MHz • Country KPLA 101.5 MHz • Variety KBXR 102.3 MHz • Alternative KZZT 105.5 MHz • Classic Rock KOQL 106.1 MHz • Top 40 KTXY 106.9 MHz • Top 40 Government and politics Columbia's current government was established by a home rule charter adopted by voters on November 11, 1974, which established a council-manager government that vested power in the city council. The city council has seven members: six elected from Columbia's six single-member districts, or wards, and an at-large member, the mayor, who is elected by all city voters. The mayor receives a $9,000 annual stipend, and the six other members receive a $6,000 annual stipend. They are elected to staggered three-year terms. As well as serving as a voting member of the city council, the mayor is recognized as the head of city government for ceremonial purposes. Chief executive authority is vested in a hired city manager, who oversees the government's day-to-day operations. Columbia is the county seat of Boone County, and houses the county court and government center. The city is in Missouri's 4th congressional district. The 19th Missouri State Senate district covers all of Boone County. There are five Missouri House of Representatives districts (9, 21, 23, 24, and 25) in the city. The Columbia Police Department provides law enforcement across the city, while the Columbia Fire Department provides fire protection. The University of Missouri Police Department also patrols areas on and around the University of Missouri campus and has jurisdiction throughout the state. Additionally, the Boone County Sheriff's Department, the law enforcement agency for the county, regularly patrols the city. The Public Service Joint Communications Center coordinates efforts between these organizations as well as the Boone County Fire Protection District, which operates Urban Search and Rescue Missouri Task Force 1.
The population generally supports progressive causes, such as recycling programs and the decriminalization of cannabis both for medical and recreational use at the municipal level, though the scope of the latter of the two cannabis ordinances has since been restricted. The city is one of only four in the state to offer medical benefits to same-sex partners of city employees. The health plan also extends benefits to unmarried heterosexual domestic partners of city employees. On October 10, 2006, the city council approved an ordinance to prohibit smoking in public places, including restaurants and bars. The ordinance was passed over protest, and several amendments reflect this. Over half of residents possess at least a bachelor's degree, while over a quarter hold a graduate degree. Columbia is the 13th most highly educated municipality in the United States. Education Columbia and much of the surrounding area lie within the Columbia Public School District. The district enrolled more than 18,000 students and had a budget of $281 million for the 2019–20 school year. It is above the state average in both attendance percentage and graduation rate. The district operates four public high schools covering grades 9–12: David H. Hickman High School, Rock Bridge High School, Muriel Battle High School, and Frederick Douglass High School. Rock Bridge is one of two Missouri high schools to receive a silver medal from U.S. News & World Report, putting it in the top 3% of all high schools in the nation. Hickman has been on Newsweek magazine's list of the top 1,300 schools in the country for the past three years and has more named presidential scholars than any other public high school in the US. There are also several private high schools located in the city, including Christian Fellowship School, Columbia Independent School, Heritage Academy, Christian Chapel Academy, and Tolton High School. CPS also manages seven middle schools: Jefferson, West, Oakland, Gentry, Smithton, Lange, and John Warner. John Warner Middle School first opened for the 2020–21 school year. The city has three institutions of higher education: the University of Missouri, Stephens College, and Columbia College, all of which surround Downtown Columbia. The city is the headquarters of the University of Missouri System, which operates campuses in St. Louis, Kansas City, and Rolla. Moberly Area Community College, Central Methodist University, and William Woods University also operate satellite campuses in Columbia. Infrastructure Transportation Columbia Transit provides public bus and paratransit service, and is owned and operated by the city. In 2008, 1,414,400 passengers boarded along the system's six fixed routes and nine University of Missouri shuttle routes, and 27,000 boarded the paratransit service. The system has experienced steady growth in service and technology. A $3.5 million project to renovate and expand the Wabash Station, a rail depot built in 1910 and converted into the city's transit center in the mid-1980s, was completed in the summer of 2007. In 2007, a Transit Master Plan was created to address the future transit needs of the city and county with a comprehensive plan to add infrastructure in three key phases. The five- to 15-year plan intends to add service along the southwest, southeast and northeast sections of Columbia and develop alternative transportation models for Boone County. The city is served by Columbia Regional Airport.
The closest rail station is Jefferson City station, in the state capital, Jefferson City. Columbia is also known for its MKT Trail, a spur of the Katy Trail State Park, which allows foot and bike traffic across the city and, conceivably, the state. It consists of a soft gravel surface for running and biking. Columbia is also preparing to begin construction of several new bike paths and street bike lanes, funded by a $25 million grant from the federal government. The city is also served by American Airlines and United Airlines at the Columbia Regional Airport, the only commercial airport in mid-Missouri. I-70 (concurrent with US 40) and US 63 are the two main freeways used for travel to and from Columbia. Within the city, there are also three state highways: Routes 763 (Rangeline Street & College Avenue), 163 (Providence Road), and 740 (Stadium Boulevard). Rail service is provided by the city-owned Columbia Terminal Railroad (COLT), which runs from the north side of Columbia to Centralia and a connection to the Norfolk Southern Railway. Columbia would be at the center of the proposed Missouri Hyperloop, which would reduce travel times to Kansas City and St. Louis to around 15 minutes. Health systems Health care is a major part of Columbia's economy, with nearly one in six people working in a health-care-related profession and a physician density about three times the United States average. The city's hospitals and supporting facilities are a large referral center for the state, and medical-related trips to the city are common. There are three hospital systems within the city and five hospitals with a total of 1,105 beds. The University of Missouri Health Care operates three hospitals in Columbia: the University of Missouri Hospital, the University of Missouri Women's and Children's Hospital (formerly Columbia Regional Hospital), and the Ellis Fischel Cancer Center. Boone Hospital Center is administered by BJC Healthcare and operates several clinics as well as outpatient locations. The Harry S. Truman Memorial Veterans' Hospital, adjacent to University Hospital, is administered by the United States Department of Veterans Affairs. A large number of medical-related industries operate in Columbia. The University of Missouri School of Medicine uses university-owned facilities as teaching hospitals. The University of Missouri Research Reactor Center operates the largest university research reactor in the United States and produces radioisotopes used in nuclear medicine. The center serves as the sole supplier of the active ingredients in two U.S. Food and Drug Administration-approved radiopharmaceuticals and produces fluorine-18, used in PET imaging, with its cyclotron. Sister cities In accordance with the Columbia Sister Cities Program, which operates in conjunction with Sister Cities International, Columbia has been paired with five international sister cities in an attempt to foster cross-cultural understanding: Kutaisi, Georgia; Hakusan, Ishikawa, Japan; Sibiu, Romania; Suncheon, South Jeolla, South Korea; and Laoshan, Shandong, China. See also List of people from Columbia, Missouri History of the University of Missouri National Register of Historic Places listings in Boone County, Missouri List of tallest buildings in Columbia, Missouri The Big Tree Notes References Bibliography Stephens, E. W. (1875). "History of Boone County." An Illustrated Historical Atlas of Boone County, Missouri.
Philadelphia: Edwards Brothers. Sapp, David (2000). "Boone County Chronicles". Columbia: Boone County Historical Society. Brownlee, Richard S. (1956). "The Big Moniteau Bluff Pictographs in Boone County, MO". Missouri Archaeologist 18(4): 49–54. Viles, Jonas. The University of Missouri, 1839–1939. E.W. Stephens Publishing Company. External links Official city government website Columbia Convention & Visitors Bureau Columbia Chamber of Commerce Historic maps of Columbia in the Sanborn Maps of Missouri Collection at the University of Missouri Cities in Missouri Cities in Boone County, Missouri Populated places established in 1821 County seats in Missouri Columbia metropolitan area (Missouri) Busking venues Academic enclaves 1821 establishments in Missouri
3,049
6,742
https://en.wikipedia.org/wiki/Central%20Asia
Central Asia
Central Asia, also known as Middle Asia, is a region of Asia that stretches from the Caspian Sea in the west to western China and Mongolia in the east, and from Afghanistan and Iran in the south to Russia in the north. It includes the former Soviet republics of Kazakhstan, Kyrgyzstan, Tajikistan, Turkmenistan, and Uzbekistan, which are colloquially referred to as the "-stans" because the countries all have names ending with the Persian suffix "-stan", meaning "land of". The current geographical location of Central Asia was formerly part of the historic region of Turkestan, also known as Turan. In the pre-Islamic and early Islamic eras ( and earlier), Central Asia was inhabited predominantly by Iranian peoples, populated by Eastern Iranian-speaking Bactrians, Sogdians, Chorasmians and the semi-nomadic Scythians and Dahae. After expansion by Turkic peoples, Central Asia also became the homeland for the Kazakhs, Uzbeks, Tatars, Turkmen, Kyrgyz, and Uyghurs; Turkic languages largely replaced the Iranian languages spoken in the area, with the exception of Tajikistan and areas where Tajik is spoken. Central Asia was historically closely tied to the Silk Road trade routes, acting as a crossroads for the movement of people, goods, and ideas between Europe and the Far East. From the mid-19th century until almost the end of the 20th century, Central Asia was colonised by the Russians and incorporated into the Russian Empire, and later the Soviet Union, which led to Russians and other Slavs emigrating into the area. Modern-day Central Asia is home to a large population of European settlers, who mostly live in Kazakhstan: 7 million Russians, 500,000 Ukrainians, and about 170,000 Germans. Stalinist-era forced deportation policies also mean that over 300,000 Koreans live there. As of 2019, Central Asia has a population of about 72 million people in five countries: Kazakhstan (pop. million), Kyrgyzstan ( million), Tajikistan ( million), Turkmenistan ( million), and Uzbekistan (35 million). Definitions One of the first geographers to mention Central Asia as a distinct region of the world was Alexander von Humboldt. The borders of Central Asia are subject to multiple definitions. Historically, political geography and culture have been two significant parameters widely used in scholarly definitions of Central Asia. Humboldt's definition comprised every country between 5° north and 5° south of the latitude 44.5°N. Humboldt mentions some geographic features of this region, which include the Caspian Sea in the west, the Altai mountains in the north, and the Hindu Kush and Pamir mountains in the south. He did not give an eastern border for the region. His legacy is still seen: Humboldt University of Berlin, named after him, offers a course in Central Asian Studies. The Russian geographer Nikolaĭ Khanykov questioned the latitudinal definition of Central Asia and preferred a physical one: all countries located in the landlocked region, including Afghanistan, Khorasan, Kyrgyzstan, Tajikistan, Turkmenistan, Uyghuristan (Xinjiang), and Uzbekistan. Russian culture has two distinct terms: Средняя Азия (Srednyaya Aziya or "Middle Asia", the narrower definition, which includes only those traditionally non-Slavic, Central Asian lands that were incorporated within those borders of historical Russia) and Центральная Азия (Tsentralnaya Aziya or "Central Asia", the wider definition, which includes Central Asian lands that have never been part of historical Russia).
The latter definition includes Afghanistan and 'East Turkestan'. The most limited definition was the official one of the Soviet Union, which defined Middle Asia as consisting solely of Kyrgyzstan, Tajikistan, Turkmenistan, and Uzbekistan, omitting Kazakhstan. Soon after the dissolution of the Soviet Union in 1991, the leaders of the four former Soviet Central Asian Republics met in Tashkent and declared that the definition of Central Asia should include Kazakhstan as well as the original four included by the Soviets. Since then, this has become the most common definition of Central Asia. The UNESCO History of the Civilizations of Central Asia, published in 1992, defines the region as "Afghanistan, northeastern Iran, northern and central Pakistan, northern India, western China, Mongolia and the former Soviet Central Asian republics". An alternative method is to define the region based on ethnicity, and in particular, areas populated by Eastern Turkic, Eastern Iranian, or Mongolian peoples. These areas include Xinjiang Uyghur Autonomous Region, the Turkic regions of southern Siberia, the five republics, and Afghan Turkestan. Afghanistan as a whole, the northern and western areas of Pakistan and the Kashmir Valley of India may also be included. The Tibetans and Ladakhis are also included. Most of the mentioned peoples are considered the "indigenous" peoples of the vast region. Central Asia is sometimes referred to as Turkestan. Geography Central Asia is a region of varied geography, including high passes and mountains (Tian Shan), vast deserts (Kyzyl Kum, Taklamakan), and especially treeless, grassy steppes. The vast steppe areas of Central Asia are considered together with the steppes of Eastern Europe as a homogeneous geographical zone known as the Eurasian Steppe. Much of the land of Central Asia is too dry or too rugged for farming. The Gobi desert extends from the foot of the Pamirs, 77° E, to the Great Khingan (Da Hinggan) Mountains, 116°–118° E. Central Asia has the following geographic extremes: The world's northernmost desert (sand dunes), at Buurug Deliin Els, Mongolia, 50°18' N. The Northern Hemisphere's southernmost permafrost, at Erdenetsogt sum, Mongolia, 46°17' N. The world's shortest distance between non-frozen desert and permafrost: . The Eurasian pole of inaccessibility. A majority of the people earn a living by herding livestock. Industrial activity centers in the region's cities. Major rivers of the region include the Amu Darya, the Syr Darya, Irtysh, the Hari River and the Murghab River. Major bodies of water include the Aral Sea and Lake Balkhash, both of which are part of the huge west-central Asian endorheic basin that also includes the Caspian Sea. Both of these bodies of water have shrunk significantly in recent decades due to the diversion of water from rivers that feed them for irrigation and industrial purposes. Water is an extremely valuable resource in arid Central Asia and can lead to rather significant international disputes. Historical regions Central Asia is bounded on the north by the forests of Siberia. The northern half of Central Asia (Kazakhstan) is the middle part of the Eurasian steppe. Westward the Kazakh steppe merges into the Russian-Ukrainian steppe and eastward into the steppes and deserts of Dzungaria and Mongolia. Southward the land becomes increasingly dry and the nomadic population increasingly thin. The south supports areas of dense population and cities wherever irrigation is possible. 
The main irrigated areas are along the eastern mountains, along the Oxus and Jaxartes Rivers and along the north flank of the Kopet Dagh near the Persian border. East of the Kopet Dagh is the important oasis of Merv, and then a few places in Afghanistan like Herat and Balkh. Two projections of the Tian Shan create three "bays" along the eastern mountains. The largest, in the north, is eastern Kazakhstan, traditionally called Jetysu or Semirechye, which contains Lake Balkhash. In the center is the small but densely populated Ferghana valley. In the south is Bactria, later called Tocharistan, which is bounded on the south by the Hindu Kush mountains of Afghanistan. The Syr Darya (Jaxartes) rises in the Ferghana valley and the Amu Darya (Oxus) rises in Bactria. Both flow northwest into the Aral Sea. Where the Oxus meets the Aral Sea it forms a large delta called Khwarazm, later the site of the Khanate of Khiva. North of the Oxus is the less famous but equally important Zarafshan River, which waters the great trading cities of Bokhara and Samarkand. The other great commercial city was Tashkent, northwest of the mouth of the Ferghana valley. The land immediately north of the Oxus was called Transoxiana and also Sogdia, especially when referring to the Sogdian merchants who dominated the silk road trade. To the east, Dzungaria and the Tarim Basin were united into the Manchu-Chinese province of Xinjiang (Sinkiang; Hsin-kiang) about 1759. Caravans from China usually went along the north or south side of the Tarim basin and joined at Kashgar before crossing the mountains northwest to Ferghana or southwest to Bactria. A minor branch of the silk road went north of the Tian Shan through Dzungaria and Zhetysu before turning southwest near Tashkent. Nomadic migrations usually moved from Mongolia through Dzungaria before turning southwest to conquer the settled lands or continuing west toward Europe. The Kyzyl Kum Desert or semi-desert is between the Oxus and Jaxartes, and the Karakum Desert is between the Oxus and Kopet Dagh in Turkmenistan. Khorasan meant approximately northeast Persia and northern Afghanistan. Margiana was the region around Merv. The Ustyurt Plateau is between the Aral and Caspian Seas. To the southwest, across the Kopet Dagh, lies Persia. From here, Persian and Islamic civilisation penetrated Central Asia and dominated its high culture until the Russian conquest. In the southeast is the route to India. In early times, Buddhism spread north, and throughout much of history, warrior kings and tribes would move southeast to establish their rule in northern India. Most nomadic conquerors entered from the northeast. After 1800, western civilisation in its Russian and Soviet form penetrated from the northwest. Names of historical regions Ariana Bactria Dahistan Khorasan Khwarazm Margiana Parthia Sogdia Tokharistan Transoxiana Turan Turkestan Climate Because Central Asia is landlocked and not buffered by a large body of water, temperature fluctuations are often severe, excluding the hot, sunny summer months. In most areas the climate is dry and continental, with hot summers and cool to cold winters, with occasional snowfall. Outside high-elevation areas, the climate is mostly semi-arid to arid. In lower elevations, summers are hot with blazing sunshine. Winters feature occasional rain and/or snow from low-pressure systems that cross the area from the Mediterranean Sea.
Average monthly precipitation is very low from July to September, rises in autumn (October and November) and is highest in March or April, followed by swift drying in May and June. Winds can be strong, sometimes producing dust storms, especially toward the end of the summer in September and October. Specific cities that exemplify Central Asian climate patterns include Tashkent and Samarkand, Uzbekistan, Ashgabat, Turkmenistan, and Dushanbe, Tajikistan. The last of these represents one of the wettest climates in Central Asia, with an average annual precipitation of over . Biogeographically, Central Asia is part of the Palearctic realm. The largest biome in Central Asia is the temperate grasslands, savannas, and shrublands biome. Central Asia also contains the montane grasslands and shrublands, deserts and xeric shrublands and temperate coniferous forests biomes. Climate change As of 2022, Central Asia is one of the most vulnerable regions to global climate change in the world, and the region's temperature is rising faster than the global average. History Although the place of Central Asia in world history was marginalised during the golden age of Orientalism, contemporary historiography has rediscovered its "centrality". The history of Central Asia is defined by the area's climate and geography. The aridity of the region made agriculture difficult, and its distance from the sea cut it off from much trade. Thus, few major cities developed in the region; instead, the area was for millennia dominated by the nomadic horse peoples of the steppe. Relations between the steppe nomads and the settled people in and around Central Asia were long marked by conflict. The nomadic lifestyle was well suited to warfare, and the steppe horse riders became some of the most militarily potent people in the world, limited only by their lack of internal unity. Any internal unity that was achieved was most probably due to the influence of the Silk Road, which ran through Central Asia. Periodically, great leaders or changing conditions would organise several tribes into one force and create an almost unstoppable power. These included the Hun invasion of Europe, the Five Barbarians rebellions in China and most notably the Mongol conquest of much of Eurasia. During pre-Islamic and early Islamic times, Central Asia was inhabited predominantly by speakers of Iranian languages. Among the ancient sedentary Iranian peoples, the Sogdians and Chorasmians played an important role, while Iranian peoples such as the Scythians and the later Alans lived a nomadic or semi-nomadic lifestyle. The main migration of Turkic peoples occurred between the 6th and 11th centuries, when they spread across most of Central Asia. Over the past few thousand years, the Eurasian Steppe slowly transitioned from Indo-European- and Iranian-speaking groups with predominantly West Eurasian ancestry to a more heterogeneous region with increasing East Asian ancestry brought by Turkic and Mongolian groups, including extensive Turkic and later Mongol migrations out of Mongolia and the slow assimilation of local populations. In the 8th century AD, the Islamic expansion reached the region but had no significant demographic impact. In the 13th century AD, the Mongolian invasion of Central Asia brought most of the region under Mongolian influence, which had "enormous demographic success", but did not impact the cultural or linguistic landscape.
Invasion routes through Central Asia Once populated by Iranian tribes and other Indo-European people, Central Asia experienced numerous invasions emanating out of Southern Siberia and Mongolia that would drastically affect the region. Genetic data show that the different Central Asian Turkic-speaking peoples have between ~22% and ~70% East Asian ancestry (exemplified by "Baikal hunter-gatherer ancestry" shared with other Northeast Asians and Eastern Siberians), in contrast to Iranian-speaking Central Asians, specifically Tajiks, who display genetic continuity with the Indo-Iranians of the Iron Age. Certain Turkic ethnic groups, specifically the Kazakhs, display even higher East Asian ancestry. This is explained by substantial Mongolian influence on the Kazakh genome, through significant admixture of the blue-eyed, blond-haired medieval Kipchaks of Central Asia with the invading medieval Mongolians. The data suggest that the Mongol invasion of Central Asia had a lasting impact on the genetic makeup of Kazakhs. According to recent genetic genealogy testing, the genetic admixture of the Uzbeks clusters somewhere between the Iranian peoples and the Mongols. Another study shows that the Uzbeks are closely related to other Turkic peoples of Central Asia and rather distant from Iranian peoples. The study also analysed the maternal and paternal DNA haplogroups and shows that Turkic-speaking groups are more homogeneous than Iranian-speaking groups. Genetic studies analyzing the full genome of Uzbeks and other Central Asian populations found that about 27–60% of Uzbek ancestry is derived from East Asian sources, with the remaining ancestry (~40–73%) made up of European and Middle Eastern components. According to a recent study, the Kyrgyz, Kazakhs, Uzbeks, and Turkmens share more of their gene pool with various East Asian and Siberian populations than with West Asian or European populations. The study further suggests that both migration and linguistic assimilation helped to spread the Turkic languages in Eurasia. The Tang dynasty of China expanded westwards and controlled large parts of Central Asia, directly and indirectly through their Turkic vassals. Tang China actively supported the Turkification of Central Asia, while extending its cultural influence. The Tang Chinese were defeated by the Abbasid Caliphate at the Battle of Talas in 751, marking the end of the Tang dynasty's western expansion and of 150 years of Chinese influence. The Tibetan Empire would take the chance to rule portions of Central Asia and South Asia. During the 13th and 14th centuries, the Mongols conquered and ruled the largest contiguous empire in recorded history. Most of Central Asia fell under the control of the Chagatai Khanate. The dominance of the nomads ended in the 16th century, as firearms allowed settled peoples to gain control of the region. Russia, China, and other powers expanded into the region and had captured the bulk of Central Asia by the end of the 19th century. After the Russian Revolution, the western Central Asian regions were incorporated into the Soviet Union. The eastern part of Central Asia, known as Xinjiang, was incorporated into the People's Republic of China, having been previously ruled by the Qing dynasty and the Republic of China. Mongolia gained its independence from China and has remained independent, though it was a Soviet satellite state until the dissolution of the Soviet Union.
Afghanistan remained relatively independent of major influence by the Soviet Union until the Saur Revolution of 1978. The Soviet areas of Central Asia saw much industrialisation and construction of infrastructure, but also the suppression of local cultures, hundreds of thousands of deaths from failed collectivisation programmes, and a lasting legacy of ethnic tensions and environmental problems. Soviet authorities deported millions of people, including entire nationalities, from western areas of the Soviet Union to Central Asia and Siberia. According to Touraj Atabaki and Sanjyot Mehendale, "From 1959 to 1970, about two million people from various parts of the Soviet Union migrated to Central Asia, of which about one million moved to Kazakhstan." With the collapse of the Soviet Union, five countries gained independence. In nearly all the new states, former Communist Party officials retained power as local strongmen. None of the new republics could be considered functional democracies in the early days of independence, although in recent years Kyrgyzstan, Kazakhstan and Mongolia have made further progress towards more open societies, unlike Uzbekistan, Tajikistan, and Turkmenistan, which have maintained many Soviet-style repressive tactics. Culture Arts At the crossroads of Asia, shamanistic practices live alongside Buddhism. Thus, Yama, Lord of Death, was revered in Tibet as a spiritual guardian and judge. Mongolian Buddhism, in particular, was influenced by Tibetan Buddhism. The Qianlong Emperor of Qing China in the 18th century was a Tibetan Buddhist and would sometimes travel from Beijing to other cities for personal religious worship. Central Asia also has an indigenous form of improvisational oral poetry that is over 1,000 years old. It is principally practiced in Kyrgyzstan and Kazakhstan by akyns, lyrical improvisationalists. They engage in lyrical battles, the aitysh or the alym sabak. The tradition arose out of early bardic oral historians. They are usually accompanied by a stringed instrument—in Kyrgyzstan, a three-stringed komuz, and in Kazakhstan, a similar two-stringed instrument, the dombra. Some also learn to sing the Manas, Kyrgyzstan's epic poem (those who learn the Manas exclusively but do not improvise are called manaschis). During Soviet rule, akyn performance was co-opted by the authorities and subsequently declined in popularity. Since the fall of the Soviet Union, it has enjoyed a resurgence, although akyns still use their art to campaign for political candidates. A 2005 Washington Post article proposed a similarity between the improvisational art of akyns and modern freestyle rap performed in the West. Photography in Central Asia began to develop after 1882, when a Russian Mennonite photographer named Wilhelm Penner moved to the Khanate of Khiva during the Mennonite migration to Central Asia led by Claas Epp, Jr. Upon his arrival in the Khanate of Khiva, Penner shared his photography skills with a local student, Khudaybergen Divanov, who later became the founder of Uzbek photography. As a consequence of Russian colonisation, European fine arts – painting, sculpture and graphics – have developed in Central Asia. The first years of the Soviet regime saw the appearance of modernism, which took inspiration from the Russian avant-garde movement. Until the 1980s, Central Asian arts had developed along with general tendencies of Soviet arts. In the 1990s, the arts of the region underwent some significant changes.
Institutionally speaking, some fields of arts were regulated by the birth of the art market, some stayed as representatives of official views, while many were sponsored by international organisations. The years 1990–2000 saw the establishment of contemporary arts. In the region, many important international exhibitions are taking place; Central Asian art is represented in European and American museums, and the Central Asian Pavilion at the Venice Biennale has been organised since 2005. Sports Equestrian sports are traditional in Central Asia, with disciplines like endurance riding, buzkashi, dzhigit and kyz kuu. The traditional game of buzkashi is played throughout the Central Asian region, and the countries sometimes organise buzkashi competitions amongst each other. The first regional competition among the Central Asian countries, Russia, Chinese Xinjiang and Turkey was held in 2013. The first world title competition was played in 2017 and won by Kazakhstan. Association football is popular across Central Asia. Most countries are members of the Central Asian Football Association, a region of the Asian Football Confederation. However, Kazakhstan is a member of UEFA. Wrestling is popular across Central Asia, with Kazakhstan having claimed 14 Olympic medals, Uzbekistan seven, and Kyrgyzstan three. As former Soviet states, Central Asian countries have been successful in gymnastics. Mixed martial arts is one of the more common sports in Central Asia, with Kyrgyz athlete Valentina Shevchenko holding the UFC Flyweight Championship title. Cricket is the most popular sport in Afghanistan. The Afghanistan national cricket team, first formed in 2001, has claimed wins over Bangladesh, the West Indies and Zimbabwe. Notable Kazakh competitors include cyclists Alexander Vinokourov and Andrey Kashechkin, boxers Vassiliy Jirov and Gennady Golovkin, runner Olga Shishigina, decathlete Dmitriy Karpov, gymnast Aliya Yussupova, judokas Askhat Zhitkeyev and Maxim Rakov, skier Vladimir Smirnov, weightlifter Ilya Ilyin, and figure skaters Denis Ten and Elizabet Tursynbaeva. Notable Uzbekistani competitors include cyclist Djamolidine Abdoujaparov, boxer Ruslan Chagaev, canoer Michael Kolganov, gymnast Oksana Chusovitina, tennis player Denis Istomin, chess player Rustam Kasimdzhanov, and figure skater Misha Ge. Economy Since gaining independence in the early 1990s, the Central Asian republics have gradually been moving from a state-controlled economy to a market economy. The ultimate aim is to emulate the Asian Tigers by becoming the local equivalent, Central Asian snow leopards. However, reform has been deliberately gradual and selective, as governments strive to limit the social cost and improve living standards. All five countries are implementing structural reforms to improve competitiveness. Kazakhstan is the only CIS country to be included in the 2019 and 2020 IMD World Competitiveness rankings. In particular, the republics have been modernizing the industrial sector and fostering the development of service industries through business-friendly fiscal policies and other measures, to reduce the share of agriculture in GDP. Between 2005 and 2013, the share of agriculture dropped in all but Tajikistan, where it increased while industry decreased. The fastest growth in industry was observed in Turkmenistan, whereas the services sector progressed most in the other four countries. Public policies pursued by Central Asian governments focus on buffering the political and economic spheres from external shocks.
This includes maintaining a trade balance, minimizing public debt and accumulating national reserves. They cannot totally insulate themselves from negative exterior forces, however, such as the persistently weak recovery of global industrial production and international trade since 2008. Notwithstanding this, they have emerged relatively unscathed from the global financial crisis of 2008–2009. Growth faltered only briefly in Kazakhstan, Tajikistan and Turkmenistan, and not at all in Uzbekistan, where the economy grew by more than 7% per year on average between 2008 and 2013. Turkmenistan achieved unusually high 14.7% growth in 2011. Kyrgyzstan's performance has been more erratic, but this phenomenon was visible well before 2008. The republics which have fared best benefitted from the commodities boom during the first decade of the 2000s. Kazakhstan and Turkmenistan have abundant oil and natural gas reserves, and Uzbekistan's own reserves make it more or less self-sufficient. Kyrgyzstan, Tajikistan and Uzbekistan all have gold reserves, and Kazakhstan has the world's largest uranium reserves. Fluctuating global demand for cotton, aluminium and other metals (except gold) in recent years has hit Tajikistan hardest, since aluminium and raw cotton are its chief exports; the Tajik Aluminium Company is the country's primary industrial asset. In January 2014, the Minister of Agriculture announced the government's intention to reduce the acreage of land cultivated by cotton to make way for other crops. Uzbekistan and Turkmenistan are major cotton exporters themselves, ranking fifth and ninth respectively worldwide for volume in 2014. Although both exports and imports have grown significantly over the past decade, the Central Asian republics remain vulnerable to economic shocks, owing to their reliance on exports of raw materials, a restricted circle of trading partners and a negligible manufacturing capacity. Kyrgyzstan has the added disadvantage of being considered resource-poor, although it does have ample water. Most of its electricity is generated by hydropower. The Kyrgyz economy was shaken by a series of shocks between 2010 and 2012. In April 2010, President Kurmanbek Bakiyev was deposed by a popular uprising, with former minister of foreign affairs Roza Otunbayeva assuming the interim presidency until the election of Almazbek Atambayev in November 2011. Food prices rose two years in a row and, in 2012, production at the major Kumtor gold mine fell by 60% after the site was disturbed by geological movements. According to the World Bank, 33.7% of the population was living in absolute poverty in 2010 and 36.8% a year later. Despite high rates of economic growth in recent years, GDP per capita in Central Asia was higher than the average for developing countries only in Kazakhstan in 2013 (PPP$23,206) and Turkmenistan (PPP$14,201). It dropped to PPP$5,167 for Uzbekistan, home to 45% of the region's population, and was even lower for Kyrgyzstan and Tajikistan. Kazakhstan leads the Central Asian region in terms of foreign direct investment. The Kazakh economy accounts for more than 70% of all the investment attracted in Central Asia. In terms of the economic influence of big powers, China is viewed as one of the key economic players in Central Asia, especially after Beijing launched its grand development strategy known as the Belt and Road Initiative (BRI) in 2013. The Central Asian countries attracted $378.2 billion of foreign direct investment (FDI) between 2007 and 2019.
Kazakhstan accounted for 77.7% of the total FDI directed to the region. Kazakhstan is also the largest economy in Central Asia, accounting for more than 60 percent of the region's gross domestic product (GDP). Central Asian nations fared comparatively well economically throughout the COVID-19 pandemic. Many variables are likely to have been at play, but disparities in economic structure, the intensity of the pandemic, and accompanying containment efforts may all help to explain the variation in nations' experiences. Central Asian countries are, however, predicted to be hit hardest in the future. Only 4% of permanently closed businesses expect to return in the future, with large differences across sectors, ranging from 3% in lodging and food services to 27% in retail commerce. In 2022, experts assessed that global climate change is likely to pose multiple economic risks to Central Asia and could result in billions of dollars in losses unless proper adaptation measures are developed to counter rising temperatures across the region. Education, science and technology Modernisation of research infrastructure Bolstered by strong economic growth in all but Kyrgyzstan, national development strategies are fostering new high-tech industries, pooling resources and orienting the economy towards export markets. Many national research institutions established during the Soviet era have since become obsolete with the development of new technologies and changing national priorities. This has led countries to reduce the number of national research institutions since 2009 by grouping existing institutions to create research hubs. Several of the Turkmen Academy of Sciences' institutes were merged in 2014: the Institute of Botany was merged with the Institute of Medicinal Plants to become the Institute of Biology and Medicinal Plants; the Sun Institute was merged with the Institute of Physics and Mathematics to become the Institute of Solar Energy; and the Institute of Seismology merged with the State Service for Seismology to become the Institute of Seismology and Atmospheric Physics. In Uzbekistan, more than 10 institutions of the Academy of Sciences have been reorganised, following the issuance of a decree by the Cabinet of Ministers in February 2012. The aim is to orient academic research towards problem-solving and ensure continuity between basic and applied research. For example, the Mathematics and Information Technology Research Institute has been subsumed under the National University of Uzbekistan and the Institute for Comprehensive Research on Regional Problems of Samarkand has been transformed into a problem-solving laboratory on environmental issues within Samarkand State University. Other research institutions have remained attached to the Uzbek Academy of Sciences, such as the Centre of Genomics and Bioinformatics. Kazakhstan and Turkmenistan are also building technology parks as part of their drive to modernise infrastructure. In 2011, construction began on a technopark in the village of Bikrova near Ashgabat, the Turkmen capital. It will combine research, education, industrial facilities, business incubators and exhibition centres. The technopark will house research on alternative energy sources (sun, wind) and the assimilation of nanotechnologies. Between 2010 and 2012, technological parks were set up in the east, south and north Kazakhstan oblasts (administrative units) and in the capital, Astana.
A Centre for Metallurgy was also established in the east Kazakhstan oblast, as well as a Centre for Oil and Gas Technologies, which will be part of the planned Caspian Energy Hub. In addition, the Centre for Technology Commercialisation has been set up in Kazakhstan as part of the Parasat National Scientific and Technological Holding, a joint stock company established in 2008 that is 100% state-owned. The centre supports research projects in technology marketing, intellectual property protection, technology licensing contracts and start-ups. The centre plans to conduct a technology audit in Kazakhstan and to review the legal framework regulating the commercialisation of research results and technology. Countries are seeking not only to augment the efficiency of traditional extractive sectors but also to make greater use of information and communication technologies and other modern technologies, such as solar energy, to develop the business sector, education and research. In March 2013, two research institutes were created by presidential decree to foster the development of alternative energy sources in Uzbekistan, with funding from the Asian Development Bank and other institutions: the SPU Physical-Technical Institute (Physics Sun Institute) and the International Solar Energy Institute. Three universities have been set up since 2011 to foster competence in strategic economic areas: Nazarbayev University in Kazakhstan (first intake in 2011), an international research university; Inha University in Uzbekistan (first intake in 2014), specializing in information and communication technologies; and the International Oil and Gas University in Turkmenistan (founded in 2013). Kazakhstan and Uzbekistan are both expanding the teaching of foreign languages at school, in order to facilitate international ties. Kazakhstan and Uzbekistan have both adopted the three-tier bachelor's, master's and PhD degree system, in 2007 and 2012 respectively, which is gradually replacing the Soviet system of Candidates and Doctors of Science. In 2010, Kazakhstan became the only Central Asian member of the Bologna Process, which seeks to harmonise higher education systems in order to create a European Higher Education Area. Financial investment in research The Central Asian republics' ambition of developing the business sector, education and research is being hampered by chronic low investment in research and development. Over the decade to 2013, the region's investment in research and development hovered around 0.2–0.3% of GDP. Uzbekistan broke with this trend in 2013 by raising its research intensity to 0.41% of GDP. Kazakhstan is the only country where the business enterprise and private non-profit sectors make any significant contribution to research and development – but research intensity overall is low in Kazakhstan: just 0.18% of GDP in 2013. Moreover, few industrial enterprises conduct research in Kazakhstan. Only one in eight (12.5%) of the country's manufacturing firms were active in innovation in 2012, according to a survey by the UNESCO Institute for Statistics. Enterprises prefer to purchase technological solutions that are already embodied in imported machinery and equipment. Just 4% of firms purchase the licenses and patents that come with this technology. Nevertheless, there appears to be a growing demand for the products of research, since enterprises spent 4.5 times more on scientific and technological services in 2008 than in 1997.
Trends in researchers Kazakhstan and Uzbekistan have the highest researcher density in Central Asia. The number of researchers per million population is close to the world average (1,083 in 2013) in Kazakhstan (1,046) and higher than the world average in Uzbekistan (1,097). Uzbekistan is in a particularly vulnerable position, with its heavy reliance on higher education: three-quarters of researchers were employed by the university sector in 2013 and just 6% in the business enterprise sector. With most Uzbek university researchers nearing retirement, this imbalance imperils Uzbekistan's research future. Almost all holders of a Candidate of Science, Doctor of Science or PhD are more than 40 years old and half are aged over 60; more than one in three researchers (38.4%) holds a PhD degree, or its equivalent, the remainder holding a bachelor's or master's degree. Kazakhstan, Kyrgyzstan and Uzbekistan have all maintained a share of women researchers above 40% since the fall of the Soviet Union. Kazakhstan has even achieved gender parity, with Kazakh women dominating medical and health research and representing some 45–55% of engineering and technology researchers in 2013. In Tajikistan, however, only one in three scientists (34%) was a woman in 2013, down from 40% in 2002. Although policies are in place to give Tajik women equal rights and opportunities, these are underfunded and poorly understood. Turkmenistan has offered a state guarantee of equality for women since a law adopted in 2007, but the lack of available data makes it impossible to draw any conclusions as to the law's impact on research; nor does Turkmenistan make data available on higher education, research expenditure or researchers. Table: PhDs obtained in science and engineering in Central Asia, 2013 or closest year Source: UNESCO Science Report: towards 2030 (2015), Table 14.1 Note: PhD graduates in science cover life sciences, physical sciences, mathematics and statistics, and computing; PhDs in engineering also cover manufacturing and construction. For Central Asia, the generic term of PhD also encompasses Candidate of Science and Doctor of Science degrees. Data are unavailable for Turkmenistan. Table: Central Asian researchers by field of science and gender, 2013 or closest year Source: UNESCO Science Report: towards 2030 (2015), Table 14.1 Research output The number of scientific papers published in Central Asia grew by almost 50% between 2005 and 2014, driven by Kazakhstan, which overtook Uzbekistan over this period to become the region's most prolific scientific publisher, according to Thomson Reuters' Web of Science (Science Citation Index Expanded). Between 2005 and 2014, Kazakhstan's share of scientific papers from the region grew from 35% to 56%. Although two-thirds of papers from the region have a foreign co-author, the main partners tend to come from beyond Central Asia, namely the Russian Federation, the USA, Germany, the United Kingdom and Japan. Five Kazakh patents were registered at the US Patent and Trademark Office between 2008 and 2013, compared to three for Uzbek inventors and none at all for the other three Central Asian republics, Kyrgyzstan, Tajikistan and Turkmenistan. Kazakhstan is Central Asia's main trader in high-tech products. Kazakh imports nearly doubled between 2008 and 2013, from US$2.7 billion to US$5.1 billion.
There has been a surge in imports of computers, electronics and telecommunications; these products represented an investment of US$744 million in 2008 and US$2.6 billion five years later. The growth in exports was more gradual – from US$2.3 billion to US$3.1 billion – and dominated by chemical products (other than pharmaceuticals), which represented two-thirds of exports in 2008 (US$1.5 billion) and 83% (US$2.6 billion) in 2013. International cooperation The five Central Asian republics belong to several international bodies, including the Organization for Security and Co-operation in Europe, the Economic Cooperation Organization and the Shanghai Cooperation Organisation. They are also members of the Central Asia Regional Economic Cooperation (CAREC) Programme, which also includes Afghanistan, Azerbaijan, China, Mongolia and Pakistan. In November 2011, the 10 member countries adopted the CAREC 2020 Strategy, a blueprint for furthering regional co-operation. Over the decade to 2020, US$50 billion is being invested in priority projects in transport, trade and energy to improve members' competitiveness. The landlocked Central Asian republics are conscious of the need to co-operate in order to maintain and develop their transport networks and energy, communication and irrigation systems. Only Kazakhstan, Azerbaijan, and Turkmenistan border the Caspian Sea, and none of the republics has direct access to an ocean, complicating the transportation of hydrocarbons, in particular, to world markets. Kazakhstan was also one of the three founding members of the Eurasian Economic Union in 2014, along with Belarus and the Russian Federation. Armenia and Kyrgyzstan have since joined this body. As co-operation among the member states in science and technology is already considerable and well-codified in legal texts, the Eurasian Economic Union is expected to have a limited additional impact on co-operation among public laboratories or academia, but it should encourage business ties and scientific mobility, since it includes provision for the free circulation of labour and unified patent regulations. Kazakhstan and Tajikistan participated in the Innovative Biotechnologies Programme (2011–2015) launched by the Eurasian Economic Community, the predecessor of the Eurasian Economic Union. The programme also involved Belarus and the Russian Federation. Within this programme, prizes were awarded at an annual bio-industry exhibition and conference. In 2012, 86 Russian organisations participated, plus three from Belarus, one from Kazakhstan and three from Tajikistan, as well as two scientific research groups from Germany. At the time, Vladimir Debabov, scientific director of the Genetika State Research Institute for Genetics and the Selection of Industrial Micro-organisms in the Russian Federation, stressed the paramount importance of developing bio-industry. "In the world today, there is a strong tendency to switch from petrochemicals to renewable biological sources", he said. "Biotechnology is developing two to three times faster than chemicals." Kazakhstan also participated in a second project of the Eurasian Economic Community, the establishment of the Centre for Innovative Technologies on 4 April 2013, with the signing of an agreement between the Russian Venture Company (a government fund of funds), the Kazakh JSC National Agency and the Belarusian Innovative Foundation. Each of the selected projects is entitled to funding of US$3–90 million and is implemented within a public–private partnership.
The first few approved projects focused on supercomputers, space technologies, medicine, petroleum recycling, nanotechnologies and the ecological use of natural resources. Once these initial projects have spawned viable commercial products, the venture company plans to reinvest the profits in new projects. This venture company is not a purely economic structure; it has also been designed to promote a common economic space among the three participating countries. Kazakhstan recognises the role that civil society initiatives play in addressing the consequences of the COVID-19 crisis. Four of the five Central Asian republics have also been involved in a project launched by the European Union in September 2013, IncoNet CA. The aim of this project is to encourage Central Asian countries to participate in research projects within Horizon 2020, the European Union's eighth research and innovation funding programme. These research projects focus on three societal challenges considered to be of mutual interest to both the European Union and Central Asia, namely climate change, energy and health. IncoNet CA builds on the experience of earlier projects which involved other regions, such as Eastern Europe, the South Caucasus and the Western Balkans. IncoNet CA focuses on twinning research facilities in Central Asia and Europe. It involves a consortium of partner institutions from Austria, the Czech Republic, Estonia, Germany, Hungary, Kazakhstan, Kyrgyzstan, Poland, Portugal, Tajikistan, Turkey and Uzbekistan. In May 2014, the European Union launched a 24-month call for project applications from twinned institutions – universities, companies and research institutes – for funding of up to €10,000 to enable them to visit one another's facilities to discuss project ideas or prepare joint events like workshops. The International Science and Technology Center (ISTC) was established in 1992 by the European Union, Japan, the Russian Federation and the US to engage weapons scientists in civilian research projects and to foster technology transfer. ISTC branches have been set up in the following countries party to the agreement: Armenia, Belarus, Georgia, Kazakhstan, Kyrgyzstan and Tajikistan. The headquarters of ISTC were moved to Nazarbayev University in Kazakhstan in June 2014, three years after the Russian Federation announced its withdrawal from the centre. Kyrgyzstan, Tajikistan and Kazakhstan have been members of the World Trade Organization since 1998, 2013 and 2015 respectively. Territorial and regional data Demographics By a broad definition including Mongolia and Afghanistan, more than 90 million people live in Central Asia, about 2% of Asia's total population. Of the regions of Asia, only North Asia has fewer people. It has a population density of 9 people per km2, vastly less than the 80.5 people per km2 of the continent as a whole. Kazakhstan is one of the least densely populated countries in the world. Languages Russian, as well as being spoken by around six million ethnic Russians and Ukrainians of Central Asia, is the de facto lingua franca throughout the former Soviet Central Asian Republics. Mandarin Chinese has an equally dominant presence in Inner Mongolia, Qinghai and Xinjiang. The languages of the majority of the inhabitants of the former Soviet Central Asian Republics belong to the Turkic language group. Turkmen is mainly spoken in Turkmenistan, and as a minority language in Afghanistan, Russia, Iran and Turkey.
Kazakh and Kyrgyz are related languages of the Kypchak group of Turkic languages; they are spoken throughout Kazakhstan and Kyrgyzstan, and as minority languages in Tajikistan, Afghanistan and Xinjiang. Uzbek and Uyghur are spoken in Uzbekistan, Tajikistan, Kyrgyzstan, Afghanistan and Xinjiang. The Turkic languages may belong to a larger, but controversial, Altaic language family, which includes Mongolian. Mongolian is spoken throughout Mongolia and into Buryatia, Kalmykia, Inner Mongolia, and Xinjiang. Middle Iranian languages were once spoken throughout Central Asia, such as the once prominent Sogdian, Khwarezmian, Bactrian and Scythian, which are now extinct and belonged to the Eastern Iranian family. The Eastern Iranian Pashto language is still spoken in Afghanistan and northwestern Pakistan. Other minor Eastern Iranian languages such as Shughni, Munji, Ishkashimi, Sarikoli, Wakhi, Yaghnobi and Ossetic are also spoken at various places in Central Asia. Varieties of Persian are also spoken as a major language in the region, locally known as Dari (in Afghanistan), Tajik (in Tajikistan and Uzbekistan), and Bukhori (by the Bukharan Jews of Central Asia). Tocharian, another Indo-European language group, which was once predominant in oases on the northern edge of the Tarim Basin of Xinjiang, is now extinct. Other language groups include the Tibetic languages, spoken by around six million people across the Tibetan Plateau and into Qinghai, Sichuan (Szechwan), Ladakh and Baltistan, and the Nuristani languages of northeastern Afghanistan. Korean is spoken by the Koryo-saram minority, mainly in Kazakhstan and Uzbekistan. Religions Islam is the religion most common in the Central Asian Republics, Afghanistan, Xinjiang, and the peripheral western regions, such as Bashkortostan. Most Central Asian Muslims are Sunni, although there are sizable Shia minorities in Afghanistan and Tajikistan. Buddhism and Zoroastrianism were the major faiths in Central Asia before the arrival of Islam. Zoroastrian influence is still felt today in such celebrations as Nowruz, held in all five of the Central Asian states. The transmission of Buddhism along the Silk Road eventually brought the religion to China. Amongst the Turkic peoples, Tengrism was the leading religion before Islam. Tibetan Buddhism is most common in Tibet, Mongolia, Ladakh, and the southern Russian regions of Siberia. The form of Christianity most practiced in the region in previous centuries was Nestorianism, but now the largest denomination is the Russian Orthodox Church, with many members in Kazakhstan, where about 25% of the population of 19 million identify as Christian, 17% in Uzbekistan and 5% in Kyrgyzstan. The Bukharan Jews were once a sizable community in Uzbekistan and Tajikistan, but nearly all have emigrated since the dissolution of the Soviet Union. In Siberia, shamanistic practices persist, including forms of divination such as Kumalak. Contact and migration with Han people from China has brought Confucianism, Daoism, Mahayana Buddhism, and other Chinese folk beliefs into the region. Geostrategy Central Asia has long been a strategic location simply because of its proximity to several great powers on the Eurasian landmass. The region itself never held a dominant stationary population, nor was it able to make full use of its natural resources. Thus, it has rarely throughout history become the seat of power for an empire or influential state. Central Asia has been divided, redivided, conquered out of existence, and fragmented time and time again.
Central Asia has served more as the battleground for outside powers than as a power in its own right. Central Asia had both the advantage and disadvantage of a central location between four historical seats of power. From its central location, it has access to trade routes to and from all the regional powers. On the other hand, it has been continuously vulnerable to attack from all sides throughout its history, resulting in political fragmentation or outright power vacuum as it was successively dominated. To the North, the steppe allowed for rapid mobility, first for nomadic horseback warriors like the Huns and Mongols, and later for Russian traders, eventually supported by railroads. As the Russian Empire expanded to the East, it would also push down into Central Asia towards the sea, in a search for warm water ports. The Soviet bloc would reinforce dominance from the North and attempt to project power as far south as Afghanistan. To the East, the demographic and cultural weight of Chinese empires continually pushed outward into Central Asia from the Silk Road period of the Han Dynasty onwards. However, with the Sino-Soviet split and the collapse of the Soviet Union, China would project its soft power into Central Asia, most notably in the case of Afghanistan, to counter Russian dominance of the region. To the Southeast, the demographic and cultural influence of India was felt in Central Asia, notably in Tibet, the Hindu Kush, and slightly beyond. From its base in India, the British Empire competed with the Russian Empire for influence in the region in the 19th and 20th centuries. To the Southwest, Western Asian powers have expanded into the southern areas of Central Asia (usually Uzbekistan, Afghanistan, and Turkmenistan). Several Persian empires would conquer and reconquer parts of Central Asia; Alexander the Great's Hellenic empire would extend into Central Asia; two Islamic empires would exert substantial influence throughout the region; and the modern state of Iran has projected influence throughout the region as well. Turkey, through a common Turkic national identity, has gradually increased its ties and influence in the region as well. Furthermore, since Uzbekistan announced its intention to join in April 2018, Turkey and all of the Central Asian Turkic-speaking states except Turkmenistan are together part of the Turkic Council. In the post–Cold War era, Central Asia is an ethnic cauldron, prone to instability and conflicts, without a sense of national identity, but rather a mess of historical cultural influences, tribal and clan loyalties, and religious fervor. Projecting influence into the area is no longer just Russia, but also Turkey, Iran, China, Pakistan, India and the United States: Russia continues to dominate political decision-making throughout the former SSRs, although, as other countries have moved into the area, its influence has begun to wane; Russia nonetheless still maintains military bases in Kyrgyzstan and Tajikistan. The United States, with its military involvement in the region and oil diplomacy, is also significantly involved in the region's politics. The United States and other NATO members are the main contributors to the International Security Assistance Force in Afghanistan and also exert considerable influence in other Central Asian nations. China has security ties with Central Asian states through the Shanghai Cooperation Organisation, and conducts energy trade bilaterally.
India has geographic proximity to the Central Asian region and, in addition, enjoys considerable influence in Afghanistan. India maintains a military base at Farkhor, Tajikistan, and also has extensive military relations with Kazakhstan and Uzbekistan. Turkey also exerts considerable influence in the region on account of its ethnic and linguistic ties with the Turkic peoples of Central Asia and its involvement in the Baku-Tbilisi-Ceyhan oil pipeline. Political and economic relations are growing rapidly (e.g., Turkey recently eliminated visa requirements for citizens of the Central Asian Turkic republics). Iran, the seat of historical empires that controlled parts of Central Asia, has historical and cultural links to the region and is vying to construct an oil pipeline from the Caspian Sea to the Persian Gulf. Pakistan, a nuclear-armed Islamic state, has a history of political relations with neighbouring Afghanistan and is considered capable of exercising influence. For some Central Asian nations, the shortest route to the ocean lies through Pakistan. Pakistan seeks natural gas from Central Asia and supports the development of pipelines from the region's countries. According to an independent study, Turkmenistan holds the world's fifth-largest natural gas reserves. The mountain ranges and areas in northern Pakistan lie on the fringes of greater Central Asia; the Gilgit–Baltistan region of Pakistan lies adjacent to Tajikistan, separated only by the narrow Afghan Wakhan Corridor. Being located on the northwest of South Asia, the area forming modern-day Pakistan maintained extensive historical and cultural links with the Central Asian region. Japan has an important and growing influence in Central Asia: the master plan of the capital city of Astana in Kazakhstan was designed by the Japanese architect Kisho Kurokawa, and the Central Asia plus Japan initiative is designed to strengthen ties between them and promote the development and stability of the region. The Russian historian Lev Gumilev wrote that the Xiongnu, the Mongols (Mongol Empire, Zunghar Khanate) and the Turkic peoples (First Turkic Khaganate, Uyghur Khaganate) played a role in stopping Chinese aggression to the north, and that the Turkic Khaganate pursued a deliberate policy of resisting Chinese assimilation. Another theoretical analysis of the historical geopolitics of Central Asia was made through a reinterpretation of the Orkhon inscriptions. The region, along with Russia, is also part of "the great pivot" as per the Heartland Theory of Halford Mackinder, which says that the power that controls Central Asia, richly endowed with natural resources, shall ultimately be the "empire of the world". War on Terror In the context of the United States' War on Terror, Central Asia has once again become the center of geostrategic calculations. Pakistan's status was upgraded by the U.S. government to that of a major non-NATO ally because of its central role in serving as a staging point for the invasion of Afghanistan, providing intelligence on Al-Qaeda operations in the region, and leading the hunt for Osama bin Laden. Afghanistan, which had served as a haven and source of support for Al-Qaeda under the protection of Mullah Omar and the Taliban, was the target of a U.S. invasion in 2001 and ongoing reconstruction and drug-eradication efforts. U.S. military bases have also been established in Uzbekistan and Kyrgyzstan, causing both Russia and the People's Republic of China to voice their concern over a permanent U.S. military presence in the region.
Western governments have accused Russia, China and the former Soviet republics of using the War on Terror to justify the suppression of separatist movements and of the ethnic and religious groups associated with them. Major cultural, scientific and economic centres Cities in Central Asia See also Chinese Central Asia: Western Regions Central Asian Football Federation Central Asian Games Central Asia Regional Economic Cooperation Program Central Asian studies Central Asian Union Mountains of Central Asia Central Asians in Ancient Indian literature Continental pole of inaccessibility Economic Cooperation Organization Hindutash Inner Asia Russian Turkestan Soviet Central Asia Transoxiana References Citations Sources Further reading Chow, Edward. "Central Asia's Pipelines: Field of Dreams and Reality", in Pipeline Politics in Asia: The Intersection of Demand, Energy Markets, and Supply Routes. National Bureau of Asian Research, 2010. Farah, Paolo Davide. Energy Security, Water Resources and Economic Development in Central Asia, World Scientific Reference on Globalisation in Eurasia and the Pacific Rim, Imperial College Press (London, UK) & World Scientific Publishing, November 2015. Available at SSRN: http://ssrn.com/abstract=2701215 Dani, A.H. and V.M. Masson, eds. History of Civilizations of Central Asia. Paris: UNESCO, 1992. Gorshunova, Olga V. Svjashennye derevja Khodzhi Barora... (Sacred Trees of Khodzhi Baror: Phytolatry and the Cult of the Female Deity in Central Asia), in Etnograficheskoe Obozrenie, 2008, no. 1, pp. 71–82. Mandelbaum, Michael, ed. Central Asia and the World: Kazakhstan, Uzbekistan, Tajikistan, Kyrgyzstan, and Turkmenistan. New York: Council on Foreign Relations Press, 1994. Marcinkowski, M. Ismail. Persian Historiography and Geography: Bertold Spuler on Major Works Produced in Iran, the Caucasus, Central Asia, Pakistan and Early Ottoman Turkey. Singapore: Pustaka Nasional, 2003. Olcott, Martha Brill. Central Asia's New States: Independence, Foreign Policy, and Regional Security. Washington, D.C.: United States Institute of Peace Press, 1996. Paksoy, Hasan Bulent. ALPAMYSH: Central Asian Identity under Russian Rule. Hartford: AACAR, 1989. http://vlib.iue.it/carrie/texts/carrie_books/paksoy-1/ Soucek, Svatopluk. A History of Inner Asia. Cambridge: Cambridge University Press, 2000. Rall, Ted. Silk Road to Ruin: Is Central Asia the New Middle East? New York: NBM Publishing, 2006. Stone, L.A. The International Politics of Central Eurasia (272 pp). Central Eurasian Studies On Line: Accessible via the web page of the International Eurasian Institute for Economic and Political Research: https://web.archive.org/web/20071103154944/http://www.iicas.org/forumen.htm Trochev, Alexei; Slade, Gavin (2019). "Trials and Tribulations: Kazakhstan's Criminal Justice Reforms", in Caron, Jean-François (ed.), Kazakhstan and the Soviet Legacy, Singapore: Springer Singapore, pp. 75–99, retrieved 4 December 2020. Vakulchuk, Roman (2014). Kazakhstan's Emerging Economy: Between State and Market, Peter Lang: Frankfurt/Main. Available at: www.researchgate.net/publication/299731455 Weston, David. Teaching about Inner Asia, Bloomington, Indiana: ERIC Clearinghouse for Social Studies, 1989. Yellinek, Roie. The Impact of China's Belt and Road Initiative on Central Asia and the South Caucasus, E-International Relations, 14 February 2020.
External links Central Asia ethnicity, languages, and religious composition maps at Columbia University General Map of Central Asia I – World Digital Library a historic map from 1874 Regions of Asia Regions of Eurasia
https://en.wikipedia.org/wiki/Constans
Constans
Flavius Julius Constans (c. 323 – 350), sometimes called Constans I, was Roman emperor from 337 to 350. He held the imperial rank of caesar from 333, and was the youngest son of Constantine the Great. After his father's death, he was made augustus alongside his brothers in September 337. Constans was given the administration of the praetorian prefectures of Italy, Illyricum, and Africa. He defeated the Sarmatians in a campaign shortly afterwards. Quarrels over the sharing of power led to a civil war with his eldest brother and co-emperor Constantine II, who invaded Italy in 340 and was killed in battle by Constans's forces near Aquileia. Constans gained from him the praetorian prefecture of Gaul. Thereafter there were tensions with his remaining brother and co-augustus Constantius II, including over the exiled bishop Athanasius of Alexandria. In the following years he campaigned against the Franks, and in 343 he visited Roman Britain, the last legitimate emperor to do so. In January 350, Magnentius, the commander of the Jovians and Herculians, a corps in the Roman army, was acclaimed augustus at Augustodunum (Autun) with the support of Marcellinus, the comes rei privatae. Magnentius overthrew and killed Constans. Surviving sources, possibly influenced by the propaganda of Magnentius's faction, accuse Constans of misrule and of homosexuality. Early life Constans was probably born in 323. He was the third and youngest son of Constantine I and Fausta, his father's second wife. He was the grandson of both the augusti Constantius I and Maximian. When he was born, his father Constantine was the empire's senior augustus, and at war with his colleague and brother-in-law Licinius I. At the time of Constans's birth, his eldest brother Constantine II and his half-brother Crispus, Constantine's first-born son, already held the rank of caesar. Constans's half-aunt Constantia, a daughter of Constantius I, was Licinius's wife and mother to another caesar, Licinius II. After the defeat of Licinius by Crispus at the Battle of the Hellespont and at the Battle of Chrysopolis by Constantine, Licinius and his son were spared at Constantine's half-sister's urging. Licinius was executed on a pretext shortly afterwards. In 326, Constans's mother Fausta was also put to death on Constantine's orders, as were Constans's half-brother Crispus and Licinius II. This left Constans's branch of the Constantinian dynasty – descended from Constantius I's relationship with Helena – in control of the imperial college. According to the works of both Ausonius and Libanius, he was educated at Constantinople under the tutelage of the poet Aemilius Magnus Arborius, who instructed him in Latin. Reign Caesar On 25 December 333, his father Constantine I elevated Constans to the imperial rank of caesar at Constantinople. He was nobilissimus caesar alongside his brothers Constantine II and Constantius II. Constans became engaged to Olympias, the daughter of the praetorian prefect Ablabius, but the marriage never came to pass. Official imagery was changed to accommodate an image of Constans as co-caesar beside his brothers and their father the augustus. It is possible that the occasion of Constans's elevation to the imperial college was timed to coincide with the celebration of the millennium of the city of Byzantium, whose re-foundation as Constantinople Constantine had begun the previous decade. In 248, Rome had celebrated its own millennium with the Secular Games, in the reign of Philip the Arab.
Philip may also have raised his son to co-augustus at the start of the anniversary year. Rome had been calculated by the 1st-century BC Latin author Marcus Terentius Varro to have been founded by Romulus in 753 BC. Byzantium was thought to have been founded in 667 BC by Byzas, according to the reckoning derived from the Histories of Herodotus, the 5th-century BC Greek historian, and the writings of Constantine's court historian Eusebius of Caesarea in his Chronicon. Augustus With Constantine's death in 337, Constans and his two brothers, Constantine II and Constantius II, divided the Roman world among themselves and disposed of virtually all relatives who could possibly have a claim to the throne. The army proclaimed them augusti on 9 September 337. Almost immediately, Constans was required to deal with a Sarmatian invasion in late 337, in which he won a resounding victory. Constans managed to extract the prefecture of Illyricum and the diocese of Thrace, provinces that were originally to be ruled by his cousin Dalmatius, as per Constantine I's proposed division after his death. Constantine II soon complained that he had not received the amount of territory that was his due as the eldest son. Annoyed that Constans had received Thrace and Macedonia after the death of Dalmatius, Constantine demanded that Constans hand over the African provinces, which he agreed to do in order to maintain a fragile peace. Soon, however, they began quarreling over which parts of the African provinces belonged to Carthage and Constantine, and which parts belonged to Italy and Constans. This led to growing tensions between the two brothers, which were only heightened by Constans finally coming of age and Constantine refusing to give up his guardianship. In 340 Constantine II invaded Italy. Constans, at that time in Dacia, detached and sent a select and disciplined body of his Illyrian troops, stating that he would follow them in person with the remainder of his forces. Constantine was eventually trapped at Aquileia, where he died, leaving Constans to inherit all of his brother's former territories – Hispania, Britannia and Gaul. Constans began his reign in an energetic fashion. In 341–342, he led a successful campaign against the Franks, and in the early months of 343 he visited Britain, probably as part of a military campaign. Regarding religion, Constans was tolerant of Judaism and promulgated an edict banning pagan sacrifices in 341. He suppressed Donatism in Africa and supported Nicene orthodoxy against Arianism, which was championed by his brother Constantius. Although Constans called the Council of Serdica in 343 to settle the conflict, it was a complete failure, and by 346 the two emperors were on the point of open warfare over the dispute. Homosexuality Surviving sources, possibly influenced by the propaganda of Magnentius's faction, accuse Constans of misrule and of homosexuality. The Roman historian Eutropius says Constans "indulged in great vices," in reference to his homosexuality, and Aurelius Victor stated that Constans had a reputation for scandalous behaviour with "handsome barbarian hostages." Nevertheless, Constans did sponsor a decree alongside Constantius II that ruled that marriage based on "unnatural" sex should be punished meticulously. However, according to John Boswell, it was likely that Constans promulgated the legislation under pressure from the growing band of Christian leaders, in an attempt to placate public outrage at his own perceived indecencies. 
Death In the final years of his reign, Constans developed a reputation for cruelty and misrule. Dominated by favourites and openly preferring his select bodyguard, he lost the support of the legions. On 18 January 350, the general Magnentius declared himself emperor at Augustodunum (Autun) with the support of the troops on the Rhine frontier and, later, the western provinces of the Empire. Constans was enjoying himself nearby when he was notified of the elevation of Magnentius. Lacking any support beyond his immediate household, he was forced to flee for his life. As he was trying to reach Hispania, supporters of Magnentius cornered him in a fortification in Helena (Elne) in the eastern Pyrenees of southwestern Gaul, where he was killed after seeking sanctuary in a temple. An alleged prophecy at his birth had said Constans would die "in the arms of his grandmother". His place of death happens to have been named after Helena, mother of Constantine and his own grandmother, thus fulfilling the prophecy. Family tree [Genealogical diagrams omitted: (1) Constantine's parents and half-siblings; (2) Constantine's children. In the original, emperors were shown with their dates as augusti, and names appearing in both sections were specially marked.] See also Itineraries of the Roman emperors, 337–361 References Sources Primary sources Zosimus, Historia Nova II Aurelius Victor, Epitome de Caesaribus Eutropius, Breviarium ab urbe condita Secondary sources DiMaio, Michael; Frakes, Robert, Constans I (337–350 A.D.) (Archive), De Imperatoribus Romanis Gibbon, Edward (1888). The History of the Decline and Fall of the Roman Empire Norwich, John Julius (1989). Byzantium: The Early Centuries, Guild Publishing External links 320s births 350 deaths 4th-century Christians 4th-century murdered monarchs 4th-century Roman consuls 4th-century Roman emperors Constantine the Great Constantinian dynasty Flavii Julii LGBT Roman emperors Murdered Roman emperors People executed by the Roman Empire Sons of Roman emperors
https://en.wikipedia.org/wiki/Cheerleading
Cheerleading
Cheerleading is an activity in which the participants (called cheerleaders) cheer for their team as a form of encouragement. It can range from chanting slogans to intense physical activity. It can be performed to motivate sports teams, to entertain the audience, or for competition. Cheerleading routines typically range anywhere from one to three minutes, and contain components of tumbling, dance, jumps, cheers, and stunting. Modern cheerleading is very closely associated with American football and basketball. Sports such as association football (soccer), ice hockey, volleyball, baseball, and wrestling will sometimes sponsor cheerleading squads. The ICC Twenty20 Cricket World Cup in South Africa in 2007 was the first international cricket event to have cheerleaders. The Florida Marlins were the first Major League Baseball team to have a cheerleading team. Cheerleading originated as an all-male activity in the United States, and remains predominantly American, with an estimated 3.85 million participants as of 2017. The global presentation of cheerleading was led by the 1997 broadcast of ESPN's International Cheerleading Competition and the worldwide release of the 2000 film Bring It On. The International Cheer Union (ICU) now claims 116 member nations with an estimated 7.5 million participants worldwide. The sport has gained considerable traction in Australia, Canada, Mexico, China, Colombia, Finland, France, Germany, Japan, the Netherlands, New Zealand, and the United Kingdom, with popularity continuing to grow as sport leaders pursue Olympic status. Cheerleading carries the highest rate of catastrophic injuries to female athletes in sports, with most injuries associated with stunting, also known as pyramids. History Before organized cheerleading Cheerleading began during the late 18th century with the rebellion of male students. After the American Revolutionary War, students experienced harsh treatment from teachers. In response to the faculty's abuse, college students violently acted out. The undergraduates began to riot, burn down buildings located on their college campuses, and assault faculty members. As a more subtle way to gain independence, however, students invented and organized their own extracurricular activities outside their professors' control. This brought about American sports, beginning with collegiate teams. In the 1860s, students from Great Britain began to cheer and chant in unison for their favorite athletes at sporting events. Soon, that gesture of support crossed overseas to America. On November 6, 1869, the United States witnessed its first intercollegiate football game. It took place between Princeton University and Rutgers University, and marked the day the original "Sis Boom Rah!" cheer was shouted out by student fans. Beginning of organized cheerleading Organized cheerleading began as an all-male activity. As early as 1877, Princeton University had a "Princeton Cheer", documented in the February 22, 1877, March 12, 1880, and November 4, 1881, issues of The Daily Princetonian. This cheer was yelled from the stands by students attending games, as well as by the athletes themselves. The cheer, "Hurrah! Hurrah! Hurrah! Tiger! S-s-s-t! Boom! A-h-h-h!" remains in use with slight modifications today, where it is now referred to as the "Locomotive". Princeton class of 1882 graduate Thomas Peebles moved to Minnesota in 1884. He transplanted the idea of organized crowds cheering at football games to the University of Minnesota.
The term "Cheer Leader" had been used as early as 1897, with Princeton's football officials having named three students as Cheer Leaders: Thomas, Easton, and Guerin from Princeton's classes of 1897, 1898, and 1899, respectively, on October 26, 1897. These students would cheer for the team also at football practices, and special cheering sections were designated in the stands for the games themselves for both the home and visiting teams. It was not until 1898 that University of Minnesota student Johnny Campbell directed a crowd in cheering "Rah, Rah, Rah! Ski-u-mah, Hoo-Rah! Hoo-Rah! Varsity! Varsity! Varsity, Minn-e-So-Tah!", making Campbell the very first cheerleader. November 2, 1898 is the official birth date of organized cheerleading. Soon after, the University of Minnesota organized a "yell leader" squad of six male students, who still use Campbell's original cheer today. Early 20th century cheerleading and female participation In 1903, the first cheerleading fraternity, Gamma Sigma, was founded. In 1923, at the University of Minnesota, women were permitted to participate in cheerleading. However, it took time for other schools to follow. In the late 1920s, many school manuals and newspapers that were published still referred to cheerleaders as "chap", "fellow", and "man". Women cheerleaders were overlooked until the 1940s when collegiate men were drafted for World War II, creating the opportunity for more women to make their way onto sporting event sidelines. As noted by Kieran Scott in Ultimate Cheerleading: "Girls really took over for the first time." In 1949, Lawrence Herkimer, a former cheerleader at Southern Methodist University and inventor of the Herkie jump, founded his first cheerleading camp in Huntsville, Texas. 52 girls were in attendance. The clinic was so popular that Herkimer was asked to hold a second, where 350 young women were in attendance. Herkimer also patented the pom-pom. Growth in popularity (1950-1979) In 1951, Herkimer created the National Cheerleading Association to help grow the activity and provide cheerleading education to schools around the country. During the 1950s, female participation in cheerleading continued to grow. An overview written on behalf of cheerleading in 1955 explained that in larger schools, "occasionally boys as well as girls are included", and in smaller schools, "boys can usually find their place in the athletic program, and cheerleading is likely to remain solely a feminine occupation". Cheerleading could be found at almost every school level across the country; even pee wee and youth leagues began to appear. In the 1950s, professional cheerleading also began. The first recorded cheer squad in National Football League (NFL) history was for the Baltimore Colts. Professional cheerleaders put a new perspective on American cheerleading. Women were exclusively chosen for dancing ability as well as to conform to the male gaze, as heterosexual men were the targeted marketing group. By the 1960s, college cheerleaders employed by the NCA were hosting workshops across the nation, teaching fundamental cheer skills to tens of thousands of high-school-age girls. Herkimer also contributed many notable firsts to cheerleading: the founding of a cheerleading uniform supply company, inventing the herkie jump (where one leg is bent towards the ground as if kneeling and the other is out to the side as high as it will stretch in toe-touch position), and creating the "Spirit Stick". 
In 1965, Fred Gastoff invented the vinyl pom-pom, which was introduced into competitions by the International Cheerleading Foundation (ICF, now the World Cheerleading Association, or WCA). Organized cheerleading competitions began to pop up with the first ranking of the "Top Ten College Cheerleading Squads" and "Cheerleader All America" awards given out by the ICF in 1967. The Dallas Cowboys Cheerleaders soon gained the spotlight with their revealing outfits and sophisticated dance moves, debuting in the 1972–1973 season, but were first widely seen in Super Bowl X (1976). These pro squads of the 1970s established cheerleaders as "American icons of wholesome sex appeal." In 1975, Randy Neil estimated that over 500,000 students actively participated in American cheerleading from elementary school to the collegiate level. Neil also approximated that ninety-five percent of cheerleaders within America were female. In 1978, America was introduced to competitive cheerleading by the first broadcast of the Collegiate Cheerleading Championships on CBS. 1980s to present The 1980s saw the beginning of modern cheerleading, adding difficult stunt sequences and gymnastics into routines. All-star teams, or those not affiliated with a school, popped up, and eventually led to the creation of the U.S. All Star Federation (USASF). ESPN first broadcast the National High School Cheerleading Competition nationwide in 1983. By 1981, a total of seventeen National Football League teams had their own cheerleaders. The only teams without NFL cheerleaders at this time were New Orleans, New York, Detroit, Cleveland, Denver, Minnesota, Pittsburgh, San Francisco, and San Diego. Professional cheerleading eventually spread to soccer and basketball teams as well. Cheerleading organizations such as the American Association of Cheerleading Coaches and Advisors (AACCA), founded in 1987, started applying universal safety standards to decrease the number of injuries and prevent dangerous stunts, pyramids, and tumbling passes from being included in cheerleading routines. In 2003, the National Council for Spirit Safety and Education (NCSSE) was formed to offer safety training for youth, school, all-star, and college coaches. The NCAA now requires college cheer coaches to successfully complete a nationally recognized safety-training program. Even with its athletic and competitive development, cheerleading at the school level has retained its ties to its spirit leading traditions. Cheerleaders are quite often seen as ambassadors for their schools, and leaders among the student body. At the college level, cheerleaders are often invited to help at university fundraisers and events. Debuting in 2003, the "Marlin Mermaids" gained national exposure, and have influenced other MLB teams to develop their own cheer/dance squads. As of 2005, overall statistics show around 97% of all modern cheerleading participants are female, although at the collegiate level, cheerleading is co-ed, with about 50% of participants being male. Modern male cheerleaders' stunts focus less on flexibility and more on tumbling, flips, pikes, and handstands; these depend on strong legs and core strength. In 2019, Napoleon Jinnies and Quinton Peron became the first male cheerleaders in the history of the NFL to perform at the Super Bowl. Safety regulation changes Kristi Yamaoka, a cheerleader for Southern Illinois University, suffered a fractured vertebra when she hit her head after falling from a human pyramid.
She also suffered a concussion and a bruised lung. The fall occurred when Yamaoka lost her balance during a basketball game between Southern Illinois University and Bradley University at the Savvis Center in St. Louis on March 5, 2006. The fall gained "national attention" because Yamaoka continued to perform from a stretcher as she was moved away from the game. The accident caused the Missouri Valley Conference to ban its member schools from allowing cheerleaders to be "launched or tossed and from taking part in formations higher than two levels" for one week during a women's basketball conference tournament, and also resulted in a recommendation by the NCAA that conferences and tournaments not allow pyramids two and one half levels high or higher, or stunts known as basket tosses, during the rest of the men's and women's basketball season. On July 11, 2006, the bans were made permanent by the AACCA rules committee: The committee unanimously voted for sweeping revisions to cheerleading safety rules, the most major of which restricts specific upper-level skills during basketball games. Basket tosses, high pyramids, one-arm stunts, stunts that involve twisting or flipping, and twisting tumbling skills may be performed only during halftime and post-game on a matted surface and are prohibited during game play or time-outs. Types of teams in the United States today School-sponsored Most American elementary schools, middle schools, high schools, and colleges have organized cheerleading squads. Some colleges even offer cheerleading scholarships for students. A school cheerleading team may compete locally, regionally, or nationally, but their main purpose is typically to cheer for sporting events and encourage audience participation. Cheerleading is quickly becoming a year-round activity, starting with tryouts during the spring semester of the preceding school year. Teams may attend organized summer cheerleading camps and practices to improve skills and create routines for competition. In addition to supporting their schools' football or other sports teams, student cheerleaders may compete with recreational-style routines at competitions year-round. Elementary school In more recent years, it has become more common for elementary schools to have an organized cheerleading team. This is a great way to introduce younger children to the sport and get them used to being crowd leaders; and because young children learn so much so quickly, tumbling can come easily to a child of elementary school age. Middle school Middle school cheerleading evolved shortly after high school squads were created and is set at the district level. In middle school, cheerleading squads serve the same purpose, but often follow a modified set of rules from high school squads, with possible additional rules. Squads can cheer for basketball teams, football teams, and other sports teams in their school. Squads may also perform at pep rallies and compete against other local schools from the area. Cheerleading in middle school sometimes can be a two-season activity: fall and winter. However, many middle school cheer squads will go year-round like high school squads. Middle school cheerleaders use the same cheerleading movements as their older counterparts, yet may perform less extreme stunts and tumbling elements, depending on the rules in their area. High school In high school, there are usually two squads per school: a varsity squad and a junior varsity squad.
High school cheerleading contains aspects of school spirit as well as competition. These squads have become part of a year-round cycle: tryouts in the spring, practice through the year, cheering on teams in the fall and winter, and participation in cheerleading competitions. Most squads practice at least three days a week for about two hours each practice during the summer. Many teams also attend separate tumbling sessions outside of practice. During the school year, cheerleading is usually practiced five to six days a week. During competition season, it often becomes seven days a week, sometimes with practice twice a day. The school spirit aspect of cheerleading involves cheering, supporting, and "hyping up" the crowd at football games, basketball games, and even at wrestling meets. Along with this, cheerleaders usually perform at pep rallies, and bring school spirit to other students. In May 2009, the National Federation of State High School Associations released the results of their first true high school participation study. They estimated that the number of high school cheerleaders from public high schools is around 394,700. There are different cheerleading organizations that put on competitions; some of the major ones include state and regional competitions. Many high schools often host cheerleading competitions, bringing in IHSA judges. The regional competitions are qualifiers for national competitions, such as the UCA (Universal Cheerleaders Association) nationals held in Orlando, Florida, every year. Many teams have a professional choreographer who choreographs their routine in order to ensure they are not breaking rules or regulations and to give the squad creative elements. College Most American universities have a cheerleading squad to cheer for football, basketball, volleyball, wrestling, and soccer. Most college squads tend to be larger coed teams, although in recent years all-girl squads and smaller college squads have increased rapidly. Cheerleading is not recognized by the NCAA, NAIA, or NJCAA as athletics; therefore, there are few to no scholarships offered to athletes wanting to pursue cheerleading at the collegiate level. However, some community colleges and universities offer scholarships directly from the program or sponsorship funds. Some colleges offer scholarships for an athlete's talents, academic excellence, and/or involvement in community events. College squads perform more difficult stunts, which include multi-level pyramids, as well as flipping and twisting basket tosses. Not only do college cheerleaders cheer on the other sports at their university, many teams at universities compete with other schools at either UCA College Nationals or NCA College Nationals. This requires the teams to choreograph a routine of two minutes and 30 seconds that includes elements of jumps, tumbling, stunting, basket tosses, pyramids, and a crowd involvement section. Winning one of these competitions is a very prestigious accomplishment, and is seen as another national title for most schools.
The purpose of these squads is primarily to support their associated football or basketball players, but some teams do compete at local or regional competitions. The Pop Warner Association even hosts a national championship each December for teams in their program who qualify. All-star or club cheerleading "All-star" or club cheerleading differs from school or sideline cheerleading because all-star teams focus solely on performing a competition routine and not on leading cheers for other sports teams. All-star cheerleaders are members of a privately owned gym or club to which they typically pay dues or tuition, similar to a gymnastics gym. During the early 1980s, cheerleading squads not associated with a school or sports league, whose main objective was competition, began to emerge. The first organization to call itself all-star was the Q94 Rockers from Richmond, Virginia, founded in 1982. All-star teams competing prior to 1987 were placed into the same divisions as teams that represented schools and sports leagues. In 1986, the National Cheerleaders Association (NCA) addressed this situation by creating a separate division for teams lacking a sponsoring school or athletic association, calling it the All-Star Division and debuting it at their 1987 competitions. As the popularity of this type of team grew, more and more of them were formed, attending competitions sponsored by many different organizations and companies, each using its own set of rules, regulations, and divisions. This situation became a concern to coaches and gym owners, as the inconsistencies caused coaches to keep their routines in a constant state of flux, detracting from time that could be better utilized for developing skills and providing personal attention to their athletes. More importantly, because the various companies were constantly vying for a competitive edge, safety standards had become more and more lax. In some cases, unqualified coaches and inexperienced squads were attempting dangerous stunts as a result of these expanded sets of rules. The United States All Star Federation (USASF) was formed in 2003 by the competition companies to act as the national governing body for all-star cheerleading and to create a standard set of rules and judging criteria to be followed by all competitions sanctioned by the Federation. Eager to grow the sport and create more opportunities for high-level teams, the USASF hosted the first Cheerleading Worlds on April 24, 2004. At the same time, cheerleading coaches from all over the country organized themselves for the same rule-making purpose, calling themselves the National All Star Cheerleading Coaches Congress (NACCC). In 2005, the NACCC was absorbed by the USASF to become its rule-making body. In late 2006, the USASF facilitated the creation of the International All-Star Federation (IASF), which now governs club cheerleading worldwide. All-star cheerleading, as sanctioned by the USASF, involves a squad of 5–36 females and males. All-star cheerleaders are placed into divisions, which are grouped based upon age, size of the team, gender of participants, and ability level. The age groups vary from under 4 years of age to 18 years and over. The squad prepares year-round for many different competition appearances, but they actually perform for only up to about two and a half minutes during their team's routine. The number of competitions a team participates in varies from team to team, but generally, most teams tend to participate in six to ten competitions a year.
These competitions include locals or regionals, which normally take place in school gymnasiums or local venues; nationals, hosted in large venues all around the U.S.; and the Cheerleading Worlds, which takes place at Walt Disney World in Orlando, Florida. During a competition routine, a squad performs carefully choreographed stunting, tumbling, jumping, and dancing to their own custom music. Teams create their routines to an eight-count system and apply that to the music so that the team members execute the elements with precise timing and synchronization. All-star cheerleaders compete at competitions hosted by private event production companies, the foremost of these being Varsity Spirit. Varsity Spirit is the parent company for many subsidiaries, including the National Cheerleaders Association, the Universal Cheerleaders Association, AmeriCheer, Allstar Challenge, and JamFest, among others. Each separate company or subsidiary typically hosts its own local and national-level competitions. This means that many gyms within the same area could be state and national champions for the same year and never have competed against each other. Currently, there is no system in place that awards only one state or national title. Judges at a competition watch closely for illegal skills from the group or any individual member. Here, an illegal skill is something that is not allowed in that division due to difficulty or safety restrictions. They look out for deductions, or things that go wrong, such as a dropped stunt or a tumbler who doesn't stick a landing. More generally, judges look at the difficulty and execution of jumps, stunts and tumbling, synchronization, creativity, the sharpness of the motions, showmanship, and overall routine execution. If a level 6 or 7 team places high enough at selected USASF/IASF-sanctioned national competitions, it can earn a place at the Cheerleading Worlds and compete against teams from all over the world, as well as receive money for placing. For elite-level cheerleaders, the Cheerleading Worlds is the highest level of competition to which they can aspire, and winning a world championship title is an incredible honor. Professional Professional cheerleaders and dancers cheer for sports such as football, basketball, baseball, wrestling, or hockey. There are only a small handful of professional cheerleading leagues around the world; some professional leagues include the NBA Cheerleading League, the NFL Cheerleading League, the CFL Cheerleading League, the MLS Cheerleading League, the MLB Cheerleading League, and the NHL Ice Girls. Although professional cheerleading leagues exist in multiple countries, there are no Olympic teams. In addition to cheering at games and competing, professional cheerleaders often do philanthropy and charity work, modeling, motivational speaking, television performances, and advertising. Injuries and accidents Cheerleading carries the highest rate of catastrophic injuries to female athletes in high school and collegiate sports. Of the United States' 2.9 million female high school athletes, only 3% are cheerleaders, yet cheerleading accounts for nearly 65% of all catastrophic injuries in girls' high school athletics. In data covering the 1982–83 academic year through the 2018–19 academic year in the US, the rate of serious, direct traumatic injury per 100,000 participants was 1.68 for female cheerleaders at the high school level, the highest for all high school sports surveyed (table 9a).
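Worked through explicitly, the participation and injury shares just cited imply a striking disproportion. A minimal sketch in Python, using only the figures quoted above; the relative-risk line rests on the added assumption that the remaining injuries are spread evenly across the remaining athletes:

# Disproportionality of cheerleading injuries among US female high school
# athletes, using the figures quoted above: cheerleaders are ~3% of the
# 2.9 million athletes but account for ~65% of catastrophic injuries.
total_athletes = 2_900_000
cheer_share, injury_share = 0.03, 0.65

cheerleaders = total_athletes * cheer_share      # ~87,000 athletes
overrepresentation = injury_share / cheer_share  # ~21.7x their population share

# Relative risk versus all other female athletes (assumption: the remaining
# 35% of injuries are spread across the remaining 97% of athletes):
relative_risk = (injury_share / cheer_share) / ((1 - injury_share) / (1 - cheer_share))
print(f"{cheerleaders:,.0f} cheerleaders; {overrepresentation:.1f}x overrepresented; "
      f"~{relative_risk:.0f}x the catastrophic-injury rate of other athletes")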
The college rate could not be determined, as the total number of collegiate cheerleaders was unknown, but the total number of traumatic, direct catastrophic injuries over this period was 33 (28 female, 5 male), higher than for all sports at this level aside from football (table 5a). Another study found that between 1982 and 2007, there were 103 fatal, disabling, or serious injuries recorded among female high school athletes, with the vast majority (67) occurring in cheerleading. The main source of injuries comes from stunting, also known as pyramids. These stunts are performed at games and pep rallies, as well as competitions. Sometimes competition routines are focused solely around the use of difficult and risky stunts. These stunts usually include a flyer (the person on top), along with one or two bases (the people on the bottom), and one or two spotters in the front and back on the bottom. The most common cheerleading-related injury is a concussion; 96% of those concussions are stunt-related. Other injuries include: sprained ankles, sprained wrists, back injuries, head injuries (sometimes concussions), broken arms, elbow injuries, knee injuries, broken noses, and broken collarbones. Sometimes, however, injuries can be as serious as whiplash, broken necks, broken vertebrae, and death. The journal Pediatrics has reportedly said that the number of cheerleaders suffering from broken bones, concussions, and sprains increased by over 100 percent between 1990 and 2002, and that in 2001, there were 25,000 hospital visits reported for cheerleading injuries dealing with the shoulder, ankle, head, and neck. Meanwhile, in the US, cheerleading accounted for 65.1% of all major physical injuries to high school females, and 66.7% of major injuries to college students due to physical activity from 1982 to 2007, with 22,900 minors being admitted to hospital with cheerleading-related injuries in 2002. The risks of cheerleading were highlighted by the death of Lauren Chang. Chang died on April 14, 2008, after competing in a competition where her teammate had kicked her so hard in the chest that her lungs collapsed. Cheerleading (for both girls and boys) was one of the sports studied in the Pediatric Injury Prevention, Education and Research Program of the Colorado School of Public Health in 2009/10–2012/13. Data on cheerleading injuries is included in the report for 2012–13. Associations, federations, and organizations International Cheer Union (ICU): Established on April 26, 2004, the ICU is recognized by SportAccord as the world governing body of cheerleading and the authority on all matters relating to it. With participation from its 105 member national federations, reaching 3.5 million athletes globally, the ICU continues to serve as the unified voice for those dedicated to cheerleading's positive development around the world. Following a positive vote by the SportAccord General Assembly on May 31, 2013, in Saint Petersburg, the International Cheer Union (ICU) became SportAccord's 109th member, and SportAccord's 93rd international sports federation to join the international sports family. In accordance with the SportAccord statutes, the ICU is recognized as the world governing body of cheerleading and the authority on all matters related to it. As of the 2016–17 season, the ICU has introduced a junior age division (ages 12–16) to compete at the Cheerleading Worlds, because cheerleading now has provisional status to become an Olympic sport.
For cheerleading to one day be in the Olympics, there must be both a junior and a senior team that competes at the world championships. The first junior cheerleading team selected to become the junior national team came from Eastside Middle School, located in Mount Washington, Kentucky; it will represent the United States in the inaugural junior division at the world championships. The ICU holds training seminars for judges and coaches, global events and the World Cheerleading Championships. The ICU has also fully applied to the International Olympic Committee (IOC) and is compliant with the code set by the World Anti-Doping Agency (WADA). International Federation of Cheerleading (IFC): Established on July 5, 1998, the International Federation of Cheerleading (IFC) is a non-profit federation based in Tokyo, Japan, and is a world governing body of cheerleading, primarily in Asia. The IFC's objectives are to promote cheerleading worldwide, to spread knowledge of cheerleading, and to develop friendly relations among the member associations and federations. USA Cheer The USA Federation for Sport Cheering (USA Cheer) was established in 2007 to serve as the national governing body for all types of cheerleading in the United States and is recognized by the ICU. "The USA Federation for Sport Cheering is a not-for-profit 501(c)(6) organization that was established in 2007 to serve as the National Governing Body for Sport Cheering in the United States. USA Cheer exists to serve the cheer community, including club cheering (all star) and traditional school based cheer programs, and the growing sport of STUNT. USA Cheer has three primary objectives: help grow and develop interest and participation in cheer throughout the United States; promote safety and safety education for cheer in the United States; and represent the United States of America in international cheer competitions." In March 2018, they absorbed the American Association of Cheerleading Coaches and Advisors (AACCA) and now provide safety guidelines and training for all levels of cheerleading. Additionally, they organize the USA National Team. Universal Cheerleaders Association: UCA is an association owned by the company brand Varsity. "Universal Cheerleaders Association was founded in 1974 by Jeff Webb to provide the best educational training for cheerleaders with the goal of incorporating high-level skills with traditional crowd leading. It was Jeff's vision that would transform cheerleading into the dynamic, athletic combination of high energy entertainment and school leadership that is loved by so many." "Today, UCA is the largest cheerleading camp company in the world, offering the widest array of dates and locations of any camp company. We also celebrate cheerleaders' incredible hard work and athleticism through the glory of competition at over 50 regional events across the country and our Championships at the Walt Disney World Resort every year." "UCA has instilled leadership skills and personal confidence in more than 4.5 million athletes on and off the field while continuing to be the industry's leader for more than forty-five years. UCA has helped many cheerleaders get the training they need to succeed." Competitions and companies Asian Thailand Cheerleading Invitational (ATCI): Organised by the Cheerleading Association of Thailand (CAT) in accordance with the rules and regulations of the International Federation of Cheerleading (IFC). The ATCI has been held every year since 2009.
At the ATCI, many teams from all over Thailand compete, joined by invited cheer squads from neighbouring nations.
Cheerleading Asia International Open Championships (CAIOC): Hosted by the Foundation of Japan Cheerleading Association (FJCA) in accordance with the rules and regulations of the IFC. The CAIOC has been a yearly event since 2007, with many teams from all over Asia converging in Tokyo to compete.
Cheerleading World Championships (CWC): Organised by the IFC, a non-profit organisation founded in 1998 and based in Tokyo, Japan. The CWC has been held every two years since 2001, and to date the competition has been held in Japan, the United Kingdom, Finland, Germany, and Hong Kong. The 6th CWC was held at the Hong Kong Coliseum on November 26–27, 2011.
ICU World Championships: The International Cheer Union currently encompasses 105 national federations from countries across the globe. Every year, the ICU hosts the World Cheerleading Championship. This competition uses a more collegiate-style performance and rulebook, and countries assemble and send only one team to represent them.
National Cheerleading Championships (NCC): The NCC is the annual IFC-sanctioned national cheerleading competition in Indonesia organised by the Indonesian Cheerleading Community (ICC). Since NCC 2010, the event has been open to international competition, representing a significant step forward for the ICC; teams from countries such as Japan, Thailand, the Philippines, and Singapore participated in the groundbreaking event.
Pan-American Cheerleading Championships (PCC): The PCC was held for the first time in 2009 in the city of Latacunga, Ecuador, and is the continental championship organised by the Pan-American Federation of Cheerleading (PFC). The PFC, operating under the umbrella of the IFC, is the non-profit continental body of cheerleading whose aim is to promote and develop cheerleading in the Americas. The PCC is a biennial event, and was held for the second time in Lima, Peru, in November 2010.
USASF/IASF Worlds: Many United States cheerleading organizations formed and registered the not-for-profit United States All Star Federation (USASF) and the International All Star Federation (IASF) to support international club cheerleading and the World Cheerleading Club Championships. The first World Cheerleading Championships, or Cheerleading Worlds, were hosted by the USASF/IASF at the Walt Disney World Resort and taped for an ESPN global broadcast in 2004. This competition is only for All-Star/Club cheer; only level 6 and 7 teams may attend, and they must receive a bid from a partner company.
Varsity: Varsity Spirit, a branch of Varsity Brands, is a parent company which, over the past 10 years, has absorbed or bought most other cheerleading event production companies. The following is a list of subsidiary competition companies owned by Varsity Spirit: All Star Challenge All Star Championships All Things Cheer Aloha Spirit Championships America's Best Championships American Cheer and Dance American Cheer Power American Cheerleaders Association AmeriCheer: AmeriCheer was founded in 1987 by Elizabeth Rossetti. It is the parent company of Ameridance and Eastern Cheer and Dance Association. In 2005, AmeriCheer became one of the founding members of the NLCC, which means that AmeriCheer events offer bids to The U.S. Finals: The Final Destination. The AmeriCheer InterNational Championship competition is held every March at the Walt Disney World Resort in Orlando, Florida.
Athletic Championships Champion Cheer and Dance Champion Spirit Group Cheer LTD CHEERSPORT: CHEERSPORT was founded in 1993 by all-star coaches who believed they could conduct competitions that would be better for the athletes, coaches, and spectators. Their main event is CHEERSPORT Nationals, held each February at the Georgia World Congress Center in Atlanta, Georgia. CheerStarz COA Cheer and Dance Coastal Cheer and Dance Encore Championships GLCC Events Golden State Spirit Association The JAM Brands: The JAM Brands, headquartered in Louisville, Kentucky, provides products and services for the cheerleading and dance industry. It was previously made up of approximately 12 different brands that produced everything from competitions to camps to uniforms to merchandise and apparel, but is now owned by the parent company Varsity. JAMfest, the original brand of The JAM Brands, has been around since 1996 and was founded by Aaron Flaker and Emmitt Tyler. Mardi Gras Spirit Events Mid Atlantic Championships Nation's Choice National Cheerleaders Association (NCA): The NCA was founded in 1948 by Lawrence Herkimer. Every year, the NCA hosts a variety of competitions all across the United States, most notably the NCA High School Cheerleading Nationals and the NCA All-Star Cheerleading Nationals in Dallas, Texas. They also host the NCA/NDA Collegiate Cheer & Dance Championship in Daytona Beach, Florida. In addition to competitions, they provide summer camps for school cheerleaders. Their sister organization is the National Dance Alliance (NDA). One Up Championships PacWest Sea to Sky Spirit Celebration Spirit Cheer Spirit Sports Spirit Unlimited Spirit Xpress The American Championships The U.S. Finals: This event was formerly hosted by Nation's Leading Cheer Companies (NLCC), a multi-brand company partnered with companies such as Americheer/Ameridance, American Cheer & Dance Academy, Eastern Cheer & Dance Association, and Spirit Unlimited before they were all acquired by Varsity. Every year starting in 2006, the NLCC hosted The U.S. Finals: The Final Destination of Cheerleading and Dance. Every team that attends must qualify and receive a bid at a partner company's competition. In May 2008, the NLCC and The JAM Brands announced a partnership to produce The U.S. Finals – Final Destination. This event is still produced under the new parent company, Varsity. There are nine Final Destination locations across the country. After the regional events, videos of all the teams that competed are sent to a new panel of judges and rescored to rank teams against others they may never have had a chance to compete against. Universal Cheerleaders Association (UCA): Universal Cheerleaders Association was founded in 1974 by Jeff Webb. Since 1980, UCA has hosted the National High School Cheerleading Championship at the Walt Disney World Resort. They also host the National All-Star Cheerleading Championship and the College Cheerleading National Championship at the Walt Disney World Resort. All of these events air on ESPN. United Spirit Association: In 1950, Robert Olmstead directed his first summer training camp, and USA later grew out of this. USA's focus is on the game day experience as a way to enhance audience entertainment, a focus that led to the first American football half-time shows to reach adolescents from around the world and expose them to American-style cheerleading. USA provides competitions for cheerleading squads, with no prior qualification needed in order to participate.
The organization also gives cheerleaders the opportunity to become an All-American, participate in the Macy's Thanksgiving Day Parade, and take part in London's New Year's Day Parade and other special events, much as UCA and NCA allow their participants to do. Universal Spirit Association World Spirit Federation
Title IX sports status
In the United States, the designation of a "sport" is important because of Title IX. There is a large debate on whether or not cheerleading should be considered a sport for the purposes of Title IX (a portion of the United States Education Amendments of 1972 forbidding discrimination under any education program on the basis of sex). The Office for Civil Rights (OCR) has issued memos and letters to schools stating that cheerleading, both sideline and competitive, may not be considered an "athletic program" for the purposes of Title IX. Supporters consider cheerleading as a whole a sport, citing the heavy use of athletic talents, while critics see it as a physical activity, because a "sport" implies competition among all squads and not all squads compete, and because competitions are judged subjectively: as with gymnastics, diving, and figure skating, scores are assessed based on human judgment rather than an objective goal or measurement of time. On January 27, 2009, in a lawsuit involving an accidental injury sustained during a cheerleading practice, the Wisconsin Supreme Court ruled that cheerleading is a full-contact sport in that state, so participants cannot be sued for accidental injury. In contrast, on July 21, 2010, in a lawsuit involving whether college cheerleading qualified as a sport for purposes of Title IX, a federal court, citing a current lack of program development and organization, ruled that it is not a sport at all. The National Collegiate Athletic Association (NCAA) does not recognize cheerleading as a sport. In 2014, the American Medical Association adopted a policy that, as the leading cause of catastrophic injuries among female athletes in both high school and college, cheerleading should be considered a sport. While there are cheerleading teams at the majority of the NCAA's Division I schools, they are still not recognized as a sport, and as a result many teams are not properly funded. Additionally, few college programs offer scholarships, because universities cannot offer athletic scholarships to "spirit" team members.
Cheerleading in Canada
Cheerleading in Canada is rising in popularity among youth in co-curricular programs, and has grown from the sidelines to a competitive activity throughout the world, and in particular in Canada. Cheerleading has a few streams in Canadian sports culture: it is available at the middle school, high school, and collegiate levels, and is best known in its all-star form. There are multiple regional, provincial, and national championship opportunities for all athletes participating in cheerleading. Canada does not have provincial teams, just a national program referred to as CCU or Team Canada. Its first year as a national team was 2009, when it represented Canada at the International Cheer Union (ICU) World Cheerleading Championships.
Competition in Canada
There is no official governing body for Canadian cheerleading; the rules and guidelines used in Canada are the ones set out by the USASF. However, there are many organizations in Canada that put on competitions, each with separate and individual rules and scoresheets.
Cheer Evolution is the largest cheerleading and dance organization in Canada. It holds many competitions as well as providing a competition for bids to Worlds. There are other organizations such as the Ontario Cheerleading Federation (Ontario), Power Cheerleading Association (Ontario), Kicks Athletics (Quebec), and the International Cheer Alliance (Vancouver). There are over forty recognized competitive gym clubs with numerous teams that compete at competitions across Canada.
Canadian Cheer on the Global Stage
There are two world championship competitions in which Canada participates. The first is the ICU World Championships, where the Canadian national teams compete against other countries. The second is The Cheerleading Worlds, where Canadian club teams, referred to as "all-star" teams, compete at the USASF Cheerleading Worlds. National team members who compete at the ICU Worlds can also compete with their all-star club teams, although crossovers between teams at each individual competition are not permitted. Teams compete against the other teams from their countries on the first day of competition, and the top three teams from each country in each division continue to finals. At the end of finals, the top-scoring team for each country earns the "Nations Cup". Canada has multiple teams across the country that compete in the USASF Cheerleading Worlds Championship. The International Cheer Union (ICU) is composed of 103 countries that compete against each other in four divisions: Coed Premier, All-girl Premier, Coed Elite, and All-girl Elite. Canada has a national team run by the Canadian Cheer Union (CCU); its Coed Elite Level 5 team and its All-girl Elite Level 5 team are four-time world champions. The athletes on the teams are drawn from all over the country. In 2013, the CCU added two more teams to its roster, competing head-to-head with the United States in both the All-girl and Coed Premier Level 6 divisions. Members try out and are selected on the basis of their skills and potential to succeed, and Canada's national program has grown to be one of the most successful in the world.
Cheerleading in Mexico
Cheerleading in Mexico is a popular sport commonly seen at Mexican college football and professional Mexican soccer events. Cheerleading emerged within the National Autonomous University of Mexico (UNAM), the country's leading institution of higher education, during the 1930s, almost immediately after it was granted its autonomy. Since then, the phenomenon has continued to evolve: it developed first at UNAM, later at other secondary and higher education institutions in Mexico City, and currently in practically the entire country.
Competition in Mexico
In Mexico, this sport is endorsed by the Mexican Federation of Cheerleaders and Cheerleading Groups (Federación Mexicana de Porristas y Grupos de Animación, FMPGA), a body that regulates competitions in Mexico, with subdivisions such as the Olympic Confederation of Cheerleaders (COP Brands), the National Organization of Cheerleaders (Organización Nacional de Porristas, ONP), and the Mexican Organization of Trainers and Animation Groups (Organización Mexicana de Entrenadores y Grupos de Animación, OMEGA Mexico), these being the largest in the country.
In 2021, the third edition of the National Championship of State Teams was organized by the Mexican Federation of Cheerleaders and Cheerleading Groups; on this occasion, the event was held virtually and broadcast live via the Vimeo platform.
Mexican Cheer on the Global Stage
In Mexico there are more than 500 teams and almost 10,000 athletes who practice this sport, in addition to a national team representing Mexico, which won first place in the cheerleading world championship organized by the International Cheer Union (ICU) on April 24, 2015, receiving a gold medal. In 2016, Mexico became the country with the second-most medals in the world in this sport; with 27 medals, it is considered the second world power in cheerleading, behind only the United States. In the 2019 Coed Premier World Cheerleading Championship, Mexico ranked 4th, just behind the United States, Canada, and Taiwan. In 2021, the Mexican team won 3rd place in the Junior Boom category at the 2021 World Cheerleading Championship hosted by the international cheerleading federation.
Cheerleading in the United Kingdom
This section has a link to a separate Wikipedia page about the history and growth of cheerleading in the United Kingdom, which can be used to compare and contrast the activity in the U.S. and in the United Kingdom.
Cheerleading in Australia
This section has a link to a separate Wikipedia page about the history and growth of cheerleading in Australia, which can be used to compare and contrast the activity in the U.S. and in Australia.
Notable former cheerleaders
This section has a link to a separate Wikipedia page that lists former cheerleaders and well-known cheerleading squads.
See also Cheerleader Nation Cheerleading in Japan Cheerleading Philippines Color guard Dance squad Dance Gymnastics List of cheerleading jumps List of cheerleading stunts Majorette (dancer) National Basketball Association Cheerleading National Football League Cheerleading Pep squad Pom-pom UAAP Cheerdance Competition Varsity Brands References External links Sports culture Sports entertainment Concert dance
https://en.wikipedia.org/wiki/Commodore%201581
Commodore 1581
The Commodore 1581 is a 3½-inch double-sided double-density floppy disk drive that was released by Commodore Business Machines (CBM) in 1987, primarily for its C64 and C128 home/personal computers. The drive stores 800 kilobytes using MFM encoding, but in a format different from the MS-DOS (720 kB), Amiga (880 kB), and Mac Plus (800 kB) formats. With special software it is possible to read C1581 disks on an x86 PC system, and likewise to read MS-DOS and other formats of disks in the C1581 (using Big Blue Reader), provided that the drive in question handles the physical disk format. This capability was most frequently used to read MS-DOS disks. The drive was released in the summer of 1987 and quickly became popular with bulletin board system (BBS) operators and other users. Like the 1541 and 1571, the 1581 has an onboard MOS Technology 6502 CPU with its own ROM and RAM, and uses a serial version of the IEEE-488 interface. Inexplicably, the drive's ROM contains commands for parallel use, although no parallel interface was available. Unlike the 1571, which is nearly 100% backward-compatible with the 1541, the 1581 is only compatible with previous Commodore drives at the DOS level and cannot run software that performs low-level disk access (as the vast majority of Commodore 64 games do). The version of Commodore DOS built into the 1581 added support for partitions, which could also function as fixed-allocation subdirectories. PC-style subdirectories were rejected as being too difficult to reconcile with block availability maps, which were then still much in vogue and had long been the traditional way of tracking free blocks. The 1581 supports the C128's burst mode for fast disk access, but not when connected to an older Commodore machine like the Commodore 64. The 1581 provides a total of 3160 blocks free when formatted (a block being equal to 256 bytes). The number of permitted directory entries was also increased, to 296 entries. With a storage capacity of 800 kB, the 1581 is the highest-capacity serial-bus drive ever made by Commodore (the 1-MB SFD-1001 uses the parallel IEEE-488), and the only 3½" one. However, starting in 1991, Creative Micro Designs (CMD) made the FD-2000 high-density (1.6 MB) and FD-4000 extra-high-density (3.2 MB) 3½" drives, both of which offered not only a 1581-emulation mode but also 1541- and 1571-compatibility modes. Like the 1541 and 1571, a nearly identical job queue is available to the user in zero page (except for job 0), providing for exceptional degrees of compatibility. Unlike the 1541 and 1571, the low-level disk format used by the 1581 is similar to the MS-DOS format, as the 1581 is built around a WD1770 FM/MFM floppy controller chip. The 1581 disk format consists of 80 tracks and ten 512-byte sectors per track, used as 20 logical sectors of 256 bytes each. Special software is required to read 1581 disks on a PC due to the different file system. An internal floppy drive and controller are required as well; USB floppy drives operate strictly at the file-system level and do not allow low-level disk access. The WD1770 controller chip, however, was the seat of some early problems with 1581 drives, when the first production runs were recalled due to a high failure rate; the problem was quickly corrected. Later versions of the 1581 drive have a smaller, more streamlined-looking external power supply provided with them.
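As a rough illustration of the geometry just described, the following sketch (in Python) computes the drive's capacity and the byte offset of a given block, assuming a raw .d81 image laid out as 80 logical tracks of 40 logical sectors of 256 bytes each, stored in track order; the header and BAM layout are detailed in the Specifications section below. The helper names are hypothetical and not part of any official Commodore tooling.
```python
# Minimal sketch of the 1581's logical geometry (assumptions noted above).
TRACKS = 80
SECTORS_PER_TRACK = 40   # logical 256-byte sectors per logical track
BLOCK_SIZE = 256         # one CBM "block"

def block_offset(track: int, sector: int) -> int:
    """Byte offset of block (track, sector) in a raw .d81 image; tracks are 1-based."""
    assert 1 <= track <= TRACKS and 0 <= sector < SECTORS_PER_TRACK
    return ((track - 1) * SECTORS_PER_TRACK + sector) * BLOCK_SIZE

print(TRACKS * SECTORS_PER_TRACK * BLOCK_SIZE)  # 819200 bytes = 800 kB
# Track 40 holds the header, BAM, and directory, leaving the rest for files:
print((TRACKS - 1) * SECTORS_PER_TRACK)         # 3160 blocks free, as stated above
print(hex(block_offset(40, 0)))                 # 0x61800, where the disk header sits
```
Note how reserving all of track 40 for housekeeping reproduces the 3160-blocks-free figure quoted above.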
Specifications
1581 Image Layout
The 1581 disk has 80 logical tracks, each with 40 logical sectors (the actual physical layout of the diskette is abstracted and managed by a hardware translation layer). The directory starts on 40/3 (track 40, sector 3). The disk header is on 40/0, and the BAM (block availability map) resides on 40/1 and 40/2.
Header contents (40/0):
$00–01: T/S reference to first directory sector (40/3)
$02: DOS version ('D')
$04–13: disk label, $A0 padded
$16–17: disk ID
$19–1A: DOS type ('3D')
BAM contents, 40/1:
$00–01: T/S of next BAM sector (40/2)
$02: DOS version ('D')
$04–05: disk ID
$06: I/O byte
$07: autoboot flag
$10–FF: BAM entries for tracks 1–40
BAM contents, 40/2:
$00–01: 00/FF
$02: DOS version ('D')
$04–05: disk ID
$06: I/O byte
$07: autoboot flag
$10–FF: BAM entries for tracks 41–80
See also Commodore 64 peripherals Commodore 128 References External links d81.de: permanent home of 1581-Copy, an MS-Windows-based tool that uses any standard x86 PC 3.5" drive to write and read 1581 disk images (d81). optusnet.com.au: 1581 Games — tools and games specifically for the 1581 disk drive (D81, CMD FD2000 & FD4000). optusnet.com.au: SEGA SF-7000 with PC 3.5" floppy drive — copy disks to PC and vice versa; how to use a PC 3.5" floppy drive in the 1581 device. vice-emu: Commodore compatible disk drives — drive info. tut.fi: DCN-2692 floppy controller board — C1581 clone (complete). Products introduced in 1987 Commodore 64 CBM floppy disk drives
https://en.wikipedia.org/wiki/Ciprofloxacin
Ciprofloxacin
Ciprofloxacin is a fluoroquinolone antibiotic used to treat a number of bacterial infections. These include bone and joint infections, intra-abdominal infections, certain types of infectious diarrhea, respiratory tract infections, skin infections, typhoid fever, and urinary tract infections, among others. For some infections it is used in addition to other antibiotics. It can be taken by mouth, as eye drops, as ear drops, or intravenously. Common side effects include nausea, vomiting, and diarrhea. Severe side effects include an increased risk of tendon rupture, hallucinations, and nerve damage. In people with myasthenia gravis, it can worsen muscle weakness. Rates of side effects appear to be higher than with some groups of antibiotics such as cephalosporins but lower than with others such as clindamycin. Studies in other animals raise concerns regarding use in pregnancy; no problems were identified, however, in the children of a small number of women who took the medication. It appears to be safe during breastfeeding. It is a second-generation fluoroquinolone with a broad spectrum of activity that usually results in the death of the bacteria. Ciprofloxacin was patented in 1980 and introduced in 1987. It is on the World Health Organization's List of Essential Medicines, and the World Health Organization classifies ciprofloxacin as critically important for human medicine. It is available as a generic medication. In 2020, it was the 132nd-most-commonly prescribed medication in the United States, with more than 4 million prescriptions.
Medical uses
Ciprofloxacin is used to treat a wide variety of infections, including infections of bones and joints, endocarditis, gastroenteritis, malignant otitis externa, respiratory tract infections, cellulitis, urinary tract infections, prostatitis, anthrax, and chancroid. Ciprofloxacin only treats bacterial infections; it does not treat viral infections such as the common cold. For certain uses, including acute sinusitis, lower respiratory tract infections, and uncomplicated gonorrhea, ciprofloxacin is not considered a first-line agent. Ciprofloxacin occupies an important role in treatment guidelines issued by major medical societies for the treatment of serious infections, especially those likely to be caused by Gram-negative bacteria, including Pseudomonas aeruginosa. For example, ciprofloxacin in combination with metronidazole is one of several first-line antibiotic regimens recommended by the Infectious Diseases Society of America for the treatment of community-acquired abdominal infections in adults. It also features prominently in treatment guidelines for acute pyelonephritis, complicated or hospital-acquired urinary tract infection, acute or chronic prostatitis, certain types of endocarditis, certain skin infections, and prosthetic joint infections. In other cases, treatment guidelines are more restrictive, recommending in most cases that older, narrower-spectrum drugs be used as first-line therapy for less severe infections to minimize fluoroquinolone-resistance development. For example, the Infectious Diseases Society of America recommends that the use of ciprofloxacin and other fluoroquinolones in urinary tract infections be reserved for cases of proven or expected resistance to narrower-spectrum drugs such as nitrofurantoin or trimethoprim/sulfamethoxazole.
The European Association of Urology recommends ciprofloxacin as an alternative regimen for the treatment of uncomplicated urinary tract infections, but cautions that the potential for adverse events has to be considered. Although approved by regulatory authorities for the treatment of respiratory infections, ciprofloxacin is not recommended for respiratory infections by most treatment guidelines, due in part to its modest activity against the common respiratory pathogen Streptococcus pneumoniae. "Respiratory quinolones" such as levofloxacin, having greater activity against this pathogen, are recommended as first-line agents for the treatment of community-acquired pneumonia in patients with important co-morbidities and in patients requiring hospitalization (Infectious Diseases Society of America 2007). Similarly, ciprofloxacin is not recommended as a first-line treatment for acute sinusitis. Ciprofloxacin is approved for the treatment of gonorrhea in many countries, but this recommendation is widely regarded as obsolete due to resistance development.
Pregnancy
In the United States, ciprofloxacin is pregnancy category C. This category includes drugs for which no adequate and well-controlled studies in human pregnancy exist, and for which animal studies have suggested the potential for harm to the fetus, but potential benefits may warrant use of the drug in pregnant women despite potential risks. An expert review of published data on experiences with ciprofloxacin use during pregnancy by the Teratogen Information System concluded that therapeutic doses during pregnancy are unlikely to pose a substantial teratogenic risk (quantity and quality of data = fair), but the data are insufficient to state that no risk exists. Exposure to quinolones, including levofloxacin, during the first trimester is not associated with an increased risk of stillbirths, premature births, birth defects, or low birth weight. Two small post-marketing epidemiology studies of mostly short-term, first-trimester exposure found that fluoroquinolones did not increase the risk of major malformations, spontaneous abortions, premature birth, or low birth weight. The label notes, however, that these studies are insufficient to reliably evaluate the definitive safety or risk of less common defects caused by ciprofloxacin in pregnant women and their developing fetuses.
Breastfeeding
Fluoroquinolones have been reported as present in a mother's milk and thus passed on to the nursing child. The U.S. Food and Drug Administration (FDA) recommends that, because of the risk of serious adverse reactions (including articular damage) in infants nursing from mothers taking ciprofloxacin, a decision should be made whether to discontinue nursing or discontinue the drug, taking into account the importance of the drug to the mother.
Children
Oral and intravenous ciprofloxacin are approved by the FDA for use in children for only two indications, due to the risk of permanent injury to the musculoskeletal system:
Inhalational anthrax (postexposure)
Complicated urinary tract infections and pyelonephritis due to Escherichia coli, but never as a first-line agent
Current recommendations by the American Academy of Pediatrics note that the systemic use of ciprofloxacin in children should be restricted to infections caused by multidrug-resistant pathogens or when no safe or effective alternatives are available.
Spectrum of activity
Its spectrum of activity includes most strains of bacterial pathogens responsible for community-acquired pneumonias, bronchitis, urinary tract infections, and gastroenteritis. Ciprofloxacin is particularly effective against Gram-negative bacteria (such as Escherichia coli, Haemophilus influenzae, Klebsiella pneumoniae, Legionella pneumophila, Moraxella catarrhalis, Proteus mirabilis, and Pseudomonas aeruginosa), but is less effective against Gram-positive bacteria (such as methicillin-sensitive Staphylococcus aureus, Streptococcus pneumoniae, and Enterococcus faecalis) than newer fluoroquinolones.
Bacterial resistance
As a result of its widespread use to treat minor infections readily treatable with older, narrower-spectrum antibiotics, many bacteria have developed resistance to this drug in recent years, leaving it significantly less effective than it would have been otherwise. Resistance to ciprofloxacin and other fluoroquinolones may evolve rapidly, even during a course of treatment. Numerous pathogens, including enterococci, Streptococcus pyogenes, and Klebsiella pneumoniae (quinolone-resistant) now exhibit resistance. Widespread veterinary usage of the fluoroquinolones, particularly in Europe, has been implicated. Meanwhile, some Burkholderia cepacia, Clostridium innocuum, and Enterococcus faecium strains have developed resistance to ciprofloxacin to varying degrees. Fluoroquinolones had become the class of antibiotics most commonly prescribed to adults in 2002. Nearly half (42%) of those prescriptions in the U.S. were for conditions not approved by the FDA, such as acute bronchitis, otitis media, and acute upper respiratory tract infection, according to a study supported in part by the Agency for Healthcare Research and Quality. Additionally, they were commonly prescribed for medical conditions that were not bacterial at all, such as viral infections, or for which no proven benefit existed.
Contraindications
Contraindications include:
Taking tizanidine at the same time
Use by those who are hypersensitive to any member of the quinolone class of antimicrobial agents
Use by those diagnosed with myasthenia gravis, as muscle weakness may be exacerbated
Ciprofloxacin is also considered to be contraindicated in children (except for the indications outlined above), in pregnancy, in nursing mothers, and in people with epilepsy or other seizure disorders. Caution may be required in people with Marfan syndrome or Ehlers-Danlos syndrome.
Adverse effects
Adverse effects can involve the tendons, muscles, joints, nerves, and the central nervous system. Rates of adverse effects appear to be higher than with some groups of antibiotics such as cephalosporins but lower than with others such as clindamycin. Compared to other antibiotics, some studies find a higher rate of adverse effects while others find no difference. In clinical trials most of the adverse events were described as mild or moderate in severity, abated soon after the drug was discontinued, and required no treatment. Some adverse effects may be permanent. Ciprofloxacin was stopped because of an adverse event in 1% of people treated with the medication by mouth. The most frequently reported drug-related events, from trials of all formulations, all dosages, all drug-therapy durations, and for all indications, were nausea (2.5%), diarrhea (1.6%), abnormal liver function tests (1.3%), vomiting (1%), and rash (1%). Other adverse events occurred at rates of <1%.
Tendon problems
Ciprofloxacin includes a boxed warning in the United States due to an increased risk of tendinitis and tendon rupture, especially in people who are older than 60 years, people who also use corticosteroids, and people with kidney, lung, or heart transplants. Tendon rupture can occur during therapy or even months after discontinuation of the medication. One study found that fluoroquinolone use was associated with a 1.9-fold increase in tendon problems. The risk increased to 3.2 in those over 60 years of age and to 6.2 in those over the age of 60 who were also taking corticosteroids. Among the 46,766 quinolone users in the study, 38 (0.08%) cases of Achilles tendon rupture were identified.
Cardiac arrhythmia
The fluoroquinolones, including ciprofloxacin, are associated with an increased risk of cardiac toxicity, including QT interval prolongation, torsades de pointes, ventricular arrhythmia, and sudden death.
Nervous system
Because ciprofloxacin is lipophilic, it can cross the blood–brain barrier. The 2013 FDA label warns of nervous system effects. Ciprofloxacin, like other fluoroquinolones, is known to trigger seizures or lower the seizure threshold, and may cause other central nervous system adverse effects. Headache, dizziness, and insomnia have been reported as occurring fairly commonly in postapproval review articles, along with a much lower incidence of serious CNS adverse effects such as tremors, psychosis, anxiety, hallucinations, paranoia, and suicide attempts, especially at higher doses. Like other fluoroquinolones, it is also known to cause peripheral neuropathy that may be irreversible, with symptoms such as weakness, burning pain, tingling, or numbness.
Cancer
Ciprofloxacin is active in six of eight in vitro assays used as rapid screens for the detection of genotoxic effects, but is not active in in vivo assays of genotoxicity. Long-term carcinogenicity studies in rats and mice resulted in no carcinogenic or tumorigenic effects due to ciprofloxacin at daily oral dose levels up to 250 and 750 mg/kg to rats and mice, respectively (about 1.7 and 2.5 times the highest recommended therapeutic dose based upon mg/m2). Results from photo co-carcinogenicity testing indicate ciprofloxacin does not reduce the time to appearance of UV-induced skin tumors as compared to vehicle control.
Other
The other black box warning is that ciprofloxacin should not be used in people with myasthenia gravis due to possible exacerbation of muscle weakness, which may lead to breathing problems requiring ventilator support or resulting in death. Fluoroquinolones are known to block neuromuscular transmission. There are concerns that fluoroquinolones, including ciprofloxacin, can affect cartilage in young children. Clostridium difficile-associated diarrhea is a serious adverse effect of ciprofloxacin and other fluoroquinolones; it is unclear whether the risk is higher than with other broad-spectrum antibiotics. A wide range of rare but potentially fatal adverse effects reported to the U.S. FDA or the subject of case reports includes aortic dissection, toxic epidermal necrolysis, Stevens–Johnson syndrome, low blood pressure, allergic pneumonitis, bone marrow suppression, hepatitis or liver failure, and sensitivity to light.
The medication should be discontinued if a rash, jaundice, or other sign of hypersensitivity occurs. Children and the elderly are at a much greater risk of experiencing adverse reactions. Overdose Overdose of ciprofloxacin may result in reversible renal toxicity. Treatment of overdose includes emptying of the stomach by induced vomiting or gastric lavage, as well as administration of antacids containing magnesium, aluminium, or calcium to reduce drug absorption. Renal function and urinary pH should be monitored. Important support includes adequate hydration and urine acidification if necessary to prevent crystalluria. Hemodialysis or peritoneal dialysis can only remove less than 10% of ciprofloxacin. Ciprofloxacin may be quantified in plasma or serum to monitor for drug accumulation in patients with hepatic dysfunction or to confirm a diagnosis of poisoning in acute overdose victims. Interactions Ciprofloxacin interacts with certain foods and several other drugs leading to undesirable increases or decreases in the serum levels or distribution of one or both drugs. Ciprofloxacin should not be taken with antacids containing magnesium or aluminum, highly buffered drugs (sevelamer, lanthanum carbonate, sucralfate, didanosine), or with supplements containing calcium, iron, or zinc. It should be taken two hours before or six hours after these products. Magnesium or aluminum antacids turn ciprofloxacin into insoluble salts that are not readily absorbed by the intestinal tract, reducing peak serum concentrations by 90% or more, leading to therapeutic failure. Additionally, it should not be taken with dairy products or calcium-fortified juices alone, as peak serum concentration and the area under the serum concentration-time curve can be reduced up to 40%. However, ciprofloxacin may be taken with dairy products or calcium-fortified juices as part of a meal. Ciprofloxacin inhibits the drug-metabolizing enzyme CYP1A2 and thereby can reduce the clearance of drugs metabolized by that enzyme. CYP1A2 substrates that exhibit increased serum levels in ciprofloxacin-treated patients include tizanidine, theophylline, caffeine, methylxanthines, clozapine, olanzapine, and ropinirole. Co-administration of ciprofloxacin with the CYP1A2 substrate tizanidine (Zanaflex) is contraindicated due to a 583% increase in the peak serum concentrations of tizanidine when administered with ciprofloxacin as compared to administration of tizanidine alone. Use of ciprofloxacin is cautioned in patients on theophylline due to its narrow therapeutic index. The authors of one review recommended that patients being treated with ciprofloxacin reduce their caffeine intake. Evidence for significant interactions with several other CYP1A2 substrates such as cyclosporine is equivocal or conflicting. The Committee on Safety of Medicines and the FDA warn that central nervous system adverse effects, including seizure risk, may be increased when NSAIDs are combined with quinolones. The mechanism for this interaction may involve a synergistic increased antagonism of GABA neurotransmission. Altered serum levels of the antiepileptic drugs phenytoin and carbamazepine (increased and decreased) have been reported in patients receiving concomitant ciprofloxacin. Ciprofloxacin is a potent inhibitor of CYP1A2, CYP2D6, and CYP3A4. Mechanism of action Ciprofloxacin is a broad-spectrum antibiotic of the fluoroquinolone class. It is active against some Gram-positive and many Gram-negative bacteria. 
It functions by inhibiting a type II topoisomerase (DNA gyrase) and topoisomerase IV, enzymes necessary to separate bacterial DNA, thereby inhibiting cell division. Bacterial DNA fragmentation occurs as a result of inhibition of these enzymes.
Pharmacokinetics
Ciprofloxacin for systemic administration is available as immediate-release tablets, extended-release tablets, an oral suspension, and as a solution for intravenous administration. When administered over one hour as an intravenous infusion, ciprofloxacin rapidly distributes into the tissues, with levels in some tissues exceeding those in the serum. Penetration into the central nervous system is relatively modest, with cerebrospinal fluid levels normally less than 10% of peak serum concentrations. The serum half-life of ciprofloxacin is about 4–6 hours, with 50–70% of an administered dose being excreted in the urine as unmetabolized drug. An additional 10% is excreted in urine as metabolites. Urinary excretion is virtually complete 24 hours after administration. Dose adjustment is required in the elderly and in those with renal impairment. Ciprofloxacin is weakly bound to serum proteins (20–40%). It is an inhibitor of the drug-metabolizing enzyme cytochrome P450 1A2, which leads to the potential for clinically important drug interactions with drugs metabolized by that enzyme. Ciprofloxacin is about 70% bioavailable when administered orally, so a slightly higher dose is needed to achieve the same exposure when switching from IV to oral administration. The extended-release oral tablets allow once-daily administration by releasing the drug more slowly in the gastrointestinal tract. These tablets contain 35% of the administered dose in an immediate-release form and 65% in a slow-release matrix. Maximum serum concentrations are achieved between 1 and 4 hours after administration. Compared to the 250- and 500-mg immediate-release tablets, the 500-mg and 1000-mg XR tablets provide higher Cmax, but the 24-hour AUCs are equivalent. Ciprofloxacin immediate-release tablets contain ciprofloxacin as the hydrochloride salt, and the XR tablets contain a mixture of the hydrochloride salt and the free base.
Chemical properties
Ciprofloxacin is 1-cyclopropyl-6-fluoro-1,4-dihydro-4-oxo-7-(1-piperazinyl)-3-quinolinecarboxylic acid. Its empirical formula is C17H18FN3O3 and its molecular weight is 331.4 g/mol. It is a faintly yellowish to light yellow crystalline substance. Ciprofloxacin hydrochloride (USP) is the monohydrochloride monohydrate salt of ciprofloxacin. It is a faintly yellowish to light yellow crystalline substance with a molecular weight of 385.8 g/mol. Its empirical formula is C17H18FN3O3·HCl·H2O.
Usage
Ciprofloxacin is the most widely used of the second-generation quinolones. In 2010, over 20 million prescriptions were written, making it the 35th-most-commonly prescribed generic drug and the 5th-most-commonly prescribed antibacterial in the U.S.
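The pharmacokinetic figures above (a serum half-life of roughly 4–6 hours and about 70% oral bioavailability) lend themselves to a small worked example. The sketch below is purely illustrative arithmetic under those stated assumptions, not dosing guidance, and the function names are hypothetical.
```python
# Illustrative pharmacokinetic arithmetic (assumptions noted above).

def fraction_remaining(hours: float, half_life_h: float = 5.0) -> float:
    """First-order decay: fraction of the peak serum level left after `hours`."""
    return 0.5 ** (hours / half_life_h)

# With a 5-hour half-life, only ~3.6% remains after 24 hours, consistent with
# the statement that urinary excretion is virtually complete by then.
print(f"{fraction_remaining(24):.1%}")

def equivalent_oral_dose(iv_dose_mg: float, bioavailability: float = 0.7) -> float:
    """Oral dose giving roughly the same total exposure as a given IV dose."""
    return iv_dose_mg / bioavailability

# ~571 mg: why a somewhat higher oral dose is needed when switching from IV.
print(round(equivalent_oral_dose(400)))
```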
History
The first members of the quinolone antibacterial class were relatively low-potency drugs such as nalidixic acid, used mainly in the treatment of urinary tract infections owing to their renal excretion and propensity to be concentrated in urine. In 1979, the publication of a patent filed by the pharmaceutical arm of Kyorin Seiyaku Kabushiki Kaisha disclosed the discovery of norfloxacin, and the demonstration that certain structural modifications, including the attachment of a fluorine atom to the quinolone ring, lead to dramatically enhanced antibacterial potency. In the aftermath of this disclosure, several other pharmaceutical companies initiated research and development programs with the goal of discovering additional antibacterial agents of the fluoroquinolone class. The fluoroquinolone program at Bayer focused on examining the effects of very minor changes to the norfloxacin structure. In 1983, the company published in vitro potency data for ciprofloxacin, a fluoroquinolone antibacterial having a chemical structure differing from that of norfloxacin by the presence of a single carbon atom. This small change led to a two- to 10-fold increase in potency against most strains of Gram-negative bacteria. Importantly, this structural change led to a four-fold improvement in activity against the important Gram-negative pathogen Pseudomonas aeruginosa, making ciprofloxacin one of the most potent known drugs for the treatment of this intrinsically antibiotic-resistant pathogen. The oral tablet form of ciprofloxacin was approved in October 1987, just one year after the approval of norfloxacin. In 1991, the intravenous formulation was introduced. Ciprofloxacin sales reached a peak of about 2 billion euros in 2001, before Bayer's patent expired in 2004, after which annual sales have averaged around €200 million. The name probably originates from the International Scientific Nomenclature: ci- (alteration of cycl-) + propyl + fluor- + ox- + az- + -mycin.
Society and culture
Cost
It is available as a generic medication and is not very expensive.
Generic equivalents
On 24 October 2001, the Prescription Access Litigation (PAL) project filed suit to dissolve an agreement between Bayer and three of its competitors which produced generic versions of drugs (Barr Laboratories, Rugby Laboratories, and Hoechst-Marion-Roussel) that PAL claimed was blocking access to adequate supplies and cheaper, generic versions of ciprofloxacin. The plaintiffs charged that Bayer Corporation, a unit of Bayer AG, had unlawfully paid the three competing companies a total of $200 million to prevent cheaper, generic versions of ciprofloxacin from being brought to the market, as well as manipulating its price and supply. Numerous other consumer advocacy groups joined the lawsuit. On 15 October 2008, five years after Bayer's patent had expired, the United States District Court for the Eastern District of New York granted Bayer's and the other defendants' motion for summary judgment, holding that any anticompetitive effects caused by the settlement agreements between Bayer and its codefendants were within the exclusionary zone of the patent and thus could not be redressed by federal antitrust law, in effect upholding Bayer's agreement with its competitors.
Available forms
Ciprofloxacin for systemic administration is available as immediate-release tablets, as extended-release tablets, as an oral suspension, and as a solution for intravenous infusion. It is also available for local administration as eye drops and ear drops.
Litigation
A class action was filed against Bayer AG on behalf of employees of the Brentwood Post Office in Washington, D.C., and workers at the U.S. Capitol, along with employees of American Media, Inc. in Florida and postal workers in general, who alleged they developed serious adverse effects from taking ciprofloxacin in the aftermath of the anthrax attacks in 2001.
The action alleged Bayer failed to warn class members of the potential side effects of the drug, thereby violating the Pennsylvania Unfair Trade Practices and Consumer Protection Laws. The class action was defeated and the litigation abandoned by the plaintiffs. A similar action was filed in 2003 in New Jersey by four New Jersey postal workers but was withdrawn for lack of grounds, as workers had been informed of the risks of ciprofloxacin when they were given the option of taking the drug. Research As resistance to ciprofloxacin has grown since its introduction, research has been conducted to discover and develop analogs that can be effective against resistant bacteria; some have been looked at in antiviral models as well. References External links 1,4-di-hydro-7-(1-piperazinyl)-4-oxo-3-quinolinecarboxylic acids Bayer brands Cyclopropyl compounds CYP1A2 inhibitors CYP3A4 inhibitors Dermatoxins Fluoroquinolone antibiotics GABAA receptor negative allosteric modulators Nephrotoxins Novartis brands Ophthalmology drugs Otologicals World Health Organization essential medicines Wikipedia medicine articles ready to translate
https://en.wikipedia.org/wiki/CD-R
CD-R
CD-R (Compact disc-recordable) is a digital optical disc storage format. A CD-R disc is a compact disc that can be written once and read arbitrarily many times. CD-R discs (CD-Rs) are readable by most CD readers manufactured prior to the introduction of CD-R, unlike CD-RW discs.
History
Originally named CD Write-Once (WO), the CD-R specification was first published in 1988 by Philips and Sony in the Orange Book, which consists of several parts that provide details of the CD-WO, CD-MO (Magneto-Optic), and later CD-RW (ReWritable) formats. The latest editions have abandoned the use of the term "CD-WO" in favor of "CD-R", while "CD-MO" was rarely used. Written CD-Rs and CD-RWs are, in the aspect of low-level encoding and data format, fully compatible with the audio CD (Red Book CD-DA) and data CD (Yellow Book CD-ROM) standards. The Yellow Book standard for CD-ROM only specifies a high-level data format and refers to the Red Book for all physical format and low-level code details, such as track pitch, linear bit density, and bitstream encoding. This means they use Eight-to-Fourteen Modulation, CIRC error correction, and, for CD-ROM, the third error correction layer defined in the Yellow Book. Properly written CD-R discs on blanks of less than 80 minutes in length are fully compatible with the audio CD and CD-ROM standards in all details, including physical specifications. 80-minute CD-R discs marginally violate the Red Book physical format specifications, and longer discs are noncompliant. CD-RW discs have lower reflectivity than CD-R or pressed (non-writable) CDs and for this reason cannot meet the Red Book standard. Some hardware compatible with Red Book CDs may have difficulty reading CD-Rs and, because of their lower reflectivity, especially CD-RWs. To the extent that CD hardware can read extended-length discs or CD-RW discs, it is because that hardware has capability beyond the minimum required by the Red Book and Yellow Book standards (the hardware is more capable than it needs to be to bear the Compact Disc logo). CD-R recording systems available in 1990 were similar to the washing machine-sized Meridian CD Publisher, based on the two-piece rack mount Yamaha PDS audio recorder costing $35,000, not including the required external ECC circuitry for data encoding, SCSI hard drive subsystem, and MS-DOS control computer. On July 3, 1991, the first recording of a concert directly to CD was made using a Yamaha YPDR 601. The concert was performed by Claudio Baglioni at the Stadio Flaminio in Rome, Italy. At that time, it was generally anticipated that recordable CDs would have a lifetime of no more than 10 years; however, as of July 2020 the CD from this live recording still plays back with no uncorrectable errors. In the same year, CDRM Recordable Media became one of the first companies to duplicate CD-R media professionally. With quality technical media from Taiyo Yuden in limited supply, early CD-R media used phthalocyanine dye for duplication, which has a light aqua color. By 1992, the cost of typical recorders was down to $10,000–12,000, and in September 1995, Hewlett-Packard introduced its model 4020i manufactured by Philips, which, at $995, was the first recorder to cost less than $1000. As of the 2010s, devices capable of writing to CD-Rs and other types of writable CDs could be found for under $20. The dye materials developed by Taiyo Yuden made it possible for CD-R discs to be compatible with Audio CD and CD-ROM discs.
In the United States, there is a market separation between "music" CD-Rs and "data" CD-Rs, the former being notably more expensive than the latter due to industry copyright arrangements with the RIAA. Specifically, the price of every music CD-R includes a mandatory royalty disbursed to RIAA members by the disc manufacturer; this grants the disc an "application flag" indicating that the royalty has been paid. Consumer standalone music recorders refuse to burn CD-Rs that are missing this flag. Professional CD recorders are not subject to this restriction and can record music to data discs. The two types of discs are functionally and physically identical other than this, and computer CD burners can record data and/or music to either. New music CD-Rs were still being manufactured as of the late 2010s, although demand for them has declined as CD-based music recorders have been supplanted by other devices incorporating the same or similar functionality. Prior to CD-R, Tandy Corporation had announced a rewritable CD system known as the Tandy High-Density Optical Recording (THOR) system, claiming to offer support for erasable and rewritable discs, made possible by a "secret coating material" on which Tandy had applied for patents, and reportedly based partly on a process developed by Optical Data Inc., with research and development undertaken at Tandy's Magnetic Media Research Center. Known also as the Tandy High-Intensity Optical Recording system, THOR-CD media was intended to be playable in existing CD players, being compatible with existing CD audio and CD-ROM equipment, with the discs themselves employing a layer in which the "marks", "bumps" or "pits" readable by a conventional CD player could be established in, and removed from, the medium by a laser operating at a different frequency. Tandy's announcement was surprising enough to "catch half a dozen industries off guard", claiming availability of consumer-level audio and video products below $500 by the end of 1990, and inviting other organisations to license the technology. The announcement attracted enthusiasm but also skepticism of Tandy's capability to deliver the system, with the latter proving to be justified, the technology having been "announced... heavily promoted; then it was delayed, and finally, it just never appeared".
Physical characteristics
A standard CD-R is a 1.2 mm thick disc made of polycarbonate about 120 mm (5") in diameter. The 120 mm (5") disc has a storage capacity of 74 minutes of audio or 650 megabytes of data. CD-R/RWs are available with capacities of 80 minutes of audio or 737,280,000 bytes (700 MiB), which they achieve by molding the disc at the tightest allowable tolerances specified in the Orange Book CD-R/CD-RW standards. The engineering margin that was reserved for manufacturing tolerance has been used for data capacity instead, leaving no tolerance for manufacturing; for these discs to be truly compliant with the Orange Book standard, the manufacturing process must be perfect. Despite the foregoing, most CD-Rs on the market have an 80-minute capacity. There are also 90-minute/790 MiB and 99-minute/870 MiB discs, although they are less common and depart from the Orange Book standard. Due to the limitations of the data structures in the ATIP, 90- and 99-minute blanks will identify as 80-minute ones. As the ATIP is part of the Orange Book standard, its design does not support some nonstandard disc configurations.
In order to use the additional capacity, these discs have to be burned using overburn options in the CD recording software. Overburning itself is so named because it is outside the written standards, but, due to market demand, it has nonetheless become a de facto standard function in most CD writing drives and software for them. Some drives use special techniques, such as Plextor's GigaRec or Sanyo's HD-BURN, to write more data onto a given disc; these techniques are deviations from the compact disc (Red, Yellow, and/or Orange Book) standards, making the recorded discs proprietary-formatted and not fully compatible with standard CD players and drives. In certain applications where discs will not be distributed or exchanged outside a private group and will not be archived for a long time, a proprietary format may be an acceptable way to obtain greater capacity (up to 1.2 GiB with GigaRec or 1.8 GiB with HD-BURN on 99-minute media). The greatest risk in using such a proprietary data storage format, assuming that it works reliably as designed, is that it may be difficult or impossible to repair or replace the hardware used to read the media if it fails, is damaged, or is lost after its original vendor discontinues it. Nothing in the Red, Yellow, or Orange Book standards prohibits disc reading/writing devices from having the capacity to read/write discs beyond the compact disc standards. The standards do require discs to meet precise requirements in order to be called compact discs, but the other discs may be called by other names; if this were not true, no DVD drive could legally bear the compact disc logo. While disc players and drives may have capabilities beyond the standards, enabling them to read and write nonstandard discs, there is no assurance, in the absence of explicit additional manufacturer specifications beyond normal compact disc logo certification, that any particular player or drive will perform beyond the standards at all or consistently. If the same device with no explicit performance specs beyond the compact disc logo initially handles nonstandard discs reliably, there is no assurance that it will not later stop doing so, and in that case, there is no assurance that it can be made to do so again by service or adjustment. Discs with capacities larger than 650 MB, and especially those larger than 700 MB, are less interchangeable among players/drives than standard discs and are not very suitable for archival use, as their readability on future equipment, or even on the same equipment at a future time, is not assured unless specifically tested and certified in that combination, even under the assumption that the discs will not degrade at all. The polycarbonate disc contains a spiral groove, called the pregroove because it is molded in before data are written to the disc; it guides the laser beam upon writing and reading information. The pregroove is molded into the top side of the polycarbonate disc, where the pits and lands would be molded if it were a pressed, nonrecordable Red Book CD. The bottom side, which faces the laser beam in the player or drive, is flat and smooth. The polycarbonate disc is coated on the pregroove side with a very thin layer of organic dye. Then, on top of the dye is coated a thin, reflecting layer of silver, a silver alloy, or gold. Finally, a protective coating of a photo-polymerizable lacquer is applied on top of the metal reflector and cured with UV light. 
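The capacity figures discussed above follow from the Red Book timing constants: 75 sectors per second of playback, 2048 data bytes per Yellow Book Mode 1 sector, and 2352 bytes per raw audio sector. The sketch below is a hedged back-of-envelope rather than authoritative tooling; it reproduces those numbers and the nominal write times touched on in the Speed section further below (real burns take longer due to overhead and, above roughly 20×, Zoned-CLV/CAV behaviour).
```python
# Back-of-envelope CD-R capacity and nominal write-time arithmetic.
SECTORS_PER_SECOND = 75      # Red Book playback rate (1x)
MODE1_DATA_BYTES = 2048      # user data per Yellow Book Mode 1 sector
AUDIO_BYTES = 2352           # raw audio bytes per sector

def data_capacity(minutes: int) -> int:
    """User-data capacity in bytes for a disc of the given audio length."""
    return minutes * 60 * SECTORS_PER_SECOND * MODE1_DATA_BYTES

print(data_capacity(74))     # 681,984,000 bytes, the "650 MB" class
print(data_capacity(80))     # 737,280,000 bytes, matching the figure above
print(74 * 60 * SECTORS_PER_SECOND * AUDIO_BYTES)  # 783,216,000 bytes of raw audio

def nominal_write_minutes(disc_bytes: int, speed: float) -> float:
    """Optical-pass time only; ignores lead-in/out and drive overhead."""
    return disc_bytes / (SECTORS_PER_SECOND * MODE1_DATA_BYTES * speed) / 60

for x in (1, 4, 16, 52):
    print(f"{x}x: {nominal_write_minutes(data_capacity(80), x):.1f} min")
# 1x: 80.0, 4x: 20.0, 16x: 5.0, 52x: ~1.5 (the rated speed applies only near the rim)
```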
A blank CD-R is not "empty"; the pregroove has a wobble (the ATIP), which helps the writing laser to stay on track and to write the data to the disc at a constant rate. Maintaining a constant rate is essential to ensure the proper size and spacing of the pits and lands burned into the dye layer. As well as providing timing information, the ATIP (absolute time in pregroove) is also a data track containing information about the CD-R manufacturer, the dye used, and media information (disc length and so on). The pregroove is not destroyed when the data are written to the CD-R, a point which some copy protection schemes use to distinguish copies from an original CD.
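Among the media information the ATIP carries is the last possible start of the lead-out, from which recording software derives the blank's nominal capacity. The sketch below is a simplified illustration of that derivation, not a full ATIP decoder; the function name and the example values are ours.

```python
# Illustrative, simplified sketch (not a real ATIP decoder): deriving a
# blank's nominal capacity from the lead-out start time the ATIP reports.

def nominal_capacity_from_leadout(m: int, s: int, f: int,
                                  bytes_per_sector: int = 2048) -> int:
    """Usable user-data bytes implied by an ATIP lead-out start MSF time."""
    sectors = (m * 60 + s) * 75 + f
    return sectors * bytes_per_sector

# A typical "80-minute" blank reports a lead-out start near 79:59:74:
cap = nominal_capacity_from_leadout(79, 59, 74)
print(f"{cap:,} bytes (~{cap / 2**20:.0f} MiB)")  # 737,277,952 bytes (~703 MiB)
```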
Dyes

There are three basic formulations of dye used in CD-Rs:

Cyanine dye CD-Rs were the earliest ones developed, and their formulation is patented by Taiyo Yuden. CD-Rs based on this dye are mostly green in color. The earlier models were very chemically unstable, which made cyanine-based discs unsuitable for archival use; they could fade and become unreadable in a few years. Many manufacturers, such as Taiyo Yuden, use proprietary chemical additives to make more stable cyanine discs ("metal-stabilized cyanine", "Super Cyanine"). Older cyanine dye-based CD-Rs, as well as all the hybrid dyes based on cyanine, are very sensitive to UV rays and can become unreadable after only a few days of exposure to direct sunlight. Although the additives used have made cyanine more stable, it is still the most UV-sensitive of the dyes, showing signs of degradation within a week of direct sunlight exposure. A common mistake users make is to leave CD-Rs with the "clear" (recording) surface upwards in order to protect it from scratches, as this lets the sun hit the recording surface directly.

Phthalocyanine dye CD-Rs are usually silver, gold, or light green. The patents on phthalocyanine CD-Rs are held by Mitsui and Ciba Specialty Chemicals. Phthalocyanine is a natively stable dye (it has no need for stabilizers), and CD-Rs based on it are often given a rated lifetime of hundreds of years. Unlike cyanine, phthalocyanine is more resistant to UV rays, and CD-Rs based on this dye show signs of degradation only after two weeks of direct sunlight exposure. However, phthalocyanine is more sensitive than cyanine to writing laser power calibration, meaning that the power level used by the writing laser has to be more accurately adjusted for the disc in order to get a good recording; this may erode the benefits of dye stability, as marginally written discs (with higher correctable error rates) will lose data (i.e. have uncorrectable errors) after less dye degradation than well-written discs (with lower correctable error rates).

Azo dye CD-Rs are dark blue in color, and their formulation is patented by Mitsubishi Chemical Corporation. Azo dyes are also chemically stable, and azo CD-Rs are typically rated with a lifetime of decades. Azo is the dye most resistant to UV light, beginning to degrade only after the third or fourth week of direct sunlight exposure. More modern implementations of this kind of dye include Super Azo, which is not as deep blue as the earlier Metal Azo; this change of composition was necessary in order to achieve faster writing speeds.

There are many hybrid variations of the dye formulations, such as Formazan by Kodak (a hybrid of cyanine and phthalocyanine).

Many manufacturers have in the past added extra coloring to disguise their unstable cyanine CD-Rs, so the formulation of a disc cannot be determined based purely on its color. Similarly, a gold reflective layer does not guarantee the use of phthalocyanine dye. The quality of a disc also depends not only on the dye used but on the sealing, the top layer, the reflective layer, and the polycarbonate, so simply choosing a disc based on its dye type may be problematic. Furthermore, correct power calibration of the laser in the writer, as well as correct timing of the laser pulses, stable disc speed, and so on, is critical not only to the immediate readability but to the longevity of the recorded disc, so for archiving it is important to have not only a high-quality disc but a high-quality writer. In fact, a high-quality writer may produce adequate results with medium-quality media, but high-quality media cannot compensate for a mediocre writer, and discs written by such a writer cannot achieve their maximum potential archival lifetime.

Speed

Rated write speeds describe only the actual optical writing pass over the disc. For most disc recording operations, additional time is used for overhead processes, such as organizing the files and tracks, which adds to the theoretical minimum total time required to produce a disc. (An exception might be making a disc from a prepared ISO image, for which the overhead would likely be trivial.) At the lowest write speeds, this overhead takes so much less time than the actual disc writing pass that it may be negligible, but at higher write speeds the overhead becomes a larger proportion of the overall time taken to produce a finished disc and may add significantly to it. Also, above 20× speed, drives use a Zoned-CLV or CAV strategy, in which the advertised maximum speed is only reached near the outer rim of the disc, so the nominal write time calculated from the rated speed understates the real one. (If this were not done, the faster rotation that would be required at the inner tracks could cause the disc to fracture and/or could cause excessive vibration, which would make accurate and successful writing impossible.)
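A rough sketch of these effects follows; it is our own illustration. The 90-second overhead figure is an arbitrary assumption for demonstration, not a measured value, and the constant-speed model deliberately ignores the Zoned-CLV/CAV behavior described above.

```python
# Rough sketch (our own illustration): nominal full-disc write times for an
# 80-minute CD-R at a constant n-times speed, plus a fixed overhead, showing
# why overhead matters more at high speeds. The 90 s overhead is an assumed,
# illustrative figure, not a measurement.

DISC_MINUTES = 80          # an 80-minute / 700 MiB blank
OVERHEAD_SECONDS = 90      # assumed lead-in/lead-out and organizing time

for speed in (1, 4, 16, 52):
    write_pass = DISC_MINUTES * 60 / speed   # seconds, constant-speed ideal
    total = write_pass + OVERHEAD_SECONDS
    print(f"{speed:>2}x: write pass {write_pass / 60:5.1f} min, "
          f"overhead is {OVERHEAD_SECONDS / total:5.1%} of total")

# At 1x the assumed overhead is under 2% of the job; at 52x it approaches
# half. Real drives above ~20x use Zoned-CLV or CAV, so actual write passes
# take longer than this constant-speed ideal suggests.
```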
Writing methods

The blank disc has a pre-groove track onto which the data are written. The pre-groove track, which also contains timing information, ensures that the recorder follows the same spiral path as a conventional CD. A CD recorder writes data to a CD-R disc by pulsing its laser to heat areas of the organic dye layer. The writing process does not produce indentations (pits); instead, the heat permanently changes the optical properties of the dye, changing the reflectivity of those areas. Using a low laser power, so as not to further alter the dye, the disc is read back in the same way as a CD-ROM. However, the reflected light is modulated not by pits, but by the alternating regions of heated and unaltered dye. The change in the intensity of the reflected laser radiation is transformed into an electrical signal, from which the digital information is recovered ("decoded"). Once a section of a CD-R is written, it cannot be erased or rewritten, unlike a CD-RW. A CD-R can be recorded in multiple sessions. A CD recorder can write to a CD-R using several methods, including:

Disc At Once – the whole CD-R is written in one session with no gaps and the disc is "closed", meaning no more data can be added and the CD-R effectively becomes a standard read-only CD. With no gaps between the tracks, the Disc At Once format is useful for "live" audio recordings.

Track At Once – data are written to the CD-R one track at a time, but the CD is left "open" for further recording at a later stage. This method also allows data and audio to reside on the same CD-R.

Packet Writing – used to record data to a CD-R in "packets", allowing extra information to be appended to a disc at a later time, or for information on the disc to be made "invisible". In this way, CD-R can emulate CD-RW; however, each time information on the disc is altered, more data has to be written to the disc (a conceptual sketch of this append-only model appears at the end of this section). There can be compatibility issues with this format and some CD drives.

With careful examination, the written and unwritten areas can be distinguished by the naked eye. CD-Rs are written from the center outwards, so the written area appears as an inner band with slightly different shading.

CDs have a Power Calibration Area, used to calibrate the writing laser before and during recording. CDs contain two such areas: one close to the inner edge of the disc, for low-speed calibration, and another close to the outer edge of the disc, for high-speed calibration. The calibration results are recorded in a Recording Management Area (RMA) that can hold up to 99 calibrations. The disc cannot be written to after the RMA is full; however, the RMA may be emptied on CD-RW discs.
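The following conceptual sketch, our own model rather than anything from drive firmware or a filesystem specification, illustrates the packet-writing behavior described above: each "edit" appends a new packet and the directory points at the newest version, so updates consume capacity instead of reclaiming it.

```python
# Conceptual sketch (our own model): packet writing lets a write-once CD-R
# emulate an editable disc. Each update appends a new packet; old packets
# become "invisible" but still occupy space, so the disc only ever fills up.

class PacketWrittenCDR:
    def __init__(self, capacity_bytes: int):
        self.capacity = capacity_bytes
        self.used = 0
        self.directory: dict[str, int] = {}  # filename -> offset of newest packet

    def write(self, name: str, data: bytes) -> None:
        if self.used + len(data) > self.capacity:
            raise IOError("disc full: old packets cannot be erased")
        self.directory[name] = self.used  # older versions become invisible
        self.used += len(data)

disc = PacketWrittenCDR(700 * 2**20)
disc.write("report.txt", b"x" * 1_000_000)
disc.write("report.txt", b"y" * 1_000_000)  # an "edit" consumes another 1 MB
print(disc.used)  # 2000000 -- both versions still occupy the dye layer
```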
Lifespan

Real-life (not accelerated aging) tests have revealed that some CD-Rs degrade quickly even if stored normally. The quality of a CD-R disc has a large and direct influence on longevity: low-quality discs should not be expected to last very long. According to research conducted by J. Perdereau, CD-Rs are expected to have an average life expectancy of 10 years. Branding is not a reliable guide to quality, because many brands (major as well as no-name) do not manufacture their own discs; instead, they are sourced from different manufacturers of varying quality. For best results, the actual manufacturer and material components of each batch of discs should be verified.

Burned CD-Rs suffer from material degradation, just like most writable media. CD-R media have an internal layer of dye used to store data. (In a CD-RW disc, by contrast, the recording layer is made of an alloy of silver with indium, antimony, and tellurium.) In CD-R media, the dye itself can degrade, causing data to become unreadable. As well as degradation of the dye, failure of a CD-R can be due to the reflective surface. While silver is less expensive and more widely used, it is more prone to oxidation, resulting in a non-reflecting surface. Gold, on the other hand, although more expensive and no longer widely used, is an inert material, so gold-based CD-Rs do not suffer from this problem; manufacturers have estimated the longevity of gold-based CD-Rs to be as high as 100 years. By measuring the rate of correctable data errors, the data integrity and/or manufacturing quality of CD-R media can be assessed, allowing for a reliable prediction of future data losses caused by media degradation.

Labeling

If adhesive-backed paper labels are used, they should be labels specially made for CD-Rs. A balanced CD vibrates only slightly when rotated at high speed. Bad or improperly made labels, or labels applied off-center, unbalance the CD and can cause it to vibrate when it spins, which causes read errors and even risks damaging the drive. A professional alternative to CD labels is pre-printed CDs using a 5-color silkscreen or offset press. Using a permanent marker pen is also a common practice; however, solvents from such pens can affect the dye layer.

Disposal

Data confidentiality

Since CD-Rs, in general, cannot be logically erased to any degree, the disposal of CD-Rs presents a possible security issue if they contain sensitive or private data. Destroying the data requires physically destroying the disc or the data layer. Heating the disc in a microwave oven for 10–15 seconds effectively destroys the data layer by causing arcing in the metal reflective layer, but this same arcing may damage or wear the microwave oven. Many office paper shredders are also designed to shred CDs. Some recent burners (Plextor, LiteOn) support erase operations on -R media by "overwriting" the stored data with strong laser power, although the erased area cannot be overwritten with new data.

Recycling

The polycarbonate material and the possible gold or silver in the reflective layer would make CD-Rs highly recyclable. However, the polycarbonate is of very little value and the quantity of precious metals is so small that it is not profitable to recover them. Consequently, recyclers that accept CD-Rs typically do not offer compensation for donating or transporting the materials.

See also

Absolute Time In Pregroove
Blu-ray Disc
CD recorder
CD-R caddy
CD-ROM, GD-ROM
CD-RW, DVD-RW
DVD, DVD-R, DVD+R, DVD+R DL
HD DVD
Labelflash
LightScribe
MultiLevel Recording, an obsolete technology (with non-binary modulation)
Optical disc authoring
Rainbow Books
MIL-CD
List of optical disc manufacturers

References

External links

ECMA-394: Recordable Compact Disc Systems CD-R Multi-Speed (standardized Orange Book, Part II, Volume 2)
The CD-R FAQ
Understanding CD-R & CD-RW at the Optical Storage Technology Association site.
https://en.wikipedia.org/wiki/Compound
Compound
Compound may refer to:

Architecture and built environments
Compound (enclosure), a cluster of buildings having a shared purpose, usually inside a fence or wall
Compound (fortification), a version of the above fortified with defensive structures
Compound (migrant labour), a hostel for migrant workers such as those historically connected with mines in South Africa
The Compound, an area of Palm Bay, Florida, US
Komboni or compound, a type of slum in Zambia

Government and law
Composition (fine), a legal procedure in use after the English Civil War
Committee for Compounding with Delinquents, an English Civil War institution that allowed Parliament to compound the estates of Royalists
Compounding treason, an offence under the common law of England
Compounding a felony, a former offence under the common law of England

Linguistics
Compound (linguistics), a word that consists of more than one radical element
Compound sentence (linguistics), a type of sentence made up of two or more independent clauses and no subordinate (dependent) clauses

Science, technology, and mathematics

Biology and medicine
Compounding, the mixing of drugs in pharmacy
Compound fracture, a complete fracture of a bone where at least one fragment has damaged the skin, soft tissue or surrounding body cavity
Compound leaf, a type of leaf divided into smaller leaflets

Chemistry and materials science
Chemical compound, a combination of two or more elements
Plastic compounding, a method of preparing plastic formulations

Vehicles and engines
Compound engine, a steam engine in which steam is expanded through a series of two or three cylinders before exhaust
Turbo-compound engine, an internal combustion engine in which exhaust gases expand through power turbines
Compounding pressure, a method in which pressure in a steam turbine is made to drop in a number of stages

Other uses in science, technology, and mathematics
Compound bow, a type of bow for archery
Polyhedral compound, a polyhedron composed of multiple polyhedra sharing the same centre

Other uses

Common names
Compound (music), an attribute of a time signature
Compound interest, in finance, unpaid interest that is added to the principal
Compound chocolate, an inexpensive chocolate substitute that uses cocoa but excludes cocoa butter

Proper names
The Compound (book), a 2008 young adult novel by S. A. Bodeen
Compound (company), a venture capital firm previously known as Metamorphic Ventures
Eisenhuth Horseless Vehicle Company, or Compound, a former US automobile manufacturer

See also
Composite (disambiguation)
https://en.wikipedia.org/wiki/Chiapas
Chiapas
Chiapas (Tzotzil and Tzeltal: Chyapas), officially the Free and Sovereign State of Chiapas, is one of the states that make up the 32 federal entities of Mexico. It comprises 124 municipalities, and its capital and largest city is Tuxtla Gutiérrez. Other important population centers in Chiapas include Ocosingo, Tapachula, San Cristóbal de las Casas, Comitán, and Arriaga. Chiapas is the southernmost state in Mexico. It borders the states of Oaxaca to the west, Veracruz to the northwest, and Tabasco to the north, and the Petén, Quiché, Huehuetenango, and San Marcos departments of Guatemala to the east and southeast. Chiapas has a significant coastline on the Pacific Ocean to the southwest.

In general, Chiapas has a humid, tropical climate. In the northern area bordering Tabasco, near Teapa, rainfall can average more than 3,000 mm per year. In the past, natural vegetation in this region was lowland, tall perennial rainforest, but this vegetation has been almost completely cleared to allow agriculture and ranching. Rainfall decreases moving towards the Pacific Ocean, but it is still abundant enough to allow the farming of bananas and many other tropical crops near Tapachula. On the several parallel sierras or mountain ranges running along the center of Chiapas, the climate can be quite moderate and foggy, allowing the development of cloud forests like those of the Reserva de la Biosfera El Triunfo, home to a handful of horned guans, resplendent quetzals, and azure-rumped tanagers.

Chiapas is home to the ancient Mayan ruins of Palenque, Yaxchilán, Bonampak, Chinkultic and Toniná. It is also home to one of the largest indigenous populations in the country, with ten federally recognized ethnicities.

History

The official name of the state is Chiapas, which is believed to have come from the ancient city of Chiapan, which in Náhuatl means "the place where the chia sage grows." After the Spanish arrived (1522), they established two cities called Chiapas de los Indios and Chiapas de los Españoles (1528), with the name of Provincia de Chiapas for the area around the cities. The first coat of arms of the region dates from 1535, as that of the Ciudad Real (San Cristóbal de las Casas). The Chiapas painter Javier Vargas Ballinas designed the modern coat of arms.

Pre-Columbian Era

Hunter-gatherers began to occupy the central valley of the state around 7000 BCE, but little is known about them. The oldest archaeological remains in the state are located at the Santa Elena Ranch in Ocozocoautla, whose finds include tools and weapons made of stone and bone, as well as burials. In the pre-Classic period, from 1800 BCE to 300 CE, agricultural villages appeared all over the state, although hunter-gatherer groups would persist for long after the era. Recent excavations in the Soconusco region of the state indicate that the oldest civilization to appear in what is now modern Chiapas was that of the Mokaya, who were cultivating corn and living in houses as early as 1500 BCE, making them one of the oldest civilizations in Mesoamerica. There is speculation that these were the forefathers of the Olmec, migrating across the Grijalva Valley and onto the coastal plain of the Gulf of Mexico to the north, which was Olmec territory. One of these people's ancient cities is now the archeological site of Chiapa de Corzo, where the oldest known calendar date was found on a piece of ceramic, corresponding to 36 BCE; this is three hundred years before the Mayans developed their calendar. The descendants of the Mokaya are the Mixe-Zoque.
During the pre-Classic era, it is known that most of Chiapas was not Olmec, but had close relations with them, especially the Olmecs of the Isthmus of Tehuantepec. Olmec-influenced sculpture can be found in Chiapas, and products from the state, including amber, magnetite, and ilmenite, were exported to Olmec lands. The Olmecs came to what is now the northwest of the state looking for amber, with one of the main pieces of evidence for this being the so-called Simojovel Ax.

Mayan civilization began in the pre-Classic period as well, but did not come into prominence until the Classic period (300–900 CE). The culture developed from agricultural villages during the pre-Classic period to city building during the Classic, as social stratification became more complex. The Mayans built cities on the Yucatán Peninsula and west into Guatemala. In Chiapas, Mayan sites are concentrated along the state's borders with Tabasco and Guatemala, near Mayan sites in those entities. Most of this area belongs to the Lacandon Jungle. Mayan civilization in the Lacandon area is marked by rising exploitation of rainforest resources, rigid social stratification, fervent local identity, and warfare against neighboring peoples. At its height, it had large cities, a writing system, and developed scientific knowledge, such as mathematics and astronomy. Cities were centered on large political and ceremonial structures elaborately decorated with murals and inscriptions. Among these cities are Palenque, Bonampak, Yaxchilan, Chinkultic, Toniná and Tenón. The Mayan civilization had extensive trade networks and large markets trading in goods such as animal skins, indigo, amber, vanilla and quetzal feathers. It is not known what ended the civilization, but theories include overpopulation, natural disasters, disease, and the loss of natural resources through overexploitation or climate change.

Nearly all Mayan cities collapsed around the same time, about 900 CE. From then until 1500 CE, the social organization of the region fragmented into much smaller units, and the social structure became much less complex. There was some influence from the rising powers of central Mexico, but two main indigenous groups emerged during this time: the Zoques and the various Mayan descendants. The Chiapans, for whom the state is named, migrated into the center of the state during this time and settled around Chiapa de Corzo, the old Mixe-Zoque stronghold. There is evidence that the Aztecs appeared in the center of the state around Chiapa de Corzo in the 15th century, but were unable to displace the native Chiapa tribe. However, they had enough influence that the name of this area, and of the state, would come from Nahuatl.

Colonial period

When the Spanish arrived in the 16th century, they found the indigenous peoples divided into Mayan and non-Mayan, with the latter dominated by the Zoques and the Chiapa. The first contact between Spaniards and the people of Chiapas came in 1522, when Hernán Cortés sent tax collectors to the area after the Aztec Empire was subdued. The first military incursion was headed by Luis Marín, who arrived in 1523. After three years, Marín was able to subjugate a number of the local peoples, but met with fierce resistance from the Tzotzils in the highlands. The Spanish colonial government then sent a new expedition under Diego de Mazariegos. Mazariegos had more success than his predecessor, but many natives preferred to commit suicide rather than submit to the Spanish.
One famous example of this is the Battle of Tepetchia, where many jumped to their deaths in the Sumidero Canyon. Indigenous resistance was weakened by continual warfare with the Spaniards and by disease. By 1530, almost all of the indigenous peoples of the area had been subdued, with the exception of the Lacandons in the deep jungles, who actively resisted until 1695. However, the two main groups, the Tzotzils and Tzeltals of the central highlands, were subdued enough for the Spanish to establish their first city, today called San Cristóbal de las Casas, in 1528. It was one of two settlements, initially called Villa Real de Chiapa de los Españoles, with the other called Chiapa de los Indios.

Soon after, the encomienda system was introduced, which reduced most of the indigenous population to serfdom, and many even to slavery, as a form of tribute and a way of locking in a labor supply for tax payments. The conquistadors brought previously unknown diseases; this, as well as overwork on plantations, dramatically decreased the indigenous population. The Spanish also established missions, mostly under the Dominicans, with the Diocese of Chiapas established in 1538 by Pope Paul III. The Dominican evangelizers became early advocates of the indigenous people's plight, with Bartolomé de las Casas winning the passage of a law for their protection in 1542. This order also worked to make sure that communities would keep their indigenous names, with a saint's name prefixed, leading to names such as San Juan Chamula and San Lorenzo Zinacantán. Las Casas also advocated adapting the teaching of Christianity to indigenous language and culture. The encomienda system that had perpetrated much of the abuse of the indigenous peoples declined by the end of the 16th century and was replaced by haciendas. However, the use and misuse of Indian labor remained a large part of Chiapas politics into modern times. Maltreatment and tribute payments created an undercurrent of resentment in the indigenous population that was passed on from generation to generation. One uprising against high tribute payments occurred in the Tzeltal communities in the Los Altos region in 1712. Soon the Tzotzils and Ch'ols joined the Tzeltals in rebellion, but within a year the government was able to extinguish it. As of 1778, Thomas Kitchin described Chiapas as "the metropolis of the original Mexicans," with a population of approximately 20,000, consisting mainly of indigenous peoples.

The Spanish introduced new crops such as sugar cane, wheat, barley and indigo as main economic staples alongside native ones such as corn, cotton, cacao and beans. Livestock such as cattle, horses and sheep were introduced as well. Regions would specialize in certain crops and animals depending on local conditions, and for many of these regions, communication and travel were difficult. Most Europeans and their descendants tended to concentrate in cities such as Ciudad Real, Comitán, Chiapa and Tuxtla. Intermixing of the races was prohibited by colonial law, but by the end of the 17th century there was a significant mestizo population. Added to this was a population of African slaves brought in by the Spanish in the middle of the 16th century due to the loss of the native workforce. Initially, "Chiapas" referred to the first two cities established by the Spanish in what is now the center of the state and the area surrounding them. Two other regions were also established, the Soconusco and Tuxtla, all under the regional colonial government of Guatemala.
The Chiapas, Soconusco and Tuxtla regions were united for the first time as an intendencia during the Bourbon Reforms in 1790, as an administrative region under the name of Chiapas. However, within this intendencia, the division between the Chiapas and Soconusco regions would remain strong and have consequences at the end of the colonial period.

Era of Independence

Throughout the colonial period, Chiapas was relatively isolated from the colonial authorities in Mexico City and the regional authorities in Guatemala. One reason for this was the rugged terrain. Another was that much of Chiapas was not attractive to the Spanish: it lacked mineral wealth, large areas of arable land, and easy access to markets. This isolation spared it from battles related to Independence. José María Morelos y Pavón did enter the city of Tonalá, but encountered no resistance. The only other insurgent activity was the publication of a newspaper called El Pararrayos by Matías de Córdova in San Cristóbal de las Casas.

Following the end of Spanish rule in New Spain, it was unclear what new political arrangements would emerge. The isolation of Chiapas from centers of power, along with the strong internal divisions in the intendencia, caused a political crisis after the royal government collapsed in Mexico City in 1821, ending the Mexican War of Independence. During this war, a group of influential Chiapas merchants and ranchers sought the establishment of the Free State of Chiapas. This group became known as La Familia Chiapaneca. However, this alliance did not last, with the lowlands preferring inclusion among the new republics of Central America and the highlands favoring annexation to Mexico. In 1821, a number of cities in Chiapas, starting with Comitán, declared the state's separation from the Spanish empire. In 1823, Guatemala became part of the United Provinces of Central America, which united to form a federal republic that would last from 1823 to 1839. With the exception of the pro-Mexican Ciudad Real (San Cristóbal) and some others, many Chiapanecan towns and villages favored a Chiapas independent of Mexico, and some favored unification with Guatemala. Elites in highland cities pushed for incorporation into Mexico. In 1822, then-Emperor Agustín de Iturbide decreed that Chiapas was part of Mexico. In 1823, the Junta General de Gobierno was held and Chiapas declared independence again. In July 1824, the Soconusco District of southwestern Chiapas split off from Chiapas, announcing that it would join the Central American Federation. In September of the same year, a referendum was held on whether the intendencia would join Central America or Mexico, with many of the elite endorsing union with Mexico. This referendum ended in favor of incorporation with Mexico (allegedly through manipulation by the elite in the highlands), but the Soconusco region maintained a neutral status until 1842, when Oaxacans under General Antonio López de Santa Anna occupied the area and declared it reincorporated into Mexico. Elites of the area would not accept this until 1844. Guatemala would not recognize Mexico's annexation of the Soconusco region until 1895, even though the border between Chiapas and Guatemala was agreed upon in 1882. The State of Chiapas was officially declared in 1824, with its first constitution in 1826. Ciudad Real was renamed San Cristóbal de las Casas in 1828. In the decades after the official end of the war, the provinces of Chiapas and Soconusco unified, with power concentrated in San Cristóbal de las Casas.
The state's society evolved into three distinct spheres: indigenous peoples, mestizos from the farms and haciendas, and the Spanish colonial cities. Most of the political struggles were between the last two groups, especially over who would control the indigenous labor force. Economically, the state lost one of its main crops, indigo, to synthetic dyes. There was a small experiment with democracy in the form of "open city councils", but it was short-lived because voting was heavily rigged. The Universidad Pontificia y Literaria de Chiapas was founded in 1826, with Mexico's second teachers' college founded in the state in 1828.

Era of the Liberal Reform

With the ouster of the conservative Antonio López de Santa Anna, Mexican liberals came to power. The Reform War (1858–1861), fought between the Liberals, who favored federalism, sought economic development, and wanted to reduce the power of the Roman Catholic Church and the Mexican army, and the Conservatives, who favored centralized autocratic government and the retention of elite privileges, did not lead to any military battles in the state. Despite that, it strongly affected Chiapas politics. In Chiapas, the Liberal-Conservative division had its own twist: much of the division between the highland and lowland ruling families was over whom the Indians should work for, and for how long, as the main shortage was of labor. These families split into Liberals in the lowlands, who wanted further reform, and Conservatives in the highlands, who still wanted to keep some of the traditional colonial and church privileges. For most of the early and mid 19th century, Conservatives held most of the power and were concentrated in the larger cities of San Cristóbal de las Casas, Chiapa (de Corzo), Tuxtla and Comitán. As Liberals gained the upper hand nationally in the mid-19th century, one Liberal politician, Ángel Albino Corzo, gained control of the state. Corzo became the primary exponent of Liberal ideas in the southeast of Mexico and defended the Palenque and Pichucalco areas from annexation by Tabasco. However, Corzo's rule would end in 1875, when he opposed the regime of Porfirio Díaz.

Liberal land reforms would have negative effects on the state's indigenous population, unlike in other areas of the country. Liberal governments expropriated lands that were previously held by the Spanish Crown and the Catholic Church in order to sell them into private hands. This was motivated not only by ideology but also by the need to raise money. However, many of these lands had been held in a kind of "trust" with the local indigenous populations, who worked them. Liberal reforms took away this arrangement, and many of these lands fell into the hands of large landholders, who then made the local Indian population work for three to five days a week just for the right to continue to cultivate the lands. This requirement caused many to leave and look for employment elsewhere. Most became "free" workers on other farms, but they were often paid only with food and basic necessities from the farm shop. When this was not enough, these workers became indebted to these same shops and were then unable to leave.

The opening up of these lands also allowed many whites and mestizos (often called Ladinos in Chiapas) to encroach on what had been exclusively indigenous communities in the state. These communities had had almost no contact with the Ladino world, except for a priest. The new Ladino landowners occupied their acquired lands, while others, such as shopkeepers, opened up businesses in the centers of Indian communities.
In 1848, a group of Tzeltals plotted to kill the new mestizos in their midst, but the plan was discovered and punished by the removal of a large number of the community's male members. The changing social order had severe negative effects on the indigenous population, with alcoholism spreading and leading to more debts, as alcohol was expensive. The struggles between Conservatives and Liberals nationally disrupted commerce and confused power relations between Indian communities and Ladino authorities. It also resulted in some brief respites for the Indians during times when the instability led to uncollected taxes.

One other effect that the Liberal land reforms had was the start of coffee plantations, especially in the Soconusco region. One reason for this push was that Mexico was still working to strengthen its claim on the area against Guatemala's claims on the region. The land reforms brought colonists from other areas of the country, as well as foreigners from England, the United States and France. These foreign immigrants introduced coffee production to the area, as well as modern machinery and the professional administration of coffee plantations. Eventually, this production of coffee would become the state's most important crop.

Although the Liberals had mostly triumphed in the state and the rest of the country by the 1860s, Conservatives still held considerable power in Chiapas. Liberal politicians sought to solidify their power among the indigenous groups by weakening the Roman Catholic Church. The more radical of these even allowed indigenous groups the religious freedom to return to a number of native rituals and beliefs, such as pilgrimages to natural shrines like mountains and waterfalls. This culminated in the Chiapas "caste war", an uprising of Tzotzils beginning in 1868. The basis of the uprising was the establishment of the "three stones cult" in Tzajahemel: Agustina Gómez Checheb was a girl tending her father's sheep when three stones fell from the sky. Collecting them, she put them on her father's altar and soon claimed that the stones communicated with her. Word of this soon spread, and the "talking stones" of Tzajahemel soon became a local indigenous pilgrimage site. The cult was taken over by one pilgrim, Pedro Díaz Cuzcat, who also claimed to be able to communicate with the stones and had knowledge of Catholic ritual, becoming a kind of priest. However, this challenged the traditional Catholic faith, and non-Indians began to denounce the cult. Stories about the cult include embellishments such as the crucifixion of a young Indian boy.

This led to the arrest of Checheb and Cuzcat in December 1868, which caused resentment among the Tzotzils. Although the Liberals had earlier supported the cult, Liberal landowners had also lost control of much of their Indian labor, and Liberal politicians were having a harder time collecting taxes from indigenous communities. An Indian army gathered at Zontehuitz, then attacked various villages and haciendas. By the following June, the city of San Cristóbal was surrounded by several thousand Indians, who offered to exchange several Ladino captives for their religious leaders and stones. Chiapas governor Domínguez came to San Cristóbal with about three hundred heavily armed men, who then attacked the Indian force, armed only with sticks and machetes. The indigenous force was quickly dispersed and routed, with government troops pursuing pockets of guerrilla resistance in the mountains until 1870.
The event effectively returned control of the indigenous workforce to the highland elite.

Porfiriato, 1876–1911

The Porfirio Díaz era at the end of the 19th century and the beginning of the 20th was initially thwarted by regional bosses called caciques, bolstered by a wave of Spanish and mestizo farmers who migrated to the state and added to the elite group of wealthy landowning families. There was some technological progress, such as a highway from San Cristóbal to the Oaxaca border and the first telephone line in the 1880s, but Porfirian-era economic reforms would not begin until 1891, with Governor Emilio Rabasa. This governor took on the local and regional caciques and centralized power in the state capital, which he moved from San Cristóbal de las Casas to Tuxtla in 1892. He modernized public administration and transportation and promoted education. Rabasa also introduced the telegraph, limited public schooling, sanitation and road construction, including a route from San Cristóbal to Tuxtla and then Oaxaca, which signaled the beginning of favoritism toward development in the central valley over the highlands. He also changed state policies to favor foreign investment and the consolidation of large landholdings for the production of cash crops such as henequen, rubber, guayule, cochineal and coffee. Agricultural production boomed, especially coffee, which induced the construction of port facilities in Tonalá. The economic expansion and investment in roads also increased access to tropical commodities such as hardwoods, rubber and chicle.

These still required cheap and steady labor, which was provided by the indigenous population. By the end of the 19th century, the four main indigenous groups, the Tzeltals, Tzotzils, Tojolabals and Ch'ols, were living in "reducciones" or reservations, isolated from one another. Conditions on the farms of the Porfirian era amounted to serfdom, as bad as if not worse than conditions for other indigenous and mestizo populations in the lead-up to the Mexican Revolution. While this coming event would affect the state, Chiapas did not follow the uprisings in other areas that would end the Porfirian era.

Japanese immigration to Mexico began in 1897, when the first thirty-five migrants arrived in Chiapas to work on coffee farms, making Mexico the first Latin American country to receive organized Japanese immigration. Although this colony ultimately failed, there remains a small Japanese community in Acacoyagua, Chiapas.

Early 20th century to 1960

In the early 20th century and into the Mexican Revolution, the production of coffee was particularly important but labor-intensive. This led to a practice called enganche (hook), in which recruiters would lure workers with advance pay and other incentives such as alcohol, and then trap them with debts for travel and other items to be worked off. This practice led to a kind of indentured servitude and to uprisings in areas of the state, although these never produced large rebel armies as in other parts of Mexico. A small war broke out between Tuxtla Gutiérrez and San Cristóbal in 1911, when San Cristóbal, whose budget was so limited that it had to ally with San Juan Chamula, tried to regain the state capital; Tuxtla Gutiérrez, with only a small, ragtag army of its own, overwhelmingly defeated the San Cristóbal force and its Chamula allies.
There were three years of peace after that, until troops allied with the "First Chief" of the revolutionary Constitutionalist forces, Venustiano Carranza, entered in 1914, taking over the government with the aim of imposing the Ley de Obreros (Workers' Law) to address injustices against the state's mostly indigenous workers. Conservatives responded violently months later, when they became certain the Carranza forces would take their lands, mostly by way of guerrilla actions headed by farm owners who called themselves the Mapaches. This action continued for six years, until President Carranza was assassinated in 1920 and revolutionary general Álvaro Obregón became president of Mexico. This allowed the Mapaches to gain political power in the state and effectively stop many of the social reforms occurring in other parts of Mexico.

The Mapaches continued to fight against socialists and communists in Mexico from 1920 to 1936 to maintain their control over the state. In general, elite landowners also allied with the nationally dominant party founded by Plutarco Elías Calles following the assassination of president-elect Obregón in 1928; that party was renamed the Institutional Revolutionary Party in 1946. Through that alliance, they could block land reform as well. The Mapaches were first defeated in 1925, when an alliance of socialists and former Carranza loyalists had Carlos A. Vidal selected as governor, although he was assassinated two years later. The last of the Mapache resistance was overcome in the early 1930s by Governor Victorico Grajales, who pursued President Lázaro Cárdenas' social and economic policies, including persecution of the Catholic Church. These policies would have some success in redistributing lands and organizing indigenous workers, but the state would remain relatively isolated for the rest of the 20th century. The territory was reorganized into municipalities in 1916. The current state constitution was written in 1921.

There was political stability from the 1940s to the early 1970s; however, regionalism regained strength, with people thinking of themselves as being from their local city or municipality rather than from the state. This regionalism impeded the economy, as local authorities restricted the entry of outside goods. For this reason, the construction of highways and communications was pushed to help with economic development. Most of the work was done around Tuxtla Gutiérrez and Tapachula. This included the Sureste railroad connecting northern municipalities such as Pichucalco, Salto de Agua, Palenque, Catazajá and La Libertad. The Cristóbal Colón highway linked Tuxtla to the Guatemalan border. Other highways included El Escopetazo to Pichucalco, and a highway between San Cristóbal and Palenque with branches to Cuxtepeques and La Frailesca. This helped to integrate the state's economy, but it also permitted the political rise of communal landowners called ejidatarios.

Mid-20th century to 1990

In the mid-20th century, the state experienced a significant rise in population, which outstripped local resources, especially land in the highland areas. Since the 1930s, many indigenous people and mestizos have migrated from the highland areas into the Lacandon Jungle, with the populations of Altamirano, Las Margaritas, Ocosingo and Palenque rising from less than 11,000 in 1920 to over 376,000 in 2000. These migrants came to the jungle area to clear forest, grow crops, and raise livestock, especially cattle.
Economic development in general raised the output of the state, especially in agriculture, but it had the effect of deforesting many areas, especially the Lacandon. Added to this, serf-like conditions persisted for many workers, and educational infrastructure was insufficient. The population continued to increase faster than the economy could absorb it. There were some attempts to resettle peasant farmers onto uncultivated lands, but they were met with resistance. President Gustavo Díaz Ordaz awarded a land grant to the town of Venustiano Carranza in 1967, but that land was already being used by cattle ranchers who refused to leave. The peasants tried to take over the land anyway, but when violence broke out, they were forcibly removed. In Chiapas, poor farmland and severe poverty afflicted the Mayan Indians, leading to unsuccessful nonviolent protests and eventually to the armed struggle started by the Zapatista National Liberation Army in January 1994.

These events began to lead to political crises in the 1970s, with more frequent land invasions and takeovers of municipal halls. This was the beginning of a process that would lead to the emergence of the Zapatista movement in the 1990s. Another important factor in this movement was the role of the Catholic Church from the 1960s to the 1980s. In 1960, Samuel Ruiz became the bishop of the Diocese of Chiapas, centered in San Cristóbal. He supported and worked with Marist priests and nuns following an ideology called liberation theology. In 1974, he organized a statewide "Indian Congress" with representatives from the Tzeltal, Tzotzil, Tojolabal and Ch'ol peoples from 327 communities, as well as Marists and the Maoist People's Union. This congress was the first of its kind, with the goal of uniting the indigenous peoples politically. These efforts were also supported by leftist organizations from outside Mexico, especially in forming unions of ejido organizations. These unions would later form the base of the EZLN organization. One reason for the Church's efforts to reach out to the indigenous population was that, starting in the 1970s, a shift began from traditional Catholic affiliation to Protestant, Evangelical and other Christian sects.

The 1980s saw a large wave of refugees coming into the state from Central America, as a number of these countries, especially Guatemala, were in the midst of violent political turmoil. The Chiapas-Guatemala border had been relatively porous, with people traveling back and forth easily in the 19th and 20th centuries, much like the Mexico-U.S. border around the same time. This was in spite of tensions caused by Mexico's annexation of the Soconusco region in the 19th century. The border between Mexico and Guatemala had traditionally been poorly guarded, due to diplomatic considerations, a lack of resources, and pressure from landowners who needed cheap labor. The arrival of thousands of refugees from Central America stressed Mexico's relationship with Guatemala, at one point coming close to war, as well as politically destabilizing Chiapas. Although Mexico is not a signatory to the UN Convention Relating to the Status of Refugees, international pressure forced the government to grant official protection to at least some of the refugees. Camps were established in Chiapas and other southern states, and mostly housed Mayan peoples. However, most Central American refugees from that time never received any official status, estimated by church and charity groups at about half a million from El Salvador alone.
The Mexican government resisted direct international intervention in the camps, but eventually relented somewhat because of finances. By 1984, there were 92 camps with 46,000 refugees in Chiapas, concentrated in three areas, mostly near the Guatemalan border. To make matters worse, the Guatemalan army conducted raids into camps on Mexican territory, with significant casualties, terrifying the refugees and local populations. Within Mexico, refugees faced threats from local governments, which threatened to deport them, legally or not, and from local paramilitary groups funded by those worried about the political situation in Central America spilling over into the state. The official government response was to militarize the areas around the camps, which limited international access, and migration into Mexico from Central America was restricted. By 1990, it was estimated that there were over 200,000 Guatemalans and half a million refugees from El Salvador, almost all peasant farmers and most under age twenty.

In the 1980s, the politicization of the indigenous and rural populations of the state that had begun in the 1960s and 1970s continued. In 1980, several ejidos (communal land organizations) joined to form the Union of Ejidal Unions and United Peasants of Chiapas, generally called the Union of Unions, or UU. It had a membership of 12,000 families from over 180 communities. By 1988, this organization joined with others to form the ARIC-Union of Unions (ARIC-UU) and took over much of the Lacandon Jungle portion of the state. Most of the members of these organizations were from Protestant and Evangelical sects, as well as "Word of God" Catholics affiliated with the political movements of the Diocese of Chiapas. What they held in common was indigenous identity vis-à-vis the non-indigenous, using the old 19th-century "caste war" word "Ladino" for the latter.

Economic liberalization and the EZLN

The adoption of liberal economic reforms by the Mexican federal government clashed with the leftist political ideals of these groups, notably as the reforms were believed to have begun to have negative economic effects on poor farmers, especially small-scale indigenous coffee growers. Opposition would coalesce into the Zapatista movement in the 1990s. Although the Zapatista movement couched its demands and cast its role in response to contemporary issues, especially in its opposition to neoliberalism, it operates in the tradition of a long line of peasant and indigenous uprisings that have occurred in the state since the colonial era. This is reflected in its indigenous-versus-mestizo character. However, the movement was an economic one as well: although the area has extensive resources, much of the local population of the state, especially in rural areas, did not benefit from this bounty. In the 1990s, two-thirds of the state's residents did not have sewage service, only a third had electricity, and half did not have potable water. Over half of the schools offered education only to the third grade, and most pupils dropped out by the end of first grade. Grievances, strongest in the San Cristóbal and Lacandon Jungle areas, were taken up by a small leftist guerrilla band led by a man known only as "Subcomandante Marcos."
This small band, called the Zapatista Army of National Liberation (Ejército Zapatista de Liberación Nacional, EZLN), came to the world's attention when, on January 1, 1994 (the day the NAFTA treaty went into effect), EZLN forces occupied and took over the towns of San Cristóbal de las Casas, Las Margaritas, Altamirano, Ocosingo and three others. They read their proclamation of revolt to the world and then laid siege to a nearby military base, capturing weapons and releasing many prisoners from the jails. This action followed previous protests in the state in opposition to neoliberal economic policies.

Although it has been estimated as having no more than 300 armed guerrilla members, the EZLN paralyzed the Mexican government, which balked at the political risks of direct confrontation. The major reason for this was that the rebellion caught the attention of the national and world press, as Marcos made full use of the then-new Internet to get the group's message out, putting the spotlight on indigenous issues in Mexico in general. Furthermore, the opposition press in Mexico City, especially La Jornada, actively supported the rebels. These factors encouraged the rebellion to go national. Many blamed the unrest on the infiltration of leftists among the large Central American refugee population in Chiapas, and the rebellion opened up splits in the countryside between those supporting and those opposing the EZLN. Zapatista sympathizers have included mostly Protestants and Word of God Catholics, opposing those "traditionalist" Catholics who practiced a syncretic form of Catholicism and indigenous beliefs. This split had existed in Chiapas since the 1970s, with the latter group supported by the caciques and others in the traditional power structure. Protestants and Word of God Catholics (allied directly with the bishopric in San Cristóbal) tended to oppose traditional power structures.

The Bishop of Chiapas, Samuel Ruiz, and the Diocese of Chiapas reacted by offering to mediate between the rebels and the authorities. However, because of this diocese's activism since the 1960s, the authorities accused the clergy of being involved with the rebels. There was some ambiguity about the relationship between Ruiz and Marcos, and it was a constant feature of news coverage, with many in official circles using it to discredit Ruiz. Eventually, the activities of the Zapatistas began to worry the Roman Catholic Church in general and to upstage the diocese's attempts to re-establish itself among Chiapan indigenous communities against Protestant evangelization. This would lead to a breach between the Church and the Zapatistas.

The Zapatista story remained in the headlines for a number of years. One reason for this was the December 1997 massacre of forty-five unarmed Tzotzil peasants, mostly women and children, in the Zapatista-controlled village of Acteal in the Chenalhó municipality just north of San Cristóbal. This allowed many media outlets in Mexico to step up their criticisms of the government.

Despite this, the armed conflict was brief, mostly because the Zapatistas, unlike many other guerrilla movements, did not try to gain traditional political power. The movement focused instead on trying to shape public opinion in order to obtain concessions from the government. This has linked the Zapatistas to other indigenous and identity-politics movements that arose in the late 20th century. The main concession that the group received was the San Andrés Accords (1996), also known as the Law on Indian Rights and Culture.
The Accords appear to grant certain indigenous zones autonomy, but this conflicts with the Mexican constitution, so their legitimacy has been questioned. Zapatista declarations since the mid-1990s have called for a new constitution; the government has not found a solution to this problem. The revolt also pressed the government to institute anti-poverty programs such as "Progresa" (later called "Oportunidades") and the "Puebla-Panama Plan", aiming to increase trade between southern Mexico and Central America.

As of the first decade of the 2000s, the Zapatista movement remained popular in many indigenous communities. The uprising gave indigenous peoples a more active role in the state's politics. However, it did not solve the economic issues that many peasant farmers face, especially the lack of land to cultivate. This problem has been at crisis proportions since the 1970s, and the government's reaction has been to encourage peasant farmers, mostly indigenous, to migrate into the sparsely populated Lacandon Jungle, a trend since earlier in the century. From the 1970s on, some 100,000 people set up homes in this rainforest area, many of them recognized as ejidos, or communal land-holding organizations. These migrants included Tzeltals, Tojolabals, Ch'ols and mestizos, mostly farming corn and beans and raising livestock. However, the government changed policies in the late 1980s with the establishment of the Montes Azules Biosphere Reserve, as much of the Lacandon Jungle had been destroyed or severely damaged.

While armed resistance has wound down, the Zapatistas have remained a strong political force, especially around San Cristóbal and the Lacandon Jungle, their traditional bases. Since the Accords, they have shifted their focus to gaining autonomy for the communities they control. Since the 1994 uprising, migration into the Lacandon Jungle has significantly increased, involving illegal settlements and cutting in the protected biosphere reserve. The Zapatistas support these actions as part of indigenous rights, but this has put them in conflict with international environmental groups and with the indigenous inhabitants of the rainforest area, the Lacandons. Environmental groups state that the settlements pose grave risks to what remains of the Lacandon, while the Zapatistas accuse them of being fronts for the government, which wants to open the rainforest up to multinational corporations. Added to this is the possibility that significant oil and gas deposits exist under this area.

The Zapatista movement has had some successes. The agricultural sector of the economy now favors ejidos and other commonly owned land. There have been some other gains economically as well. In the last decades of the 20th century, Chiapas's traditional agricultural economy diversified somewhat with the construction of more roads and better infrastructure by the federal and state governments. Tourism has become important in some areas of the state, especially in San Cristóbal de las Casas and Palenque. The state's economy is important to Mexico as a whole as well, producing coffee, corn, cacao, tobacco, sugar, fruit, vegetables and honey for export. It is also a key state for the nation's petrochemical and hydroelectric industries: a significant percentage of PEMEX's drilling and refining takes place in Chiapas and Tabasco, and Chiapas produces fifty-five percent of Mexico's hydroelectric energy. However, Chiapas remains one of the poorest states in Mexico.
Ninety-four of its 111 municipalities have a large percentage of the population living in poverty. In areas such as Ocosingo, Altamirano and Las Margaritas, the towns where the Zapatistas first came into prominence in 1994, 48% of the adults were illiterate. Chiapas is still considered isolated and distant from the rest of Mexico, both culturally and geographically. It has significantly underdeveloped infrastructure compared to the rest of the country, and its large indigenous population, with its isolationist tendencies, keeps the state culturally distinct. Cultural stratification, neglect and a lack of investment by the Mexican federal government have exacerbated this problem.
Geography
Political geography
Chiapas is located in the southeast of Mexico, bordering the states of Tabasco, Veracruz and Oaxaca, with the Pacific Ocean to the south and Guatemala to the east. It has a territory of 74,415 km2, making it the eighth largest state in Mexico. The state consists of 118 municipalities organized into nine political regions called Center, Altos, Fronteriza, Frailesca, Norte, Selva, Sierra, Soconusco and Istmo-Costa. There are 18 cities, twelve towns (villas) and 111 pueblos (villages). Major cities include Tuxtla Gutiérrez, San Cristóbal de las Casas, Tapachula, Palenque, Comitán, and Chiapa de Corzo.
Geographical regions
The state has a complex geography with seven distinct regions according to the Mullerried classification system: the Pacific Coast Plains, the Sierra Madre de Chiapas, the Central Depression, the Central Highlands, the Eastern Mountains, the Northern Mountains and the Gulf Coast Plains.
The Pacific Coast Plains is a strip of land parallel to the ocean. It is composed mostly of sediment from the mountains that border it on the northern side. It is uniformly flat and stretches from the Bernal Mountain south to Tonalá. It has deep, salty soils due to its proximity to the sea. Its vegetation is mostly deciduous rainforest, although most has been converted to pasture for cattle and fields for crops. It has numerous estuaries with mangroves and other aquatic vegetation.
The Sierra Madre de Chiapas runs parallel to the Pacific coastline of the state, northwest to southeast, as a continuation of the Sierra Madre del Sur. This area has the highest altitudes in Chiapas, including the Tacaná Volcano. Most of these mountains are volcanic in origin, although the nucleus is metamorphic rock. The sierra has a wide range of climates but little arable land. It is mostly covered in middle altitude rainforest, high altitude rainforest, and forests of oaks and pines. The mountains partially block rain clouds coming from the Pacific, a process known as orographic lift, which creates a particularly rich coastal region called the Soconusco. The main commercial center of the sierra is the town of Motozintla, also near the Guatemalan border.
The Central Depression is in the center of the state. It is an extensive semi-flat area bordered by the Sierra Madre de Chiapas, the Central Highlands and the Northern Mountains. Within the depression there are a number of distinct valleys. The climate here can be very hot and humid in the summer, especially due to the large volume of rain received in July and August. The original vegetation was lowland deciduous forest, with some rainforest at middle altitudes and some oaks at higher elevations.
The Central Highlands, also referred to as Los Altos, are mountains oriented from northwest to southeast.
The western highlands are displaced faults, while the eastern highlands are mainly folds of sedimentary formations: limestone, shale, and sandstone. These mountains, along with the Sierra Madre de Chiapas, become the Cuchumatanes where they extend over the border into Guatemala. The topography is mountainous, with many narrow valleys and karst formations called uvalas or poljés, depending on their size. Most of the rock is limestone, allowing for formations such as caves and sinkholes. There are also some isolated pockets of volcanic rock, with the tallest peaks being the Tzontehuitz and Huitepec volcanoes. There are no significant surface water systems, as almost all water flows underground. The original vegetation was forest of oak and pine, but these have been heavily damaged. In the Köppen classification system as modified for Mexico, the highlands climate is humid temperate C(m) and subhumid temperate C(w2)(w). This climate exhibits a summer rainy season and a dry winter, with possibilities of frost from December to March. The Central Highlands have been the population center of Chiapas since the Conquest. European epidemics were hindered by the tierra fría climate, allowing the indigenous peoples in the highlands to retain their large numbers.
The Eastern Mountains (Montañas del Oriente) are in the east of the state, formed by various parallel mountain chains mostly made of limestone and sandstone. This area receives moisture from the Gulf of Mexico, with abundant rainfall and exuberant vegetation, which creates the Lacandon Jungle, one of the most important rainforests in Mexico.
The Northern Mountains (Montañas del Norte) are in the north of the state. They separate the flatlands of the Gulf Coast Plains from the Central Depression. Their rock is mostly limestone. These mountains also receive large amounts of rainfall, with moisture from the Gulf of Mexico giving them a mostly hot and humid climate with rain year round. At the highest elevations, temperatures are somewhat cooler and there is a defined winter. The terrain is rugged, with small valleys whose natural vegetation is high altitude rainforest.
The Gulf Coast Plains (Llanura Costera del Golfo) stretch into Chiapas from the state of Tabasco, which gives them the alternate name of the Tabasqueña Plains. These plains are found only in the extreme north of the state. The terrain is flat and prone to flooding during the rainy season, as it was built by sediments deposited by rivers and streams heading to the Gulf.
Lacandon Jungle
The Lacandon Jungle is situated in northeastern Chiapas, centered on a series of canyon-like valleys called the Cañadas, between smaller mountain ridges oriented from northwest to southeast. The ecosystem extends from Chiapas into northern Guatemala, the southern Yucatán Peninsula and Belize. This area contains as much as 25% of Mexico's total species diversity, most of which has not been researched. It has a predominantly hot and humid climate (Am in the Köppen classification), with most rain falling from summer into fall and an average of between 2,300 and 2,600 mm per year. There is a short dry season from March to May. The predominant wild vegetation is perennial high rainforest. The Lacandon comprises a biosphere reserve (Montes Azules); four natural protected areas (Bonampak, Yaxchilan, Chan Kin, and Lacantum); and the communal reserve (La Cojolita), which functions as a biological corridor with the Petén area of Guatemala.
Flowing within the rainforest is the Usumacinta River, considered one of the largest rivers in Mexico and the seventh largest in the world based on volume of water.
During the 20th century, the Lacandon has seen a dramatic increase in population and, along with it, severe deforestation. The combined population of the municipalities in this area (Altamirano, Las Margaritas, Ocosingo and Palenque) rose from 11,000 in 1920 to over 376,000 in 2000. Migrants include Ch'ol, Tzeltal, Tzotzil and Tojolabal indigenous peoples, along with mestizos, Guatemalan refugees and others. Most of these migrants are peasant farmers who cut forest to plant crops. However, the soil of this area cannot support annual crop farming for more than three or four harvests. The increase in population and the need to move on to new lands has pitted migrants against each other, against the native Lacandon people, and against the various ecological reserves. It is estimated that only ten percent of the original Lacandon rainforest in Mexico remains, with the rest strip-mined, logged and farmed. It once stretched over a large part of eastern Chiapas, but all that remains runs along the northern edge of the Guatemalan border. Of this remaining portion, Mexico is losing over five percent each year.
The best preserved portion of the Lacandon is within the Montes Azules Biosphere Reserve. It is centered on what was a commercial logging grant by the Porfirio Díaz government, which the government later nationalized. However, this nationalization and conversion into a reserve has made it one of the most contested lands in Chiapas, with the already existing ejidos and other settlements within the park alongside new arrivals squatting on the land.
Soconusco
The Soconusco region encompasses a coastal plain and a mountain range paralleling the Pacific Coast. The highest peak in Chiapas is the Tacaná Volcano. In accordance with an 1882 treaty, the dividing line between Mexico and Guatemala runs right over the summit of this volcano. The climate is tropical, with a number of rivers and evergreen forests in the mountains. This is Chiapas's major coffee-producing area, as it has the best soils and climate for coffee. Before the arrival of the Spanish, this area was the principal source of cocoa seeds in the Aztec empire, which were used as currency, and of the highly prized quetzal feathers used by the nobility. It would become the first area to produce coffee, introduced by an Italian entrepreneur on the La Chacara farm. Coffee is cultivated on the slopes of these mountains. Mexico produces about 4 million sacks of green coffee each year, fifth in the world behind Brazil, Colombia, Indonesia and Vietnam. Most producers are small, working modest plots of land. From November to January, the annual crop is harvested and processed, employing thousands of seasonal workers. Lately, a number of coffee haciendas have been developing tourism infrastructure as well.
Environment and protected areas
Chiapas is located in the tropical belt of the planet, but the climate is moderated in many areas by altitude. For this reason, there are hot, semi-hot, temperate and even cold climates. Some areas have abundant rainfall year-round, while others receive most of their rain between May and October, with a dry season from November to April. The mountain areas affect wind and moisture flow over the state, concentrating moisture in certain areas.
They are also responsible for some cloud-covered rainforest areas in the Sierra Madre.
Chiapas's rainforests are home to thousands of animals and plants, some of which cannot be found anywhere else in the world. Natural vegetation varies from lowland to highland tropical forest, with pine and oak forests at the highest altitudes and plains areas with some grassland. Chiapas ranks second among Mexican states in forest resources, with valued woods such as pine, cypress, Liquidambar, oak, cedar, mahogany and more. The Lacandon Jungle is one of the last major tropical rainforests in the northern hemisphere. It contains about sixty percent of Mexico's tropical tree species, 3,500 species of plants, 1,157 species of invertebrates and over 500 vertebrate species.
Chiapas has one of the greatest diversities of wildlife in the Americas. There are more than 100 species of amphibians, 700 species of birds, fifty species of mammals and just over 200 species of reptiles. In the hot lowlands, there are armadillos, monkeys, pelicans, wild boar, jaguars, crocodiles, iguanas and many others. In the temperate regions there are species such as bobcats, salamanders, a large red lizard (Abronia lythrochila), weasels, opossums, deer, ocelots and bats. The coastal areas have large quantities of fish, turtles and crustaceans, with many species in danger of extinction, as they are endemic only to this area. The total biodiversity of the state is estimated at over 50,000 species of plants and animals. The diversity of species is not limited to the hot lowlands. The higher altitudes also have mesophile forests and oak/pine forests in Los Altos, the Northern Mountains and the Sierra Madre, and there are extensive estuaries and mangrove wetlands along the coast.
Chiapas has about thirty percent of Mexico's fresh water resources. The Sierra Madre divides them into those that flow to the Pacific and those that flow to the Gulf of Mexico. Most of the former are short rivers and streams; most of the longer ones flow to the Gulf. Most Pacific-side rivers do not drain directly into the ocean but into lagoons and estuaries. The two largest rivers are the Grijalva and the Usumacinta, both part of the same system. The Grijalva has four dams built on it: the Belisario Domínguez (La Angostura), Manuel Moreno Torres (Chicoasén), Nezahualcóyotl (Malpaso) and Ángel Albino Corzo (Peñitas). The Usumacinta divides the state from Guatemala and is the longest river in Central America. In total, the state has extensive surface waters, a long Pacific coastline, territorial waters, estuaries and ten lake systems.
Laguna Miramar is a lake in the Montes Azules reserve and the largest in the Lacandon Jungle at 40 km in diameter. The color of its waters varies from indigo to emerald green, and in ancient times there were settlements on its islands and in the caves along its shoreline. Catazajá Lake lies 28 km north of the city of Palenque. It is formed by rainwater captured as it makes its way to the Usumacinta River. It contains wildlife such as manatees and iguanas, and it is surrounded by rainforest. Fishing on this lake is an ancient tradition, and the lake hosts an annual bass fishing tournament. The Welib Já Waterfall is located on the road between Palenque and Bonampak.
The state has thirty-six protected areas at the state and federal levels, along with 67 areas protected by various municipalities. The Sumidero Canyon National Park was decreed in 1980.
It extends over two of the state's regions, the Central Depression and the Central Highlands, covering parts of the municipalities of Tuxtla Gutiérrez, Nuevo Usumacinta, Chiapa de Corzo and San Fernando. The canyon has steep, vertical sides that rise up to 1,000 meters above the river below, covered mostly with tropical rainforest, though some areas of xerophile vegetation such as cactus can be found. The river below, which has cut the canyon over the course of twelve million years, is the Grijalva. The canyon is emblematic for the state, as it is featured on the state seal.
The Sumidero Canyon was once the site of a battle between the Spaniards and the Chiapanecan Indians. Many Chiapanecans chose to throw themselves from the high edges of the canyon rather than be defeated by Spanish forces. Today, the canyon is a popular destination for ecotourism. Visitors can take boat trips down the river that runs through it and see the area's many birds and abundant vegetation.
The Montes Azules Biosphere Reserve was decreed in 1978. It is located in the northeast of the state in the Lacandon Jungle, within the municipalities of Maravilla Tenejapa, Ocosingo and Las Margaritas. It conserves highland perennial rainforest. The jungle is in the Usumacinta River basin east of the Chiapas Highlands. It is recognized by the United Nations Environment Programme for its global biological and cultural significance. In 1992, the Lacantun Reserve, which includes the Classic Maya archaeological sites of Yaxchilan and Bonampak, was added to the biosphere reserve.
The Agua Azul Waterfall Protection Area is in the Northern Mountains in the municipality of Tumbalá. It covers an area of rainforest and pine-oak forest centered on the waterfalls it is named after. It is located in an area locally called the "Mountains of Water", as many rivers flow through there on their way to the Gulf of Mexico. The rugged terrain encourages waterfalls with large pools at the bottom, which the falling water has carved into the sedimentary rock and limestone. Agua Azul is one of the best known in the state. The waters of the Agua Azul River emerge from a cave that forms a natural bridge of thirty meters, then descend in five small waterfalls in succession, all with pools of water at the bottom. In addition to Agua Azul, the area has other attractions, such as the Shumuljá River, which contains rapids and waterfalls; the Misol Há Waterfall with its thirty-meter drop; the Bolón Ajau Waterfall with a fourteen-meter drop; the Gallito Copetón rapids; the Blacquiazules Waterfalls; and a section of calm water called Agua Clara.
The El Ocote Biosphere Reserve, decreed in 1982, is located in the Northern Mountains at the boundary with the Sierra Madre del Sur, in the municipalities of Ocozocoautla, Cintalapa and Tecpatán. It preserves a rainforest area with karst formations. The Lagunas de Montebello National Park was decreed in 1959 and lies near the Guatemalan border in the municipalities of La Independencia and La Trinitaria. It contains two of the most threatened ecosystems in Mexico: the "cloud rainforest" and the Soconusco rainforest. The El Triunfo Biosphere Reserve, decreed in 1990, is located in the Sierra Madre de Chiapas near the Pacific Ocean, in the municipalities of Acacoyagua, Ángel Albino Corzo, Montecristo de Guerrero, La Concordia, Mapastepec, Pijijiapan, Siltepec and Villa Corzo. It conserves areas of tropical rainforest and many freshwater systems endemic to Central America.
It is home to around 400 species of birds, including several rare species such as the horned guan, the quetzal and the azure-rumped tanager. The Palenque National Forest is centered on the archaeological site of the same name and was decreed in 1981. It is located in the municipality of Palenque, where the Northern Mountains meet the Gulf Coast Plain, and protects an expanse of tropical rainforest. The Laguna Bélgica Conservation Zone is located in the northwest of the state in the municipality of Ocozocoautla. It covers forty-two hectares centered on Bélgica Lake. The El Zapotal Ecological Center was established in 1980. Nahá–Metzabok is an area in the Lacandon Forest whose name means "place of the black lord" in Nahuatl. In 2010, it was included in the World Network of Biosphere Reserves. The two main communities in the area are called Nahá and Metzabok. They were established in the 1940s, but the oldest communities in the area belong to the Lacandon people. The area has large numbers of wildlife, including endangered species such as eagles, quetzals and jaguars.
Demographics
General statistics
As of 2010, the population was 4,796,580, making Chiapas the eighth most populous state in Mexico. The 20th century saw large population growth in Chiapas: from fewer than one million inhabitants in 1940, the state grew to about two million in 1980 and over 4 million in 2005. Overcrowded land in the highlands was relieved when the rainforest to the east was subjected to land reform. Cattle ranchers, loggers and subsistence farmers migrated to the rainforest area. The population of the Lacandon was only one thousand people in 1950, but by the mid-1990s this had increased to 200 thousand. As of 2010, 78% of the population lives in urban communities and 22% in rural communities. While birthrates are still high in the state, they have come down in recent decades from 7.4 children per woman in 1950. However, these rates still mean significant population growth in raw numbers. About half of the state's population is under age 20, with an average age of 19. In 2005, there were 924,967 households, 81% headed by men and the rest by women. Most households were nuclear families (70.7%), with 22.1% consisting of extended families.
More people migrate out of Chiapas than into it, with emigrants leaving primarily for Tabasco, Oaxaca, Veracruz, the State of Mexico and the Federal District. While Catholics remain the majority, their numbers have dropped as many have converted to Protestant denominations in recent decades. Islam is also a small but growing religion, practiced by indigenous converts as well as a rising number of Muslim immigrants from Africa. The National Presbyterian Church in Mexico has a large following in Chiapas; some estimate that 40% of the population are followers of the Presbyterian church.
There are a number of people in the state with African features, descendants of slaves brought to the state in the 16th century. There are also those with predominantly European features, descendants of the original Spanish colonizers as well as later immigrants to Mexico. The latter mostly came at the end of the 19th and beginning of the 20th century under the Porfirio Díaz regime to start plantations. According to the 2020 Census, 1.02% of Chiapas's population identified as Black, Afro-Mexican, or of African descent.
Indigenous population
Numbers and influence
Over the history of Chiapas, there have been three main indigenous groups: the Mixe-Zoques, the Mayas and the Chiapa.
Today, there are an estimated fifty-six linguistic groups. As of the 2005 Census, there were 957,255 people who spoke an indigenous language, out of a total population of about 3.5 million. Of these nearly one million speakers, one third do not speak Spanish. Out of Chiapas's 111 municipios, 99 have significant indigenous populations: 22 municipalities have indigenous populations over 90%, and 36 municipalities have native populations exceeding 50%. However, despite population growth in indigenous villages, the ratio of indigenous to non-indigenous people continues to fall, with the indigenous share now less than 35%. Indigenous populations are concentrated in a few areas, with the largest concentration of indigenous-language speakers living in five of Chiapas's nine economic regions: Los Altos, Selva, Norte, Fronteriza and Sierra. The remaining regions, among them Soconusco, Centro and Costa, have populations that are considered predominantly mestizo.
The state has about 13.5% of all of Mexico's indigenous population, and it has been ranked among the ten "most indianized" states, with only Campeche, Oaxaca, Quintana Roo and Yucatán having been ranked above it between 1930 and the present. These indigenous peoples have been historically resistant to assimilation into broader Mexican society, as best seen in the retention rates of indigenous languages and the historic demands for autonomy over geographic areas as well as cultural domains. The latter has been especially prominent since the Zapatista uprising in 1994.
Most of Chiapas's indigenous groups are descended from the Mayans, speaking languages that are closely related to one another and belong to the Western Maya language group. The state was part of a large region dominated by the Mayans during the Classic period. The most numerous of these indigenous groups include the Tzeltal, Tzotzil, Ch'ol, Zoque, Tojolabal, Lacandon and Mam, which share traits such as syncretic religious practices and social structures based on kinship. The most common Western Maya languages are Tzeltal and Tzotzil, along with Chontal, Ch'ol, Tojolabal, Chuj, Kanjobal, Acatec, Jacaltec and Motozintlec.
Twelve of Mexico's officially recognized native peoples living in the state have conserved their language, customs, history, dress and traditions to a significant degree. The primary groups include the Tzeltal, Tzotzil, Ch'ol, Tojolabal, Zoque, Chuj, Kanjobal, Mam, Jacalteco, Mochó, Cakchiquel and Lacandon. Most indigenous communities are found in the municipalities of the Centro, Altos, Norte and Selva regions, with many having indigenous populations of over fifty percent. These range from municipalities such as Bochil, Sitalá, Pantepec and Simojovel to those with over ninety percent indigenous populations, such as San Juan Cancuc, Huixtán, Tenejapa, Tila, Oxchuc, Tapalapa, Zinacantán, Mitontic, Ocotepec, Chamula and Chalchihuitán. The most numerous indigenous communities are the Tzeltal and Tzotzil peoples, who number about 400,000 each, together accounting for about half of the state's indigenous population. The next most numerous are the Ch'ol, with about 200,000 people, and the Tojolabal and Zoques, who number about 50,000 each.
The top three municipalities in Chiapas by number of indigenous-language speakers three years of age and older are Ocosingo (133,811), Chilón (96,567) and San Juan Chamula (69,475). These three municipalities account for 24.8% (299,853) of all indigenous-language speakers three years or older in the state, out of a total of 1,209,057.
Although most indigenous-language speakers are bilingual, especially in the younger generations, many of these languages have shown resilience. Four of Chiapas's indigenous languages, Tzeltal, Tzotzil, Tojolabal and Ch'ol, are high-vitality languages, meaning that a high percentage of these ethnicities speak the language and that there is a high rate of monolingualism in it; each is used in over 80% of homes. Zoque is considered to be of medium vitality, with a rate of bilingualism of over 70% and home use somewhere between 65% and 80%. Maya is considered to be of low vitality, with almost all of its speakers bilingual in Spanish. The most spoken indigenous languages as of 2010 are Tzeltal with 461,236 speakers, Tzotzil with 417,462, Ch'ol with 191,947 and Zoque with 53,839. In total, 1,141,499 people, or 27% of the total population, speak an indigenous language. Of these, 14% do not speak Spanish.
Studies done between 1930 and 2000 have indicated that Spanish is not dramatically displacing these languages. In raw numbers, speakers of these languages are increasing, especially among groups with a long history of resistance to Spanish/Mexican domination. Language maintenance has been strongest in areas related to where the Zapatista uprising took place, such as the municipalities of Altamirano, Chamula, Chanal, Larráinzar, Las Margaritas, Ocosingo, Palenque, Sabanilla, San Cristóbal de Las Casas and Simojovel.
The state's rich indigenous tradition, along with its associated political uprisings, especially that of 1994, has attracted great interest from other parts of Mexico and abroad. It has been especially appealing to a variety of academics, including many anthropologists, archeologists, historians, psychologists and sociologists. The concept of "mestizo" or mixed indigenous and European heritage became important to Mexico's identity by the time of Independence, but Chiapas has kept its indigenous identity to the present day. Since the 1970s, this has been supported by the Mexican government as it has shifted its cultural policies to favor a "multicultural" identity for the country. One major exception to this separatist, indigenous identity has been the case of the Chiapa people, from whom the state's name comes, who have mostly been assimilated and intermarried into the mestizo population.
Most indigenous communities have economies based primarily on traditional agriculture, such as the cultivation and processing of corn, beans and coffee as a cash crop; in the last decade, many have begun producing sugarcane and jatropha for refinement into biodiesel and ethanol for automobile fuel. The raising of livestock, particularly chickens and turkeys and, to a lesser extent, cattle and farmed fish, is also a major economic activity. Many indigenous people, in particular the Maya, are employed in the production of traditional clothing, fabrics, textiles, wood items, artworks and traditional goods such as jade and amber works. Tourism has provided a number of these communities with markets for their handcrafts and works, some of which are very profitable.
San Cristóbal de las Casas and San Juan Chamula maintain a strong indigenous identity. On market day, many indigenous people from rural areas come into San Cristóbal to buy and sell items, mostly for everyday use, such as fruit, vegetables, animals, cloth, consumer goods and tools. San Juan Chamula is considered a center of indigenous culture, especially for its elaborate festivals of Carnival and the Day of Saint John.
It was common for politicians, especially during the Institutional Revolutionary Party's dominance, to visit here during election campaigns, dress in indigenous clothing and carry a carved walking stick, a traditional sign of power.
Relations between the indigenous ethnic groups are complicated. While there has been inter-ethnic political activism, such as that promoted by the Diocese of Chiapas in the 1970s and the Zapatista movement in the 1990s, there has been inter-indigenous conflict as well. Much of this has been based on religion, pitting those of traditional Catholic/indigenous beliefs, who support the traditional power structure, against Protestants, Evangelicals and Word of God Catholics (directly allied with the Diocese), who tend to oppose it. This is a particularly significant problem among the Tzeltals and Tzotzils. Starting in the 1970s, traditional leaders in San Juan Chamula began expelling dissidents from their homes and land, with about 20,000 indigenous people forced to leave over a thirty-year period. This continues to be a serious social problem, although authorities downplay it. Recently there has been political, social and ethnic conflict between the Tzotzil, who are more urbanized and have a significant number of Protestant practitioners, and the Tzeltal, who are predominantly Catholic and live in smaller farming communities. Many Protestant Tzotzil have accused the Tzeltal of ethnic discrimination and intimidation due to their religious beliefs, and the Tzeltal have in return accused the Tzotzil of singling them out for discrimination.
Clothing, especially women's clothing, varies by indigenous group. For example, women in Ocosingo tend to wear a blouse with a round collar embroidered with flowers and a black skirt decorated with ribbons and tied with a cloth belt. The Lacandon people tend to wear a simple white tunic. They also make a ceremonial tunic from bark, decorated with astronomy symbols. In Tenejapa, women wear a huipil embroidered with Mayan fretwork along with a black wool rebozo. Men wear short pants, embroidered at the bottom.
Tzeltals
The Tzeltals call themselves Winik atel, which means "working men". This is the largest ethnicity in the state, mostly living southeast of San Cristóbal, with the largest number in Amatenango. Today, there are about 500,000 Tzeltals in Chiapas. Tzeltal Mayan, part of the Mayan language family, is today spoken by about 375,000 people, making it the fourth-largest language group in Mexico. There are two main dialects: highland (or Oxchuc) and lowland (or Bachajonteco). This language, along with Tzotzil, is from the Tzeltalan subdivision of the Mayan language family. Lexico-statistical studies indicate that these two languages probably became differentiated from one another around 1200. Most children are bilingual in the language and Spanish, although many of their grandparents are monolingual Tzeltal speakers. Each Tzeltal community constitutes a distinct social and cultural unit with its own well-defined lands, wearing apparel, kinship system, politico-religious organization, economic resources, crafts and other cultural features. Women are distinguished by a black skirt with a wool belt and an undyed cotton blouse embroidered with flowers. Their hair is tied with ribbons and covered with a cloth. Most men do not use traditional attire. Agriculture is the basic economic activity of the Tzeltal people.
Traditional Mesoamerican crops such as maize, beans, squash and chili peppers are the most important, but a variety of other crops are also grown, including wheat, manioc, sweet potatoes, cotton, chayote, some fruits, other vegetables and coffee.
Tzotzils
Tzotzil speakers number just slightly fewer than the Tzeltals at 226,000, although the number of ethnic Tzotzil is probably higher. Tzotzils are found in the highlands, or Los Altos, and spread out toward the northeast near the border with Tabasco. However, Tzotzil communities can be found in almost every municipality of the state. They are concentrated in Chamula, Zinacantán, Chenalhó and Simojovel. Their language is closely related to Tzeltal and distantly related to Yucatec Mayan and Lacandon. Men dress in short pants tied with a red cotton belt and a shirt that hangs down to their knees. They also wear leather huaraches and a hat decorated with ribbons. The women wear a red or blue skirt and a short huipil as a blouse, and use a chal or rebozo to carry babies and bundles. Tzotzil communities are governed by a katinab, who is selected for life by the leaders of each neighborhood. The Tzotzils are also known for their continued use of the temazcal for hygiene and medicinal purposes.
Ch’ols
The Ch’ols of Chiapas migrated to the northwest of the state starting about 2,000 years ago, when they were concentrated in Guatemala and Honduras. Those Ch’ols who remained in the south are distinguished by the name Chortís. The Chiapas Ch’ols are also closely related to the Chontal in Tabasco. Ch’ols are found in Tila, Tumbalá, Sabanilla, Palenque and Salto de Agua, with an estimated population of about 115,000 people. The Ch’ol language belongs to the Maya family and is related to Tzeltal, Tzotzil, Lacandon, Tojolabal and Yucatec Mayan. There are three varieties of Ch’ol (spoken in Tila, Tumbalá and Sabanilla), all mutually intelligible. Over half of its speakers are monolingual in Ch’ol. Women wear a long navy blue or black skirt with a white blouse heavily embroidered in bright colors and a sash with a red ribbon. The men only occasionally use traditional dress, for events such as the feast of the Virgin of Guadalupe. This dress usually includes pants, shirts and huipils made of undyed cotton, with leather huaraches, a carrying sack and a hat. The fundamental economic activity of the Ch’ols is agriculture. They primarily cultivate corn and beans, as well as sugar cane, rice, coffee and some fruits. They hold Catholic beliefs strongly influenced by native ones. Harvests are celebrated on the Feast of Saint Rose on 30 August.
Tojolabals
The Tojolabals are estimated at 35,000 in the highlands. According to oral tradition, the Tojolabales came north from Guatemala. The largest community is Ingeniero González de León in the La Cañada region, an hour outside the municipal seat of Las Margaritas. Tojolabales are also found in Comitán, Trinitaria, Altamirano and La Independencia. This area is filled with rolling hills and has a temperate, moist climate, with fast-moving rivers and jungle vegetation. Tojolabal is related to Kanjobal, but also to Tzeltal and Tzotzil. However, most of the youngest members of this ethnicity speak Spanish. Women dress traditionally from childhood, with brightly colored skirts decorated with lace or ribbons and a blouse decorated with small ribbons, and they cover their heads with kerchiefs. They embroider many of their own clothes but do not sell them.
Married women arrange their hair in two braids, while single women wear it loose, decorated with ribbons. Men no longer wear traditional garb daily, as it is considered too expensive to make.
Zoques
The Zoques are found across 3,000 square kilometers of the center and west of the state, scattered among hundreds of communities. They were one of the first native peoples of Chiapas, with archeological ruins tied to them dating back as far as 3500 BCE. Their language is not Mayan but rather related to Mixe, which is found in Oaxaca and Veracruz. By the time the Spanish arrived, they had been reduced in number and territory. Their ancient capital was Quechula, which was covered with water by the creation of the Malpaso Dam, along with the ruins of Guelegas, which had earlier been buried by an eruption of the Chichonal volcano. There are still Zoque ruins at Janepaguay and in the Ocozocuautla and La Ciénega valleys.
Lacandons
The Lacandons are one of the smallest indigenous groups of the state, with a population estimated between 600 and 1,000. They are mostly located in the communities of Lacanjá Chansayab, Najá and Mensabak in the Lacandon Jungle. They live near the ruins of Bonampak and Yaxchilan, and local lore states that the gods resided here when they lived on Earth. They inhabit about a million hectares of rainforest, but from the 16th century to the present, migrants have taken over the area, most of them indigenous people from other parts of Chiapas. This has dramatically altered the Lacandons' lifestyle and worldview. Traditional Lacandon shelters are huts made with fronds and wood with an earthen floor, but these have mostly given way to modern structures.
Mochós
The Mochós or Motozintlecos are concentrated in the municipality of Motozintla on the Guatemalan border. According to anthropologists, these people are an "urban" ethnicity, as they are mostly found in the neighborhoods of the municipal seat. Other communities can be found near the Tacaná volcano and in the municipalities of Tuzantán and Belisario Domínguez. The name "Mochó" comes from a response many gave to the Spanish, whom they could not understand; it means "I don't know". This community is in the process of disappearing as its numbers shrink.
Mams
The Mams are a Mayan ethnicity numbering about 20,000, found in thirty municipalities, especially Tapachula, Motozintla, El Porvenir, Cacahoatán and Amatenango, in the southeastern Sierra Madre of Chiapas. The Mam language is one of the most ancient Mayan languages; 5,450 Mam speakers were tallied in Chiapas in the 2000 census. These people first migrated to the border region between Chiapas and Guatemala at the end of the nineteenth century, establishing scattered settlements. In the 1960s, several hundred migrated to the Lacandon rainforest near the confluence of the Santo Domingo and Jataté Rivers. Those who live in Chiapas are referred to locally as the "Mexican Mam (or Mame)" to differentiate them from those in Guatemala. Most live around the Tacaná volcano, which the Mams call "our mother", as it is considered the source of the fertility of the area's fields. The masculine deity is the Tajumulco volcano, which is in Guatemala.
Guatemalan migrant groups
In the last decades of the 20th century, Chiapas received a large number of indigenous refugees, especially from Guatemala, many of whom remain in the state. These have added ethnicities such as the Kekchi, Chuj, Ixil, Kanjobal, K'iche' and Cakchikel to the population.
The Kanjobal mainly live along the border between Chiapas and Guatemala, with almost 5,800 speakers of the language tallied in the 2000 census. It is believed that a significant number of these Kanjobal speakers may have been born in Guatemala and immigrated to Chiapas, maintaining strong cultural ties to the neighboring nation.
Economy
Economic indicators
Chiapas accounts for 1.73% of Mexico's GDP. The primary sector, agriculture, produces 15.2% of the state's GDP. The secondary sector, mostly energy production, accounts for 21.8%, with commerce, services and tourism making up much of the remainder. The share of GDP coming from services is rising while that of agriculture is falling. The state is divided into nine economic regions. These regions were established in the 1980s in order to facilitate statewide economic planning, and many are based on state and federal highway systems. They include Centro, Altos, Fronteriza, Frailesca, Norte, Selva, Sierra, Soconusco and Istmo-Costa.
Despite being rich in resources, Chiapas, along with Oaxaca and Guerrero, lags behind the rest of the country in almost all socioeconomic indicators. The state has 889,420 residential units; 71% have running water, 77.3% sewerage, and 93.6% electricity. Construction of these units varies from modern block and concrete to wood and laminate.
Because of its high rate of economic marginalization, more people migrate from Chiapas than to it. Most of its socioeconomic indicators are the lowest in the country, including income, education, health and housing. It has a significantly higher percentage of illiteracy than the rest of the country, although the situation has improved since the 1970s, when over 45% were illiterate, and the 1980s, when about 32% were. The tropical climate presents health challenges, with most illnesses related to the gastrointestinal tract and parasites. As of 2005, the state had 1,138 medical facilities: 1,098 outpatient and 40 inpatient, most run by IMSS, ISSSTE and other government agencies.
The implementation of NAFTA had negative effects on the economy, particularly by lowering prices for agricultural products. It made the southern states of Mexico poorer in comparison to those in the north, with over 90% of the country's poorest municipalities located in the south. As of 2006, 31.8% of workers were employed in communal, social and personal services; 18.4% in financial services, insurance and real estate; 10.7% in commerce, restaurants and hotels; 9.8% in construction; 8.9% in utilities; 7.8% in transportation; 3.4% in industry (excluding handcrafts); and 8.4% in agriculture.
Although until the 1960s many indigenous communities were considered by scholars to be autonomous and economically isolated, this was never the case. Economic conditions began forcing many to migrate for work, especially in agriculture for non-indigenous employers. However, unlike many other migrant workers, most indigenous people in Chiapas have remained strongly tied to their home communities. A study as early as the 1970s showed that 77 percent of heads of household migrated outside of the Chamula municipality because local land did not produce enough to support their families. In the 1970s, cuts in the price of corn forced many large landowners to convert their fields into pasture for cattle, displacing many hired laborers, as cattle required less work. These agricultural laborers began to work for the government on infrastructure projects financed by oil revenue.
It is estimated that in the 1980s and 1990s as many as 100,000 indigenous people moved from the mountain areas into cities in Chiapas, with some moving out of the state to Mexico City, Cancún and Villahermosa in search of employment.
Agriculture, livestock, forestry and fishing
Agriculture, livestock, forestry and fishing employ over 53% of the state's population; however, their productivity is considered low. Agriculture includes both seasonal and perennial plants. Major crops include corn, beans, sorghum, soybeans, peanuts, sesame seeds, coffee, cacao, sugar cane, mangos, bananas and palm oil. These crops take up 95% of the cultivated land in the state and account for 90% of agricultural production. Only four percent of fields are irrigated, with the rest dependent on rainfall, either seasonally or year round. Chiapas ranks second among the Mexican states in the production of cacao, the product used to make chocolate, and is responsible for about 60 percent of Mexico's total coffee output. The production of bananas, cacao and corn makes Chiapas Mexico's second largest agricultural producer overall.
Coffee is the state's most important cash crop, with a history dating back to the 19th century. The crop was introduced in 1846 by Jeronimo Manchinelli, who brought 1,500 seedlings from Guatemala to his farm La Chacara. This was followed by a number of other farms. Coffee production intensified during the regime of Porfirio Díaz and under the Europeans who came to own many of the large farms in the area. By 1892, there were 22 coffee farms in the region, among them Nueva Alemania, Hamburgo, Chiripa, Irlanda, Argovia, San Francisco and Linda Vista in the Soconusco region. Since then, coffee production has grown and diversified to include large plantations, the use of free and forced labor, and a significant sector of small producers. While most coffee is grown in the Soconusco, other areas grow it as well, including the municipalities of Oxchuc, Pantelhó, El Bosque, Tenejapa, Chenalhó, Larráinzar and Chalchihuitán, with around six thousand producers. The state also has organic coffee producers, with 18 million tons grown annually by 60,000 producers. One third of these producers are indigenous women and other peasant farmers who grow the coffee under the shade of native trees without the use of agrochemicals. Some of this coffee is even grown in environmentally protected areas such as the El Triunfo reserve, where ejidos with 14,000 people grow the coffee and sell it to cooperatives, which in turn sell it to companies such as Starbucks, though the main market is Europe. Some growers have created cooperatives of their own to cut out the middleman.
Ranching occupies about three million hectares of natural and induced pasture, with about 52% of all pasture induced. Most livestock raising is done by families using traditional methods. Most important are meat and dairy cattle, followed by pigs and domestic fowl. These three account for 93% of the value of production. Annual milk production in Chiapas totals about 180 million liters. The state's cattle production, along with timber from the Lacandon Jungle and energy output, gives it a certain amount of economic clout compared to other states in the region.
Forestry is mostly based on conifers and common tropical species, producing 186,858 m3 per year at a value of 54,511,000 pesos. Exploited non-wood species include the camedor palm, harvested for its fronds. The fishing industry is underdeveloped but includes the capture of wild species as well as fish farming.
Fish production comes both from the ocean and from the many freshwater rivers and lakes. In 2002, 28,582 tons of fish valued at 441.2 million pesos were produced. Species include tuna, shark, shrimp, mojarra and crab.
Industry and energy
The state's abundant rivers and streams have been dammed to provide about fifty-five percent of the country's hydroelectric energy. Much of this is sent to other states, accounting for over six percent of all of Mexico's energy output. The main power stations are located at Malpaso, La Angostura, Chicoasén and Peñitas, which together produce about eight percent of Mexico's hydroelectric energy. The Manuel Moreno Torres plant on the Grijalva River is the most productive in Mexico. All of the hydroelectric plants are owned and operated by the Federal Electricity Commission (Comisión Federal de Electricidad, CFE).
Chiapas is rich in petroleum reserves. Oil production began during the 1980s, and Chiapas has become the fourth largest producer of crude oil and natural gas among the Mexican states. Many reserves remain untapped, but between 1984 and 1992, PEMEX drilled nineteen oil wells in the Lacandon Jungle. Currently, petroleum reserves are found in the municipalities of Juárez, Ostuacán, Pichucalco and Reforma in the north of the state, with 116 wells accounting for about 6.5% of the country's oil production, or 17,565,000 barrels of oil per year. The state also provides about a quarter of the country's natural gas.
Industry is limited to small and micro enterprises, including auto parts, bottling, fruit packing, coffee and chocolate processing, production of lime, bricks and other construction materials, sugar mills, furniture making, textiles, printing and the production of handcrafts. The two largest enterprises are the Comisión Federal de Electricidad and a Petróleos Mexicanos refinery. Chiapas opened its first assembly plant in 2002, a fact that highlights the historical lack of industry in this area.
Handcrafts
Chiapas produces one of the widest varieties of handcrafts and folk art in Mexico. One reason for this is its many indigenous ethnicities, who produce traditional items for reasons of identity as well as commerce. One commercial reason is the market for crafts provided by the tourism industry. Another is that most indigenous communities can no longer provide for their own needs through agriculture. The need to generate outside income has led many indigenous women to produce crafts communally, which has not only had economic benefits but has also involved them in the political process. Unlike many other states, Chiapas has a wide variety of wood resources such as cedar and mahogany, as well as plant species such as reeds, ixtle and palm. It also has minerals such as obsidian, amber, jade and several types of clay, animals for the production of leather, and dyes from various insects used to create the colors associated with the region. Items include various types of handcrafted clothing, dishes, jars, furniture, roof tiles, toys, musical instruments, tools and more.
Chiapas's most important handcraft is textiles, most of which are cloth woven on a backstrap loom. Indigenous girls often learn how to sew and embroider before they learn how to speak Spanish. They are also taught how to make natural dyes from insects, along with weaving techniques. Many of the items produced are still for day-to-day use, often dyed in bright colors with intricate embroidery.
They include skirts, belts, rebozos, blouses, huipils and shoulder wraps called chals. Designs are in red, yellow, turquoise blue, purple, pink, green and various pastels, decorated with motifs such as flowers, butterflies and birds, all based on local flora and fauna. Commercially, indigenous textiles are most often found in San Cristóbal de las Casas, San Juan Chamula and Zinacantán. The best textiles are considered to be from Magdalenas, Larráinzar, Venustiano Carranza and Sibaca.
One of the state's main minerals is amber, much of which is 25 million years old, with quality comparable to that found in the Dominican Republic. Chiapan amber has a number of unique qualities, including much that is clear all the way through and some with fossilized insects and plants. Most Chiapan amber is worked into jewelry, including pendants, rings and necklaces. Colors vary from white to yellow/orange to deep red, but there are also green and pink tones. Since pre-Hispanic times, native peoples have believed amber to have healing and protective qualities. The largest amber mine is in Simojovel, a small village 130 km from Tuxtla Gutiérrez, which produces 95% of Chiapas's amber. Other mines are found in Huitiupán, Totolapa, El Bosque, Pueblo Nuevo Solistahuacán, Pantelhó and San Andrés Duraznal. According to the Museum of Amber in San Cristóbal, almost 300 kg of amber is extracted per month in the state. Prices vary depending on quality and color.
The major center for ceramics in the state is the city of Amatenango del Valle, with its barro blanco (white clay) pottery. The most traditional ceramic in Amatenango and Aguacatenango is a type of large jar called a cantaro, used to transport water and other liquids. Many pieces created from this clay are ornamental, alongside traditional pieces for everyday use such as comals, dishes, storage containers and flowerpots. All pieces here are made by hand using techniques that go back centuries. Other communities that produce ceramics include Chiapa de Corzo, Tonalá, Ocuilpa, Suchiapa and San Cristóbal de las Casas.
Wood crafts in the state center on furniture, brightly painted sculptures and toys. The Tzotzils of San Juan Chamula are known for their sculptures as well as for their sturdy furniture. Sculptures are made from woods such as cedar, mahogany and strawberry tree. Another town noted for its sculptures is Tecpatán. The making of lacquer to decorate wooden and other items goes back to the colonial period. The best-known area for this type of work, called "laca", is Chiapa de Corzo, which has a museum dedicated to it. One reason this type of decoration became popular in the state is that it protected items from the constant humidity of the climate. Much of the laca in Chiapa de Corzo is made in the traditional way with natural pigments and sands to cover gourds, dipping spoons, chests, niches and furniture. It is also used to create the Parachicos masks.
Traditional Mexican toys, which have all but disappeared in the rest of Mexico, are still readily found here, including the cajita de la serpiente, yo-yos, and ball in cup. Other wooden items include masks, cooking utensils and tools. One famous toy is the "muñecos zapatistas" (Zapatista dolls), based on the revolutionary group that emerged in the 1990s.
Tourism and general commerce/services
Ninety-four percent of the state's commercial outlets are small retail stores, with about 6% wholesalers.
There are 111 municipal markets, 55 tianguis, three wholesale food markets and 173 large vendors of staple products. The service sector is the most important to the economy, dominated by commerce, warehousing and tourism.
Tourism brings large numbers of visitors to the state each year. Most of Chiapas's tourism is based on its culture, colonial cities and ecology. The state has a total of 491 ranked hotels with 12,122 rooms. There are also 780 other establishments catering primarily to tourism, such as service businesses and restaurants.
There are three main tourist routes: the Maya Route, the Colonial Route and the Coffee Route. The Maya Route runs along the border with Guatemala in the Lacandon Jungle and includes the sites of Palenque, Bonampak and Yaxchilan, along with the natural attractions of the Agua Azul Waterfalls, the Misol-Há Waterfall and Catazajá Lake. Palenque is the most important of these sites and one of the most important tourist destinations in the state. Yaxchilan was a Mayan city along the Usumacinta River that developed between 350 and 810 CE. Bonampak is known for its well-preserved murals. These Mayan sites have made the state an attraction for international tourism. They contain a large number of structures, most of which date back more than a thousand years, especially to the sixth century. In addition to the sites on the Maya Route, there are others within the state away from the border, such as Toniná, near the city of Ocosingo.
The Colonial Route is mostly in the central highlands, with a significant number of churches, monasteries and other structures from the colonial period, along with some from the 19th century and even the early 20th. The most important city on this route is San Cristóbal de las Casas, located in the Los Altos region in the Jovel Valley. The historic center of the city is filled with tiled roofs, patios with flowers, balconies, and Baroque facades along with Neoclassical and Moorish designs. It is centered on a main plaza surrounded by the cathedral, the municipal palace, the Portales commercial area and the San Nicolás church. In addition, it has museums dedicated to the state's indigenous cultures, one to amber and one to jade, both of which have been mined in the state. Other attractions along this route include Comitán de Domínguez and Chiapa de Corzo, along with small indigenous communities such as San Juan Chamula. The state capital of Tuxtla Gutiérrez does not have many colonial-era structures left, but it lies near the area's most famous natural attraction, the Sumidero Canyon. This canyon is popular with tourists, who take boat tours into it on the Grijalva River to see features such as caves (La Cueva del Hombre, La Cueva del Silencio) and the Christmas Tree, a rock and plant formation on the side of one of the canyon walls created by a seasonal waterfall.
The Coffee Route begins in Tapachula and follows a mountainous road into the Soconusco region. The route passes through Puerto Chiapas, a port with modern infrastructure for shipping exports and receiving international cruises. The route visits a number of coffee plantations, such as Hamburgo, Chiripa, Violetas, Santa Rita, Lindavista, Perú-París, San Antonio Chicarras and Rancho Alegre. These haciendas provide visitors with the opportunity to see how coffee is grown and initially processed on the farms. They also offer ecotourism activities such as mountain climbing, rafting, rappelling and mountain biking.
There are also tours into the jungle vegetation and up the Tacaná Volcano. In addition to coffee, the region produces most of Chiapas's soybeans, bananas and cacao.
The state has a large number of ecological attractions, most of which are connected to water. The main beaches on the coastline include Puerto Arista, Boca del Cielo, Playa Linda, Playa Aventuras, Playa Azul and Santa Brigida. Others are based on the state's lakes and rivers. Laguna Verde is a lake in the Coapilla municipality. The lake is generally green, but its tones constantly change through the day depending on how the sun strikes it; in the early morning and evening hours there can also be blue and ochre tones. The El Chiflón Waterfall is part of an ecotourism center located in a valley with reeds, sugarcane, mountains and rainforest. It is formed by the San Vicente River and has pools of water at the bottom that are popular for swimming. The Las Nubes Ecotourism Center is located in the Las Margaritas municipality near the Guatemalan border. The area features a number of turquoise-blue waterfalls with bridges and lookout points set up to see them up close.
Still others are based on conservation, local culture and other features. The Las Guacamayas Ecotourism Center is located in the Lacandon Jungle on the edge of the Montes Azules reserve. It is centered on the conservation of the scarlet macaw, which is in danger of extinction. The Tziscao Ecotourism Center is centered on a lake with varying tones. It is located inside the Lagunas de Montebello National Park and offers kayaking, mountain biking and archery. Lacanjá Chansayab is located in the interior of the Lacandon Jungle and is a major Lacandon community. It offers ecotourism activities such as mountain biking and hiking, along with cabins. The Grutas de Rancho Nuevo Ecotourism Center is centered on a set of caves with fanciful stalagmite and stalactite formations; horseback riding is also available.
Culture
Architecture
Architecture in the state begins with the archeological sites of the Mayans and other groups, who established color schemes and other details that echo in later structures. After the Spanish subdued the area, the building of Spanish-style cities began, especially in the highland areas.
Many of the colonial-era buildings are related to Dominicans who came from Seville. That Spanish city had much Arabic influence in its architecture, and this was incorporated into the colonial architecture of Chiapas, especially in structures dating from the 16th to 18th centuries. However, a number of other architectural styles and influences are present in Chiapas's colonial structures, including colors and patterns from Oaxaca and Central America along with indigenous ones from Chiapas.
The main colonial structures are the cathedral and Santo Domingo church of San Cristóbal, and the Santo Domingo monastery and La Pila in Chiapa de Corzo. The San Cristóbal cathedral has a Baroque facade that was begun in the 16th century, but by the time it was finished in the 17th, it had a mix of Spanish, Arabic and indigenous influences. It is one of the most elaborately decorated in Mexico. The churches and former monasteries of Santo Domingo, La Merced and San Francisco have ornamentation similar to that of the cathedral. The main structures in Chiapa de Corzo are the Santo Domingo monastery and the La Pila fountain. Santo Domingo has indigenous decorative details, such as double-headed eagles, as well as a statue of the founding monk.
In San Cristóbal, the Diego de Mazariegos house has a Plateresque facade, while that of Francisco de Montejo, built later in the 18th century, has a mix of Baroque and Neoclassical elements. Art Deco structures can be found in San Cristóbal and Tapachula in public buildings as well as in a number of rural coffee plantations from the Porfirio Díaz era. Art and literature Art in Chiapas is based on the use of color and has strong indigenous influence. This dates back to cave paintings such as those found in Sima de las Cotorras near Tuxtla Gutiérrez and the caverns of Rancho Nuevo, where human remains and offerings were also found. The best-known pre-Hispanic artworks are the Maya murals of Bonampak, which are the only Mesoamerican murals to have been preserved for over 1500 years. In general, Mayan artwork stands out for its precise depiction of faces and its narrative form. Indigenous forms derive from this background and continue into the colonial period with the use of indigenous color schemes in churches and modern structures such as the municipal palace in Tapachula. Since the colonial period, the state has produced a large number of painters and sculptors. Noted 20th-century artists include Lázaro Gómez, Ramiro Jiménez Chacón, Héctor Ventura Cruz, Máximo Prado Pozo, and Gabriel Gallegos Ramos. The two best-known poets from the state are Jaime Sabines and Rosario Castellanos, both from prominent Chiapan families. The first was a merchant and diplomat and the second was a teacher, diplomat, theatre director and the director of the Instituto Nacional Indigenista. Jaime Sabines is widely regarded as Mexico's most influential contemporary poet. His work celebrates everyday people in common settings. Music The most important instrument in the state is the marimba. In the pre-Hispanic period, indigenous peoples had already been producing music with wooden instruments. The marimba was introduced by African slaves brought to Chiapas by the Spanish. However, it achieved its widespread popularity in the early 20th century due to the formation of the Cuarteto Marimbistico de los Hermanos Gómez in 1918, which popularized the instrument and the popular music that it plays not only in Chiapas but in various parts of Mexico and into the United States. Along with Cuban Juan Arozamena, they composed the piece "Las chiapanecas", considered to be the unofficial anthem of the state. In the 1940s, they were also featured in a number of Mexican films. Marimbas are constructed in Venustiano Carranza, Chiapa de Corzo and Tuxtla Gutiérrez. Cuisine As in the rest of Mesoamerica, the basic diet is based on corn, and Chiapas cooking retains strong indigenous influence. Important ingredients include chipilín, a fragrant and strongly flavored herb used in most indigenous dishes, and hoja santa, the large anise-scented leaf used in much of southern Mexican cuisine. Chiapan dishes do not incorporate many chili peppers; rather, chili peppers are most often found in the condiments. One reason for that is that a local chili pepper, called the simojovel, is far too hot to use except very sparingly. Chiapan cuisine tends to rely on slightly sweet seasonings in its main dishes; cinnamon, plantains, prunes and pineapple are often found in meat and poultry dishes. Tamales are a major part of the diet and often include chipilín mixed into the dough and hoja santa, within the tamale itself or used to wrap it. One tamale native to the state is the "picte", a fresh sweet corn tamale. 
Tamales juacanes are filled with a mixture of black beans, dried shrimp, and pumpkin seeds. Meats are centered on the European-introduced beef, pork and chicken, as many native game animals are in danger of extinction. Meat dishes are frequently accompanied by vegetables such as squash, chayote and carrots. Black beans are the favored type of bean. Beef is favored, especially a thin cut called tasajo, usually served in a sauce. Pepita con tasajo is a common dish at festivals, especially in Chiapa de Corzo. It consists of a squash-seed-based sauce served over reconstituted and shredded dried beef. As Palenque is a cattle-raising area, its beef dishes are particularly good. Pux-Xaxé is a stew with beef organ meats and mole sauce made with tomato, chili bolita and corn flour. Tzispolá is a beef broth with chunks of meat, chickpeas, cabbage and various types of chili peppers. Pork dishes include cochito, which is pork in an adobo sauce. In Chiapa de Corzo, the local version is cochito horneado, a roast suckling pig flavored with adobo. Seafood is a strong component in many dishes along the coast. Turula is dried shrimp with tomatoes. Sausages, ham and other cold cuts are most often made and consumed in the highlands. In addition to meat dishes, there is chirmol, a cooked tomato sauce flavored with chili pepper, onion and cilantro, and zats, butterfly caterpillars from the Altos de Chiapas that are boiled in salted water, then sautéed in lard and eaten with tortillas, limes, and green chili pepper. Sopa de pan consists of layers of bread and vegetables covered with a broth seasoned with saffron and other flavorings. A Comitán speciality is hearts of palm salad in vinaigrette, and Palenque is known for many versions of fried plantains, including some filled with black beans or cheese. Cheese making is important, especially in the municipalities of Ocosingo, Rayon and Pijijiapan. Ocosingo has its own self-named variety, which is shipped to restaurants and gourmet shops in various parts of the country. Regional sweets include crystallized fruit, coconut candies, flan and compotes. San Cristobal is noted for its sweets, as well as chocolates, coffee and baked goods. While Chiapas is known for good coffee, there are a number of other local beverages. The oldest is pozol, originally the name for a fermented corn dough. This dough has its origins in the pre-Hispanic period. To make the beverage, the dough is dissolved in water and usually flavored with cocoa and sugar, but sometimes it is left to ferment further. It is then served very cold with much ice. Taxcalate is a drink made from a powder of toasted corn, achiote, cinnamon and sugar prepared with milk or water. Pumbo is a beverage made with pineapple, club soda, vodka, sugar syrup and much ice. Pox is a drink distilled from sugar cane. Religion As in the rest of Mexico, Christianity was introduced to the native populations of Chiapas by the Spanish conquistadors. However, Catholic beliefs were mixed with indigenous ones to form what is now called "traditionalist" Catholic belief. The Diocese of Chiapas comprises almost the entire state and is centered on San Cristobal de las Casas. It was founded in 1538 by Pope Paul III to evangelize the area; its most famous bishop of that era was Bartolomé de las Casas. Evangelization focused on grouping indigenous peoples into communities centered on a church. This bishop not only evangelized the people in their own language, but also worked to introduce many of the crafts still practiced today. 
While Catholics remain a majority, only fifty-eight percent of Chiapas residents professed the Catholic faith as of 2010, compared to 83% in the rest of the country. Some indigenous people mix Christianity with indigenous beliefs. One particular area where this is strong is the central highlands in small communities such as San Juan Chamula. In one church in San Cristobal, Mayan rites including the sacrifice of animals are permitted inside the church to ask for good health or to "ward off the evil eye." Starting in the 1970s, there has been a shift away from traditional Catholic affiliation to Protestant, Evangelical and other Christian denominations. Presbyterians and Pentecostals attracted a large number of converts, with percentages of Protestants in the state rising from five percent in 1970 to twenty-one percent in 2000. This shift has had a political component as well, with those making the switch tending to identify across ethnic boundaries, especially indigenous ones, and to oppose the traditional power structure. The National Presbyterian Church in Mexico is particularly strong in Chiapas; the state can be described as one of the denomination's strongholds. Both Protestants and Word of God Catholics tend to oppose traditional cacique leadership and often worked to prohibit the sale of alcohol. The latter had the effect of attracting many women to both movements. The growing number of Protestants, Evangelicals and Word of God Catholics challenging traditional authority has caused religious strife in a number of indigenous communities. Tensions have at times been strong, especially in rural areas such as San Juan Chamula. Tension among the groups reached its peak in the 1990s with a large number of people injured during open clashes. In the 1970s, caciques began to expel dissidents from their communities for challenging their power, initially with the use of violence. By 2000, more than 20,000 people had been displaced, but state and federal authorities did not act to stop the expulsions. Today, the situation has quieted but the tension remains, especially in very isolated communities. Islam The Spanish Murabitun community, the Comunidad Islámica en España, based in Granada, Spain, and one of its missionaries, Muhammad Nafia (formerly Aureliano Pérez), now emir of the Comunidad Islámica en México, arrived in the state of Chiapas shortly after the Zapatista uprising and established a commune in the city of San Cristóbal. The group, characterized as anti-capitalistic, entered an ideological pact with the socialist Zapatista group. President Vicente Fox voiced concerns about the influence of fundamentalism and possible connections to the Zapatistas and the Basque terrorist organization Euskadi Ta Askatasuna (ETA), but it appeared that converts had no interest in political extremism. By 2015, many indigenous Mayans and more than 700 Tzotzils had converted to Islam. In San Cristóbal, the Murabitun established a pizzeria, a carpentry workshop and a Quranic school (madrasa) where children learned Arabic and prayed five times a day in the backroom of a residential building, and women in head scarves have become a common sight. Nowadays, most of the Mayan Muslims have left the Murabitun and established ties with the CCIM, now following the orthodox Sunni school of Islam. They built the Al-Kausar Mosque in San Cristobal de las Casas. 
Archaeology The earliest population of Chiapas was in the coastal Soconusco region, where the Chantuto peoples appeared, going back to 5500 BC. This was the oldest Mesoamerican culture discovered to date. The largest and best-known archaeological sites in Chiapas belong to the Mayan civilization. Apart from a few works by Franciscan friars, knowledge of Maya civilization largely disappeared after the Spanish Conquest. In the mid-19th century, John Lloyd Stephens and Frederick Catherwood traveled through the sites in Chiapas and other Mayan areas and published their writings and illustrations. This led to serious work on the culture, including the deciphering of its hieroglyphic writing. In Chiapas, principal Mayan sites include Palenque, Toniná, Bonampak, Chinkoltic and Tenam Puentes, all in or near the Lacandon Jungle. They are technically more advanced than earlier Olmec sites, which can best be seen in the detailed sculpting and novel construction techniques, including structures of four stories in height. Mayan sites are not only noted for large numbers of structures, but also for glyphs, other inscriptions, and artwork that has provided a relatively complete history of many of the sites. Palenque is the state's most important Mayan archaeological site. Though much smaller than the huge sites at Tikal or Copán, Palenque contains some of the finest architecture, sculpture and stucco reliefs the Mayans ever produced. The history of the Palenque site begins in 431, with its height under Pakal I (615–683), Chan-Bahlum II (684–702) and Kan-Xul, who reigned between 702 and 721. However, the power of Palenque would be lost by the end of the 8th century. Pakal's tomb was not discovered inside the Temple of Inscriptions until 1949. Today, Palenque is a World Heritage Site and one of the best-known sites in Mexico. Yaxchilan flourished in the 8th and 9th centuries. The site contains impressive ruins, with palaces and temples bordering a large plaza upon a terrace above the Usumacinta River. The architectural remains extend across the higher terraces and the hills to the south of the river, overlooking both the river itself and the lowlands beyond. Yaxchilan is known for the large quantity of excellent sculpture at the site, such as the monolithic carved stelae and the narrative stone reliefs carved on lintels spanning the temple doorways. Over 120 inscriptions have been identified on the various monuments from the site. The major groups are the Central Acropolis, the West Acropolis and the South Acropolis. The South Acropolis occupies the highest part of the site. The site is aligned with relation to the Usumacinta River, at times causing unconventional orientation of the major structures, such as the two ballcourts. The city of Bonampak features some of the finest remaining Maya murals. The realistically rendered paintings depict human sacrifices, musicians and scenes of the royal court. In fact, the name means “painted murals.” It is centered on a large plaza and has a stairway that leads to the Acropolis. There are also a number of notable stelae. Toniná is near the city of Ocosingo, with its main features being the Casa de Piedra (House of Stone) and the Acropolis. The latter is a series of seven platforms with various temples and stelae. This site was a ceremonial center that flourished between 600 and 900 CE. 
The capital of Sak Tz’i’ (an Ancient Maya kingdom), now named Lacanja Tzeltal, was revealed in 2020 by researchers led by associate anthropology professor Charles Golden and bioarchaeologist Andrew Scherer, in the backyard of a Mexican farmer in Chiapas. The team also uncovered multiple domestic structures used by the population for religious purposes. “Plaza Muk’ul Ton”, or Monuments Plaza, where people used to gather for ceremonies, was also unearthed by the team. Pre-Mayan cultures While the Mayan sites are the best-known, there are a number of other important sites in the state, including many older than the Maya civilization. The oldest sites are in the coastal Soconusco region. This includes the Mokaya culture, the oldest ceramic culture of Mesoamerica. Later, Paso de la Amada became important. Many of these sites are in the Mazatán area of Chiapas. Izapa became an important pre-Mayan site as well. There are other ancient sites as well, including Tapachula, Tepcatán and Pijijiapan. These sites contain numerous embankments and foundations that once lay beneath pyramids and other buildings. Some of these buildings have disappeared and others have been covered by jungle for about 3,000 years and remain unexplored. Pijijiapan and Izapa are on the Pacific coast and were the most important pre-Hispanic cities for about 1,000 years, serving as the most important commercial centers between the Mexican Plateau and Central America. Sima de las Cotorras is a sinkhole 140 meters deep with a diameter of 160 meters in the municipality of Ocozocoautla. It contains ancient cave paintings depicting warriors, animals and more. It is best known as a breeding area for parrots, thousands of which leave the area at once at dawn and return at dusk. The state has its Museo Regional de Antropología e Historia, located in Tuxtla Gutiérrez, focusing on the pre-Hispanic peoples of the state, with a room dedicated to its history from the colonial period. Education The average number of years of schooling is 6.7, which is the beginning of middle school, compared to the national average of 8.6. 16.5% have no schooling at all, 59.6% have only primary school/secondary school, 13.7% finish high school or technical school and 9.8% go to university. Eighteen out of every 100 people 15 years or older cannot read or write, compared to seven nationally. Most of Chiapas's illiterate population are indigenous women, who are often prevented from going to school. School absenteeism and dropout rates are highest among indigenous girls. There are an estimated 1.4 million students in the state from preschool on up. The state has about 61,000 teachers and just over 17,000 centers of education. Preschool and primary schools are divided into modalities called general, indigenous, private and community education sponsored by CONAFE. Middle school is divided into technical, telesecundaria (distance education) and classes for working adults. About 98% of the student population of the state is in state schools. Higher levels of education include "professional medio" (vocational training), general high school and technology-focused high school. At this level, 89% of students are in public schools. There are 105 universities and similar institutions, with 58 public and 47 private, serving over 60,500 students. The state university is the Universidad Autónoma de Chiapas (UNACH). It was begun when an organization to establish a state-level institution was formed in 1965, with the university itself opening its doors ten years later in 1975. 
The university project was partially supported by UNESCO in Mexico. It integrated older schools such as the Escuela de Derecho (Law School), which originated in 1679; the Escuela de Ingeniería Civil (School of Civil Engineering), founded in 1966; and the Escuela de Comercio y Administración, which was located in Tuxtla Gutiérrez. Infrastructure Transport The state has approximately 22,500 km of highway, with 10,857 km federally maintained and 11,660 km maintained by the state. Almost all of these kilometers are paved. Major highways include the Las Choapas-Raudales-Ocozocoautla, which links the state to Oaxaca, Veracruz, Puebla and Mexico City. Major airports include Llano San Juan in Ocozocoautla, Francisco Sarabia National Airport (which was replaced by Ángel Albino Corzo International Airport) in Tuxtla Gutiérrez and Corazón de María Airport (which closed in 2010) in San Cristóbal de las Casas. These are used for domestic flights, with the airports in Palenque and Tapachula providing international service into Guatemala. There are 22 other airfields in twelve other municipalities. Rail lines extend over 547.8 km. There are two major lines: one in the north of the state that links the center and southeast of the country, and the Costa Panamericana route, which runs from Oaxaca to the Guatemalan border. Chiapas's main port, called Puerto Chiapas, is just outside the city of Tapachula. The port has warehouse space and an adjacent industrial park, and it has the capacity to receive 1,800 containers, including refrigerated containers. The port serves the state of Chiapas and northern Guatemala. Puerto Chiapas serves to import and export products across the Pacific to Asia, the United States, Canada and South America. It also has connections with the Panama Canal. A marina serves yachts in transit. There is an international airport located a short distance away, as well as a railroad terminal ending at the port proper. Over the past five years the port has grown, with its newest addition being a terminal for cruise ships with tours to the Izapa site, the Coffee Route, the city of Tapachula, Pozuelos Lake and an Artesanal Chocolate Tour. Principal exports through the port include bananas and banana trees, corn, fertilizer and tuna. Media There are thirty-six AM radio stations and sixteen FM stations. There are thirty-seven local television stations and sixty-six repeaters. Newspapers of Chiapas include: Chiapas Hoy, Cuarto Poder, El Heraldo de Chiapas, El Orbe, La Voz del Sureste, and Noticias de Chiapas. See also 2017 Chiapas earthquake Ciudad Hidalgo References Further reading Benjamin, Thomas. A Rich Land, a Poor People: Politics and Society in Modern Chiapas. Albuquerque: University of New Mexico Press, 1996. Benjamin, Thomas. "A Time of Reconquest: History, the Maya Revival, and the Zapatista Rebellion." The American Historical Review, Vol. 105, no. 2 (April 2000): pp. 417–450. Collier, George A., and Elizabeth Lowery Quaratiello. Basta! Land and the Zapatista Rebellion in Chiapas. Oakland: The Institute for Food and Development Policy, 1994. Collier, George A. "The Rebellion in Chiapas and the Legacy of Energy Development." Mexican Studies/Estudios Mexicanos, Vol. 10, no. 2 (Summer 1994): pp. 371–382. García, María Cristina. Seeking Refuge: Central American Migration to Mexico, the United States, and Canada. Berkeley and Los Angeles: University of California Press, 2006. Hamnett, Brian R. Concise History of Mexico. Cambridge: Cambridge University Press, 1999. 
Hidalgo, Margarita G. (Editor). Contributions to the Sociology of Language: Mexican Indigenous Languages at the Dawn of the Twenty-First Century. Berlin: Walter de Gruyter, 2009. Higgins, Nicholas P. Understanding the Chiapas Rebellion: Modernist Visions and the Invisible Indian. Austin: University of Texas Press, 2004. Jiménez González, Victor Manuel (Editor). Chiapas: Guía para descubrir los encantos del estado. Mexico City: Editorial Océano de México, 2009. Lowe, G. W. "Chiapas de Corzo", in Evans, Susan, ed., Archaeology of Ancient Mexico and Central America. London: Taylor & Francis. Whitmeyer, Joseph M. and Hopcroft, Rosemary L. "Community, Capitalism, and Rebellion in Chiapas." Sociological Perspectives Vol. 39, no. 4 (Winter 1996): pp. 517–538. External links Chiapas State Government Zapatista National Army of Liberation brief history of the conflict in Chiapas (1994–2007) Acosta et al., 2018. "Climate change and peopling of the Neotropics during the Pleistocene-Holocene transition". Boletín de la Sociedad Geológica Mexicana. Guide to the University of Chicago Department of Anthropology Chiapas Project Records 1942-circa 1990s at the University of Chicago Special Collections Research Center
3,088
6,790
https://en.wikipedia.org/wiki/Cape%20Breton%20%28disambiguation%29
Cape Breton (disambiguation)
Cape Breton Island is an island in the Canadian province of Nova Scotia. Cape Breton may also refer to: Places On Cape Breton Island Cape Breton, a cape at the eastern tip of Cape Breton Island, Canada Cape Breton Highlands, a mountain range in the north of Cape Breton Island, Canada Cape Breton Highlands National Park Cape Breton Regional Municipality, a regional municipality in Nova Scotia Cape Breton—Canso, a federal electoral district In France Capbreton or Cap Breton, a commune of the Landes département in southwestern France Organizations Cape Breton Eagles, a Sydney-based ice hockey team Cape Breton Post Cape Breton Development Corporation Cape Breton University Other uses Cape Breton and Central Nova Scotia Railway Jeanneau Cape Breton, a French sailboat design See also Breton (disambiguation) Cape (disambiguation)
3,090
6,799
https://en.wikipedia.org/wiki/COBOL
COBOL
COBOL (an acronym for "common business-oriented language") is a compiled English-like computer programming language designed for business use. It is an imperative, procedural and, since 2002, object-oriented language. COBOL is primarily used in business, finance, and administrative systems for companies and governments. COBOL is still widely used in applications deployed on mainframe computers, such as large-scale batch and transaction processing jobs. However, due to its declining popularity and the retirement of experienced COBOL programmers, programs are being migrated to new platforms, rewritten in modern languages or replaced with software packages. Most programming in COBOL is now purely to maintain existing applications; however, many large financial institutions were still developing new systems in COBOL as late as 2006. COBOL was designed in 1959 by CODASYL and was partly based on the programming language FLOW-MATIC designed by Grace Hopper. It was created as part of a US Department of Defense effort to create a portable programming language for data processing. It was originally seen as a stopgap, but the Department of Defense promptly forced computer manufacturers to provide it, resulting in its widespread adoption. It was standardized in 1968 and has since been revised four times. Expansions include support for structured and object-oriented programming. The current standard is ISO/IEC 1989:2014. COBOL statements have an English-like syntax, which was designed to be self-documenting and highly readable. However, it is verbose and uses over 300 reserved words. In contrast with the succinct syntax of modern languages (such as y = x), COBOL has a more English-like syntax (in this case, MOVE x TO y). COBOL code is split into four divisions (identification, environment, data, and procedure) containing a rigid hierarchy of sections, paragraphs and sentences. Lacking a large standard library, the standard specifies 43 statements, 87 functions and just one class. Academic computer scientists were generally uninterested in business applications when COBOL was created and were not involved in its design; it was (effectively) designed from the ground up as a computer language for business, with an emphasis on inputs and outputs, whose only data types were numbers and strings of text. COBOL has been criticized throughout its life for its verbosity, design process, and poor support for structured programming. These weaknesses result in monolithic, verbose (intended to be English-like) programs that are not easily comprehensible. For years, COBOL has been the assumed programming language for business operations on mainframes, although in recent years interest has surged in migrating COBOL operations to cloud computing. History and specification Background In the late 1950s, computer users and manufacturers were becoming concerned about the rising cost of programming. A 1959 survey had found that in any data processing installation, the programming cost US$800,000 on average and that translating programs to run on new hardware would cost $600,000. At a time when new programming languages were proliferating at an ever-increasing rate, the same survey suggested that if a common business-oriented language were used, conversion would be far cheaper and faster. On 8 April 1959, Mary K. Hawes, a computer scientist at Burroughs Corporation, called a meeting of representatives from academia, computer users, and manufacturers at the University of Pennsylvania to organize a formal meeting on common business languages. 
Representatives included Grace Hopper (inventor of the English-like data processing language FLOW-MATIC), Jean Sammet and Saul Gorn. At the April meeting, the group asked the Department of Defense (DoD) to sponsor an effort to create a common business language. The delegation impressed Charles A. Phillips, director of the Data System Research Staff at the DoD, who thought that they "thoroughly understood" the DoD's problems. The DoD operated 225 computers, had a further 175 on order and had spent over $200 million on implementing programs to run on them. Portable programs would save time, reduce costs and ease modernization. Charles Phillips agreed to sponsor the meeting and tasked the delegation with drafting the agenda. COBOL 60 On 28 and 29 May 1959 (exactly one year after the Zürich ALGOL 58 meeting), a meeting was held at the Pentagon to discuss the creation of a common programming language for business. It was attended by 41 people and was chaired by Phillips. The Department of Defense was concerned about whether it could run the same data processing programs on different computers. FORTRAN, the only mainstream language at the time, lacked the features needed to write such programs. Representatives enthusiastically described a language that could work in a wide variety of environments, from banking and insurance to utilities and inventory control. They agreed unanimously that more people should be able to program and that the new language should not be restricted by the limitations of contemporary technology. A majority agreed that the language should make maximal use of English, be capable of change, be machine-independent and be easy to use, even at the expense of power. The meeting resulted in the creation of a steering committee and short-, intermediate- and long-range committees. The short-range committee was given until September (three months) to produce specifications for an interim language, which would then be improved upon by the other committees. Their official mission, however, was to identify the strengths and weaknesses of existing programming languages; it did not explicitly direct them to create a new language. The deadline was met with disbelief by the short-range committee. One member, Betty Holberton, described the three-month deadline as "gross optimism" and doubted that the language really would be a stopgap. The steering committee met on 4 June and agreed to name the entire activity the Committee on Data Systems Languages, or CODASYL, and to form an executive committee. The short-range committee members represented six computer manufacturers and three government agencies. The computer manufacturers were Burroughs Corporation, IBM, Minneapolis-Honeywell (Honeywell Labs), RCA, Sperry Rand, and Sylvania Electric Products. The government agencies were the US Air Force, the Navy's David Taylor Model Basin, and the National Bureau of Standards (now the National Institute of Standards and Technology). The committee was chaired by Joseph Wegstein of the US National Bureau of Standards. Work began by investigating data description, statements, existing applications and user experiences. The committee mainly examined the FLOW-MATIC, AIMACO and COMTRAN programming languages. The FLOW-MATIC language was particularly influential because it had been implemented and because AIMACO was a derivative of it with only minor changes. FLOW-MATIC's inventor, Grace Hopper, also served as a technical adviser to the committee. 
FLOW-MATIC's major contributions to COBOL were long variable names, English words for commands and the separation of data descriptions and instructions. Hopper is sometimes referred to as "the mother of COBOL" or "the grandmother of COBOL", although Jean Sammet, a lead designer of COBOL, stated that Hopper "was not the mother, creator or developer of Cobol". IBM's COMTRAN language, invented by Bob Bemer, was regarded as a competitor to FLOW-MATIC by a short-range committee made up of colleagues of Grace Hopper. Some of its features were not incorporated into COBOL so that it would not look like IBM had dominated the design process, and Jean Sammet said in 1981 that there had been a "strong anti-IBM bias" from some committee members (herself included). In one case, after Roy Goldfinger, author of the COMTRAN manual and intermediate-range committee member, attended a subcommittee meeting to support his language and encourage the use of algebraic expressions, Grace Hopper sent a memo to the short-range committee reiterating Sperry Rand's efforts to create a language based on English. In 1980, Grace Hopper commented that "COBOL 60 is 95% FLOW-MATIC" and that COMTRAN had had an "extremely small" influence. Furthermore, she said that she would claim that work was influenced by both FLOW-MATIC and COMTRAN only to "keep other people happy [so they] wouldn't try to knock us out". Features from COMTRAN incorporated into COBOL included formulas, the PICTURE clause, an improved IF statement, which obviated the need for GO TOs, and a more robust file management system. The usefulness of the committee's work was the subject of great debate. While some members thought the language had too many compromises and was the result of design by committee, others felt it was better than the three languages examined. Some felt the language was too complex; others, too simple. Controversial features included those some considered useless or too advanced for data processing users. Such features included boolean expressions, formulas and table subscripts (indices). Another point of controversy was whether to make keywords context-sensitive and the effect that would have on readability. Although context-sensitive keywords were rejected, the approach was later used in PL/I and partially in COBOL from 2002. Little consideration was given to interactivity, interaction with operating systems (few existed at that time) and functions (thought of as purely mathematical and of no use in data processing). The specifications were presented to the executive committee on 4 September. They fell short of expectations: Joseph Wegstein noted that "it contains rough spots and requires some additions", and Bob Bemer later described them as a "hodgepodge". The subcommittee was given until December to improve it. At a mid-September meeting, the committee discussed the new language's name. Suggestions included "BUSY" (Business System), "INFOSYL" (Information System Language) and "COCOSYL" (Common Computer Systems Language). It is unclear who coined the name "COBOL", although Bob Bemer later claimed it had been his suggestion. In October, the intermediate-range committee received copies of the FACT language specification created by Roy Nutt. Its features impressed the committee so much that they passed a resolution to base COBOL on it. This was a blow to the short-range committee, who had made good progress on the specification. Despite being technically superior, FACT had not been created with portability in mind or through manufacturer and user consensus. 
It also lacked a demonstrable implementation, allowing supporters of a FLOW-MATIC-based COBOL to overturn the resolution. RCA representative Howard Bromberg also blocked FACT, so that RCA's work on a COBOL implementation would not go to waste. It soon became apparent that the committee was too large for any further progress to be made quickly. A frustrated Howard Bromberg bought a $15 tombstone with "COBOL" engraved on it and sent it to Charles Phillips to demonstrate his displeasure. A sub-committee was formed to analyze existing languages and was made up of six individuals: William Selden and Gertrude Tierney of IBM, Howard Bromberg and Howard Discount of RCA, Vernon Reeves and Jean E. Sammet of Sylvania Electric Products. The sub-committee did most of the work creating the specification, leaving the short-range committee to review and modify their work before producing the finished specification. The specifications were approved by the executive committee on 8 January 1960, and sent to the Government Printing Office, which printed them as COBOL 60. The language's stated objectives were to allow efficient, portable programs to be easily written, to allow users to move to new systems with minimal effort and cost, and to be suitable for inexperienced programmers. The CODASYL Executive Committee later created the COBOL Maintenance Committee to answer questions from users and vendors and to improve and expand the specifications. During 1960, the list of manufacturers planning to build COBOL compilers grew. By September, five more manufacturers had joined CODASYL (Bendix, Control Data Corporation, General Electric (GE), National Cash Register and Philco), and all represented manufacturers had announced COBOL compilers. GE and IBM planned to integrate COBOL into their own languages, GECOM and COMTRAN, respectively. In contrast, International Computers and Tabulators planned to replace their language, CODEL, with COBOL. Meanwhile, RCA and Sperry Rand worked on creating COBOL compilers. The first COBOL program ran on 17 August on an RCA 501. On 6 and 7 December, the same COBOL program (albeit with minor changes) ran on an RCA computer and a Remington-Rand Univac computer, demonstrating that compatibility could be achieved. The relative influence of the languages used persists to this day in the recommended acknowledgment printed in all COBOL reference manuals. COBOL-61 to COBOL-65 Many logical flaws were found in COBOL 60, leading General Electric's Charles Katz to warn that it could not be interpreted unambiguously. A reluctant short-range committee enacted a total cleanup and, by March 1963, it was reported that COBOL's syntax was as definable as ALGOL's, although semantic ambiguities remained. Early COBOL compilers were primitive and slow. A 1962 US Navy evaluation found compilation speeds of 3–11 statements per minute. By mid-1964, they had increased to 11–1000 statements per minute. It was observed that increasing memory would drastically increase speed and that compilation costs varied wildly: costs per statement were between $0.23 and $18.91. In late 1962, IBM announced that COBOL would be their primary development language and that development of COMTRAN would cease. The COBOL specification was revised three times in the five years after its publication. COBOL-60 was replaced in 1961 by COBOL-61. This was then replaced by the COBOL-61 Extended specifications in 1963, which introduced the sort and report writer facilities. 
The added facilities corrected flaws identified by Honeywell in late 1959 in a letter to the short-range committee. COBOL Edition 1965 brought further clarifications to the specifications and introduced facilities for handling mass storage files and tables. COBOL-68 Efforts began to standardize COBOL to overcome incompatibilities between versions. In late 1962, both ISO and the United States of America Standards Institute (now ANSI) formed groups to create standards. ANSI produced USA Standard COBOL X3.23 in August 1968, which became the cornerstone for later versions. This version was known as American National Standard (ANS) COBOL and was adopted by ISO in 1972. COBOL-74 By 1970, COBOL had become the most widely used programming language in the world. Independently of the ANSI committee, the CODASYL Programming Language Committee was working on improving the language. They described new versions in 1968, 1969, 1970 and 1973, including changes such as new inter-program communication, debugging and file merging facilities as well as improved string-handling and library inclusion features. Although CODASYL was independent of the ANSI committee, the CODASYL Journal of Development was used by ANSI to identify features that were popular enough to warrant implementing. The Programming Language Committee also liaised with ECMA and the Japanese COBOL Standard committee. The Programming Language Committee was not well-known, however. The vice-president, William Rinehuls, complained that two-thirds of the COBOL community did not know of the committee's existence. It also lacked the funds to make public documents, such as minutes of meetings and change proposals, freely available. In 1974, ANSI published a revised version of (ANS) COBOL, containing new features such as file organizations and the segmentation module. Deleted features included the NOTE statement, the EXAMINE statement (which was replaced by INSPECT) and the implementer-defined random access module (which was superseded by the new sequential and relative I/O modules). These made up 44 changes, which rendered existing statements incompatible with the new standard. The report writer was slated to be removed from COBOL, but was reinstated before the standard was published. ISO later adopted the updated standard in 1978. COBOL-85 In June 1978, work began on revising COBOL-74. The proposed standard (commonly called COBOL-80) differed significantly from the previous one, causing concerns about incompatibility and conversion costs. In January 1981, Joseph T. Brophy, Senior Vice-president of Travelers Insurance, threatened to sue the standard committee because it was not upwards compatible with COBOL-74. Brophy described previous conversions of their 40-million-line code base as "non-productive" and a "complete waste of our programmer resources". Later that year, the Data Processing Management Association (DPMA) said it was "strongly opposed" to the new standard, citing "prohibitive" conversion costs and enhancements that were "forced on the user". During the first public review period, the committee received 2,200 responses, of which 1,700 were negative form letters. Other responses were detailed analyses of the effect COBOL-80 would have on their systems; conversion costs were predicted to be at least 50 cents per line of code. Fewer than a dozen of the responses were in favor of the proposed standard. In 1979, ISO TC97-SC5 installed the international COBOL Experts Group, on the initiative of Wim Ebbinkhuijsen. 
The group consisted of COBOL experts from many countries, including the United States. Its goal was to achieve mutual understanding and respect between ANSI and the rest of the world with regard to the need of new COBOL features. After three years, ISO changed the status of the group to a formal Working Group: WG 4 COBOL. The group took primary ownership and development of the COBOL standard, where ANSI made most of the proposals. In 1983, the DPMA withdrew its opposition to the standard, citing the responsiveness of the committee to public concerns. In the same year, a National Bureau of Standards study concluded that the proposed standard would present few problems. A year later, DEC released a VAX/VMS COBOL-80, and noted that conversion of COBOL-74 programs posed few problems. The new EVALUATE statement and inline PERFORM were particularly well received and improved productivity, thanks to simplified control flow and debugging. The second public review drew another 1,000 (mainly negative) responses, while the last drew just 25, by which time many concerns had been addressed. In 1985, the ISO Working Group 4 accepted the then-version of the ANSI proposed standard, made several changes and set it as the new ISO standard COBOL 85. It was published in late 1985. Sixty features were changed or deprecated and 115 were added, such as:

Scope terminators (END-IF, END-PERFORM, END-READ, etc.)
Nested subprograms
CONTINUE, a no-operation statement
EVALUATE, a switch statement
INITIALIZE, a statement that can set groups of data to their default values
Inline PERFORM loop bodies – previously, loop bodies had to be specified in a separate procedure
Reference modification, which allows access to substrings
I/O status codes

The new standard was adopted by all national standard bodies, including ANSI. Two amendments followed in 1989 and 1993, the first introducing intrinsic functions and the other providing corrections. COBOL 2002 and object-oriented COBOL In 1997, Gartner Group estimated that there were a total of 200 billion lines of COBOL in existence, which ran 80% of all business programs. In the early 1990s, work began on adding object-orientation in the next full revision of COBOL. Object-oriented features were taken from C++ and Smalltalk. The initial estimate was to have this revision completed by 1997, and an ISO Committee Draft (CD) was available by 1997. Some vendors (including Micro Focus, Fujitsu, and IBM) introduced object-oriented syntax based on drafts of the full revision. The finalized ISO standard was approved and published in late 2002. Fujitsu/GTSoftware, Micro Focus and RainCode introduced object-oriented COBOL compilers targeting the .NET Framework. There were many other new features, many of which had been in the CODASYL COBOL Journal of Development since 1978 and had missed the opportunity to be included in COBOL-85. These other features included:

Free-form code
User-defined functions
Recursion
Locale-based processing
Support for extended character sets such as Unicode
Floating-point and binary data types (until then, binary items were truncated based on their declaration's base-10 specification)
Portable arithmetic results
Bit and boolean data types
Pointers and syntax for getting and freeing storage
The SCREEN SECTION for text-based user interfaces
The VALIDATE facility
Improved interoperability with other programming languages and framework environments such as .NET and Java.

Three corrigenda were published for the standard: two in 2006 and one in 2009. 
COBOL 2014 Between 2003 and 2009, three technical reports were produced describing object finalization, XML processing and collection classes for COBOL. COBOL 2002 suffered from poor support: no compilers completely supported the standard. Micro Focus found that it was due to a lack of user demand for the new features and due to the abolition of the NIST test suite, which had been used to test compiler conformance. The standardization process was also found to be slow and under-resourced. COBOL 2014 includes the following changes:

Portable arithmetic results have been replaced by IEEE 754 data types
Major features have been made optional, such as the VALIDATE facility, the report writer and the screen-handling facility
Method overloading
Dynamic capacity tables (a feature dropped from the draft of COBOL 2002)

Legacy COBOL programs are used globally in governments and businesses and are running on diverse operating systems such as z/OS, z/VSE, VME, Unix, NonStop OS, OpenVMS and Windows. In 1997, the Gartner Group reported that 80% of the world's business ran on COBOL with over 200 billion lines of code and 5 billion lines more being written annually. Near the end of the 20th century, the year 2000 problem (Y2K) was the focus of significant COBOL programming effort, sometimes by the same programmers who had designed the systems decades before. The particular level of effort required to correct COBOL code has been attributed to the large amount of business-oriented COBOL, as business applications use dates heavily, and to fixed-length data fields. Some studies attribute as much as "24% of Y2K software repair costs to Cobol". After the clean-up effort put into these programs for Y2K, a 2003 survey found that many remained in use. The authors said that the survey data suggest "a gradual decline in the importance of COBOL in application development over the [following] 10 years unless ... integration with other languages and technologies can be adopted". In 2006 and 2012, Computerworld surveys (of 352 readers) found that over 60% of organizations used COBOL (more than C++ and Visual Basic .NET) and that for half of those, COBOL was used for the majority of their internal software. 36% of managers said they planned to migrate from COBOL, and 25% said they would like to if it were cheaper. Instead, some businesses have migrated their systems from expensive mainframes to cheaper, more modern systems, while maintaining their COBOL programs. Testimony before the House of Representatives in 2016 indicated that COBOL is still in use by many federal agencies. Reuters reported in 2017 that 43% of banking systems still used COBOL, with over 220 billion lines of COBOL code in use. By 2019, the number of COBOL programmers was shrinking fast due to retirements, leading to an impending skills gap in business and government organizations which still use mainframe systems for high-volume transaction processing. Efforts to rewrite systems in newer languages have proven expensive and problematic, as has the outsourcing of code maintenance, and proposals to train more people in COBOL have been advanced. During the COVID-19 pandemic and the ensuing surge of unemployment, several US states reported a shortage of skilled COBOL programmers to support the legacy systems used for unemployment benefit management. Many of these systems had been in the process of conversion to more modern programming languages prior to the pandemic, but the process was put on hold. 
Similarly, the US Internal Revenue Service rushed to patch its COBOL-based Individual Master File in order to disburse the tens of millions of payments mandated by the Coronavirus Aid, Relief, and Economic Security Act. Features Syntax COBOL has an English-like syntax, which is used to describe nearly everything in a program. For example, a condition can be expressed as x IS GREATER THAN y or more concisely as x GREATER y or x > y. More complex conditions can be "abbreviated" by removing repeated conditions and variables. For example, a > b AND a > c OR a = d can be shortened to a > b AND c OR = d. To support this English-like syntax, COBOL has over 300 keywords. Some of the keywords are simple alternative or pluralized spellings of the same word, which provides for more English-like statements and clauses; e.g., the IN and OF keywords can be used interchangeably, as can IS and ARE, and VALUE and VALUES. Each COBOL program is made up of four basic lexical items: words, literals, picture character-strings (see the PICTURE clause section below) and separators. Words include reserved words and user-defined identifiers. They are up to 31 characters long and may include letters, digits, hyphens and underscores. Literals include numerals (e.g. 12) and strings (e.g. "Hello!"). Separators include the space character and commas and semi-colons followed by a space. A COBOL program is split into four divisions: the identification division, the environment division, the data division and the procedure division. The identification division specifies the name and type of the source element and is where classes and interfaces are specified. The environment division specifies any program features that depend on the system running it, such as files and character sets. The data division is used to declare variables and parameters. The procedure division contains the program's statements. Each division is sub-divided into sections, which are made up of paragraphs. Metalanguage COBOL's syntax is usually described with a unique metalanguage using braces, brackets, bars and underlining. The metalanguage was developed for the original COBOL specifications. Although Backus–Naur form did exist at the time, the committee had not heard of it. As an example, consider the following description of an ADD statement (rendered here in a simplified textual approximation of the metalanguage):

ADD {identifier-1 | literal-1}... TO {identifier-2 [ROUNDED]}...
    [ON SIZE ERROR imperative-statement-1]
    [NOT ON SIZE ERROR imperative-statement-2]
    [END-ADD]

This description permits the following variants:

ADD 1 TO x
ADD 1, a, b TO x ROUNDED, y, z ROUNDED
ADD a, b TO c
    ON SIZE ERROR DISPLAY "Error"
END-ADD
ADD a TO b
    NOT SIZE ERROR DISPLAY "No error"
    ON SIZE ERROR DISPLAY "Error"

Code format The height of COBOL's popularity coincided with the era of keypunch machines and punched cards. The program itself was written onto punched cards, then read in and compiled, and the data fed into the program was sometimes on cards as well. COBOL can be written in two formats: fixed (the default) or free. In fixed-format, code must be aligned to fit in certain areas (a hold-over from using punched cards). Until COBOL 2002, these were the sequence number area (columns 1–6), the indicator area (column 7), Area A (columns 8–11) and Area B (columns 12–72), with columns 73–80 reserved for identification. In COBOL 2002, Areas A and B were merged to form the program-text area, which now ends at an implementor-defined column. COBOL 2002 also introduced free-format code. Free-format code can be placed in any column of the file, as in newer programming languages. Comments are specified using *>, which can be placed anywhere and can also be used in fixed-format source code. Continuation lines are not present, and the >>PAGE directive replaces the / indicator. Identification division The identification division identifies the following code entity and contains the definition of a class or interface. 
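A minimal complete program (shown here in free format; the program name, data item and message are illustrative rather than drawn from the standard) gives a sense of how the divisions fit together:

IDENTIFICATION DIVISION.
PROGRAM-ID. hello-world.
DATA DIVISION.
WORKING-STORAGE SECTION.
01 greeting PIC X(13) VALUE "Hello, world!".
PROCEDURE DIVISION.
    *> Display the message and end the program
    DISPLAY greeting
    STOP RUN.

Divisions and sections that are not needed, such as the environment division here, may simply be omitted.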
Object-oriented programming Classes and interfaces have been in COBOL since 2002. Classes have factory objects, containing class methods and variables, and instance objects, containing instance methods and variables. Inheritance and interfaces provide polymorphism. Support for generic programming is provided through parameterized classes, which can be instantiated to use any class or interface. Objects are stored as references which may be restricted to a certain type. There are two ways of calling a method: the INVOKE statement, which acts similarly to CALL, or through inline method invocation, which is analogous to using functions.

*> These are equivalent.
INVOKE my-class "foo" RETURNING var
MOVE my-class::"foo" TO var *> Inline method invocation

COBOL does not provide a way to hide methods. Class data can be hidden, however, by declaring it without a PROPERTY clause, which leaves the user with no way to access it. Method overloading was added in COBOL 2014. Environment division The environment division contains the configuration section and the input-output section. The configuration section is used to specify variable features such as currency signs, locales and character sets. The input-output section contains file-related information. Files COBOL supports three file formats, or organizations: sequential, indexed and relative. In sequential files, records are contiguous and must be traversed sequentially, similarly to a linked list. Indexed files have one or more indexes which allow records to be randomly accessed and which can be sorted on them. Each record must have a unique key, but other, alternate, record keys need not be unique. Implementations of indexed files vary between vendors, although common implementations, such as C-ISAM and VSAM, are based on IBM's ISAM. Relative files, like indexed files, have a unique record key, but they do not have alternate keys. A relative record's key is its ordinal position; for example, the 10th record has a key of 10. This means that creating a record with a key of 5 may require the creation of (empty) preceding records. Relative files also allow for both sequential and random access. A common non-standard extension is the line sequential organization, used to process text files. Records in a file are terminated by a newline and may be of varying length. Data division The data division is split into six sections which declare different items: the file section, for file records; the working-storage section, for static variables; the local-storage section, for automatic variables; the linkage section, for parameters and the return value; the report section and the screen section, for text-based user interfaces. Aggregated data Data items in COBOL are declared hierarchically through the use of level-numbers which indicate if a data item is part of another. An item with a higher level-number is subordinate to an item with a lower one. Top-level data items, with a level-number of 1, are called records. Items that have subordinate aggregate data are called group items; those that do not are called elementary items. Level-numbers used to describe standard data items are between 1 and 49.

01 some-record.              *> Aggregate group record item
   05 num PIC 9(10).         *> Elementary item
   05 the-date.              *> Aggregate (sub)group record item
      10 the-year PIC 9(4).  *> Elementary item
      10 the-month PIC 99.   *> Elementary item
      10 the-day PIC 99.     *> Elementary item

In the above example, elementary item num and group item the-date are subordinate to the record some-record, while elementary items the-year, the-month, and the-day are part of the group item the-date. Subordinate items can be disambiguated with the IN (or OF) keyword. 
For example, consider the example code above along with the following example:

01 sale-date.
   05 the-year PIC 9(4).
   05 the-month PIC 99.
   05 the-day PIC 99.

The names the-year, the-month, and the-day are ambiguous by themselves, since more than one data item is defined with those names. To specify a particular data item, for instance one of the items contained within the sale-date group, the programmer would use the-year IN sale-date (or the equivalent the-year OF sale-date). (This syntax is similar to the "dot notation" supported by most contemporary languages.) Other data levels A level-number of 66 is used to declare a re-grouping of previously defined items, irrespective of how those items are structured. This data level, also referred to by the associated RENAMES clause, is rarely used and, circa 1988, was usually found in old programs. Its ability to ignore the hierarchical and logical structure of data meant its use was not recommended and many installations forbade its use.

01 customer-record.
   05 cust-key PIC X(10).
   05 cust-name.
      10 cust-first-name PIC X(30).
      10 cust-last-name PIC X(30).
   05 cust-dob PIC 9(8).
   05 cust-balance PIC 9(7)V99.
66 cust-personal-details RENAMES cust-name THRU cust-dob.
66 cust-all-details RENAMES cust-name THRU cust-balance.

A 77 level-number indicates the item is stand-alone, and in such situations is equivalent to the level-number 01. For example, the following code declares two 77-level data items, property-name and sales-region, which are non-group data items that are independent of (not subordinate to) any other data items:

77 property-name PIC X(80).
77 sales-region PIC 9(5).

An 88 level-number declares a condition-name (a so-called 88-level) which is true when its parent data item contains one of the values specified in its VALUE clause. For example, the following code defines two 88-level condition-name items that are true or false depending on the current character data value of the wage-type data item. When the data item contains a value of "H", the condition-name wage-is-hourly is true, whereas when it contains a value of "S" or "Y", the condition-name wage-is-yearly is true. If the data item contains some other value, both of the condition-names are false.

01 wage-type PIC X.
   88 wage-is-hourly VALUE "H".
   88 wage-is-yearly VALUE "S", "Y".

Data types Standard COBOL provides the following data types: alphabetic, alphanumeric, boolean, index, national, numeric, object reference and pointer. Type safety is variable in COBOL. Numeric data is converted between different representations and sizes silently and alphanumeric data can be placed in any data item that can be stored as a string, including numeric and group data. In contrast, object references and pointers may only be assigned from items of the same type and their values may be restricted to a certain type. PICTURE clause A PICTURE (or PIC) clause is a string of characters, each of which represents a portion of the data item and what it may contain. Some picture characters specify the type of the item and how many characters or digits it occupies in memory. For example, a 9 indicates a decimal digit, and an S indicates that the item is signed. Other picture characters (called insertion and editing characters) specify how an item should be formatted. For example, a series of + characters define character positions as well as how a leading sign character is to be positioned within the final character data; the rightmost non-numeric character will contain the item's sign, while other character positions corresponding to a + to the left of this position will contain a space. Repeated characters can be specified more concisely by specifying a number in parentheses after a picture character; for example, 9(7) is equivalent to 9999999. Picture specifications containing only digit (9) and sign (S) characters define purely numeric data items, while picture specifications containing alphabetic (A) or alphanumeric (X) characters define alphanumeric data items. The presence of other formatting characters defines edited numeric or edited alphanumeric data items. 
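As an illustration (the data names here are hypothetical, not drawn from the standard), the following declarations combine these picture characters:

01 quantity    PIC 9(4).        *> four decimal digits, 0000 through 9999
01 balance-due PIC S9(7)V99.    *> signed, with an implied decimal point (V) before the last two digits
01 customer-id PIC X(10).       *> ten alphanumeric characters
01 price-out   PIC $$$,$$9.99.  *> edited item: floating currency sign plus comma and decimal-point insertion

Moving the numeric value 1234.5 to price-out, for instance, would typically produce the edited string "$1,234.50", with the floating currency sign replacing the suppressed leading zeros.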
Picture specifications containing only digit (9) and sign (S) characters define purely numeric data items, while picture specifications containing alphabetic (A) or alphanumeric (X) characters define alphanumeric data items. The presence of other formatting characters defines edited numeric or edited alphanumeric data items.

USAGE clause
The USAGE clause declares the format in which data is stored. Depending on the data type, it can either complement or be used instead of a PICTURE clause. While it can be used to declare pointers and object references, it is mostly geared towards specifying numeric types. These numeric formats are:
 Binary, where a minimum size is either specified by the PICTURE clause or by a USAGE clause such as BINARY-LONG.
 USAGE COMPUTATIONAL, where data may be stored in whatever format the implementation provides; often equivalent to USAGE BINARY
 USAGE DISPLAY, the default format, where data is stored as a string
 Floating-point, in either an implementation-dependent format or according to IEEE 754.
 USAGE NATIONAL, where data is stored as a string using an extended character set
 USAGE PACKED-DECIMAL, where data is stored in the smallest possible decimal format (typically packed binary-coded decimal)

Report writer
The report writer is a declarative facility for creating reports. The programmer need only specify the report layout and the data required to produce it, freeing them from having to write code to handle things like page breaks, data formatting, and headings and footings. Reports are associated with report files, which are files which may only be written to through report writer statements.

 FD report-out REPORT sales-report.

Each report is defined in the report section of the data division. A report is split into report groups which define the report's headings, footings and details. Reports work around hierarchical control breaks. Control breaks occur when a key variable changes its value; for example, when creating a report detailing customers' orders, a control break could occur when the program reaches a different customer's orders. Here is an example report description for a report which gives a salesperson's sales and which warns of any invalid records:

 RD sales-report
    PAGE LIMITS 60 LINES
    FIRST DETAIL 3
    CONTROLS seller-name.

 01 TYPE PAGE HEADING.
    03 COL 1 VALUE "Sales Report".
    03 COL 74 VALUE "Page".
    03 COL 79 PIC Z9 SOURCE PAGE-COUNTER.

 01 sales-on-day TYPE DETAIL, LINE + 1.
    03 COL 3 VALUE "Sales on".
    03 COL 12 PIC 99/99/9999 SOURCE sales-date.
    03 COL 21 VALUE "were".
    03 COL 26 PIC $$$$9.99 SOURCE sales-amount.

 01 invalid-sales TYPE DETAIL, LINE + 1.
    03 COL 3 VALUE "INVALID RECORD:".
    03 COL 19 PIC X(34) SOURCE sales-record.

 01 TYPE CONTROL HEADING seller-name, LINE + 2.
    03 COL 1 VALUE "Seller:".
    03 COL 9 PIC X(30) SOURCE seller-name.

The above report description describes the following layout:

 Sales Report                                                      Page 1

 Seller: Howard Bromberg
   Sales on 10/12/2008 were $1000.00
   Sales on 12/12/2008 were $0.00
   Sales on 13/12/2008 were $31.47
   INVALID RECORD: Howard Bromberg XXXXYY

 Seller: Howard Discount
 ...

 Sales Report                                                     Page 12

   Sales on 08/05/2014 were $543.98
   INVALID RECORD: William Selden 12O52014FOOFOO
   Sales on 30/05/2014 were $0.00

Four statements control the report writer: INITIATE, which prepares the report writer for printing; GENERATE, which prints a report group; SUPPRESS, which suppresses the printing of a report group; and TERMINATE, which terminates report processing.
For the above sales report example, the procedure division might look like this:

 OPEN INPUT sales, OUTPUT report-out
 INITIATE sales-report
 PERFORM UNTIL 1 <> 1
     READ sales
         AT END
             EXIT PERFORM
     END-READ
     VALIDATE sales-record
     IF valid-record
         GENERATE sales-on-day
     ELSE
         GENERATE invalid-sales
     END-IF
 END-PERFORM
 TERMINATE sales-report
 CLOSE sales, report-out
 .

Use of the Report Writer facility tended to vary considerably; some organizations used it extensively and some not at all. In addition, implementations of Report Writer ranged in quality, with those at the lower end sometimes using excessive amounts of memory at runtime.

Procedure division

Procedures
The sections and paragraphs in the procedure division (collectively called procedures) can be used as labels and as simple subroutines. Unlike in other divisions, paragraphs do not need to be in sections. Execution goes down through the procedures of a program until it is terminated. To use procedures as subroutines, the PERFORM verb is used. A PERFORM statement somewhat resembles a procedure call in a modern language in the sense that execution returns to the code following the PERFORM statement at the end of the called code; however, it does not provide any mechanism for parameter passing or for returning a result value. If a subroutine is invoked using a simple statement like PERFORM subroutine, then control returns at the end of the called procedure. However, PERFORM is unusual in that it may be used to call a range spanning a sequence of several adjacent procedures. This is done with the PERFORM sub-1 THRU sub-n construct:

 PROCEDURE so-and-so.
     PERFORM ALPHA
     PERFORM ALPHA THRU GAMMA
     STOP RUN.
 ALPHA.
     DISPLAY 'A'.
 BETA.
     DISPLAY 'B'.
 GAMMA.
     DISPLAY 'C'.

The output of this program will be: "A A B C".

PERFORM also differs from conventional procedure calls in that there is, at least traditionally, no notion of a call stack. As a consequence, nested invocations are possible (a sequence of code being PERFORMed may execute a PERFORM statement itself), but require extra care if parts of the same code are executed by both invocations. The problem arises when the code in the inner invocation reaches the exit point of the outer invocation. More formally, if control passes through the exit point of a PERFORM invocation that was called earlier but has not completed yet, the COBOL 2002 standard officially stipulates that the behavior is undefined.

The reason is that COBOL, rather than a "return address", operates with what may be called a continuation address. When control flow reaches the end of any procedure, the continuation address is looked up and control is transferred to that address. Before the program runs, the continuation address for every procedure is initialized to the start address of the procedure that comes next in the program text so that, if no PERFORM statements happen, control flows from top to bottom through the program. But when a PERFORM statement executes, it modifies the continuation address of the called procedure (or the last procedure of the called range, if PERFORM ... THRU was used), so that control will return to the call site at the end. The original value is saved and is restored afterwards, but there is only one storage position. If two nested invocations operate on overlapping code, they may interfere with each other's management of the continuation address in several ways. The following example illustrates the problem:

 LABEL1.
     DISPLAY '1'
     PERFORM LABEL2 THRU LABEL3
     STOP RUN.
 LABEL2.
     DISPLAY '2'
     PERFORM LABEL3 THRU LABEL4.
 LABEL3.
     DISPLAY '3'.
 LABEL4.
     DISPLAY '4'.
One might expect that the output of this program would be "1 2 3 4 3": After displaying "2", the second PERFORM causes "3" and "4" to be displayed, and then the first invocation continues on with "3". In traditional COBOL implementations, this is not the case. Rather, the first PERFORM statement sets the continuation address at the end of LABEL3 so that it will jump back to the call site inside LABEL1. The second PERFORM statement sets the return at the end of LABEL4 but does not modify the continuation address of LABEL3, expecting it to be the default continuation. Thus, when the inner invocation arrives at the end of LABEL3, it jumps back to the outer PERFORM statement, and the program stops having printed just "1 2 3". On the other hand, in some COBOL implementations like the open-source TinyCOBOL compiler, the two PERFORM statements do not interfere with each other and the output is indeed "1 2 3 4 3". Therefore, the behavior in such cases is not only (perhaps) surprising, it is also not portable. A special consequence of this limitation is that PERFORM cannot be used to write recursive code. Another simple example to illustrate this:

 MOVE 1 TO A
 PERFORM LABEL
 STOP RUN.
 LABEL.
     DISPLAY A
     IF A < 3
         ADD 1 TO A
         PERFORM LABEL
     END-IF
     DISPLAY 'END'.

One might expect that the output is "1 2 3 END END END", and in fact that is what some COBOL compilers will produce. But some compilers, like IBM COBOL, will produce code that prints "1 2 3 END END END END ..." and so on, printing "END" over and over in an endless loop. Since there is limited space to store backup continuation addresses, the backups get overwritten in the course of recursive invocations, and all that can be restored is the jump back to DISPLAY 'END'.

Statements
COBOL 2014 has 47 statements (also called verbs), which can be grouped into the following broad categories: control flow, I/O, data manipulation and the report writer. The report writer statements are covered in the report writer section.

Control flow
COBOL's conditional statements are IF and EVALUATE. EVALUATE is a switch-like statement with the added capability of evaluating multiple values and conditions. This can be used to implement decision tables. For example, the following might be used to control a CNC lathe:

 EVALUATE TRUE ALSO desired-speed ALSO current-speed
     WHEN lid-closed ALSO min-speed THRU max-speed ALSO LESS THAN desired-speed
         PERFORM speed-up-machine
     WHEN lid-closed ALSO min-speed THRU max-speed ALSO GREATER THAN desired-speed
         PERFORM slow-down-machine
     WHEN lid-open ALSO ANY ALSO NOT ZERO
         PERFORM emergency-stop
     WHEN OTHER
         CONTINUE
 END-EVALUATE

The PERFORM statement is used to define loops which are executed until a condition is true (not while true, which is more common in other languages); a short sketch of the common loop forms appears at the end of this subsection. It is also used to call procedures or ranges of procedures (see the procedures section for more details). CALL and INVOKE call subprograms and methods, respectively. The name of the subprogram/method is contained in a string which may be a literal or a data item. Parameters can be passed by reference, by content (where a copy is passed by reference) or by value (but only if a prototype is available). CANCEL unloads subprograms from memory. GO TO causes the program to jump to a specified procedure. The GOBACK statement is a return statement and the STOP statement stops the program. The EXIT statement has six different formats: it can be used as a return statement, a break statement, a continue statement, an end marker or to leave a procedure. Exceptions are raised by a RAISE statement and caught with a handler, or declarative, defined in the DECLARATIVES portion of the procedure division.
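The following is a minimal, self-contained sketch of the common PERFORM loop forms mentioned above (the program name and data items are invented for this example):

 IDENTIFICATION DIVISION.
 PROGRAM-ID. loop-demo.
 DATA DIVISION.
 WORKING-STORAGE SECTION.
 01 i PIC 9(2).
 01 n PIC 9(2) VALUE 5.
 PROCEDURE DIVISION.
     PERFORM VARYING i FROM 1 BY 1 UNTIL i > n  *> counted loop; condition tested before each iteration
         DISPLAY i
     END-PERFORM
     PERFORM 3 TIMES                            *> fixed number of repetitions
         DISPLAY "Hello"
     END-PERFORM
     STOP RUN.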
Declaratives are sections beginning with a USE statement which specify the errors to handle. Exceptions can be names or objects. RESUME is used in a declarative to jump to the statement after the one that raised the exception or to a procedure outside the declarative. Unlike other languages, uncaught exceptions may not terminate the program and the program can proceed unaffected.

I/O
File I/O is handled by the self-describing OPEN, CLOSE, READ, and WRITE statements along with a further three: REWRITE, which updates a record; START, which selects subsequent records to access by finding a record with a certain key; and UNLOCK, which releases a lock on the last record accessed. User interaction is done using ACCEPT and DISPLAY.

Data manipulation
The following verbs manipulate data:
 INITIALIZE, which sets data items to their default values.
 MOVE, which assigns values to data items; MOVE CORRESPONDING assigns corresponding like-named fields.
 SET, which has 15 formats: it can modify indices, assign object references and alter table capacities, among other functions.
 ADD, SUBTRACT, MULTIPLY, DIVIDE, and COMPUTE, which handle arithmetic (with COMPUTE assigning the result of a formula to a variable).
 ALLOCATE and FREE, which handle dynamic memory.
 VALIDATE, which validates and distributes data as specified in an item's description in the data division.
 STRING and UNSTRING, which concatenate and split strings, respectively.
 INSPECT, which tallies or replaces instances of specified substrings within a string.
 SEARCH, which searches a table for the first entry satisfying a condition.
Files and tables are sorted using SORT, and the MERGE verb merges and sorts files. The RELEASE verb provides records to sort and RETURN retrieves sorted records in order.

Scope termination
Some statements, such as IF and READ, may themselves contain statements. Such statements may be terminated in two ways: by a period (implicit termination), which terminates all unterminated statements contained, or by a scope terminator, which terminates the nearest matching open statement.

 *> Terminator period ("implicit termination")
 IF invalid-record
     IF no-more-records
         NEXT SENTENCE
     ELSE
         READ record-file
             AT END SET no-more-records TO TRUE.

 *> Scope terminators ("explicit termination")
 IF invalid-record
     IF no-more-records
         CONTINUE
     ELSE
         READ record-file
             AT END SET no-more-records TO TRUE
         END-READ
     END-IF
 END-IF

Nested statements terminated with a period are a common source of bugs. For example, examine the following code:

 IF x
     DISPLAY y.
 DISPLAY z.

Here, the intent is to display y and z if condition x is true. However, z will be displayed whatever the value of x because the IF statement is terminated by an erroneous period after DISPLAY y. Another bug is a result of the dangling else problem, when two IF statements can associate with an ELSE.

 IF x
     IF y
         DISPLAY a
 ELSE
     DISPLAY b.

In the above fragment, the ELSE associates with the IF y statement instead of the IF x statement, causing a bug. Prior to the introduction of explicit scope terminators, preventing it would require ELSE NEXT SENTENCE to be placed after the inner IF.

Self-modifying code
The original (1959) COBOL specification supported the infamous ALTER X TO PROCEED TO Y statement, for which many compilers generated self-modifying code. X and Y are procedure labels, and the single GO TO statement in procedure X executed after such an ALTER statement means GO TO Y instead. Many compilers still support it, but it was deemed obsolete in the COBOL 1985 standard and deleted in 2002. The ALTER statement was poorly regarded because it undermined "locality of context" and made a program's overall logic difficult to comprehend. As textbook author Daniel D.
McCracken wrote in 1976, when "someone who has never seen the program before must become familiar with it as quickly as possible, sometimes under critical time pressure because the program has failed ... the sight of a GO TO statement in a paragraph by itself, signaling as it does the existence of an unknown number of ALTER statements at unknown locations throughout the program, strikes fear in the heart of the bravest programmer."

Hello, world
A "Hello, world" program in COBOL:

 IDENTIFICATION DIVISION.
 PROGRAM-ID. hello-world.
 PROCEDURE DIVISION.
     DISPLAY "Hello, world!"
     .

When the – now famous – "Hello, World!" program example in The C Programming Language was first published in 1978, a similar mainframe COBOL program sample would have been submitted through JCL, very likely using a punch card reader and 80-column punch cards. The listing below, with an empty DATA DIVISION, was tested using Linux and the System/370 Hercules emulator running MVS 3.8J. The JCL, written in July 2015, is derived from the Hercules tutorials and samples hosted by Jay Moseley. In keeping with COBOL programming of that era, HELLO, WORLD is displayed in all capital letters.

 //COBUCLG  JOB (001),'COBOL BASE TEST',              00010000
 //             CLASS=A,MSGCLASS=A,MSGLEVEL=(1,1)     00020000
 //BASETEST EXEC COBUCLG                              00030000
 //COB.SYSIN DD *                                     00040000
 00000* VALIDATION OF BASE COBOL INSTALL              00050000
 01000 IDENTIFICATION DIVISION.                       00060000
 01100 PROGRAM-ID. 'HELLO'.                           00070000
 02000 ENVIRONMENT DIVISION.                          00080000
 02100 CONFIGURATION SECTION.                         00090000
 02110 SOURCE-COMPUTER. GNULINUX.                     00100000
 02120 OBJECT-COMPUTER. HERCULES.                     00110000
 02200 SPECIAL-NAMES.                                 00120000
 02210     CONSOLE IS CONSL.                          00130000
 03000 DATA DIVISION.                                 00140000
 04000 PROCEDURE DIVISION.                            00150000
 04100 00-MAIN.                                       00160000
 04110     DISPLAY 'HELLO, WORLD' UPON CONSL.         00170000
 04900     STOP RUN.                                  00180000
 //LKED.SYSLIB DD DSNAME=SYS1.COBLIB,DISP=SHR         00190000
 //            DD DSNAME=SYS1.LINKLIB,DISP=SHR        00200000
 //GO.SYSPRINT DD SYSOUT=A                            00210000
 //                                                   00220000

After submitting the JCL, the MVS console displayed:

 19.52.48 JOB    3  $HASP100 COBUCLG  ON READER1     COBOL BASE TEST
 19.52.48 JOB    3  IEF677I WARNING MESSAGE(S) FOR JOB COBUCLG  ISSUED
 19.52.48 JOB    3  $HASP373 COBUCLG  STARTED - INIT 1 - CLASS A - SYS BSP1
 19.52.48 JOB    3  IEC130I SYSPUNCH DD STATEMENT MISSING
 19.52.48 JOB    3  IEC130I SYSLIB   DD STATEMENT MISSING
 19.52.48 JOB    3  IEC130I SYSPUNCH DD STATEMENT MISSING
 19.52.48 JOB    3  IEFACTRT - Stepname  Procstep  Program   Retcode
 19.52.48 JOB    3  COBUCLG    BASETEST  COB       IKFCBL00  RC= 0000
 19.52.48 JOB    3  COBUCLG    BASETEST  LKED      IEWL      RC= 0000
 19.52.48 JOB    3  +HELLO, WORLD
 19.52.48 JOB    3  COBUCLG    BASETEST  GO        PGM=*.DD  RC= 0000
 19.52.48 JOB    3  $HASP395 COBUCLG  ENDED

Line 10 of the console listing above is highlighted for effect; the highlighting is not part of the actual console output. The associated compiler listing generated over four pages of technical detail and job run information, for the single line of output from the 14 lines of COBOL.

Reception

Lack of structure
In the 1970s, adoption of the structured programming paradigm was becoming increasingly widespread. Edsger Dijkstra, a preeminent computer scientist, wrote a letter to the editor of Communications of the ACM, published in 1975, entitled "How do we tell truths that might hurt?", in which he was critical of COBOL and several other contemporary languages, remarking that "the use of COBOL cripples the mind". In a published dissent to Dijkstra's remarks, the computer scientist Howard E.
Tompkins claimed that unstructured COBOL tended to be "written by programmers that have never had the benefit of structured COBOL taught well", arguing that the issue was primarily one of training.

One cause of spaghetti code was the GO TO statement. Attempts to remove GO TOs from COBOL code, however, resulted in convoluted programs and reduced code quality. GO TOs were largely replaced by the PERFORM statement and procedures, which promoted modular programming and gave easy access to powerful looping facilities. However, PERFORM could be used only with procedures, so loop bodies were not located where they were used, making programs harder to understand.

COBOL programs were infamous for being monolithic and lacking modularization. COBOL code could be modularized only through procedures, which were found to be inadequate for large systems. It was impossible to restrict access to data, meaning a procedure could access and modify any data item. Furthermore, there was no way to pass parameters to a procedure, an omission Jean Sammet regarded as the committee's biggest mistake. Another complication stemmed from the ability to PERFORM THRU a specified sequence of procedures. This meant that control could jump to and return from any procedure, creating convoluted control flow and permitting a programmer to break the single-entry single-exit rule.

This situation improved as COBOL adopted more features. COBOL-74 added subprograms, giving programmers the ability to control the data each part of the program could access. COBOL-85 then added nested subprograms, allowing programmers to hide subprograms. Further control over data and code came in 2002 when object-oriented programming, user-defined functions and user-defined data types were included.

Nevertheless, much important legacy COBOL software uses unstructured code, which has become unmaintainable. It can be too risky and costly to modify even a simple section of code, since it may be used from unknown places in unknown ways.

Compatibility issues
COBOL was intended to be a highly portable, "common" language. However, by 2001, around 300 dialects had been created. One source of dialects was the standard itself: the 1974 standard was composed of one mandatory nucleus and eleven functional modules, each containing two or three levels of support. This permitted 104,976 official variants.

COBOL-85 was not fully compatible with earlier versions, and its development was controversial. Joseph T. Brophy, the CIO of Travelers Insurance, spearheaded an effort to inform COBOL users of the heavy reprogramming costs of implementing the new standard. As a result, the ANSI COBOL Committee received more than 2,200 letters from the public, mostly negative, requiring the committee to make changes. On the other hand, conversion to COBOL-85 was thought to increase productivity in future years, thus justifying the conversion costs.

Verbose syntax
COBOL syntax has often been criticized for its verbosity. Proponents say that this was intended to make the code self-documenting, easing program maintenance. COBOL was also intended to be easy for programmers to learn and use, while still being readable to non-technical staff such as managers. The desire for readability led to the use of English-like syntax and structural elements, such as nouns, verbs, clauses, sentences, sections, and divisions, as illustrated in the sketch below. Yet by 1984, maintainers of COBOL programs were struggling to deal with "incomprehensible" code, and the main changes in COBOL-85 were there to help ease maintenance.
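As a brief, hedged illustration of this English-like style (the data names are invented for this sketch), a calculation that many languages would write as gross-pay = hours * rate can be expressed as:

 MULTIPLY hours-worked BY hourly-rate GIVING gross-pay.
 ADD overtime-pay TO gross-pay.
 SUBTRACT tax FROM gross-pay GIVING net-pay.

Each statement reads as an English sentence, which supporters argued made programs readable to non-programmers, at the cost of considerably more typing than the equivalent expressions in terser languages.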
Jean Sammet, a short-range committee member, noted that "little attempt was made to cater to the professional programmer, in fact people whose main interest is programming tend to be very unhappy with COBOL", which she attributed to COBOL's verbose syntax.

Isolation from the computer science community
The COBOL community has always been isolated from the computer science community. No academic computer scientists participated in the design of COBOL: all of those on the committee came from commerce or government. Computer scientists at the time were more interested in fields like numerical analysis, physics and system programming than the commercial file-processing problems which COBOL development tackled. Jean Sammet attributed COBOL's unpopularity to an initial "snob reaction" due to its inelegance, the lack of influential computer scientists participating in the design process and a disdain for business data processing. The COBOL specification used a unique "notation", or metalanguage, to define its syntax rather than the new Backus–Naur form, which the committee did not know of. This resulted in "severe" criticism.

Later, COBOL suffered from a shortage of material covering it; it took until 1963 for introductory books to appear (with Richard D. Irwin publishing a college textbook on COBOL in 1966). By 1985, there were twice as many books on FORTRAN and four times as many on BASIC as on COBOL in the Library of Congress. University professors taught more modern, state-of-the-art languages and techniques instead of COBOL, which was said to have a "trade school" nature. Donald Nelson, chair of the CODASYL COBOL committee, said in 1984 that "academics ... hate COBOL" and that computer science graduates "had 'hate COBOL' drilled into them". By the mid-1980s, there was also significant condescension towards COBOL in the business community from users of other languages, for example FORTRAN or assembler, implying that COBOL could be used only for non-challenging problems.

In 2003, COBOL featured in 80% of information systems curricula in the United States, the same proportion as C++ and Java. Ten years later, a poll by Micro Focus found that 20% of university academics thought COBOL was outdated or dead and that 55% believed their students thought COBOL was outdated or dead. The same poll also found that only 25% of academics had COBOL programming on their curriculum even though 60% thought they should teach it.

Concerns about the design process
Doubts have been raised about the competence of the standards committee. Short-term committee member Howard Bromberg said that there was "little control" over the development process and that it was "plagued by discontinuity of personnel and ... a lack of talent." Jean Sammet and Jerome Garfunkel also noted that changes introduced in one revision of the standard would be reverted in the next, due as much to changes in who was in the standard committee as to objective evidence.

COBOL standards have repeatedly suffered from delays: COBOL-85 arrived five years later than hoped, COBOL 2002 was five years late, and COBOL 2014 was six years late. To combat delays, the standard committee allowed the creation of optional addenda which would add features more quickly than by waiting for the next standard revision. However, some committee members raised concerns about incompatibilities between implementations and frequent modifications of the standard.

Influences on other languages
COBOL's data structures influenced subsequent programming languages.
Its record and file structure influenced PL/I and Pascal, and the REDEFINES clause was a predecessor to Pascal's variant records. Explicit file structure definitions preceded the development of database management systems, and aggregated data was a significant advance over Fortran's arrays. PICTURE data declarations were incorporated into PL/I, with minor changes. COBOL's COPY facility, although considered "primitive", influenced the development of include directives. The focus on portability and standardization meant programs written in COBOL could be portable and facilitated the spread of the language to a wide variety of hardware platforms and operating systems. Additionally, the well-defined division structure restricts the definition of external references to the Environment Division, which simplifies platform changes in particular.

See also
 Alphabetical list of programming languages
 BLIS/COBOL
 CODASYL
 Comparison of programming languages

Notes

References

Citations

Sources
 (Link goes to draft N 0147)

External links
 COBOL Language Standard (1991; COBOL-85 with Amendment 1), from The Open Group

.NET programming languages
1959 software
Class-based programming languages
Computer-related introductions in 1959
Cross-platform software
Object-oriented programming languages
Procedural programming languages
Programming languages created by women
Programming languages created in 1959
Programming languages with an ISO standard
Statically typed programming languages
Structured programming languages
https://en.wikipedia.org/wiki/CCD
CCD
CCD may refer to:

Science and technology
 Charge-coupled device, an electronic light sensor used in various devices including digital cameras
 .ccd, the filename extension for CloneCD's CD image file
 Carbonate compensation depth, a property of oceans
 Colony collapse disorder, a phenomenon involving the abrupt disappearance of honey bees in a beehive or Western honey bee colony
 centicandela (ccd), an SI unit of luminous intensity denoting one hundredth of a candela
 Central composite design, an experimental design in response surface methodology for building a second order model for a response variable without a complete three-level factorial
 Complementary cumulative distribution function
 Continuous collision detection, especially in rigid-body dynamics
 Countercurrent distribution, used for separating mixtures
 Core complex die, an element of AMD Zen 2 and later microprocessor architectures

Medicine
 Canine compulsive disorder, a behavioral condition in dogs, similar to human obsessive-compulsive disorder (OCD)
 Caput-collum-diaphyseal angle, the angle between the neck and the shaft of the femur in the hip
 Cleidocranial dysostosis (also called cleidocranial dysplasia), a genetic abnormality in humans
 Central core disease, a rare neuromuscular disorder
 Congenital chloride diarrhea, a rare disorder in babies
 Continuity of Care Document, an XML-based markup standard for patient medical document exchange
 Cross-reactive carbohydrate determinants, protein-linked carbohydrate structures that have a role in the phenomenon of cross-reactivity in allergic patients
 Cortical collecting duct, a segment of the kidney

Politics and government
 Census county division, a term used by the US Census Bureau
 Center City District, an economic development agency for the Center City area of Philadelphia
 Consular Consolidated Database, a database used for visa processing by the Bureau of Consular Affairs, US Department of State

Religion
 Confraternity of Christian Doctrine, a religious instruction program of the Catholic Church

Organizations
 Café Coffee Day, a chain of coffee shops in India
 Country Club of Detroit
 Centre of Cricket Development, a cricket team; see Namibia Cricket Board
 Cricket Club of Dhakuria; see Gopal Bose

Education
 Community College of Denver, US
 Cincinnati Country Day School, a non-parochial, private school in Indian Hill

Non-governmental
 Christian Care Foundation for Children with Disabilities, in Thailand
 Council for a Community of Democracies, in the US
 Canadian Coalition for Democracies, a former advocacy organization in Canada

Politics and government
 Centro Cristiano Democratico (Christian Democratic Centre), a defunct Italian political party

Other uses
 Convention Centre Dublin, Ireland
https://en.wikipedia.org/wiki/Church%E2%80%93Turing%20thesis
Church–Turing thesis
In computability theory, the Church–Turing thesis (also known as computability thesis, the Turing–Church thesis, the Church–Turing conjecture, Church's thesis, Church's conjecture, and Turing's thesis) is a thesis about the nature of computable functions. It states that a function on the natural numbers can be calculated by an effective method if and only if it is computable by a Turing machine. The thesis is named after the American mathematician Alonzo Church and the British mathematician Alan Turing. Before the precise definition of computable function, mathematicians often used the informal term effectively calculable to describe functions that are computable by paper-and-pencil methods. In the 1930s, several independent attempts were made to formalize the notion of computability:
 In 1933, Kurt Gödel, with Jacques Herbrand, formalized the definition of the class of general recursive functions: the smallest class of functions (with arbitrarily many arguments) that is closed under composition, recursion, and minimization, and includes zero, successor, and all projections.
 In 1936, Alonzo Church created a method for defining functions called the λ-calculus. Within λ-calculus, he defined an encoding of the natural numbers called the Church numerals. A function on the natural numbers is called λ-computable if the corresponding function on the Church numerals can be represented by a term of the λ-calculus.
 Also in 1936, before learning of Church's work, Alan Turing created a theoretical model for machines, now called Turing machines, that could carry out calculations from inputs by manipulating symbols on a tape. Given a suitable encoding of the natural numbers as sequences of symbols, a function on the natural numbers is called Turing computable if some Turing machine computes the corresponding function on encoded natural numbers.

Church, Kleene, and Turing proved that these three formally defined classes of computable functions coincide: a function is λ-computable if and only if it is Turing computable, and if and only if it is general recursive. This has led mathematicians and computer scientists to believe that the concept of computability is accurately characterized by these three equivalent processes. Other formal attempts to characterize computability have subsequently strengthened this belief (see below).

On the other hand, the Church–Turing thesis states that the above three formally-defined classes of computable functions coincide with the informal notion of an effectively calculable function. Although the thesis has near-universal acceptance, it cannot be formally proven, as the concept of effective calculability is only informally defined. Since its inception, variations on the original thesis have arisen, including statements about what can physically be realized by a computer in our universe (physical Church–Turing thesis) and what can be efficiently computed (Church–Turing thesis (complexity theory)). These variations are not due to Church or Turing, but arise from later work in complexity theory and digital physics. The thesis also has implications for the philosophy of mind (see below).

Statement in Church's and Turing's words
J. B. Rosser (1939) addresses the notion of "effective computability" as follows: "Clearly the existence of CC and RC (Church's and Rosser's proofs) presupposes a precise definition of 'effective'.
'Effective method' is here used in the rather special sense of a method each step of which is precisely predetermined and which is certain to produce the answer in a finite number of steps". Thus the adverb-adjective "effective" is used in a sense of "1a: producing a decided, decisive, or desired effect", and "capable of producing a result". In the following, the words "effectively calculable" will mean "produced by any intuitively 'effective' means whatsoever" and "effectively computable" will mean "produced by a Turing-machine or equivalent mechanical device". Turing's "definitions" given in a footnote in his 1938 Ph.D. thesis Systems of Logic Based on Ordinals, supervised by Church, are virtually the same: We shall use the expression "computable function" to mean a function calculable by a machine, and let "effectively calculable" refer to the intuitive idea without particular identification with any one of these definitions.

The thesis can be stated as: Every effectively calculable function is a computable function. Church also stated that "No computational procedure will be considered as an algorithm unless it can be represented as a Turing Machine". Turing stated it this way: It was stated ... that "a function is effectively calculable if its values can be found by some purely mechanical process". We may take this literally, understanding that by a purely mechanical process one which could be carried out by a machine. The development ... leads to ... an identification of computability with effective calculability. [† is the footnote quoted above.]

History
One of the important problems for logicians in the 1930s was the Entscheidungsproblem of David Hilbert and Wilhelm Ackermann, which asked whether there was a mechanical procedure for separating mathematical truths from mathematical falsehoods. This quest required that the notion of "algorithm" or "effective calculability" be pinned down, at least well enough for the quest to begin. But from the very outset Alonzo Church's attempts began with a debate that continues to this day. Was the notion of "effective calculability" to be (i) an "axiom or axioms" in an axiomatic system, (ii) merely a definition that "identified" two or more propositions, (iii) an empirical hypothesis to be verified by observation of natural events, or (iv) just a proposal for the sake of argument (i.e. a "thesis")?

Circa 1930–1952
In the course of studying the problem, Church and his student Stephen Kleene introduced the notion of λ-definable functions, and they were able to prove that several large classes of functions frequently encountered in number theory were λ-definable. The debate began when Church proposed to Gödel that one should define the "effectively computable" functions as the λ-definable functions. Gödel, however, was not convinced and called the proposal "thoroughly unsatisfactory". Rather, in correspondence with Church (c. 1934–1935), Gödel proposed axiomatizing the notion of "effective calculability"; indeed, in a 1935 letter to Kleene, Church reported as much. But Gödel offered no further guidance. Eventually, he would suggest his recursion, modified by Herbrand's suggestion, that Gödel had detailed in his 1934 lectures in Princeton, NJ (Kleene and Rosser transcribed the notes). But he did not think that the two ideas could be satisfactorily identified "except heuristically".

Next, it was necessary to identify and prove the equivalence of two notions of effective calculability.
Equipped with the λ-calculus and "general" recursion, Kleene, with help of Church and J. Barkley Rosser, produced proofs (1933, 1935) to show that the two calculi are equivalent. Church subsequently modified his methods to include use of Herbrand–Gödel recursion and then proved (1936) that the Entscheidungsproblem is unsolvable: there is no algorithm that can determine whether a well formed formula has a beta normal form.

Many years later in a letter to Davis (c. 1965), Gödel said that "he was, at the time of these [1934] lectures, not at all convinced that his concept of recursion comprised all possible recursions". By 1963–1964 Gödel would disavow Herbrand–Gödel recursion and the λ-calculus in favor of the Turing machine as the definition of "algorithm" or "mechanical procedure" or "formal system".

A hypothesis leading to a natural law?: In late 1936 Alan Turing's paper (also proving that the Entscheidungsproblem is unsolvable) was delivered orally, but had not yet appeared in print. On the other hand, Emil Post's 1936 paper had appeared and was certified independent of Turing's work. Post strongly disagreed with Church's "identification" of effective computability with the λ-calculus and recursion. Rather, he regarded the notion of "effective calculability" as merely a "working hypothesis" that might lead by inductive reasoning to a "natural law" rather than by "a definition or an axiom". This idea was "sharply" criticized by Church. Thus Post in his 1936 paper was also discounting Kurt Gödel's suggestion to Church in 1934–1935 that the thesis might be expressed as an axiom or set of axioms.

Turing adds another definition, Rosser equates all three: Within just a short time, Turing's 1936–1937 paper "On Computable Numbers, with an Application to the Entscheidungsproblem" appeared. In it he stated another notion of "effective computability" with the introduction of his a-machines (now known as the Turing machine abstract computational model). And in a proof-sketch added as an "Appendix" to his 1936–1937 paper, Turing showed that the classes of functions defined by λ-calculus and Turing machines coincided. Church was quick to recognise how compelling Turing's analysis was. In his review of Turing's paper he made clear that Turing's notion made "the identification with effectiveness in the ordinary (not explicitly defined) sense evident immediately". In a few years (1939) Turing would propose, like Church and Kleene before him, that his formal definition of mechanical computing agent was the correct one. Thus, by 1939, both Church (1934) and Turing (1939) had individually proposed that their "formal systems" should be definitions of "effective calculability"; neither framed their statements as theses. Rosser (1939) formally identified the three notions-as-definitions.

Kleene proposes Thesis I: This left the overt expression of a "thesis" to Kleene. In 1943 Kleene proposed his "Thesis I".

The Church–Turing Thesis: Stephen Kleene, in Introduction To Metamathematics, finally goes on to formally name "Church's Thesis" and "Turing's Thesis", using his theory of recursive realizability. Kleene had switched from presenting his work in the terminology of Church–Kleene lambda definability to that of Gödel–Kleene recursiveness (partial recursive functions). In this transition, Kleene modified Gödel's general recursive functions to allow for proofs of the unsolvability of problems in the Intuitionism of E. J. Brouwer.
In his graduate textbook on logic, "Church's thesis" is introduced and basic mathematical results are demonstrated to be unrealizable. Next, Kleene proceeds to present "Turing's thesis", where results are shown to be uncomputable, using his simplified derivation of a Turing machine based on the work of Emil Post. Both theses are proven equivalent by use of "Theorem XXX". Kleene, finally, uses for the first time the term "Church–Turing thesis" in a section in which he helps to give clarifications to concepts in Alan Turing's paper "The Word Problem in Semi-Groups with Cancellation", as demanded in a critique from William Boone.

Later developments
An attempt to understand the notion of "effective computability" better led Robin Gandy (Turing's student and friend) in 1980 to analyze machine computation (as opposed to human computation acted out by a Turing machine). Gandy's curiosity about, and analysis of, cellular automata (including Conway's game of life), parallelism, and crystalline automata led him to propose four "principles (or constraints) ... which it is argued, any machine must satisfy". His most important fourth, "the principle of causality", is based on the "finite velocity of propagation of effects and signals; contemporary physics rejects the possibility of instantaneous action at a distance". From these principles and some additional constraints—(1a) a lower bound on the linear dimensions of any of the parts, (1b) an upper bound on speed of propagation (the velocity of light), (2) discrete progress of the machine, and (3) deterministic behavior—he produces a theorem that "What can be calculated by a device satisfying principles I–IV is computable."

In the late 1990s Wilfried Sieg analyzed Turing's and Gandy's notions of "effective calculability" with the intent of "sharpening the informal notion, formulating its general features axiomatically, and investigating the axiomatic framework". In his 1997 and 2002 work Sieg presents a series of constraints on the behavior of a computor—"a human computing agent who proceeds mechanically". These constraints reduce to:
 "(B.1) (Boundedness) There is a fixed bound on the number of symbolic configurations a computor can immediately recognize.
 "(B.2) (Boundedness) There is a fixed bound on the number of internal states a computor can be in.
 "(L.1) (Locality) A computor can change only elements of an observed symbolic configuration.
 "(L.2) (Locality) A computor can shift attention from one symbolic configuration to another one, but the new observed configurations must be within a bounded distance of the immediately previously observed configuration.
 "(D) (Determinacy) The immediately recognizable (sub-)configuration determines uniquely the next computation step (and id [instantaneous description])"; stated another way: "A computor's internal state together with the observed configuration fixes uniquely the next computation step and the next internal state."

The matter remains in active discussion within the academic community.

The thesis as a definition
The thesis can be viewed as nothing but an ordinary mathematical definition. Comments by Gödel on the subject suggest this view, e.g. "the correct definition of mechanical computability was established beyond any doubt by Turing". The case for viewing the thesis as nothing more than a definition is made explicitly by Robert I. Soare, where it is also argued that Turing's definition of computability is no less likely to be correct than the epsilon-delta definition of a continuous function.
Success of the thesis
Other formalisms (besides recursion, the λ-calculus, and the Turing machine) have been proposed for describing effective calculability/computability. Kleene (1952) adds to the list the functions "reckonable in the system S1" of Kurt Gödel 1936, and Emil Post's (1943, 1946) "canonical [also called normal] systems". In the 1950s Hao Wang and Martin Davis greatly simplified the one-tape Turing-machine model (see Post–Turing machine). Marvin Minsky expanded the model to two or more tapes and greatly simplified the tapes into "up-down counters", which Melzak and Lambek further evolved into what is now known as the counter machine model. In the late 1960s and early 1970s researchers expanded the counter machine model into the register machine, a close cousin to the modern notion of the computer. Other models include combinatory logic and Markov algorithms. Gurevich adds the pointer machine model of Kolmogorov and Uspensky (1953, 1958): "... they just wanted to ... convince themselves that there is no way to extend the notion of computable function." All these contributions involve proofs that the models are computationally equivalent to the Turing machine; such models are said to be Turing complete. Because all these different attempts at formalizing the concept of "effective calculability/computability" have yielded equivalent results, it is now generally assumed that the Church–Turing thesis is correct. In fact, Gödel (1936) proposed something stronger than this; he observed that there was something "absolute" about the concept of "reckonable in S1".

Informal usage in proofs
Proofs in computability theory often invoke the Church–Turing thesis in an informal way to establish the computability of functions while avoiding the (often very long) details which would be involved in a rigorous, formal proof. To establish that a function is computable by Turing machine, it is usually considered sufficient to give an informal English description of how the function can be effectively computed, and then conclude "by the Church–Turing thesis" that the function is Turing computable (equivalently, partial recursive). Dirk van Dalen gives an example for the sake of illustrating this informal use of the Church–Turing thesis. To make such an example completely rigorous, one would have to carefully construct a Turing machine, or λ-function, or carefully invoke recursion axioms, or at best, cleverly invoke various theorems of computability theory. But because the computability theorist believes that Turing computability correctly captures what can be computed effectively, and because an effective procedure is spelled out in English for deciding the set B, the computability theorist accepts this as proof that the set is indeed recursive.

Variations
The success of the Church–Turing thesis prompted variations of the thesis to be proposed. For example, the physical Church–Turing thesis states: "All physically computable functions are Turing-computable."

The Church–Turing thesis says nothing about the efficiency with which one model of computation can simulate another. It has been proved for instance that a (multi-tape) universal Turing machine only suffers a logarithmic slowdown factor in simulating any Turing machine. A variation of the Church–Turing thesis addresses whether an arbitrary but "reasonable" model of computation can be efficiently simulated.
This is called the feasibility thesis, also known as the (classical) complexity-theoretic Church–Turing thesis or the extended Church–Turing thesis, which is not due to Church or Turing, but rather was realized gradually in the development of complexity theory. It states: "A probabilistic Turing machine can efficiently simulate any realistic model of computation." The word 'efficiently' here means up to polynomial-time reductions. This thesis was originally called the computational complexity-theoretic Church–Turing thesis by Ethan Bernstein and Umesh Vazirani (1997). The complexity-theoretic Church–Turing thesis, then, posits that all 'reasonable' models of computation yield the same class of problems that can be computed in polynomial time. Assuming the conjecture that probabilistic polynomial time (BPP) equals deterministic polynomial time (P), the word 'probabilistic' is optional in the complexity-theoretic Church–Turing thesis. A similar thesis, called the invariance thesis, was introduced by Cees F. Slot and Peter van Emde Boas. It states: "'Reasonable' machines can simulate each other within a polynomially bounded overhead in time and a constant-factor overhead in space." The thesis originally appeared in a paper at STOC'84, which was the first paper to show that polynomial-time overhead and constant-space overhead could be simultaneously achieved for a simulation of a Random Access Machine on a Turing machine.

If BQP is shown to be a strict superset of BPP, it would invalidate the complexity-theoretic Church–Turing thesis. In other words, there would be efficient quantum algorithms that perform tasks that do not have efficient probabilistic algorithms. This would not however invalidate the original Church–Turing thesis, since a quantum computer can always be simulated by a Turing machine, but it would invalidate the classical complexity-theoretic Church–Turing thesis for efficiency reasons. Consequently, the quantum complexity-theoretic Church–Turing thesis states: "A quantum Turing machine can efficiently simulate any realistic model of computation."

Eugene Eberbach and Peter Wegner claim that the Church–Turing thesis is sometimes interpreted too broadly, stating "Though [...] Turing machines express the behavior of algorithms, the broader assertion that algorithms precisely capture what can be computed is invalid". They claim that forms of computation not captured by the thesis are relevant today, terms which they call super-Turing computation.

Philosophical implications
Philosophers have interpreted the Church–Turing thesis as having implications for the philosophy of mind. B. Jack Copeland states that it is an open empirical question whether there are actual deterministic physical processes that, in the long run, elude simulation by a Turing machine; furthermore, he states that it is an open empirical question whether any such processes are involved in the working of the human brain. There are also some important open questions which cover the relationship between the Church–Turing thesis and physics, and the possibility of hypercomputation. When applied to physics, the thesis has several possible meanings:
 The universe is equivalent to a Turing machine; thus, computing non-recursive functions is physically impossible. This has been termed the strong Church–Turing thesis, or Church–Turing–Deutsch principle, and is a foundation of digital physics.
 The universe is not equivalent to a Turing machine (i.e., the laws of physics are not Turing-computable), but incomputable physical events are not "harnessable" for the construction of a hypercomputer. For example, a universe in which physics involves random real numbers, as opposed to computable reals, would fall into this category.
 The universe is a hypercomputer, and it is possible to build physical devices to harness this property and calculate non-recursive functions. For example, it is an open question whether all quantum mechanical events are Turing-computable, although it is known that rigorous models such as quantum Turing machines are equivalent to deterministic Turing machines. (They are not necessarily efficiently equivalent; see above.) John Lucas and Roger Penrose have suggested that the human mind might be the result of some kind of quantum-mechanically enhanced, "non-algorithmic" computation.

There are many other technical possibilities which fall outside or between these three categories, but these serve to illustrate the range of the concept. Philosophical aspects of the thesis, regarding both physical and biological computers, are also discussed in Odifreddi's 1989 textbook on recursion theory.

Non-computable functions
One can formally define functions that are not computable. A well-known example of such a function is the Busy Beaver function. This function takes an input n and returns the largest number of symbols that a Turing machine with n states can print before halting, when run with no input. Finding an upper bound on the busy beaver function is equivalent to solving the halting problem, a problem known to be unsolvable by Turing machines. Since the busy beaver function cannot be computed by Turing machines, the Church–Turing thesis states that this function cannot be effectively computed by any method.

Several computational models allow for the computation of (Church–Turing) non-computable functions. These are known as hypercomputers. Mark Burgin argues that super-recursive algorithms such as inductive Turing machines disprove the Church–Turing thesis. His argument relies on a definition of algorithm broader than the ordinary one, so that non-computable functions obtained from some inductive Turing machines are called computable. This interpretation of the Church–Turing thesis differs from the interpretation commonly accepted in computability theory, discussed above. The argument that super-recursive algorithms are indeed algorithms in the sense of the Church–Turing thesis has not found broad acceptance within the computability research community.

See also
 Abstract machine
 Church's thesis in constructive mathematics
 Church–Turing–Deutsch principle, which states that every physical process can be simulated by a universal computing device
 Computability logic
 Computability theory
 Decidability
 Hypercomputation
 Model of computation
 Oracle (computer science)
 Super-recursive algorithm
 Turing completeness

Footnotes

References
 Includes original papers by Gödel, Church, Turing, Rosser, Kleene, and Post mentioned in this section.
 Reprinted in The Undecidable, p. 255ff. Kleene refined his definition of "general recursion" and proceeded in his chapter "12. Algorithmic theories" to posit "Thesis I" (p. 274); he would later repeat this thesis and name it "Church's Thesis" (i.e., the Church thesis).

External links
 A comprehensive philosophical treatment of relevant issues.
 A special issue (Vol. 28, No.
4, 1987) of the Notre Dame Journal of Formal Logic was devoted to the Church–Turing thesis.

Computability theory
Alan Turing
Theory of computation
Philosophy of computer science
https://en.wikipedia.org/wiki/Chomsky%20%28surname%29
Chomsky (surname)
Chomsky ("from Vyoska, nearby Brest, now Belarus") is a surname of Slavic origin. Notable people with the surname include:
 Alejandro Chomski (born 1968), Argentine film director and screenwriter
 Aviva Chomsky (born 1957), American historian
 Carol (Schatz) Chomsky (1930–2008), American linguist and wife of Noam Chomsky
 Judith Chomsky (born 1942), American human rights lawyer and co-founder of the Juvenile Law Center
 Marvin J. Chomsky (1929–2022), American television and film director
 Noam Chomsky (born 1928), American linguist and political activist, professor emeritus at MIT
 (born 1957), Polish speedway rider and coach
 William Chomsky (1896–1977), American scholar of Hebrew
 (1925–2016), Soviet and Russian theater director

See also
 Gryf coat of arms
 Odrowąż coat of arms

Slavic-language surnames
Polish-language surnames
Surnames of Polish origin
Polish toponymic surnames
Khomskiy
Ashkenazi surnames
Jewish families
Belarusian Jews
https://en.wikipedia.org/wiki/Concordat%20of%20Worms
Concordat of Worms
The Concordat of Worms was an agreement between the Catholic Church and the Holy Roman Empire which regulated the procedure for the appointment of bishops and abbots in the Empire. Signed on 23 September 1122 in the German city of Worms by Pope Callixtus II and Emperor Henry V, the agreement brought an end to the Investiture Controversy, a conflict between state and church over the right to appoint religious office holders that had begun in the middle of the 11th century. By signing the concordat, Henry renounced his right to invest bishops and abbots with ring and crosier, and opened ecclesiastical appointments in his realm to canonical elections. Callixtus, in turn, agreed to the presence of the emperor or his officials at the elections and granted the emperor the right to intervene in the case of disputed outcomes. The emperor was also allowed to perform a separate ceremony in which he would invest bishops and abbots with a sceptre, representing the imperial lands associated with their episcopal see.

Background
During the middle of the 11th century, a reformist movement within the Christian Church sought to reassert the rights of the Holy See at the expense of the European monarchs. Having been elected in 1073, the reformist Pope Gregory VII proclaimed several edicts aimed at strengthening the authority of the papacy, some of which were formulated in the Dictatus papae of 1075. Gregory's edicts postulated that secular rulers were answerable to the pope and forbade them to make appointments to clerical offices (a process known as investiture). The pope's doctrines were vehemently rejected by Henry IV, the Holy Roman Emperor, who habitually invested the bishops and abbots of his realm. The ensuing conflict between the Empire and the papacy is known as the Investiture Controversy. The dispute continued after the death of Gregory VII in 1084 and the abdication of Henry IV in 1105. Even though Henry's son and successor, Henry V, looked towards reconciliation with the reformist movement, no lasting compromise was achieved in the first 16 years of his reign. In 1111, Henry V brokered an agreement with Pope Paschal II at Sutri, whereby he would abstain from investing clergy in his realm in exchange for the restoration of church property that had originally belonged to the Empire. The Sutri agreement, Henry hoped, would convince Paschal to assent to Henry's official coronation as emperor. The agreement failed to be implemented, leading Henry to imprison the pope. After two months of captivity, Paschal vowed to grant the coronation and to accept the emperor's role in investiture ceremonies. He also agreed never to excommunicate Henry. Given that these concessions had been won by force, ecclesiastical opposition to the Empire continued. The following year, Paschal reneged on his promises.

Mouzon summit
In January 1118, Pope Paschal died. He was succeeded by Gelasius II, who died in January 1119. His successor, the Burgundian Callixtus II, resumed negotiations with the Emperor with the aim of settling the dispute between the church and the Empire. In the autumn of 1119, two papal emissaries, William of Champeaux and Pons of Cluny, met Henry at Strasbourg, where the emperor agreed in principle to abandon the secular investiture ceremony that involved giving new bishops and abbots a ring and a crosier. The two parties scheduled a final summit between Henry and Callixtus at Mouzon, but the meeting ended abruptly after the emperor refused to accept a short-notice change in Callixtus's demands.
The church leaders, who were deliberating their position at a council in Reims, reacted by excommunicating Henry. However, they did not endorse the pope's insistence upon the complete abandonment of secular investiture. The negotiations ended in failure. Historians disagree as to whether Calixtus actually wanted peace or fundamentally mistrusted Henry. Due to his uncompromising position in 1111, Calixtus has been termed an "ultra", and his election to the papacy may indicate that the College of Cardinals saw no reason to show weakness to the emperor. This optimism about victory was founded on the very visible and very vocal opposition to Henry from within his own nobility, and the cardinals may have seen the emperor's internal weaknesses as an opportunity for outright victory. Further negotiations After the failure of the Mouzon negotiations, and with the chances of Henry's unconditional surrender receding over the horizon, the majority of the clergy became willing to compromise in order to settle the dispute. The polemical writings and pronouncements that had figured so prominently during the Investiture Dispute had died down by this point. Historian Gerd Tellenbach argues that, despite appearances, these years were "no longer marked by an atmosphere of bitter conflict". This was in part the result of the papacy's realization that it could not win two different disputes on two separate fronts, as it had been trying to do. Calixtus had been personally involved in negotiations with the Emperor over the previous decade, and his intimate knowledge of the delicate situation made him the perfect candidate for the attempt. The difference between 1119 and 1122, argues the historian Mary Stroll, was not Henry, who had been willing to make concessions in 1119, but Calixtus, who had then been intransigent but who was now intent upon reaching an agreement. The same sentiment prevailed in much of the German nobility. In 1121, pressured by a faction of nobles from the Lower Rhine and Duchy of Saxony under the leadership of the archbishop Adalbert of Mainz, Henry agreed to make peace with the pope. In response, in February 1122, Calixtus wrote to Henry in a conciliatory tone via the Bishop of Acqui. His letter has been described as "a carefully crafted overture". Calixtus drew attention to their blood relationship, suggesting that while their shared ancestry compelled them to love each other as brothers, it was fundamental that the German kings draw their authority from God, but via his servants, not directly. However, Calixtus also emphasised for the first time that he blamed not Henry personally for the dispute but his bad advisors, who had dictated unsound policy to him. In a major shift in policy since the Council of Reims of 1119, the pope stated that the church gifts what it possesses to all its children, without making claims upon them. This was intended to reassure Henry that, in the event of peace between them, his position and Empire were secure. Shifting from the practical to the spiritual, Calixtus asked Henry to bear in mind that he was a king, but, like all men, limited in his earthly capability; he had armies and kings below him, but the church had Christ and the Apostles. Continuing his theme, he referred indirectly to his own two excommunications of Henry and begged him to allow the conditions for peace to be created, as a result of which the glory of the church and of God would be increased, and with them, concomitantly, the Emperor's.
Conversely, he made sure to include a threat: if Henry did not change his ways, Calixtus threatened to place "the protection of the church in the hands of wise men". Stroll argues that, in taking this approach, Calixtus was taking advantage of the fact that, while he himself "was hardly in a position to sabre rattle" due to his military defeat in the south and his difficulty with his own Cardinals, Henry was also under pressure in Germany in both the military and spiritual spheres. The Emperor replied through the Bishop of Speyer and the Abbot of Fulda, who travelled to Rome and collected the pope's emissaries under the Cardinal Bishop of Ostia. Speyer was a representative of Henry's political opponents in Germany, whereas Fulda was a negotiator rather than politically partisan. Complicating matters was a disputed election in February 1122 to the bishopric of Würzburg, of the kind that was at the heart of the Investiture Dispute. Although this almost led to an outbreak of civil war, a truce was arranged in August, allowing the parties to return to the papal negotiations. In the summer of 1122, a synod was convened in Mainz, at which imperial emissaries concluded the terms of their agreement with representatives of the church. In a sign that the Pope intended the impending negotiations to be successful, a Lateran council was announced for the following year. Worms The Emperor received the papal legates in Worms with due ceremony, where he awaited the outcome of the negotiations, which appear actually to have taken place in nearby Mainz, hostile territory to Henry; as such, he had to communicate by messenger to keep up with events. Abbot Ekkehard of Aura chronicles that discussions took over a week to conclude. On 8 September, Henry met the papal legates, and their final agreements were codified for publication. Although a possible compromise solution had already been received from England, this does not seem ever to have been considered in depth, probably because it contained an oath of homage between Emperor and Pope, which had been a historical sticking point in earlier negotiations. The papal delegation was led by Cardinal Bishop Lamberto Scannabecchi of Ostia, the future Pope Honorius II. Both sides studied previous negotiations between them, including those from 1111, which were considered to have created precedent. On 23 September 1122, papal and imperial delegates signed a series of documents outside the walls of Worms, as there was insufficient room in the city for the number of attendees and onlookers. Adalbert, Archbishop of Mainz, wrote to Calixtus of how complex the negotiations had been, given that, as he said, Henry regarded the powers he was being asked to renounce as hereditary in the Imperial throne. It is probable that what was eventually promulgated was the result of almost every word being carefully considered. The main difference between what was agreed at Worms and previous negotiations was the concessions made by the pope. Concordat The agreements reached at Worms were in the nature of both concessions and assurances to the other party. Henry, on oath before God, the apostles, and the church, renounced his right to invest bishops and abbots with ring and crosier, and opened ecclesiastical appointments in his realm, regno vel imperio, to canonical elections. He also recognised the traditional extent and boundaries of the papal patrimony as a legal entity, rather than one malleable to the emperor.
Henry promised to return to the church those lands rightfully belonging to it that had been seized by himself or his father; furthermore, he undertook to assist the pope in regaining those that had been taken by others, and "he will do the same thing for all other churches and princes, both ecclesiastical and lay". If the pope requested Imperial assistance, he would receive it, and if the church came to the empire for justice, it would be treated fairly. He also swore to abstain from "all investiture by ring and staff", marking the end of an ancient imperial tradition. Callixtus made similar reciprocal promises regarding the empire in Italy. He agreed to the presence of the emperor or his officials at the elections and granted the emperor the right, on episcopal advice, to adjudicate in the case of disputed outcomes—as long as the elections had been held peacefully and without simony—which had officially been the case ever since the precedent set by the London Accord of 1107. This right to judge was constrained by an assurance that he would support the majority vote among electors, and further that he would take the advice of his other bishops before doing so. The emperor was also allowed to perform a separate ceremony in which he would invest bishops and abbots with their regalia, a sceptre representing the imperial lands associated with their episcopal see. This clause also contained a "cryptic" condition that, once the elect had been so endowed, the new bishop "should do what he ought to do according to imperial rights". In the German imperial lands this was to take place prior to the bishop-elect's consecration; elsewhere in the empire—Burgundy and Italy, exempting the Papal States—within six months of the ceremony. This differentiation between the German portion of the Empire and the rest was of particular importance to Calixtus, as the papacy had traditionally felt more threatened by imperial power in the Italian peninsula than by the broader Empire. Finally, the pope granted "true peace" to the emperor and all those who had supported him. Calixtus had effectively overturned wholesale the strategy he had pursued during the Mouzon negotiations; episcopal investitures in Germany were to take place with very little substantive change in ceremony, and temporal involvement remained, with investiture merely replaced by homage, although the word itself—hominium—was studiously avoided. Adalbert, from whom Calixtus first received news of the final concordat, emphasized that it still had to be approved in Rome; this suggests, argues Stroll, that the Archbishop—and probably the papal legation as a whole—were against making concessions to the emperor, and probably wanted Calixtus to disown the agreement. Adalbert believed the agreement would make it easier for the Emperor to legalise the intimidation of episcopal electors, writing that "through the opportunity of [the emperor's] presence, the Church of God must undergo the same slavery as before, or an even more oppressive one". However, argues Stroll, the concessions Calixtus made were an "excellent bargain" in return for eradicating the danger on the papacy's northern border, allowing him to focus, without threat or distraction, on the Normans to the south. The papacy had achieved its peace, argues Norman Cantor, by allowing local national custom and practice to determine future relations between crown and pope; in most cases, he notes, this "favored the continuance of royal control over the church".
The concordat was published as two distinct charters, each laying out the concessions one party was making to the other. They are known respectively as the Papal (or Calixtinum) and the Imperial (Henricianum) charters. Calixtus's is addressed to the emperor—in quite personal terms—while Henry's is made out to God. The Bishop of Ostia gave the emperor the kiss of peace on behalf of the pope and said Mass. By these rites Henry was returned to the church; the negotiators were lauded for succeeding in their delicate mission, and the concordat was called "peace at the will of the pope". Neither charter was signed; both contained probably intentional ambiguities and unanswered questions—such as the position of the papacy's churches that lay outside both the patrimony and Germany—which were subsequently addressed on a case-by-case basis. Indeed, Robert Benson has suggested that the brevity of the charters was deliberate and that the agreement as a whole is as important for what it omits as for what it includes. The term regalia, for example, was not only undefined but literally meant two different things to each party: in the Henricianum it referred to the feudal duty owed to a monarch; in the Calixtinum, to the episcopal temporalities. Broader questions, such as the nature of the relationship between church and Empire, were also not addressed, although some ambiguity was removed by an 1133 Papal privilege. The Concordat was widely, and deliberately, publicised around Europe. Calixtus was not in Rome when the concordat was delivered. He had left the city by late August and was not to return until mid- to late October, making a progress to Anagni and taking the bishopric of Anagni and Casamari Abbey under his protection. Preservation The concordat was ratified at the First Council of the Lateran, and the original Henricianum charter is preserved at the Vatican Apostolic Archive; the Calixtinum has not survived except in subsequent copies. A copy of the former is also held in the Codex Udalrici, but this is an abridged version for political circulation, as it reduces the number of imperial concessions made. Indicating the extent to which he saw the agreement as a papal victory, Calixtus had a copy of the Henricianum painted on a Lateran Palace chamber wall; the painting nominally portrayed the concordat as a victory for the papacy and ignored the numerous concessions made to the emperor. This was part of what Hartmut Hoffmann has called "a conspiracy of silence" regarding papal concessions. Still, while the Pope is pictured enthroned, with Henry only standing, the suggestion remains that they were jointly wielding their respective authority to come to this agreement. An English copy of the Calixtinum made by William of Malmesbury is reasonably accurate but omits the clause mentioning the use of a sceptre in the granting of the regalia. William, having condemned Henry's "Teuton fury", proceeds to praise him, comparing him favourably to Charlemagne for his devotion to God and the peace of Christendom. Aftermath The first invocation of the concordat was not in the empire, as it turned out, but by Henry I of England the following year. Following a long-running dispute between Canterbury and York which ended up in the papal court, Joseph Huffman argues that it would have been controversial for the Pope "to justify one set of concessions in Germany and another in England". The concordat ended once and for all the "Imperial church system of the Ottonians and Salians".
The First Lateran Council was convoked to confirm the Concordat of Worms. The council was broadly representative, with nearly 300 bishops and 600 abbots from every part of Catholic Europe present; it convened on 18 March 1123. One of its primary concerns was to emphasise the independence of diocesan clergy, and to do so it forbade monks to leave their monasteries to provide pastoral care, which would in future be the sole preserve of the diocese. In ratifying the Concordat, the Council confirmed that in future bishops would be elected by their clergy, although, also per the Concordat, the Emperor could refuse the homage of German bishops. Decrees were passed directed against simony, concubinage among the clergy, church robbers, and forgers of Church documents; the council also reaffirmed indulgences for Crusaders. These, argues C. Colt Anderson, "established important precedents in canon law restricting the influence of the laity and the monks". While this led to a busy period of reform, it was important for those advocating reform not to allow themselves to be confused with the myriad heretical sects and schismatics who were making similar criticisms. The Concordat was the last major achievement for Emperor Henry, as he died in 1125; an attempted invasion of France had come to nothing in 1124 in the face of "determined opposition". The historian Horst Fuhrmann comments that, as Henry had shown in his life "even less interest in new currents of thought and feeling than his father", he probably did not understand the significance of the events he had lived through. The peace lasted only until his death; when the Imperial Electors met to choose his successor, reformists took the opportunity to attack the imperial gains of Worms on the grounds that they had been granted to Henry personally rather than to Emperors generally. However, later emperors, such as Frederick I and Henry VI, continued to wield as much power in episcopal elections as their predecessors, if less tangibly, and to a greater degree than that allowed them by Calixtus's charter. Successive emperors found the Concordat sufficiently favourable that it remained almost unaltered until the empire was dissolved under pressure from Napoleon in 1806. Popes, likewise, were able to use the powers codified to them in the Concordat to their advantage in future internal disputes with their Cardinals. Reception The most detailed contemporary description of the Concordat comes to historians through a brief chronicle known as the 1125 continuation chronicle. This pro-papal document lays the blame for the schism squarely upon Henry—through his recognition of the antipope Gregory VIII—and gives the praise for ending it to Calixtus, on the grounds that he made only temporary compromises. I. S. Robinson, writing in The New Cambridge Medieval History, suggests that this was a deliberate ploy to leave further negotiations open with a more politically malleable Emperor in future. To others it was not so clear cut; Honorius of Autun, for example, writing later in the century, discussed lay investiture as an aspect of papal–Imperial relations, and even a century later the Sachsenspiegel still stated that Emperors nominated bishops in Germany. Robinson suggests that, by the end of the 12th century, "it was the imperial, rather than the papal version of the Concordat of Worms that was generally accepted by German churchmen".
The contemporary English historian William of Malmesbury praised the Concordat for curtailing what he perceived as the emperor's overreach, or, as he put it, "severing the sprouting necks of Teuton fury with the axe of Apostolic power". However, he regarded the final settlement not as a defeat of the Empire at the hands of the church, but rather as a reconciliatory effort by the two powers. Although polemicism had died down in the years preceding the Concordat, the agreement did not end it completely, and factionalism, within the church especially, continued. Gerhoh of Reichersberg believed that the emperor now had the right to request that German bishops pay homage to him, something that would never have been allowed under Paschal, because of the vague clause instructing the newly elected to do what the emperor wished. Gerhoh argued that, now that imperial intervention in episcopal elections had been curtailed, Henry would use this clause to extend his influence in the church by means of homage. Gerhoh was torn between viewing the concordat as the end of a long struggle between pope and empire and seeing it as the beginning of a new one within the church itself. Likewise Adalbert of Mainz—who had caustically criticised the agreement in his report to Calixtus—continued to lobby against it, and continued to bring complaints against Henry, who, for example, he alleged had illegally removed the Bishop of Strassburg, suspected of complicity in the death of Duke Berthold of Zähringen. The reformist party within the church took a similar view, criticising the Concordat for failing to remove all secular influence on the church. For this reason, a group of followers of Paschal II unsuccessfully attempted to prevent the agreement's ratification at the Lateran Council, crying non placet! when asked to approve it: "it was only when it was pointed out that much had to be accepted for the sake of peace that the atmosphere quietened". Calixtus told them that they had "not to approve but tolerate" it. At a council in Bamberg in 1122, Henry gathered those nobles who had not attended the Concordat to seek their approval of the agreement, which they gave. The following month he sent cordial letters to Calixtus, agreeing with the pope's position that, as brothers in Christ, they were bound by God to work together, and promising that he would soon visit personally to discuss the repatriation of papal land. Calixtus, in turn, responded positively, instructing his delegates to make good the promises they had made at Worms. Historiography Gottfried Wilhelm Leibniz called the agreements made at Worms "the oldest concordat in German history, an international treaty", while Augustin Fliche argued that the Concordat effectively instituted the statutes of Ivo of Chartres, a prominent reformer in the early years of the Investiture Contest, a view with which, it has been suggested, most historians agree. The historian Uta-Renate Blumenthal writes that, despite its shortcomings, the Concordat freed "[the church and the Empire] from antiquated concepts with their increasingly anachronistic restrictions". According to the historian William Chester Jordan, the Concordat was "of enormous significance" because it demonstrated that the emperor, in spite of his great secular power, did not have any religious authority. On the other hand, argues Karl F. Morrison, any victory the papacy felt it had won was pyrrhic, as "the king was left in possession of the field".
The new peace also allowed the papacy to expand its territories in Italy, such as the Sabina, which had been unobtainable while the dispute with Henry was ongoing, while in Germany a new class of ecclesiastics was created, what Fuhrmann calls the "ecclesiastical princes of the Empire". While most historians agree that the Concordat marks a clear close to the fifty-year struggle between church and empire, disagreement continues over just how decisive a termination it was. Historians are also unclear as to the commitment of the pope to the concordat. Stroll, for example, notes that while Henry's oaths were made to the church corporate, and so in perpetuity, Calixtus's may have been made in a personal capacity. On this reading, while Henry's commitments to the church applied forever, Calixtus's applied only for the duration of Henry's reign; at least one contemporary, Otto of Freising, wrote later in the century that he believed this to be the church's position. Stroll considers it "implausible" that Henry and his counsel would ever have entered into such a one-sided agreement. Indeed, John O'Malley has argued that the emperor had effectively been granted a veto by Calixtus; while in the strictest interpretation of the Gregorian reformers the only two important things in the making of a bishop were his election and consecration, Calixtus had effectively codified a role—however small—for the emperor in this process. Conversely, Benson reckons that while Henry's agreement was with the church in perpetuity, Calixtus's—based on the personal mode of address—was with him personally, and as such not binding on his successors. However, this was also an acknowledgement, he suggests, that much of what the pope did not address was already considered customary, and so did not need addressing. There has also been disagreement over why the Investiture Contest ended with the Concordat as it did. Benson notes that, as a truce, it was primarily intended to stop the fighting rather than to address its original causes. It was "a straightforward, political engagement...a pragmatic agreement" between two political bodies. Indeed, controversy over investiture continued for at least another decade; in that light, suggests Benson, it could be argued that the Concordat did not end the dispute at all. There were "many problems unsolved, and [it] left much room for the free play of power". Political scientist Bruce Bueno de Mesquita has argued that, in the long term, the Concordat was an essential component of the later, gradual, creation of the European nation state.
https://en.wikipedia.org/wiki/Chicago
Chicago
Chicago is the most populous city in the U.S. state of Illinois and the third most populous in the United States after New York City and Los Angeles. With a population of 2,746,388 in the 2020 census, it is also the most populous city in the Midwest. As the seat of Cook County (the second-most populous U.S. county), the city is the center of the Chicago metropolitan area, one of the largest in the world. On the shore of Lake Michigan, Chicago was incorporated as a city in 1837 near a portage between the Great Lakes and the Mississippi River watershed. It grew rapidly in the mid-19th century; by 1860, Chicago was the youngest U.S. city to exceed a population of 100,000. The Great Chicago Fire in 1871 destroyed several square miles and left more than 100,000 homeless, but Chicago's population continued to grow, reaching 503,000 by 1880 and then doubling to more than a million within the decade. The construction boom accelerated population growth throughout the following decades, and by 1900, less than 30 years after the fire, Chicago was the fifth-largest city in the world. Chicago made noted contributions to urban planning and zoning standards, including new construction styles (such as Chicago School architecture), the development of the City Beautiful Movement, and the steel-framed skyscraper. Chicago is an international hub for finance, culture, commerce, industry, education, technology, telecommunications, and transportation. It is the site of the creation of the first standardized futures contracts, issued by the Chicago Board of Trade, which today is part of the largest and most diverse derivatives market in the world, generating 20% of all volume in commodities and financial futures alone. O'Hare International Airport is routinely ranked among the world's top six busiest airports according to data tracked by the Airports Council International. The region also has the largest number of federal highways and is the nation's railroad hub. The Chicago area has one of the highest gross domestic products (GDP) in the world, generating $689 billion in 2018. The economy of Chicago is diverse, with no single industry employing more than 14% of the workforce. It is home to several Fortune 500 companies, including Archer-Daniels-Midland, Conagra Brands, Exelon, JLL, Kraft Heinz, McDonald's, Mondelez International, Motorola Solutions, Sears, and United Airlines Holdings. Chicago's 58 million tourist visitors in 2018 set a new record. Landmarks in the city include Millennium Park, Navy Pier, the Magnificent Mile, the Art Institute of Chicago, the Museum Campus, the Willis (Sears) Tower, Grant Park, the Museum of Science and Industry, and Lincoln Park Zoo. Chicago is also home to the Barack Obama Presidential Center, being built in Hyde Park on the city's South Side. Chicago's culture includes the visual arts, literature, film, theater, comedy (especially improvisational comedy), food, dance (including modern dance and jazz troupes and the Joffrey Ballet), and music (particularly jazz, blues, soul, hip-hop, gospel, and electronic dance music, including house music). Chicago is also the location of the Chicago Symphony Orchestra and the Lyric Opera of Chicago. Of the area's colleges and universities, the University of Chicago, Northwestern University, and the University of Illinois Chicago are classified as "highest research" doctoral universities. Chicago has professional sports teams in each of the major professional leagues, including two Major League Baseball teams.
Etymology and nicknames The name Chicago is derived from a French rendering of the indigenous Miami-Illinois word for a wild relative of the onion, known to botanists as Allium tricoccum and more commonly as "ramps". The first known reference to the site of the current city of Chicago by this name was made by Robert de LaSalle around 1679 in a memoir. Henri Joutel, in his journal entry of late September 1687, noted that the eponymous wild "garlic" grew profusely in the area. The city has had several nicknames throughout its history, such as the Windy City, Chi-Town, Second City, and City of the Big Shoulders. History Beginnings In the mid-18th century, the area was inhabited by the Potawatomi, a Native American tribe who had succeeded the Miami and Sauk and Fox peoples in this region. The first known non-indigenous permanent settler in Chicago was the trader Jean Baptiste Point du Sable. Du Sable was of African descent, perhaps born in the French colony of Saint-Domingue (Haiti), and established the settlement in the 1780s. He is commonly known as the "Founder of Chicago". In 1795, following the victory of the new United States in the Northwest Indian War, an area that was to be part of Chicago was turned over to the US for a military post by native tribes in accordance with the Treaty of Greenville. In 1803, the U.S. Army constructed Fort Dearborn, which was destroyed by the Potawatomi in the Battle of Fort Dearborn during the War of 1812 and later rebuilt. After the War of 1812, the Ottawa, Ojibwe, and Potawatomi tribes ceded additional land to the United States in the 1816 Treaty of St. Louis. The Potawatomi were forcibly removed from their land after the 1833 Treaty of Chicago and sent west of the Mississippi River as part of the federal policy of Indian removal. 19th century On August 12, 1833, the Town of Chicago was organized with a population of about 200. Within seven years it grew to more than 6,000 people. On June 15, 1835, the first public land sales began with Edmund Dick Taylor as Receiver of Public Monies. The City of Chicago was incorporated on Saturday, March 4, 1837, and for several decades was the world's fastest-growing city. As the site of the Chicago Portage, the city became an important transportation hub between the eastern and western United States. Chicago's first railway, the Galena and Chicago Union Railroad, and the Illinois and Michigan Canal both opened in 1848. The canal allowed steamboats and sailing ships on the Great Lakes to connect to the Mississippi River. A flourishing economy brought residents from rural communities and immigrants from abroad. The manufacturing, retail, and finance sectors became dominant, influencing the American economy. The Chicago Board of Trade (established 1848) listed the first-ever standardized "exchange-traded" forward contracts, which were called futures contracts. In the 1850s, Chicago gained national political prominence as the home of Senator Stephen Douglas, the champion of the Kansas–Nebraska Act and the "popular sovereignty" approach to the issue of the spread of slavery. These issues also helped propel another Illinoisan, Abraham Lincoln, to the national stage. Lincoln was nominated in Chicago for US president at the 1860 Republican National Convention, which was held in a purpose-built auditorium called the Wigwam. He defeated Douglas in the general election, and this set the stage for the American Civil War.
To accommodate rapid population growth and demand for better sanitation, the city improved its infrastructure. In February 1856, Chicago's Common Council approved the city engineer Ellis S. Chesbrough's plan to build the United States' first comprehensive sewerage system. The project raised much of central Chicago to a new grade with the use of jackscrews for raising buildings. While elevating Chicago, and at first improving the city's health, the untreated sewage and industrial waste now flowed into the Chicago River, and subsequently into Lake Michigan, polluting the city's primary freshwater source. The city responded by tunneling out into Lake Michigan to newly built water cribs. In 1900, the problem of sewage contamination was largely resolved when the city completed a major engineering feat: it reversed the flow of the Chicago River so that the water flowed away from Lake Michigan rather than into it. This project began with the construction and improvement of the Illinois and Michigan Canal, and was completed with the Chicago Sanitary and Ship Canal, which connects to the Illinois River, which in turn flows into the Mississippi River. In 1871, the Great Chicago Fire destroyed a large section of the city at the time. Much of the city, including its railroads and stockyards, survived intact, and from the ruins of the previous wooden structures arose more modern constructions of steel and stone. These set a precedent for worldwide construction. During its rebuilding period, Chicago constructed the world's first skyscraper in 1885, using steel-skeleton construction. The city grew significantly in size and population by incorporating many neighboring townships between 1851 and 1920, with the largest annexation happening in 1889, when five townships joined the city, including the Hyde Park Township, which now comprises most of the South Side of Chicago and the far southeast of Chicago, and the Jefferson Township, which now makes up most of Chicago's Northwest Side. The desire to join the city was driven by the municipal services the city could provide its residents. Chicago's flourishing economy attracted huge numbers of new immigrants from Europe and migrants from the Eastern United States. Of the total population in 1900, more than 77% were either foreign-born or born in the United States of foreign parentage. Germans, Irish, Poles, Swedes, and Czechs made up nearly two-thirds of the foreign-born population (by 1900, whites were 98.1% of the city's population). Labor conflicts followed the industrial boom and the rapid expansion of the labor pool, including the Haymarket affair on May 4, 1886, and the Pullman Strike in 1894. Anarchist and socialist groups played prominent roles in creating very large and highly organized labor actions. Concern for social problems among Chicago's immigrant poor led Jane Addams and Ellen Gates Starr to found Hull House in 1889. Programs that were developed there became a model for the new field of social work. During the 1870s and 1880s, Chicago attained national stature as the leader in the movement to improve public health. City laws, and later state laws, that upgraded standards for the medical profession and fought urban epidemics of cholera, smallpox, and yellow fever were passed and enforced. These laws became templates for public health reform in other cities and states. The city established many large, well-landscaped municipal parks, which also included public sanitation facilities. The chief advocate for improving public health in Chicago was Dr.
John H. Rauch. Rauch established a plan for Chicago's park system in 1866. He created Lincoln Park by closing a cemetery filled with shallow graves, and in 1867, in response to an outbreak of cholera, he helped establish a new Chicago Board of Health. Ten years later, he became the secretary and then the president of the first Illinois State Board of Health, which carried out most of its activities in Chicago. In the 1800s, Chicago became the nation's railroad hub, and by 1910 over 20 railroads operated passenger service out of six different downtown terminals. In 1883, Chicago's railway managers needed a general time convention, so they developed the standardized system of North American time zones. This system for telling time spread throughout the continent. In 1893, Chicago hosted the World's Columbian Exposition on former marshland at the present location of Jackson Park. The Exposition drew 27.5 million visitors and is considered the most influential world's fair in history. The University of Chicago, formerly at another location, moved to the same South Side location in 1892. The term "midway" for a fair or carnival referred originally to the Midway Plaisance, a strip of park land that still runs through the University of Chicago campus and connects Washington and Jackson Parks. 20th and 21st centuries 1900 to 1939 During World War I and the 1920s there was a major expansion in industry. The availability of jobs attracted African Americans from the Southern United States. Between 1910 and 1930, the African American population of Chicago increased dramatically, from 44,103 to 233,903. This Great Migration had an immense cultural impact, called the Chicago Black Renaissance, part of the New Negro Movement, in art, literature, and music. Continuing racial tensions and violence, such as the Chicago Race Riot of 1919, also occurred. The ratification of the 18th Amendment to the Constitution in 1919 made the production and sale (including exportation) of alcoholic beverages illegal in the United States. This ushered in the beginning of what is known as the Gangster Era, a time that roughly spans from 1919 until 1933, when Prohibition was repealed. The 1920s saw gangsters, including Al Capone, Dion O'Banion, Bugs Moran, and Tony Accardo, battle law enforcement and each other on the streets of Chicago during the Prohibition era. Chicago was the location of the infamous St. Valentine's Day Massacre in 1929, when Al Capone sent men to gun down members of a rival gang, the North Side Gang, led by Bugs Moran. Chicago was the first American city to have a homosexual-rights organization. The organization, formed in 1924, was called the Society for Human Rights. It produced the first American publication for homosexuals, Friendship and Freedom. Police and political pressure caused the organization to disband. The Great Depression brought unprecedented suffering to Chicago, in no small part due to the city's reliance on heavy industry. Notably, industrial areas on the south side and neighborhoods lining both branches of the Chicago River were devastated; by 1933 over 50% of industrial jobs in the city had been lost, and unemployment rates amongst blacks and Mexicans in the city were over 40%. The Republican political machine in Chicago was utterly destroyed by the economic crisis, and every mayor since 1931 has been a Democrat. From 1928 to 1933, the city witnessed a tax revolt and was unable to meet payroll or provide relief efforts.
The fiscal crisis was resolved by 1933, and at the same time, federal relief funding began to flow into Chicago. Chicago was also a hotbed of labor activism, with Unemployed Councils contributing heavily in the early Depression to create solidarity for the poor and demand relief; these organizations were created by socialist and communist groups. By 1935 the Workers Alliance of America began organizing the poor, workers, and the unemployed. In the spring of 1937, the Republic Steel Works was the site of the Memorial Day massacre of 1937 in the neighborhood of East Side. In 1933, Chicago Mayor Anton Cermak was fatally wounded in Miami, Florida, during a failed assassination attempt on President-elect Franklin D. Roosevelt. In 1933 and 1934, the city celebrated its centennial by hosting the Century of Progress International Exposition World's Fair. The theme of the fair was technological innovation over the century since Chicago's founding. 1940 to 1979 During World War II, the city of Chicago alone produced more steel than the United Kingdom every year from 1939 to 1945, and more than Nazi Germany from 1943 to 1945. The Great Migration, which had been on pause due to the Depression, resumed at an even faster pace in the second wave, as hundreds of thousands of blacks from the South arrived in the city to work in the steel mills, railroads, and shipping yards. On December 2, 1942, physicist Enrico Fermi conducted the world's first controlled nuclear chain reaction at the University of Chicago as part of the top-secret Manhattan Project. This led to the creation of the atomic bomb by the United States, which it used in World War II in 1945. Mayor Richard J. Daley, a Democrat, was elected in 1955, in the era of machine politics. In 1956, the city conducted its last major expansion when it annexed the land under O'Hare airport, including a small portion of DuPage County. By the 1960s, white residents in several neighborhoods left the city for the suburbs, a process known in many American cities as white flight, as Blacks continued to move beyond the Black Belt. While discriminatory home-loan redlining against blacks continued, the real estate industry practiced what became known as blockbusting, completely changing the racial composition of whole neighborhoods. Structural changes in industry, such as globalization and job outsourcing, caused heavy job losses for lower-skilled workers. At its peak during the 1960s, some 250,000 workers were employed in the steel industry in Chicago, but the steel crisis of the 1970s and 1980s reduced this number to just 28,000 in 2015. In 1966, Martin Luther King Jr. and Albert Raby led the Chicago Freedom Movement, which culminated in agreements between Mayor Richard J. Daley and the movement leaders. Two years later, the city hosted the tumultuous 1968 Democratic National Convention, which featured physical confrontations both inside and outside the convention hall, with anti-war protesters, journalists, and bystanders being beaten by police. Major construction projects, including the Sears Tower (now known as the Willis Tower, which in 1974 became the world's tallest building), the University of Illinois at Chicago, McCormick Place, and O'Hare International Airport, were undertaken during Richard J. Daley's tenure. In 1979, Jane Byrne, the city's first female mayor, was elected. She was notable for temporarily moving into the crime-ridden Cabrini-Green housing project and for leading Chicago's school system out of a financial crisis.
1980 to present In 1983, Harold Washington became the first black mayor of Chicago. Washington's first term in office directed attention to poor and previously neglected minority neighborhoods. He was re-elected in 1987 but died of a heart attack soon after. Washington was succeeded by 6th ward Alderman Eugene Sawyer, who was elected by the Chicago City Council and served until a special election. Richard M. Daley, son of Richard J. Daley, was elected in 1989. His accomplishments included improvements to parks and incentives for sustainable development, as well as the overnight closure of Meigs Field and the destruction of its runways. After successfully running for re-election five times, and becoming Chicago's longest-serving mayor, Richard M. Daley declined to run for a seventh term. In 1992, a construction accident near the Kinzie Street Bridge produced a breach connecting the Chicago River to a tunnel below, which was part of an abandoned freight tunnel system extending throughout the downtown Loop district. The tunnels filled with water, affecting buildings throughout the district and forcing a shutdown of electrical power. The area was shut down for three days and some buildings did not reopen for weeks; losses were estimated at $1.95 billion. On February 23, 2011, former Illinois Congressman and White House Chief of Staff Rahm Emanuel won the mayoral election. Emanuel was sworn in as mayor on May 16, 2011, and won re-election in 2015. Lori Lightfoot, the city's first African American woman mayor and its first openly LGBTQ mayor, was elected to succeed Emanuel in 2019. All three city-wide elective offices were held by women (and women of color) for the first time in Chicago history: in addition to Lightfoot, the City Clerk was Anna Valencia and the City Treasurer was Melissa Conyears-Ervin. Geography Topography Chicago is located in northeastern Illinois on the southwestern shores of freshwater Lake Michigan. It is the principal city in the Chicago metropolitan area, situated in both the Midwestern United States and the Great Lakes region. The city rests on a continental divide at the site of the Chicago Portage, connecting the Mississippi River and the Great Lakes watersheds. In addition to the lake, two rivers—the Chicago River downtown and the Calumet River on the industrial far South Side—flow either entirely or partially through the city. Chicago's history and economy are closely tied to its proximity to Lake Michigan. While the Chicago River historically handled much of the region's waterborne cargo, today's huge lake freighters use the city's Lake Calumet Harbor on the South Side. The lake also provides another positive effect: moderating Chicago's climate, making waterfront neighborhoods slightly warmer in winter and cooler in summer. When Chicago was founded in 1837, most of the early building was around the mouth of the Chicago River, as can be seen on a map of the city's original 58 blocks. The overall grade of the city's central, built-up areas is relatively consistent with the flatness of its natural geography, generally exhibiting only slight differentiation otherwise. While measurements vary somewhat, the lowest points are along the lake shore, while the highest point is the morainal ridge of Blue Island in the city's far south side. While the Chicago Loop is the central business district, Chicago is also a city of neighborhoods.
Lake Shore Drive runs adjacent to a large portion of Chicago's waterfront. Some of the parks along the waterfront include Lincoln Park, Grant Park, Burnham Park, and Jackson Park. There are 24 public beaches along the waterfront. Landfill extends into portions of the lake, providing space for Navy Pier, Northerly Island, the Museum Campus, and large portions of the McCormick Place Convention Center. Most of the city's high-rise commercial and residential buildings are close to the waterfront. An informal name for the entire Chicago metropolitan area is "Chicagoland", which generally means the city and all its suburbs. The Chicago Tribune, which coined the term, includes the city of Chicago, the rest of Cook County, and eight nearby Illinois counties: Lake, McHenry, DuPage, Kane, Kendall, Grundy, Will, and Kankakee, as well as three counties in Indiana: Lake, Porter, and LaPorte. The Illinois Department of Tourism defines Chicagoland as Cook County without the city of Chicago, and only Lake, DuPage, Kane, and Will counties. The Chicagoland Chamber of Commerce defines it as all of Cook and DuPage, Kane, Lake, McHenry, and Will counties. Communities Major sections of the city include the central business district, called The Loop, and the North, South, and West Sides. The three sides of the city are represented on the Flag of Chicago by three horizontal white stripes. The North Side is the most densely populated residential section of the city, and many high-rises are located on this side of the city along the lakefront. The South Side is the largest section of the city, encompassing roughly 60% of the city's land area. The South Side contains most of the facilities of the Port of Chicago. In the late 1920s, sociologists at the University of Chicago subdivided the city into 77 distinct community areas, which can further be subdivided into over 200 informally defined neighborhoods. Streetscape Chicago's streets were laid out in a street grid that grew from the city's original townsite plot, which was bounded by Lake Michigan on the east, North Avenue on the north, Wood Street on the west, and 22nd Street on the south. Streets following the Public Land Survey System section lines later became arterial streets in outlying sections. As new additions to the city were platted, city ordinance required them to be laid out with eight streets to the mile in one direction and sixteen in the other direction (about one street per 200 meters in one direction and one street per 100 meters in the other; the arithmetic is sketched below). The grid's regularity provided an efficient means of developing new real estate property. A scattering of diagonal streets, many of them originally Native American trails, also cross the city (Elston, Milwaukee, Ogden, Lincoln, etc.). Many additional diagonal streets were recommended in the Plan of Chicago, but only the extension of Ogden Avenue was ever constructed. In 2016, Chicago was ranked the sixth-most walkable large city in the United States. Many of the city's residential streets have a wide patch of grass or trees between the street and the sidewalk itself, which helps to keep pedestrians on the sidewalk further away from the street traffic. Chicago's Western Avenue is the longest continuous urban street in the world. Other notable streets include Michigan Avenue, State Street, Oak, Rush, Clark Street, and Belmont Avenue. The City Beautiful movement inspired Chicago's boulevards and parkways.
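The parenthetical street-spacing figures above follow from simple unit arithmetic. Below is a minimal Python sketch of that calculation; the only assumption beyond the ordinance's street counts is the standard conversion of 1,609.344 meters to the mile:

```python
# Street spacing implied by the platting ordinance described above:
# eight streets to the mile in one direction and sixteen in the other.
METERS_PER_MILE = 1609.344  # standard international mile

def street_spacing_m(streets_per_mile: int) -> float:
    """Average distance in meters between adjacent parallel streets."""
    return METERS_PER_MILE / streets_per_mile

print(f"8 streets/mile  -> one street per {street_spacing_m(8):.0f} m")   # ~201 m
print(f"16 streets/mile -> one street per {street_spacing_m(16):.0f} m")  # ~101 m
```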
Architecture The destruction caused by the Great Chicago Fire led to the largest building boom in the history of the nation. In 1885, the first steel-framed high-rise building, the Home Insurance Building, rose in the city as Chicago ushered in the skyscraper era, which would then be followed by many other cities around the world. Today, Chicago's skyline is among the world's tallest and densest. Some of the United States' tallest towers are located in Chicago; Willis Tower (formerly Sears Tower) is the second-tallest building in the Western Hemisphere after One World Trade Center, and Trump International Hotel and Tower is the third-tallest in the country. The Loop's historic buildings include the Chicago Board of Trade Building, the Fine Arts Building, 35 East Wacker, and the Chicago Building, as well as the 860–880 Lake Shore Drive Apartments by Mies van der Rohe. Many other architects have left their impression on the Chicago skyline, such as Daniel Burnham, Louis Sullivan, Charles B. Atwood, John Root, and Helmut Jahn. The Merchandise Mart, once first on the list of largest buildings in the world and currently listed as 44th-largest, had its own zip code until 2008, and stands near the junction of the North and South branches of the Chicago River. Presently, the four tallest buildings in the city are Willis Tower (formerly the Sears Tower, also a building with its own zip code), Trump International Hotel and Tower, the Aon Center (previously the Standard Oil Building), and the John Hancock Center. Industrial districts are clustered in areas such as parts of the South Side, along the Chicago Sanitary and Ship Canal, and in Northwest Indiana. Chicago gave its name to the Chicago School and was home to the Prairie School, two movements in architecture. Multiple kinds and scales of houses, townhouses, condominiums, and apartment buildings can be found throughout Chicago. Large swaths of the city's residential areas away from the lake are characterized by brick bungalows built from the early 20th century through the end of World War II. Chicago is also a prominent center of the Polish Cathedral style of church architecture. The Chicago suburb of Oak Park was home to the famous architect Frank Lloyd Wright, who designed the Robie House, located near the University of Chicago. A popular tourist activity is to take an architecture boat tour along the Chicago River. Monuments and public art Chicago is famous for its outdoor public art, with donors establishing funding for such art as far back as Benjamin Ferguson's 1905 trust. A number of Chicago's public art works are by modern figurative artists. Among these are Chagall's Four Seasons; the Chicago Picasso; Miro's Chicago; Calder's Flamingo; Oldenburg's Batcolumn; Moore's Large Interior Form, 1953-54, Man Enters the Cosmos, and Nuclear Energy; Dubuffet's Monument with Standing Beast; Abakanowicz's Agora; and Anish Kapoor's Cloud Gate, which has become an icon of the city. Some events which shaped the city's history have also been memorialized by art works, including the Great Northern Migration (Saar) and the centennial of statehood for Illinois. Finally, two fountains near the Loop also function as monumental works of art: Plensa's Crown Fountain as well as Burnham and Bennett's Buckingham Fountain.
More representational and portrait statuary includes a number of works by Lorado Taft (Fountain of Time, The Crusader, Eternal Silence, and the Heald Square Monument completed by Crunelle), French's Statue of the Republic, Edward Kemys's Lions, Saint-Gaudens's Abraham Lincoln: The Man (a.k.a. Standing Lincoln) and Abraham Lincoln: The Head of State (a.k.a. Seated Lincoln), Brioschi's Christopher Columbus, Meštrović's The Bowman and The Spearman, Dallin's Signal of Peace, Fairbanks's The Chicago Lincoln, Boyle's The Alarm, Polasek's memorial to Masaryk, memorials along Solidarity Promenade to Kościuszko, Havliček and Copernicus by Chodzinski, Strachovský, and Thorvaldsen, a memorial to General Logan by Saint-Gaudens, and Kearney's Moose (W-02-03). A number of statues also honor recent local heroes such as Michael Jordan (by Amrany and Rotblatt-Amrany), Stan Mikita, and Bobby Hull outside of the United Center; Harry Caray (by Amrany and Cella) outside Wrigley Field; Jack Brickhouse (by McKenna) next to the WGN studios; and Irv Kupcinet at the Wabash Avenue Bridge. There are preliminary plans to erect a 1:1-scale replica of Wacław Szymanowski's Art Nouveau statue of Frédéric Chopin, found in Warsaw's Royal Baths, along Chicago's lakefront, in addition to a different sculpture commemorating the artist in Chopin Park for the 200th anniversary of Chopin's birth. Climate The city lies within the typical hot-summer humid continental climate (Köppen: Dfa) and experiences four distinct seasons. Summers are hot and humid, with frequent heat waves; July is the warmest month. In a normal summer, temperatures reach at least 90 °F (32 °C) on as many as 23 days, with lakefront locations staying cooler when winds blow off the lake. Winters are relatively cold and snowy, although the city typically sees less snow and rain in winter than the eastern Great Lakes region; still, blizzards do occur, such as the one in 2011. There are many sunny but cold days in winter; January and February are the coldest months, and a polar vortex in January 2019 nearly broke the city's cold record of −27 °F (−33 °C), which was set on January 20, 1985. Spring and autumn are mild, short seasons, typically with low humidity. Summer dew points are high, rising from June to July, and can reach oppressive levels, as during the July 2019 heat wave. The city lies within USDA plant hardiness zone 6a, transitioning to 5b in the suburbs. According to the National Weather Service, Chicago's highest official temperature reading, 105 °F (41 °C), was recorded on July 24, 1934, although Midway Airport recorded a higher reading one day prior, as well as an extreme heat index during the 1995 heatwave. The lowest official temperature, −27 °F (−33 °C), was recorded on January 20, 1985, at O'Hare Airport. Most of the city's rainfall is brought by thunderstorms, which average 38 a year. The region is also prone to severe thunderstorms during the spring and summer, which can produce large hail, damaging winds, and occasionally tornadoes. Like other major cities, Chicago experiences an urban heat island, making the city and its suburbs milder than surrounding rural areas, especially at night and in winter. The proximity to Lake Michigan tends to keep the Chicago lakefront somewhat cooler in summer and less brutally cold in winter than inland parts of the city and suburbs away from the lake.
Northeast winds from wintertime cyclones departing south of the region sometimes bring the city lake-effect snow. Time zone As in the rest of the state of Illinois, Chicago forms part of the Central Time Zone. The border with the Eastern Time Zone, which is used in Michigan and certain parts of Indiana, lies a short distance to the east. Demographics During its first hundred years, Chicago was one of the fastest-growing cities in the world. When Chicago was founded in 1833, fewer than 200 people had settled on what was then the American frontier. By the time of its first census, seven years later, the population had reached over 4,000. In the forty years from 1850 to 1890, the city's population grew from slightly under 30,000 to over 1 million. At the end of the 19th century, Chicago was the fifth-largest city in the world, and the largest of the cities that did not exist at the dawn of the century. Within sixty years of the Great Chicago Fire of 1871, the population went from about 300,000 to over 3 million, and reached its highest ever recorded population of 3.6 million for the 1950 census. From the last two decades of the 19th century, Chicago was the destination of waves of immigrants from Ireland and from Southern, Central, and Eastern Europe, including Italians, Jews, Russians, Poles, Greeks, Lithuanians, Bulgarians, Albanians, Romanians, Turks, Croatians, Serbs, Bosnians, Montenegrins, and Czechs. To these ethnic groups, the basis of the city's industrial working class, was added an additional influx of African Americans from the American South, with Chicago's black population doubling between 1910 and 1920 and doubling again between 1920 and 1930. In the 1920s and 1930s, the great majority of African Americans moving to Chicago settled in a so-called "Black Belt" on the city's South Side. A large number of blacks also settled on the West Side. By 1930, two-thirds of Chicago's black population lived in sections of the city which were 90% black in racial composition. Chicago's South Side emerged as the United States' second-largest urban black concentration, following New York's Harlem. In 1990, Chicago's South Side and the adjoining south suburbs constituted the largest black-majority region in the entire United States. Chicago's population declined in the latter half of the 20th century, from over 3.6 million in 1950 down to under 2.7 million by 2010. By the time of the official census count in 1990, it had been overtaken by Los Angeles as the United States' second-largest city. The city saw a rise in population for the 2000 census and, after a decrease in 2010, rose again for the 2020 census. According to U.S. census estimates, Chicago's largest racial or ethnic group is non-Hispanic White at 32.8% of the population, followed by Blacks at 30.1% and Hispanics at 29.0%. Chicago has the third-largest LGBT population in the United States. In 2018, the Chicago Department of Health estimated that 7.5% of the adult population, approximately 146,000 Chicagoans, were LGBTQ. In 2015, roughly 4% of the population identified as LGBT. Since the 2013 legalization of same-sex marriage in Illinois, over 10,000 same-sex couples have wed in Cook County, a majority of them in Chicago. Chicago became a "de jure" sanctuary city in 2012 when Mayor Rahm Emanuel and the City Council passed the Welcoming City Ordinance. According to the U.S.
According to the U.S. Census Bureau's American Community Survey data estimates for 2008–2012, the median income for a household in the city was $47,408, and the median income for a family was $54,188. Male full-time workers had a median income of $47,074, versus $42,063 for females. About 18.3% of families and 22.1% of the population lived below the poverty line. In 2018, Chicago ranked seventh globally for the highest number of ultra-high-net-worth residents, with roughly 3,300 residents worth more than $30 million.
According to the 2008–2012 American Community Survey, the ancestral groups having 10,000 or more persons in Chicago were:
Ireland (137,799)
Poland (134,032)
Germany (120,328)
Italy (77,967)
China (66,978)
American (37,118)
UK (36,145)
recent African (32,727)
India (25,000)
Russia (19,771)
Arab (17,598)
European (15,753)
Sweden (15,151)
Japan (15,142)
Greece (15,129)
France (except Basque) (11,410)
Ukraine (11,104)
West Indian (except Hispanic groups) (10,349)
Persons identifying themselves in "Other groups" were classified at 1.72 million, and unclassified or not reported were approximately 153,000.
Religion
According to a 2014 study by the Pew Research Center, Christianity is the most prevalently practiced religion in Chicago (71%), and the city is the fourth-most religious metropolis in the United States after Dallas, Atlanta and Houston. Roman Catholicism and Protestantism are the largest branches (34% and 35% respectively), followed by Eastern Orthodoxy and Jehovah's Witnesses with 1% each. Chicago also has a sizable non-Christian population: other groups include the irreligious (22%), Judaism (3%), Islam (2%), Buddhism (1%) and Hinduism (1%).
Chicago is the headquarters of several religious denominations, including the Evangelical Covenant Church and the Evangelical Lutheran Church in America, and is the seat of several dioceses. The Fourth Presbyterian Church is one of the largest Presbyterian congregations in the United States based on membership. Since the 20th century, Chicago has also been the headquarters of the Assyrian Church of the East.
In 2014, the Catholic Church was the largest individual Christian denomination (34%), with the Roman Catholic Archdiocese of Chicago being the largest Catholic jurisdiction. Evangelical Protestantism formed the largest theological Protestant branch (16%), followed by Mainline Protestants (11%) and historically Black churches (8%). Among denominational Protestant branches, Baptists formed the largest group in Chicago (10%), followed by Nondenominational (5%), Lutherans (4%) and Pentecostals (3%).
Non-Christian faiths accounted for 7% of the religious population in 2014. Judaism has at least 261,000 adherents, 3% of the population, making it the second-largest religion. A 2020 study estimated the total Jewish population of the Chicago metropolitan area, both religious and irreligious, at 319,600.
The first two Parliaments of the World's Religions, in 1893 and 1993, were held in Chicago. Many international religious leaders have visited Chicago, including Mother Teresa, the Dalai Lama and Pope John Paul II in 1979.
Economy
Chicago has the third-largest gross metropolitan product in the United States, about $670.5 billion according to September 2017 estimates. The city has also been rated as having the most balanced economy in the United States, due to its high level of diversification. In 2007, Chicago was named the fourth-most important business center in the world in the MasterCard Worldwide Centers of Commerce Index.
Additionally, the Chicago metropolitan area recorded the greatest number of new or expanded corporate facilities in the United States for calendar year 2014, and it has the third-largest science and engineering workforce of any metropolitan area in the nation. In 2009, Chicago placed ninth on the UBS list of the world's richest cities.
Chicago was the base of commercial operations for industrialists John Crerar, John Whitfield Bunn, Richard Teller Crane, Marshall Field, John Farwell, Julius Rosenwald and many other commercial visionaries who laid the foundation for Midwestern and global industry.
Chicago is a major world financial center, with the second-largest central business district in the United States. The city is the seat of the Federal Reserve Bank of Chicago, the Bank's Seventh District. The city has major financial and futures exchanges, including the Chicago Stock Exchange, the Chicago Board Options Exchange (CBOE), and the Chicago Mercantile Exchange (the "Merc"), which is owned, along with the Chicago Board of Trade (CBOT), by Chicago's CME Group. In 2017, Chicago exchanges traded 4.7 billion derivatives with a face value of over one quadrillion dollars. Chase Bank has its commercial and retail banking headquarters in Chicago's Chase Tower. Academically, Chicago has been influential through the Chicago school of economics, which has fielded some 12 Nobel Prize winners.
The city and its surrounding metropolitan area contain the third-largest labor pool in the United States, with about 4.63 million workers. Illinois is home to 66 Fortune 1000 companies, including those in Chicago. The city of Chicago also hosts 12 Fortune Global 500 companies and 17 Financial Times 500 companies. The city claims three Dow 30 companies: aerospace giant Boeing, which moved its headquarters from Seattle to the Chicago Loop in 2001; McDonald's; and Walgreens Boots Alliance. For six consecutive years beginning in 2013, Chicago was ranked the nation's top metropolitan area for corporate relocations. Three Fortune 500 companies left Chicago in 2022, leaving the city with 35, still second to New York City.
Manufacturing, printing, publishing, and food processing also play major roles in the city's economy. Several medical products and services companies are headquartered in the Chicago area, including Baxter International, Abbott Laboratories, and the Healthcare division of General Electric. In addition to Boeing, which located its headquarters in Chicago in 2001, and United Airlines in 2011, GE Transportation moved its offices to the city in 2013 and GE Healthcare moved its HQ to the city in 2016, as did ThyssenKrupp North America and agriculture giant Archer Daniels Midland.
Moreover, the construction of the Illinois and Michigan Canal, which helped move goods from the Great Lakes south on the Mississippi River, and of the railroads in the 19th century made the city a major transportation center in the United States. In the 1840s, Chicago became a major grain port, and in the 1850s and 1860s Chicago's pork and beef industry expanded. As the major meat companies grew in Chicago, many, such as Armour and Company, created global enterprises. Although the meatpacking industry currently plays a lesser role in the city's economy, Chicago continues to be a major transportation and distribution center.
Lured by a combination of large business customers, federal research dollars, and a large hiring pool fed by the area's universities, Chicago is also the site of a growing number of web startup companies like CareerBuilder, Orbitz, Basecamp, Groupon, Feedburner, Grubhub and NowSecure.
Prominent food companies based in Chicago include the world headquarters of Conagra, Ferrara Candy Company, Kraft Heinz, McDonald's, Mondelez International, Quaker Oats, and US Foods.
Chicago has been a hub of the retail sector since its early development, with Montgomery Ward, Sears, and Marshall Field's. Today the Chicago metropolitan area is the headquarters of several retailers, including Walgreens, Sears, Ace Hardware, Claire's, ULTA Beauty and Crate & Barrel.
Since the start of the COVID-19 pandemic in 2020, four large companies have left the Chicago area: Boeing, to focus on its defense contracts; Caterpillar and Tyson Foods, to consolidate operations; and Citadel LLC, which cited crime-related factors. Citadel's CEO Ken Griffin, formerly the richest Illinois resident, had been engaged in a three-year feud with Illinois Governor J. B. Pritzker. In 2022, Kellogg's announced that the new spin-off of its snack business would move to the Chicago area, and Google announced a major real estate acquisition and expansion in the Loop.
Late in the 19th century, Chicago was part of the bicycle craze, as home to the Western Wheel Company, which introduced stamping to the production process and significantly reduced costs. Early in the 20th century, the city was part of the automobile revolution, hosting the Brass Era car builder Bugmobile, founded there in 1907. Chicago was also the site of the Schwinn Bicycle Company.
Chicago is a major world convention destination. The city's main convention center is McCormick Place. With its four interconnected buildings, it is the largest convention center in the nation and the third-largest in the world. Chicago also ranks third in the U.S. (behind Las Vegas and Orlando) in the number of conventions hosted annually.
Chicago's minimum wage for non-tipped employees is one of the highest in the nation and reached $15 in 2021.
Culture and contemporary life
The city's waterfront location and nightlife have attracted residents and tourists alike. Over a third of the city's population is concentrated in the lakefront neighborhoods from Rogers Park in the north to South Shore in the south. The city has many upscale dining establishments as well as many ethnic restaurant districts. These districts include the Mexican American neighborhoods, such as Pilsen along 18th Street and La Villita along 26th Street; the Puerto Rican enclave of Paseo Boricua in the Humboldt Park neighborhood; Greektown, along South Halsted Street, immediately west of downtown; Little Italy, along Taylor Street; Chinatown in Armour Square; Polish Patches in West Town; Little Seoul in Albany Park around Lawrence Avenue; Little Vietnam near Broadway in Uptown; and the Desi area along Devon Avenue in West Ridge.
Downtown is the center of Chicago's financial, cultural, governmental and commercial institutions and the site of Grant Park and many of the city's skyscrapers. Many of the city's financial institutions, such as the CBOT and the Federal Reserve Bank of Chicago, are located within a section of downtown called "The Loop", an eight-block by five-block area of city streets encircled by elevated rail tracks. The term "The Loop" is also widely used by locals to refer to the entire downtown area.
The central area includes the Near North Side, the Near South Side, and the Near West Side, as well as the Loop. These areas are home to famous skyscrapers, abundant restaurants, shopping, museums, a stadium for the Chicago Bears, convention facilities, parkland, and beaches. Lincoln Park contains the Lincoln Park Zoo and the Lincoln Park Conservatory. The River North Gallery District features the nation's largest concentration of contemporary art galleries outside of New York City.
Lakeview is home to Boystown, the city's large LGBT nightlife and culture center. The Chicago Pride Parade, held the last Sunday in June, is one of the world's largest, with over a million people in attendance. North Halsted Street is the main thoroughfare of Boystown.
The South Side neighborhood of Hyde Park is the home of former US President Barack Obama. It also contains the University of Chicago, ranked one of the world's top ten universities, and the Museum of Science and Industry. Burnham Park stretches along the waterfront of the South Side. Two of the city's largest parks are also located on this side of the city: Jackson Park, bordering the waterfront, hosted the World's Columbian Exposition in 1893 and is the site of the aforementioned museum; slightly west sits Washington Park. The two parks are connected by a wide strip of parkland called the Midway Plaisance, running adjacent to the University of Chicago. The South Side hosts one of the city's largest parades, the annual African American Bud Billiken Parade and Picnic, which travels through Bronzeville to Washington Park. Ford Motor Company has an automobile assembly plant on the South Side in Hegewisch, and most of the facilities of the Port of Chicago are also on the South Side.
The West Side holds the Garfield Park Conservatory, one of the largest collections of tropical plants in any U.S. city. Prominent Latino cultural attractions found here include Humboldt Park's Institute of Puerto Rican Arts and Culture and the annual Puerto Rican People's Parade, as well as the National Museum of Mexican Art and St. Adalbert's Church in Pilsen. The Near West Side holds the University of Illinois at Chicago and was once home to Oprah Winfrey's Harpo Studios, the site of which has been rebuilt as the global headquarters of McDonald's.
The city's distinctive accent, made famous by its use in classic films like The Blues Brothers and television programs like the Saturday Night Live skit "Bill Swerski's Superfans", is an advanced form of Inland Northern American English. This dialect can also be found in other cities bordering the Great Lakes, such as Cleveland, Milwaukee, Detroit, and Rochester, New York. It most prominently features a rearrangement of certain vowel sounds, such as the short 'a' sound as in "cat", which can sound more like "kyet" to outsiders. The accent remains well associated with the city.
Entertainment and the arts
Renowned Chicago theater companies include the Goodman Theatre in the Loop; the Steppenwolf Theatre Company and Victory Gardens Theater in Lincoln Park; and the Chicago Shakespeare Theater at Navy Pier. Broadway In Chicago offers Broadway-style entertainment at five theaters: the Nederlander Theatre, CIBC Theatre, Cadillac Palace Theatre, Auditorium Building of Roosevelt University, and Broadway Playhouse at Water Tower Place. Polish-language productions for Chicago's large Polish-speaking population can be seen at the historic Gateway Theatre in Jefferson Park.
Since 1968, the Joseph Jefferson Awards have been given annually to acknowledge excellence in theater in the Chicago area. Chicago's theater community spawned modern improvisational theater, and includes the prominent groups The Second City and I.O. (formerly ImprovOlympic).
The Chicago Symphony Orchestra (CSO) performs at Symphony Center and is recognized as one of the best orchestras in the world. Also performing regularly at Symphony Center is the Chicago Sinfonietta, a more diverse and multicultural counterpart to the CSO. In the summer, many outdoor concerts are given in Grant Park and Millennium Park. Ravinia Festival, located north of Chicago, is the summer home of the CSO and a favorite destination for many Chicagoans. The Civic Opera House is home to the Lyric Opera of Chicago. The Lithuanian Opera Company of Chicago was founded by Lithuanian Chicagoans in 1956 and presents operas in Lithuanian.
The Joffrey Ballet and Chicago Festival Ballet perform in various venues, including the Harris Theater in Millennium Park. Chicago has several other contemporary and jazz dance troupes, such as Hubbard Street Dance Chicago and Chicago Dance Crash.
Other live-music genres that are part of the city's cultural heritage include Chicago blues, Chicago soul, jazz, and gospel. The city is the birthplace of house music (a popular form of electronic dance music) and industrial music, and is the site of an influential hip hop scene. In the 1980s and 1990s, the city was the global center for those two home-grown genres, and it was also popular for alternative rock, punk, and new wave. The city has been a center for rave culture since the 1980s. A flourishing independent rock music culture brought forth Chicago indie. Annual festivals feature various acts, such as Lollapalooza and the Pitchfork Music Festival. Lollapalooza originated in Chicago in 1991 and at first travelled to many cities, but since 2005 its home has been Chicago. A 2007 report on the Chicago music industry by the University of Chicago Cultural Policy Center ranked Chicago third among metropolitan U.S. areas in "size of music industry" and fourth among all U.S. cities in "number of concerts and performances".
Chicago has a distinctive fine art tradition. For much of the twentieth century, it nurtured a strong style of figurative surrealism, as in the works of Ivan Albright and Ed Paschke. In 1968 and 1969, members of the Chicago Imagists, such as Roger Brown, Leon Golub, Robert Lostutter, Jim Nutt, and Barbara Rossi, produced bizarre representational paintings. Henry Darger is one of the most celebrated figures of outsider art.
Chicago contains a number of large, outdoor works by well-known artists. These include the Chicago Picasso, We Will by Richard Hunt, Miró's Chicago, Flamingo and Flying Dragon by Alexander Calder, Agora by Magdalena Abakanowicz, Monument with Standing Beast by Jean Dubuffet, Batcolumn by Claes Oldenburg, Cloud Gate by Anish Kapoor, Crown Fountain by Jaume Plensa, and the Four Seasons mosaic by Marc Chagall.
Chicago also hosts the annual Chicago Thanksgiving Parade, broadcast live nationally on WGN-TV and WGN America. The parade features a variety of diverse acts from the community and marching bands from across the country, and is the only parade in the city to feature inflatable balloons every year.
Tourism
Chicago attracted 50.17 million domestic leisure travelers, 11.09 million domestic business travelers and 1.308 million overseas visitors in a recent year. These visitors contributed billions of dollars to Chicago's economy. Upscale shopping along the Magnificent Mile and State Street, thousands of restaurants, and Chicago's eminent architecture continue to draw tourists. The city is the United States' third-largest convention destination. A 2017 study by Walk Score ranked Chicago the sixth-most walkable of the fifty largest cities in the United States. Most conventions are held at McCormick Place, just south of Soldier Field. The historic Chicago Cultural Center (1897), originally serving as the Chicago Public Library, now houses the city's Visitor Information Center, galleries and exhibit halls. The ceiling of its Preston Bradley Hall includes a Tiffany glass dome. Grant Park holds Millennium Park, Buckingham Fountain (1927), and the Art Institute of Chicago. The park also hosts the annual Taste of Chicago festival.
In Millennium Park, the reflective Cloud Gate public sculpture by artist Anish Kapoor is the centerpiece of the AT&T Plaza. An adjacent outdoor restaurant transforms into an ice rink in the winter season. Two tall glass sculptures make up the Crown Fountain; the fountain's two towers display visual effects from LED images of Chicagoans' faces, along with water spouting from their lips. Frank Gehry's detailed, stainless steel band shell, the Jay Pritzker Pavilion, hosts the classical Grant Park Music Festival concert series. Behind the pavilion's stage is the Harris Theater for Music and Dance, an indoor venue for mid-sized performing arts companies, including the Chicago Opera Theater and Music of the Baroque.
Navy Pier, located just east of Streeterville, is 3,300 feet (1,010 m) long and houses retail stores, restaurants, museums, exhibition halls and auditoriums. In the summer of 2016, Navy Pier constructed a DW60 Ferris wheel. Dutch Wheels, a world-renowned company that manufactures Ferris wheels, was selected to design the new wheel. It features 42 navy blue gondolas, each of which can hold up to eight adults and two children, with entertainment systems and a climate-controlled environment inside. The DW60 stands approximately 196 feet (60 m) tall, taller than the previous wheel; it is the first of its model in the United States and the sixth-tallest Ferris wheel in the country. Chicago was the first city in the world to erect a Ferris wheel.
On June 4, 1998, the city officially opened the Museum Campus, a lakefront park surrounding three of the city's main museums, each of which is of national importance: the Adler Planetarium & Astronomy Museum, the Field Museum of Natural History, and the Shedd Aquarium. The Museum Campus joins the southern section of Grant Park, which includes the renowned Art Institute of Chicago. Buckingham Fountain anchors the downtown park along the lakefront. The University of Chicago Oriental Institute has an extensive collection of ancient Egyptian and Near Eastern archaeological artifacts. Other museums and galleries in Chicago include the Chicago History Museum, the Driehaus Museum, the DuSable Museum of African American History, the Museum of Contemporary Art, the Peggy Notebaert Nature Museum, the Polish Museum of America, the Museum of Broadcast Communications, the Pritzker Military Library, the Chicago Architecture Foundation, and the Museum of Science and Industry.
With an estimated completion date of 2020, the Barack Obama Presidential Center will be housed at the University of Chicago in Hyde Park and include both the Obama presidential library and offices of the Obama Foundation.
The Willis Tower (formerly named Sears Tower) is a popular destination for tourists. The Willis Tower has an observation deck, open to tourists year-round, with sweeping views of Chicago and Lake Michigan. The observation deck includes an enclosed glass balcony that extends out from the side of the building, allowing tourists to look straight down.
In 2013, Chicago was chosen as one of the "Top Ten Cities in the United States" to visit for its restaurants, skyscrapers, museums, and waterfront by the readers of Condé Nast Traveler, and in 2020, for the fourth year in a row, Chicago was named the top U.S. city tourism destination.
Cuisine
Chicago lays claim to a large number of regional specialties that reflect the city's ethnic and working-class roots. Included among these is its nationally renowned deep-dish pizza; this style is said to have originated at Pizzeria Uno. The Chicago-style thin crust is also popular in the city. Chicago pizza favorites include Lou Malnati's and Giordano's.
The Chicago-style hot dog, typically an all-beef hot dog, is loaded with an array of toppings that often includes pickle relish, yellow mustard, pickled sport peppers, tomato wedges and a dill pickle spear, topped off with celery salt, on a poppy seed bun. Enthusiasts of the Chicago-style hot dog frown upon the use of ketchup as a garnish, but may prefer to add giardiniera.
A distinctly Chicago sandwich, the Italian beef sandwich is thinly sliced beef simmered in au jus and served on an Italian roll with sweet peppers or spicy giardiniera. A popular modification is the Combo, an Italian beef sandwich with the addition of an Italian sausage. The Maxwell Street Polish is a grilled or deep-fried kielbasa on a hot dog roll, topped with grilled onions, yellow mustard, and hot sport peppers.
Chicken Vesuvio is roasted bone-in chicken cooked in oil and garlic next to garlicky oven-roasted potato wedges and a sprinkling of green peas. The Puerto Rican-influenced jibarito is a sandwich made with flattened, fried green plantains instead of bread. The mother-in-law is a tamale topped with chili and served on a hot dog bun. The tradition of serving the Greek dish saganaki while aflame has its origins in Chicago's Greek community. The appetizer, which consists of a square of fried cheese, is doused with Metaxa and flambéed table-side. Annual festivals feature various Chicago signature dishes, such as Taste of Chicago and the Chicago Food Truck Festival.
One of the world's most decorated restaurants and a recipient of three Michelin stars, Alinea is located in Chicago. Well-known chefs who have had restaurants in Chicago include Charlie Trotter, Rick Tramonto, Grant Achatz, and Rick Bayless. In 2003, Robb Report named Chicago the country's "most exceptional dining destination".
Literature
Chicago literature finds its roots in the city's tradition of lucid, direct journalism, lending to a strong tradition of social realism. In the Encyclopedia of Chicago, Northwestern University Professor Bill Savage describes Chicago fiction as prose which tries to "capture the essence of the city, its spaces and its people". The challenge for early writers was that Chicago was a frontier outpost that transformed into a global metropolis in the span of two generations.
Narrative fiction of that time, much of it in the style of "high-flown romance" and "genteel realism", needed a new approach to describe the urban social, political, and economic conditions of Chicago. Nonetheless, Chicagoans worked hard to create a literary tradition that would stand the test of time, and to create a "city of feeling" out of concrete, steel, vast lake, and open prairie. Much notable Chicago fiction focuses on the city itself, with social criticism keeping exultation in check.
At least three short periods in the history of Chicago have had a lasting influence on American literature: the years from the Great Chicago Fire to about 1900, the Chicago Literary Renaissance of the 1910s and early 1920s, and the period from the Great Depression through the 1940s.
What would become the influential Poetry magazine was founded in 1912 by Harriet Monroe, who was working as an art critic for the Chicago Tribune. The magazine discovered such poets as Gwendolyn Brooks, James Merrill, and John Ashbery. T. S. Eliot's first professionally published poem, "The Love Song of J. Alfred Prufrock", appeared in Poetry. Contributors have included Ezra Pound, William Butler Yeats, William Carlos Williams, Langston Hughes, and Carl Sandburg, among others. The magazine was instrumental in launching the Imagist and Objectivist poetic movements. From the 1950s through the 1970s, American poetry continued to evolve in Chicago. In the 1980s, a modern form of poetry performance, the poetry slam, began in Chicago.
Sports
Sporting News named Chicago the "Best Sports City" in the United States in 1993, 2006, and 2010. Along with Boston, Chicago is one of only two cities to have continuously hosted major professional sports since 1871, having taken only 1872 and 1873 off due to the Great Chicago Fire. Additionally, Chicago is one of eight cities in the United States to have won championships in all four major professional leagues and, along with Los Angeles, New York, Philadelphia and Washington, is one of five cities to have won soccer championships as well. All of its major franchises have won championships within recent years – the Bears (1985), the Bulls (1991, 1992, 1993, 1996, 1997, and 1998), the White Sox (2005), the Cubs (2016), the Blackhawks (2010, 2013, 2015), and the Fire (1998). Chicago has the third-most franchises in the four major North American sports leagues with five, behind the New York and Los Angeles metropolitan areas, and has six top-level professional sports clubs when including Chicago Fire FC of Major League Soccer (MLS).
The city has two Major League Baseball (MLB) teams: the Chicago Cubs of the National League, who play in Wrigley Field on the North Side, and the Chicago White Sox of the American League, who play in Guaranteed Rate Field on the South Side. Chicago is the only city that has had more than one MLB franchise every year since the AL began in 1901 (New York hosted only one between 1958 and early 1962). The two teams have faced each other in a World Series only once: in 1906, when the White Sox, known as the "Hitless Wonders," defeated the Cubs, 4–2.
The Cubs are the oldest Major League Baseball team never to have changed their city; they have played in Chicago since 1871, and continuously since 1874, having suspended play for two seasons after the Great Chicago Fire. They have played more games and have more wins than any other team in Major League Baseball since 1876.
They have won three World Series titles, including the 2016 World Series, but long held the dubious honor of having the two longest droughts in American professional sports: they had not won their sport's title since 1908, and had not participated in a World Series since 1945, both records, until they beat the Cleveland Indians in the 2016 World Series.
The White Sox have played on the South Side continuously since 1901, with all three of their home fields throughout the years being within blocks of one another. They have won three World Series titles (1906, 1917, 2005) and six American League pennants, including the first in 1901. The Sox are fifth in the American League in all-time wins, and sixth in pennants.
The Chicago Bears, one of the last two remaining charter members of the National Football League (NFL), have won nine NFL Championships, including Super Bowl XX following the 1985 season. The other remaining charter franchise, the Chicago Cardinals, also started out in the city but is now known as the Arizona Cardinals. The Bears have won more games in the history of the NFL than any other team, and only the Green Bay Packers, their longtime rivals, have won more championships. The Bears play their home games at Soldier Field, which re-opened in 2003 after an extensive renovation.
The Chicago Bulls of the National Basketball Association (NBA) are one of the most recognized basketball teams in the world. During the 1990s, with Michael Jordan leading them, the Bulls won six NBA championships in eight seasons. They also boast the youngest player to win the NBA Most Valuable Player Award, Derrick Rose, who won it for the 2010–11 season.
The Chicago Blackhawks of the National Hockey League (NHL) began play in 1926 and are one of the "Original Six" teams of the NHL. The Blackhawks have won six Stanley Cups, including in 2010, 2013, and 2015. Both the Bulls and the Blackhawks play at the United Center.
Chicago Fire FC is a member of Major League Soccer (MLS) and plays at Soldier Field. After playing its first eight seasons at Soldier Field, the team moved to suburban Bridgeview to play at SeatGeek Stadium; in 2019, the team announced a move back to Soldier Field. The Fire have won one league title and four U.S. Open Cups since their founding in 1997. In 1994, the United States hosted a successful FIFA World Cup with games played at Soldier Field.
The Chicago Sky is a professional basketball team playing in the Women's National Basketball Association (WNBA). Founded before the 2006 WNBA season began, the team plays its home games at the Wintrust Arena.
The Chicago Marathon has been held each year since 1977, except for 1987, when a half marathon was run in its place. The Chicago Marathon is one of six World Marathon Majors.
Five area colleges play in Division I conferences: two from major conferences – the DePaul Blue Demons (Big East Conference) and the Northwestern Wildcats (Big Ten Conference) – and three from other D1 conferences – the Chicago State Cougars (Western Athletic Conference), the Loyola Ramblers (Missouri Valley Conference), and the UIC Flames (Horizon League).
Chicago has also entered esports with the creation of the Chicago Huntsmen, a professional Call of Duty team that competes in the Call of Duty League (CDL). At the league's Launch Week games in Minneapolis, Minnesota, the Chicago Huntsmen beat both the Dallas Empire and OpTic Gaming Los Angeles.
Parks and greenspace
When Chicago was incorporated in 1837, it chose the motto Urbs in Horto, a Latin phrase which means "City in a Garden". Today, the Chicago Park District consists of more than 570 parks with over 8,000 acres (3,200 ha) of municipal parkland. There are 31 sand beaches, a plethora of museums, two world-class conservatories, and 50 nature areas. Lincoln Park, the largest of the city's parks, covers 1,208 acres (489 ha) and has over 20 million visitors each year, making it third in the number of visitors after Central Park in New York City and the National Mall and Memorial Parks in Washington, D.C.
There is a historic boulevard system, a network of wide, tree-lined boulevards which connect a number of Chicago parks. The boulevards and the parks were authorized by the Illinois legislature in 1869. A number of Chicago neighborhoods emerged along these roadways in the 19th century. The building of the boulevard system continued intermittently until 1942. It includes nineteen boulevards, eight parks, and six squares, along twenty-six miles of interconnected streets. The Chicago Park Boulevard System Historic District was listed on the National Register of Historic Places in 2018.
With berths for more than 6,000 boats, the Chicago Park District operates the nation's largest municipal harbor system. In addition to ongoing beautification and renewal projects for the existing parks, a number of new parks have been added in recent years, such as the Ping Tom Memorial Park in Chinatown, DuSable Park on the Near North Side, and most notably, Millennium Park, which is in the northwestern corner of one of Chicago's oldest parks, Grant Park in the Chicago Loop.
The wealth of greenspace afforded by Chicago's parks is further augmented by the Cook County Forest Preserves, a network of open spaces containing forest, prairie, wetland, streams, and lakes that are set aside as natural areas and lie along the city's outskirts, including both the Chicago Botanic Garden in Glencoe and the Brookfield Zoo in Brookfield. Washington Park is also one of the city's biggest parks, covering nearly 400 acres (160 ha); the park is listed on the National Register of Historic Places.
Law and government
Government
The government of the City of Chicago is divided into executive and legislative branches. The mayor of Chicago is the chief executive, elected by general election for a term of four years, with no term limits. The current mayor is Lori Lightfoot. The mayor appoints commissioners and other officials who oversee the various departments. As well as the mayor, Chicago's clerk and treasurer are also elected citywide. The City Council is the legislative branch and is made up of 50 aldermen, one elected from each ward in the city. The council takes official action through the passage of ordinances and resolutions and approves the city budget. The Chicago Police Department provides law enforcement, and the Chicago Fire Department provides fire suppression and emergency medical services for the city and its residents. Civil and criminal law cases are heard in the Cook County Circuit Court of the State of Illinois court system, or in the Northern District of Illinois, in the federal system. In the state court, the public prosecutor is the Illinois state's attorney; in the federal court it is the United States attorney.
Politics
During much of the last half of the 19th century, Chicago's politics were dominated by a growing Democratic Party organization.
During the 1880s and 1890s, Chicago had a powerful radical tradition, with large and highly organized socialist, anarchist and labor organizations. For much of the 20th century, Chicago has been among the largest and most reliable Democratic strongholds in the United States; with Chicago's Democratic vote, the state of Illinois has been "solid blue" in presidential elections since 1992. Even before then, it was not unheard of for Republican presidential candidates to win handily in downstate Illinois, only to lose statewide due to large Democratic margins in Chicago. The citizens of Chicago have not elected a Republican mayor since 1927, when William Thompson was voted into office. The strength of the party in the city is partly a consequence of Illinois state politics, where the Republicans have come to represent rural and farm concerns while the Democrats support urban issues such as Chicago's public school funding. Chicago contains less than 25% of the state's population, but it is split among eight of Illinois' 19 districts in the United States House of Representatives. All eight of the city's representatives are Democrats; only two Republicans have represented a significant portion of the city since 1973, for one term each: Robert P. Hanrahan from 1973 to 1975, and Michael Patrick Flanagan from 1995 to 1997.
Machine politics persisted in Chicago after the decline of similar machines in other large U.S. cities. During much of that time, the city administration found opposition mainly from a liberal "independent" faction of the Democratic Party. The independents finally gained control of city government in 1983 with the election of Harold Washington (in office 1983–1987). From 1989 until May 16, 2011, Chicago was under the leadership of its longest-serving mayor, Richard M. Daley, the son of Richard J. Daley. Because of the dominance of the Democratic Party in Chicago, the Democratic primary vote held in the spring is generally more significant than the general elections in November for U.S. House and Illinois State seats. The aldermanic, mayoral, and other city offices are filled through nonpartisan elections with runoffs as needed.
The city is home to former United States President Barack Obama and First Lady Michelle Obama; Barack Obama was formerly a state legislator representing Chicago and later a US senator. The Obamas' residence is located near the University of Chicago in Kenwood on the city's South Side.
Crime
Chicago's crime rate in 2020 was 3,926 per 100,000 people. Chicago had a murder rate of 18.5 per 100,000 residents in 2012, ranking 16th among US cities with 100,000 people or more. This was higher than in New York City and Los Angeles, the two largest cities in the United States, which have lower murder rates and lower total homicides. However, it was less than in many smaller American cities, including New Orleans, Newark, and Detroit, although Detroit's rate has fallen substantially in recent years. The 2015 year-end crime statistics showed there were 468 murders in Chicago in 2015, compared with 416 the year before, a 12.5% increase, as well as 2,900 shootings, 13% more than the year prior and up 29% since 2013. Chicago had more homicides than any other city in 2015 in total, but not on a per capita basis, according to the Chicago Tribune. In its annual crime statistics for 2016, the Chicago Police Department reported that the city experienced a dramatic rise in gun violence, with 4,331 shooting victims.
The department also reported 762 murders in Chicago for the year 2016, a total that marked a 62.79% increase in homicides from 2015. In June 2017, the Chicago Police Department and the federal ATF announced a new task force, similar to past task forces, to address the flow of illegal guns and repeat offenses with guns.
According to reports in 2013, "most of Chicago's violent crime comes from gangs trying to maintain control of drug-selling territories", and is specifically related to the activities of the Sinaloa Cartel, which is active in several American cities. By 2006, the cartel sought to control most illicit drug sales. Violent crime rates vary significantly by area of the city: more economically developed areas have low rates, but other sections have much higher rates of crime. In 2013, the violent crime rate was 910 per 100,000 people; the murder rate was 10.4 per 100,000 – high-crime districts saw 38.9 murders per 100,000, while low-crime districts saw 2.5.
The number of murders in Chicago peaked at 970 in 1974, when the city's population was over 3 million people (a murder rate of about 29 per 100,000), and it reached 943 murders in 1992 (a murder rate of 34 per 100,000). However, Chicago, like other major U.S. cities, experienced a significant reduction in violent crime rates through the 1990s, falling to 448 homicides in 2004, its lowest total since 1965 and only 15.65 murders per 100,000. Chicago's homicide tally remained low during 2005 (449), 2006 (452), and 2007 (435), but rose to 510 in 2008, breaking 500 for the first time since 2003. In 2009, the murder count fell to 458 (10% down), and in 2010 it fell further to 435 (16.14 per 100,000), a 5% decrease from 2009 and the lowest level since 1965. In 2011, Chicago's murders fell another 1.2% to 431 (a rate of 15.94 per 100,000), but shot up to 506 in 2012.
In 2012, Chicago ranked 21st in the United States in number of homicides per person, and in the first half of 2013 there was a significant per-person drop in all categories of violent crime, including homicide (down 26%). Chicago ended 2013 with 415 murders, the lowest number since 1965, and overall crime rates dropped by 16 percent. In 2013, the city's murder rate was only slightly higher than the national average as a whole. According to the FBI, St. Louis, New Orleans, Detroit, and Baltimore had the highest murder rates, along with several other cities. Jens Ludwig, director of the University of Chicago Crime Lab, estimated that shootings cost the city of Chicago $2.5 billion in 2012.
Chicago began experiencing a massive surge in carjackings after 2019; at least 1,415 such crimes took place in the city in 2020. According to the Chicago Police Department, carjackers are using face masks that are widely worn due to the ongoing COVID-19 pandemic to effectively blend in with the public and conceal their identity. On January 27, 2021, Mayor Lightfoot described the worsening wave of carjackings as being "top of mind," and added 40 police officers to the CPD carjacking unit.
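The per-100,000 rates quoted in this section follow from a simple calculation. As a minimal illustrative sketch in Python (the population figures are approximations implied by the text, not official census counts):

def rate_per_100k(incidents: int, population: int) -> float:
    # Incidents per 100,000 residents.
    return incidents / population * 100_000

# 1974: 970 murders with a population just over 3 million -> about 29 per 100,000
print(round(rate_per_100k(970, 3_350_000), 1))  # ~29.0

# Conversely, 2010's 435 murders at a reported 16.14 per 100,000
# imply a population of roughly 2.7 million.
print(round(435 / 16.14 * 100_000))  # ~2,695,000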
Employee pensions
In September 2016, an Illinois state appellate court found that cities do not have an obligation under the Illinois Constitution to pay certain benefits if those benefits included an expiration date under the negotiated agreement that covered them. The Illinois Constitution prohibits governments from doing anything that could cause retirement benefits for government workers to be "diminished or impaired." In this particular case, the fact that the workers' agreements had expiration dates let the city of Chicago set an expiration date of 2013 for contributions to health benefits for workers who retired after 1989.
Education
Schools and libraries
Chicago Public Schools (CPS) is the governing body of the school district that contains over 600 public elementary and high schools citywide, including several selective-admission magnet schools. There are eleven selective enrollment high schools in the Chicago Public Schools, designed to meet the needs of Chicago's most academically advanced students. These schools offer a rigorous curriculum with mainly honors and Advanced Placement (AP) courses. Walter Payton College Prep High School is ranked number one in the city of Chicago and the state of Illinois. Northside College Preparatory High School is ranked second, Jones College Prep is third, and the oldest magnet school in the city, Whitney M. Young Magnet High School, which was opened in 1975, is ranked fourth. The magnet school with the largest enrollment is Lane Technical College Prep High School. Lane is one of the oldest schools in Chicago and in 2012 was designated a National Blue Ribbon School by the U.S. Department of Education. Chicago high school rankings are determined by the average test scores on state achievement tests.
The district, with an enrollment exceeding 400,545 students (2013–2014 20th Day Enrollment), is the third-largest in the U.S. On September 10, 2012, teachers for the Chicago Teachers Union went on strike for the first time since 1987 over pay, resources and other issues. According to data compiled in 2014, Chicago's "choice system", in which students who test or apply may attend one of about 130 public high schools, sorts students of different achievement levels into different schools (high-performing, middle-performing, and low-performing schools).
Chicago has a network of Lutheran schools, and several private schools are run by other denominations and faiths, such as the Ida Crown Jewish Academy in West Ridge. Several private schools are completely secular, such as the Latin School of Chicago in the Near North Side neighborhood, the University of Chicago Laboratory Schools in Hyde Park, the British School of Chicago and the Francis W. Parker School in Lincoln Park, the Lycée Français de Chicago in Uptown, the Feltre School in River North and the Morgan Park Academy. There are also the private Chicago Academy for the Arts, a high school focused on six different categories of the arts, and the public Chicago High School for the Arts, a high school focused on five categories of the arts (visual arts, theatre, musical theatre, dance, and music).
The Roman Catholic Archdiocese of Chicago operates Catholic schools, including Jesuit preparatory schools and others such as St. Rita of Cascia High School, De La Salle Institute, Josephinum Academy, DePaul College Prep, Cristo Rey Jesuit High School, Brother Rice High School, St. Ignatius College Preparatory School, Mount Carmel High School, Queen of Peace High School, Mother McAuley Liberal Arts High School, Marist High School, St. Patrick High School and Resurrection High School.
The Chicago Public Library system operates the central library as well as 3 regional libraries and 77 neighborhood branches.
Colleges and universities
Since the 1850s, Chicago has been a world center of higher education and research, with several universities.
These institutions consistently rank among the top "National Universities" in the United States, as determined by U.S. News & World Report. Highly regarded universities in Chicago and the surrounding area are the University of Chicago; Northwestern University; Illinois Institute of Technology; Loyola University Chicago; DePaul University; Columbia College Chicago and the University of Illinois at Chicago. Other notable schools include Chicago State University; the School of the Art Institute of Chicago; East–West University; National Louis University; North Park University; Northeastern Illinois University; Robert Morris University Illinois; Roosevelt University; Saint Xavier University; Rush University; and Shimer College.
William Rainey Harper, the first president of the University of Chicago, was instrumental in the creation of the junior college concept, establishing nearby Joliet Junior College as the first in the nation in 1901. His legacy continues with the multiple community colleges in Chicago proper, including the seven City Colleges of Chicago: Richard J. Daley College, Kennedy–King College, Malcolm X College, Olive–Harvey College, Truman College, Harold Washington College and Wilbur Wright College, in addition to the privately held MacCormac College.
Chicago also has a high concentration of post-baccalaureate institutions, graduate schools, seminaries, and theological schools, such as the Adler School of Professional Psychology, The Chicago School of Professional Psychology, the Erikson Institute, The Institute for Clinical Social Work, the Lutheran School of Theology at Chicago, the Catholic Theological Union, the Moody Bible Institute, the John Marshall Law School and the University of Chicago Divinity School.
Media
Television
The Chicago metropolitan area is the third-largest media market in North America, after New York City and Los Angeles, and a major media hub. Each of the big four U.S. television networks, CBS, ABC, NBC and Fox, directly owns and operates a high-definition television station in Chicago (WBBM 2, WLS 7, WMAQ 5 and WFLD 32, respectively). Former CW affiliate WGN-TV 9, which was owned from its inception by Tribune Broadcasting (owned by the Nexstar Media Group since 2019), was carried, with some programming differences, as "WGN America" on cable and satellite TV nationwide and in parts of the Caribbean; WGN America became NewsNation in 2021. Chicago has also been the home of several prominent talk shows, including The Oprah Winfrey Show, the Steve Harvey Show, The Rosie Show, The Jerry Springer Show, The Phil Donahue Show, The Jenny Jones Show, and more. The city also has one PBS member station, WTTW 11, producer of shows such as Sneak Previews, The Frugal Gourmet, Lamb Chop's Play-Along and The McLaughlin Group; a second station, WYCC 20, removed its affiliation with PBS in 2017. Windy City Live is Chicago's only daytime talk show, hosted by Val Warner and Ryan Chiaverini at ABC7 Studios with a live weekday audience. Since 1999, Judge Mathis has also filmed his syndicated arbitration-based reality court show at the NBC Tower. In January 2019, Newsy began producing 12 of its 14 hours of live news programming per day from its new facility in Chicago.
Newspapers
Two major daily newspapers are published in Chicago: the Chicago Tribune and the Chicago Sun-Times, with the Tribune having the larger circulation.
There are also several regional and special-interest newspapers and magazines, such as Chicago, the Dziennik Związkowy (Polish Daily News), Draugas (the Lithuanian daily newspaper), the Chicago Reader, the SouthtownStar, the Chicago Defender, the Daily Herald, Newcity, StreetWise and the Windy City Times. The entertainment and cultural magazine Time Out Chicago and GRAB magazine are also published in the city, as well as the local music magazine Chicago Innerview. In addition, Chicago is the home of the satirical national news outlet The Onion, as well as its sister pop-culture publication, The A.V. Club.
Movies and filming
Since the 1980s, many motion pictures have been filmed or set in the city, such as The Untouchables, The Blues Brothers, The Matrix, Brewster's Millions, Ferris Bueller's Day Off, Sixteen Candles, Home Alone, The Fugitive, I, Robot, Mean Girls, Wanted, Batman Begins, The Dark Knight, Dhoom 3, Transformers: Dark of the Moon, Transformers: Age of Extinction, Transformers: The Last Knight, Divergent, Man of Steel, Batman v Superman: Dawn of Justice, Sinister 2, Suicide Squad, Justice League, Rampage and The Batman. In The Dark Knight Trilogy and the DC Extended Universe, Chicago was used as the inspiration and filming site for Gotham City and Metropolis, respectively.
Chicago has also been the setting of a number of television shows, including the situation comedies Perfect Strangers and its spinoff Family Matters, Married... with Children, Punky Brewster, Kenan & Kel, Still Standing, The League, The Bob Newhart Show, and Shake It Up. The city served as the venue for the medical dramas ER and Chicago Hope, as well as the fantasy drama series Early Edition and the 2005–2009 drama Prison Break. Discovery Channel films two shows in Chicago: Cook County Jail and the Chicago version of Cash Cab. Other notable shows include CBS's The Good Wife and Mike and Molly. Chicago is currently the setting for Showtime's Shameless and NBC's Chicago Fire, Chicago P.D. and Chicago Med. All three Chicago franchise shows are filmed locally throughout Chicago and maintain strong national viewership, averaging 7 million viewers per show.
Radio
Chicago has five 50,000-watt AM radio stations: the CBS Radio-owned WBBM and WSCR; the Tribune Broadcasting-owned WGN; the Cumulus Media-owned WLS; and the ESPN Radio-owned WMVP. Chicago is also home to a number of national radio shows, including Beyond the Beltway with Bruce DuMont on Sunday evenings. Chicago Public Radio produces nationally aired programs such as PRI's This American Life and NPR's Wait Wait...Don't Tell Me!.
Music
In 2005, indie rock artist Sufjan Stevens created a concept album about Illinois titled Illinois; many of its songs were about Chicago and its history.
Industrial genre
The city was particularly important for the development of the harsh, electronics-based music genre known as industrial. Many of its themes are transgressive and derived from the works of authors such as William S. Burroughs. Although the genre was pioneered by Throbbing Gristle in the United Kingdom in the late 1970s, the Chicago-based record label Wax Trax! later established itself as America's home for the genre. The label first found success with Ministry, whose "Cold Life" single entered the US Dance charts in 1982. The label later signed many prominent industrial acts, most notably My Life with the Thrill Kill Kult, KMFDM, Front Line Assembly and Front 242.
Richard Giraldi of the Chicago Sun-Times remarked on the significance of the label, writing, "As important as Chess Records was to blues and soul music, Chicago's Wax Trax imprint was just as significant to the punk rock, new wave and industrial genres."
Video games
Chicago is also featured in a few video games, including Watch Dogs and Midtown Madness, a car-driving simulation game. Chicago is home to NetherRealm Studios, the developers of the Mortal Kombat series.
Infrastructure
Transportation
Chicago is a major transportation hub in the United States. It is an important component in global distribution, as it is the third-largest inter-modal port in the world after Hong Kong and Singapore.
The city of Chicago has a higher than average percentage of households without a car. In 2015, 26.5 percent of Chicago households were without a car, a figure that increased slightly to 27.5 percent in 2016; the national average was 8.7 percent in 2016. Chicago averaged 1.12 cars per household in 2016, compared to a national average of 1.8.
Expressways
Seven mainline and four auxiliary interstate highways (55, 57, 65 (only in Indiana), 80 (also in Indiana), 88, 90 (also in Indiana), 94 (also in Indiana), 190, 290, 294, and 355) run through Chicago and its suburbs. Segments that link to the city center are named after influential politicians, with three of them named after former U.S. Presidents (Eisenhower, Kennedy, and Reagan) and one named after two-time Democratic candidate Adlai Stevenson. The Kennedy and Dan Ryan Expressways are the busiest state-maintained routes in the entire state of Illinois.
Transit systems
The Regional Transportation Authority (RTA) coordinates the operation of the three service boards: the CTA, Metra, and Pace. The Chicago Transit Authority (CTA) handles public transportation in the City of Chicago and a few adjacent suburbs outside of the Chicago city limits. The CTA operates an extensive network of buses and a rapid transit elevated and subway system known as the Chicago "L" or just "L" (short for "elevated"), with lines designated by colors. These rapid transit lines also serve both Midway and O'Hare Airports. The CTA's rail lines consist of the Red, Blue, Green, Orange, Brown, Purple, Pink, and Yellow lines. Both the Red and Blue lines offer 24-hour service, which makes Chicago one of a handful of cities around the world (and one of two in the United States, the other being New York City) to offer rail service 24 hours a day, every day of the year, within the city's limits.
Metra, the nation's second-most-used passenger regional rail network, operates an 11-line commuter rail service in Chicago and throughout the Chicago suburbs. The Metra Electric Line shares its trackage with the Northern Indiana Commuter Transportation District's South Shore Line, which provides commuter service between South Bend and Chicago. Pace provides bus and paratransit service in over 200 surrounding suburbs, with some extensions into the city as well. A 2005 study found that one quarter of commuters used public transit.
Greyhound Lines provides inter-city bus service to and from the city, and Chicago is also the hub for the Midwest network of Megabus (North America).
Passenger rail
Amtrak long-distance and commuter rail services originate from Union Station. Chicago is one of the largest hubs of passenger rail service in the nation. The services terminate in the San Francisco area, Washington, D.C., New York City, New Orleans, Portland, Seattle, Milwaukee, Quincy, St.
Louis, Carbondale, Boston, Grand Rapids, Port Huron, Pontiac, Los Angeles, and San Antonio. Future services will terminate at Rockford and Moline. An attempt was made in the early 20th century to link Chicago with New York City via the Chicago – New York Electric Air Line Railroad. Parts of this line were built, but it was never completed.
Bicycle and scooter sharing systems
In July 2013, the bicycle-sharing system Divvy was launched with 750 bikes and 75 docking stations. It is operated by Lyft for the Chicago Department of Transportation. As of July 2019, Divvy operated 5,800 bicycles at 608 stations, covering almost all of the city, excluding Pullman, Roseland, Beverly, Belmont Cragin and Edison Park.
In May 2019, the City of Chicago announced its Electric Shared Scooter Pilot Program, scheduled to run from June 15 to October 15. The program started on June 15 with 10 different scooter companies, including scooter-sharing market leaders Bird, Jump, Lime and Lyft. Each company was allowed to bring 250 electric scooters, although both Bird and Lime claimed that they experienced a higher demand for their scooters. The program ended on October 15, with nearly 800,000 rides taken.
Freight rail
Chicago is the largest hub in the railroad industry. Six of the seven Class I railroads meet in Chicago, the exception being the Kansas City Southern Railway. Severe freight train congestion has at times caused trains to take as long to get through the Chicago region as it takes them to get there from the West Coast of the country (about 2 days). According to the U.S. Department of Transportation, the volume of imported and exported goods transported via rail to, from, or through Chicago is forecast to increase nearly 150 percent between 2010 and 2040. CREATE, the Chicago Region Environmental and Transportation Efficiency Program, comprises about 70 projects, including crossovers, overpasses and underpasses, that are intended to significantly improve the speed of freight movements in the Chicago area.
Airports
Chicago is served by O'Hare International Airport, the world's busiest airport measured by airline operations, on the far Northwest Side, and Midway International Airport on the Southwest Side. In 2005, O'Hare was the world's busiest airport by aircraft movements and the second-busiest by total passenger traffic. Both O'Hare and Midway are owned and operated by the City of Chicago. Gary/Chicago International Airport and Chicago Rockford International Airport, located in Gary, Indiana and Rockford, Illinois, respectively, can serve as alternative Chicago-area airports; however, they do not offer as many commercial flights as O'Hare and Midway. In recent years the state of Illinois has been leaning towards building an entirely new airport in the Illinois suburbs of Chicago. The City of Chicago is the world headquarters for United Airlines, the world's third-largest airline.
Port authority
The Port of Chicago consists of several major port facilities within the city of Chicago operated by the Illinois International Port District (formerly known as the Chicago Regional Port District). The central element of the Port District, Calumet Harbor, is maintained by the U.S. Army Corps of Engineers.
Iroquois Landing Lakefront Terminal: at the mouth of the Calumet River, it includes warehouses and facilities on Lake Michigan with extensive storage space.
Lake Calumet terminal: located at the union of the Grand Calumet River and the Little Calumet River, inland from Lake Michigan.
It includes three transit sheds adjacent to over 900 linear meters (3,000 linear feet) of ship and barge berthing, along with grain (14 million bushels) and bulk liquid (800,000 barrels) storage facilities along Lake Calumet. The Illinois International Port District also operates Foreign Trade Zone No. 22, which extends outward from Chicago's city limits. Utilities Electricity for most of northern Illinois is provided by Commonwealth Edison, also known as ComEd. Its service territory stretches from Iroquois County in the south to the Wisconsin border in the north, and from the Iowa border in the west to the Indiana border in the east. Illinois, much of which is served by ComEd (a division of Exelon), has the greatest number of nuclear generating plants of any US state. Because of this, ComEd reports indicate that Chicago receives about 75% of its electricity from nuclear power. Recently, the city began installing wind turbines on government buildings to promote renewable energy. Natural gas is provided by Peoples Gas, a subsidiary of Integrys Energy Group, which is headquartered in Chicago. Domestic and industrial waste was once incinerated but it is now landfilled, mainly in the Calumet area. From 1995 to 2008, the city had a blue bag program to divert recyclable refuse from landfills. Because of low participation in the blue bag programs, the city began a pilot program for blue bin recycling like those in other cities. This proved successful and blue bins were rolled out across the city. Health systems The Illinois Medical District is on the Near West Side. It includes Rush University Medical Center, ranked as the second-best hospital in the Chicago metropolitan area by U.S. News & World Report for 2014–16, the University of Illinois Medical Center at Chicago, Jesse Brown VA Hospital, and John H. Stroger Jr. Hospital of Cook County, one of the busiest trauma centers in the nation. Two of the country's premier academic medical centers are in Chicago: Northwestern Memorial Hospital and the University of Chicago Medical Center. The Chicago campus of Northwestern University includes the Feinberg School of Medicine; Northwestern Memorial Hospital, which is ranked as the best hospital in the Chicago metropolitan area by U.S. News & World Report for 2017–18; the Shirley Ryan AbilityLab (formerly named the Rehabilitation Institute of Chicago), which is ranked the best U.S. rehabilitation hospital by U.S. News & World Report; the new Prentice Women's Hospital; and Ann & Robert H. Lurie Children's Hospital of Chicago. The University of Illinois College of Medicine at UIC is the second largest medical school in the United States (2,600 students including those at campuses in Peoria, Rockford and Urbana–Champaign). In addition, the Chicago Medical School and Loyola University Chicago's Stritch School of Medicine are located in the suburbs of North Chicago and Maywood, respectively. The Midwestern University Chicago College of Osteopathic Medicine is in Downers Grove. The American Medical Association, Accreditation Council for Graduate Medical Education, Accreditation Council for Continuing Medical Education, American Osteopathic Association, American Dental Association, Academy of General Dentistry, Academy of Nutrition and Dietetics, American Association of Nurse Anesthetists, American College of Surgeons, American Society for Clinical Pathology, American College of Healthcare Executives, the American Hospital Association and Blue Cross and Blue Shield Association are all based in Chicago.
Sister cities Chicago has 28 sister cities around the world. Many of them are the principal city of a country that has had large numbers of immigrants settle in Chicago. These relationships have sought to promote economic, cultural, educational, and other ties. To celebrate the sister cities, Chicago hosts a yearly festival in Daley Plaza, which features cultural acts and food tastings from the other cities. In addition, the Chicago Sister Cities program hosts a number of delegation and formal exchanges. In some cases, these exchanges have led to further informal collaborations, such as the academic relationship between the Buehler Center on Aging, Health & Society at the Feinberg School of Medicine of Northwestern University and the Institute of Gerontology of Ukraine (originally of the Soviet Union), which was established as part of the Chicago-Kyiv sister cities program. Sister cities Warsaw (Poland) 1960 Milan (Italy) 1973 Osaka (Japan) 1973 Casablanca (Morocco) 1982 Shanghai (China) 1985 Shenyang (China) 1985 Gothenburg (Sweden) 1987 Accra (Ghana) 1989 Prague (Czech Republic) 1990 Kyiv (Ukraine) 1991 Mexico City (Mexico) 1991 Toronto (Canada) 1991 Birmingham (United Kingdom) 1993 Vilnius (Lithuania) 1993 Hamburg (Germany) 1994 Petah Tikva (Israel) 1994 Paris (France) 1996 (friendship and cooperation agreement only) Athens (Greece) 1997 Durban (South Africa) 1997 Galway (Ireland) 1997 Moscow (Russia) 1997 (Suspended) Lucerne (Switzerland) 1998 Delhi (India) 2001 Amman (Jordan) 2004 Belgrade (Serbia) 2005 São Paulo (Brazil) 2007 Lahore (Pakistan) 2007 Busan (South Korea) 2007 Bogotá (Colombia) 2009 City of Sydney (Australia) February 21, 2019 (The City of Sydney considers the City of Chicago a "friendship city", while the City of Chicago considers the City of Sydney a "sister city.") See also Chicago area water quality Chicago Wilderness Gentrification of Chicago List of cities with the most skyscrapers List of people from Chicago List of fiction set in Chicago National Register of Historic Places listings in Central Chicago National Register of Historic Places listings in North Side Chicago National Register of Historic Places listings in West Side Chicago Explanatory notes Citations General and cited references External links Choose Chicago—Official tourism website Chicago History Maps of Chicago from the American Geographical Society Library Chicago – LocalWiki Local Chicago Wiki 1833 establishments in Illinois Populated places established in 1833 Articles containing video clips Cities in Cook County, Illinois Cities in DuPage County, Illinois Cities in the Chicago metropolitan area Cities in Illinois County seats in Illinois Inland port cities and towns of the United States Populated places established in the 1780s Illinois populated places on Lake Michigan Railway towns in Illinois Majority-minority cities and towns in Cook County, Illinois Majority-minority cities and towns in DuPage County, Illinois
https://en.wikipedia.org/wiki/Cyrix%206x86
Cyrix 6x86
The Cyrix 6x86 is a line of sixth-generation, 32-bit x86 microprocessors designed and released by Cyrix in 1995. Cyrix, being a fabless company, had the chips manufactured by IBM and SGS-Thomson. The 6x86 was made as a direct competitor to Intel's Pentium microprocessor line, with which it was pin-compatible. During the 6x86's development, the majority of applications (office software as well as games) performed almost entirely integer operations. The designers foresaw that future applications would most likely maintain this instruction focus. So, to optimize the chip's performance for what they believed to be the most likely application of the CPU, the integer execution resources received most of the transistor budget. This would later prove to be a strategic mistake, as the popularity of the P5 Pentium caused many software developers to hand-optimize code in assembly language to take advantage of the P5 Pentium's tightly pipelined and lower-latency FPU. For example, the highly anticipated first-person shooter Quake used highly optimized assembly code designed almost entirely around the P5 Pentium's FPU. As a result, the P5 Pentium significantly outperformed other CPUs in the game. History The 6x86, previously known under the codename "M1", was announced by Cyrix in October 1995. On release only the 100 MHz (P120+) version was available; a 120 MHz (P150+) version was planned to follow, with a 133 MHz (P166+) model later. The 100 MHz (P120+) 6x86 was available to OEMs for a price of $450 per chip in bulk quantities. In mid-February 1996, Cyrix announced that the P166+, P150+, and P133+ would be added to the 6x86 model line. IBM, which produced the chips, also announced that it would be selling its own versions of them. The 6x86 P200+ was planned for the end of 1996, and ended up being released that June. The M2 (6x86MX) was first announced to be in development in mid-1996. It would have MMX support and 32-bit optimizations. The M2 would also have some of the same features as the Intel Pentium Pro, such as register renaming, out-of-order completion, and speculative execution. Additionally, it would have 64 KB of cache, up from the 16 KB of the original 6x86 and the Pentium Pro. In March 1997, when asked about when the M2 line of processors would begin shipping, Cyrix UK managing director Brendan Sherry stated, "I've read it's going to be May but we've said late Q2 all along and I'm pretty sure we'll make that." The 6x86L was first released in January 1997 to address the heat issues with the original 6x86 line. The 6x86L had a lower core voltage and required a split-powerplane voltage regulator. In April 1997 the first laptop to use the 6x86 processor went on sale. It was sold by TigerDirect and had a 12.1-inch DSTN display, 16 MB of memory, a 10x CD-ROM drive, and a 1.3 GB hard disk drive, at a base price of $1,899. On May 27, 1997, Cyrix said it would announce details of the new chip line (the 6x86MX) the day before Computex in June 1997. For the low end of the series, the PR166 6x86MX was available for $190, with higher-end PR200 and PR233 versions available for $240 and $320, respectively. IBM, being the producer of Cyrix's chips, would also sell its own versions. Cyrix hoped to ship tens of thousands within June 1997, with up to 1 million by the end of the year. Cyrix also expected to release a 266 MHz chip by the end of 1997 and a 300 MHz chip in the first quarter of 1998. The new chips had slightly better floating-point performance, which cut add and multiply times by a third, but the FPU was still slower than the Intel Pentium's.
The M2 also had the full MMX instruction set, 64 KB of cache (up from the original 16 KB), and a lower core voltage of 2.5 V, down from the 3.3 V of the original 6x86 line. National Semiconductor acquired Cyrix in July 1997. National Semiconductor was not interested in high-performance processors but rather in system-on-a-chip devices, and wanted to shift the focus of Cyrix to the MediaGX line. In January 1998 National Semiconductor produced a 6x86MX processor on a 0.25 micron process technology. This reduced the chip size from 150 square millimeters to 88. National shifted its production of the MII and MediaGX to 0.25 micron by August. In September 1998, National Semiconductor announced the end of IBM's licensing partnership with Cyrix. This was because National wanted to increase production of Cyrix chips in its own facilities, and because having IBM produce Cyrix's chips was causing problems such as lost profits, with IBM frequently pricing its versions of Cyrix's chips lower. National would pay $50–55 million to IBM to end the partnership, which concluded the following April. National would then move chip production to its own facility in South Portland, Maine. The Cyrix MII was released in May 1998. These chips were not as exciting as people had hoped, as they were just a rebranding of the 6x86MX. In December these chips cost $80 for an MII-333, $59 for an MII-300, $55 for an MII-266, and $48 for an MII-233. In May 1999 National Semiconductor decided to leave the PC chip market due to significant losses, and put the Cyrix CPU division up for sale. VIA bought the Cyrix line in June 1999 and ended the development of high-performance processors. The MII-433GP would be the last processor produced by Cyrix. Additionally, after VIA's acquisition, the 6x86/L was discontinued, but the 6x86MX/MII line continued to be sold by VIA. VIA would continue to produce the MII throughout the early 2000s. It was expected to be discontinued when the VIA Cyrix III was released. However, the MII was still available for sale until mid/late 2003, being shown on VIA's website as a product until October, and it still saw use in devices such as network computers. Architecture The 6x86 is superscalar and superpipelined and performs register renaming, speculative execution, out-of-order execution, and data dependency removal. However, it continued to use native x86 execution and ordinary microcode only, like Centaur's WinChip, unlike competitors Intel and AMD, which introduced dynamic translation to micro-operations with the Pentium Pro and K5. The 6x86 is socket-compatible with the Intel P54C Pentium and was offered in six performance levels: PR 90+, PR 120+, PR 133+, PR 150+, PR 166+ and PR 200+. These performance levels do not map to the clock speed of the chip itself (for example, a PR 133+ ran at 110 MHz, a PR 166+ ran at 133 MHz, etc.). With regard to internal caches, it has a 16 KB primary cache, and a fully associative 256-byte instruction line cache alongside it that functions as the primary instruction cache. The 6x86 and 6x86L were not completely compatible with the Intel P5 Pentium instruction set and were not multi-processor capable. For this reason, the chip identified itself as an 80486 and disabled the CPUID instruction by default. CPUID support could be enabled by first enabling the extended CCR registers and then setting bit 7 in CCR4.
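That enable sequence can be illustrated in code. The following is a minimal sketch, not a definitive implementation: it assumes a Linux host with the <sys/io.h> port helpers, and it uses the conventional Cyrix register access scheme (register index written to I/O port 0x22, data read or written at port 0x23, CCR3 at index 0xC3, CCR4 at index 0xE8, and a MAPEN field in the upper nibble of CCR3 that unlocks the extended registers). Those port numbers and indices come from commonly circulated Cyrix documentation, not from this article, so treat them as assumptions; only the "enable extended CCRs, then set bit 7 in CCR4" step is stated above.

```c
#include <sys/io.h>   /* outb, inb, iopl (Linux-specific; compile with -O2) */

/* Cyrix configuration registers: write the register index to port 0x22,
   then access the data byte at port 0x23 (one index write per access). */
static unsigned char get_ccr(unsigned char reg)
{
    outb(reg, 0x22);
    return inb(0x23);
}

static void set_ccr(unsigned char reg, unsigned char val)
{
    outb(reg, 0x22);
    outb(val, 0x23);
}

int main(void)
{
    if (iopl(3) != 0)                        /* raw port access needs root */
        return 1;

    unsigned char ccr3 = get_ccr(0xC3);      /* save CCR3 */
    set_ccr(0xC3, (ccr3 & 0x0F) | 0x10);     /* MAPEN = 1: unlock extended CCRs (assumed layout) */
    set_ccr(0xE8, get_ccr(0xE8) | 0x80);     /* CCR4 bit 7: enable CPUID, per the text above */
    set_ccr(0xC3, ccr3);                     /* restore CCR3, re-hiding the extended registers */
    return 0;
}
```

After a sequence like this, the CPUID instruction becomes available and the processor no longer has to be probed as a generic 80486.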
The lack of full P5 Pentium compatibility caused problems with some applications because programmers had begun to use P5 Pentium-specific instructions. Some companies released patches for their products to make them function on the 6x86. Compatibility with the Pentium was improved in the 6x86MX by adding a Time Stamp Counter to support the P5 Pentium's RDTSC instruction. Support for the Pentium Pro's CMOVcc instructions was also added. Performance Similarly to AMD with its K5 and early K6 processors, Cyrix used a PR rating (Performance Rating) to relate its chips' performance to the Intel P5 Pentium (pre-P55C), as the 6x86's higher per-clock performance relative to a P5 Pentium could be quantified against a higher-clocked Pentium part. For example, a 133 MHz 6x86 will match or outperform a P5 Pentium at 166 MHz, and as a result Cyrix could market the 133 MHz chip as being a P5 Pentium 166's equal. However, the PR rating was not an entirely truthful representation of the 6x86's performance. While the 6x86's integer performance was significantly higher than the P5 Pentium's, its floating-point performance was more mediocre: between 2 and 4 times the performance of the 486 FPU per clock cycle (depending on the operation and precision). The FPU in the 6x86 was largely the same circuitry that was developed for Cyrix's earlier high-performance 8087/80287/80387-compatible coprocessors, which was very fast for its time; the Cyrix FPU was much faster than the 80387, and even the 80486 FPU. However, it was still considerably slower than the new and completely redesigned P5 Pentium and P6 Pentium Pro-Pentium III FPUs. One of the main features of the P5/P6 FPUs is that they supported interleaving of FPU and integer instructions, which the Cyrix design did not provide. This caused very poor performance with Cyrix CPUs in games and software that took advantage of such interleaving. Therefore, despite being very fast clock for clock, the 6x86 and MII were forced to compete at the low end of the market, as the AMD K6 and Intel P6 Pentium II were always ahead on clock speed. The 6x86's and MII's old-generation "486-class" floating-point unit, combined with an integer section that was at best on par with the newer P6 and K6 chips, meant that Cyrix could no longer compete in performance. Models and variants 6x86 The 6x86 (codename M1) was released by Cyrix in 1996. The first generation of 6x86 had heat problems. This was primarily caused by their higher heat output than other x86 CPUs of the day; as such, computer builders sometimes did not equip them with adequate cooling. The CPUs topped out at around 25 W heat output (like the AMD K6), whereas the P5 Pentium produced around 15 W of waste heat at its peak. However, both numbers were a fraction of the heat generated by many high-performance processors some years later. Shortly after the original M1, the M1R was released. The M1R was a switch from the SGS-Thomson 3M process to the IBM 5M process, making the 6x86 chips 50% smaller. 6x86L The 6x86L (codename M1L) was later released by Cyrix to address heat issues, the L standing for low-power. Improved manufacturing technologies permitted usage of a lower Vcore. Just like the Pentium MMX, the 6x86L required a split-powerplane voltage regulator with separate voltages for I/O and CPU core.
6x86MX / MII Another release of the 6x86, the 6x86MX, added MMX compatibility along with the EMMI instruction set, improved compatibility with the Pentium and Pentium Pro by adding a Time Stamp Counter and CMOVcc instructions respectively, and quadrupled the primary cache size to 64 KB. The 256-byte instruction line cache can be turned into a scratchpad cache to provide support for multimedia operations. Later revisions of this chip were renamed MII, to better compete with the Pentium II processor. Unfortunately, the 6x86MX / MII was late to market and could not scale well in clock speed with the manufacturing processes used at the time. Model table Timeline See also Competitors Pentium, Original Pentium II AMD K5 AMD K6 Rise mP6 WinChip References Further reading Gwennap, Linley (October 25, 1993). "Cyrix Describes Pentium Competitor". Microprocessor Report. Gwennap, Linley (December 5, 1994). "Cyrix M1 Design Tapes Out". Microprocessor Report. Gwennap, Linley (June 2, 1997). "Cyrix 6x86MX Outperforms AMD K6". Microprocessor Report. Slater, Michael (February 12, 1996). "Cyrix, IBM Push 6x86 to 133 MHz". Microprocessor Report. Slater, Michael (October 28, 1996). "Cyrix Doubles x86 Performance with M2". Microprocessor Report. External links Cyrix 6x86 ("M1") at PCGuide cpu-collection.de Cyrix 6x86 processor images and descriptions Paul Hsieh's 6th Generation x86 CPU Comparison in-depth analysis of 6th generation x86 CPUs, including the 6x86MX. Cyrix M1 stats at Sandpile.org Cyrix Datasheets 6x86 (M1/M1R) Guide 6x86 (M1/M1R) Technical Brief 6x86 (MX) Guide 6x86 (MX) Technical Brief 6x86 (MII) Technical Brief 686 Superscalar microprocessors X86 microarchitectures
https://en.wikipedia.org/wiki/Colon%20classification
Colon classification
Colon classification (CC) is a library classification system developed by Shiyali Ramamrita Ranganathan. It was an early faceted (or analytico-synthetic) classification system. The first edition of colon classification was published in 1933, followed by six more editions. It is especially used in libraries in India. Its name originates from its use of colons to separate facets into classes. Many other classification schemes, some of which are unrelated, also use colons and other punctuation to perform various functions. Originally, CC used only the colon as a separator, but since the second edition, CC has used four other punctuation symbols to identify each facet type. In CC, facets describe "personality" (the most specific subject), matter, energy, space, and time (PMEST). These facets are generally associated with every item in a library, and thus form a reasonably universal sorting system. As an example, the subject "research in the cure of tuberculosis of lungs by x-ray conducted in India in 1950" would be categorized as: Medicine,Lungs;Tuberculosis:Treatment;X-ray:Research.India'1950. This is summarized in a specific call number: L,45;421:6;253:f.44'N5 Organization The colon classification system uses 42 main classes that are combined with other letters, numbers, and marks in a manner resembling the Library of Congress Classification. Facets CC uses five primary categories, or facets, to specify the sorting of a publication. Collectively, they are called PMEST: personality, introduced by a comma (,); matter, by a semicolon (;); energy, by a colon (:); space, by a period (.); and time, by an apostrophe ('). Other symbols can be used to indicate components of facets called isolates, and to specify complex combinations or relationships between disciplines. Classes The following are the main classes of CC, with some subclasses, the main method used to sort the subclass using the PMEST scheme and examples showing application of PMEST. z Generalia 1 Universe of Knowledge 2 Library Science 3 Book science 4 Journalism A Natural science B Mathematics B2 Algebra C Physics D Engineering E Chemistry F Technology G Biology H Geology HX Mining I Botany J Agriculture J1 Horticulture J2 Feed J3 Food J4 Stimulant J5 Oil J6 Drug J7 Fabric J8 Dye K Zoology KZ Animal Husbandry L Medicine LZ3 Pharmacology LZ5 Pharmacopoeia M Useful arts M7 Textiles [material]:[work] Δ Spiritual experience and mysticism [religion],[entity]:[problem] N Fine arts ND Sculpture NN Engraving NQ Painting NR Music O Literature P Linguistics Q Religion R Philosophy S Psychology T Education U Geography V History W Political science X Economics Y Sociology YZ Social Work Z Law Example A common example of the colon classification is: "Research in the cure of the tuberculosis of lungs by x-ray conducted in India in 1950s": The main classification is Medicine (L); Within Medicine, the Lungs are the main concern (45); The property of the Lungs is that they are afflicted with Tuberculosis (421); An action (:) is performed on the Tuberculosis, that is, the intent is to cure it (Treatment, 6); The matter with which we are treating the Tuberculosis is X-Rays (253); And this discussion of treatment is regarding the Research phase (f); This Research is performed within a geographical space (.) namely India (44); During the time (') of 1950 (N5); And finally, translating into the codes listed for each subject and facet, the classification becomes L,45;421:6;253:f.44'N5 (a short sketch at the end of this article shows the same synthesis in code). See also Bliss bibliographic classification Faceted classification Subject (documents) Universal Decimal Classification References Further reading Colon Classification (6th Edition) by Shiyali Ramamrita Ranganathan, published by Ess Ess Publications, Delhi, India Chan, Lois Mai. Cataloging and Classification: An Introduction. 2nd ed.
New York: McGraw-Hill, c. 1994. Knowledge representation Library cataloging and classification
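To make the facet synthesis in the worked example above concrete, here is a minimal sketch that assembles the example's call number from its PMEST chain. It is illustrative only: the struct and function names are hypothetical, and real colon classification synthesis involves rounds, levels, and devices that a simple symbol-plus-code join does not model.

```c
#include <stdio.h>

/* One facet step in the chain: the connecting symbol and the isolate code. */
struct facet { char symbol; const char *code; };

/* Build a call number by appending each facet's symbol and code to the
   main class notation. */
static void cc_call_number(char *out, size_t n, const char *main_class,
                           const struct facet *f, size_t count)
{
    size_t len = (size_t)snprintf(out, n, "%s", main_class);
    for (size_t i = 0; i < count && len < n; i++)
        len += (size_t)snprintf(out + len, n - len, "%c%s",
                                f[i].symbol, f[i].code);
}

int main(void)
{
    /* The worked example: Medicine (L), Lungs (45), Tuberculosis (421),
       Treatment (6), X-ray (253), Research (f), India (44), 1950 (N5). */
    struct facet chain[] = {
        { ',',  "45"  },   /* personality: lungs   */
        { ';',  "421" },   /* matter: tuberculosis */
        { ':',  "6"   },   /* energy: treatment    */
        { ';',  "253" },   /* matter: x-ray        */
        { ':',  "f"   },   /* energy: research     */
        { '.',  "44"  },   /* space: India         */
        { '\'', "N5"  },   /* time: 1950           */
    };
    char buf[64];
    cc_call_number(buf, sizeof buf, "L", chain, sizeof chain / sizeof chain[0]);
    printf("%s\n", buf);   /* prints L,45;421:6;253:f.44'N5 */
    return 0;
}
```

Running the sketch prints L,45;421:6;253:f.44'N5, the call number given in the example.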
https://en.wikipedia.org/wiki/Census
Census
A census is the procedure of systematically acquiring, recording and calculating information about the members of a given population. This term is used mostly in connection with national population and housing censuses; other common censuses include censuses of agriculture, traditional culture, business, supplies, and traffic. The United Nations (UN) defines the essential features of population and housing censuses as "individual enumeration, universality within a defined territory, simultaneity and defined periodicity", and recommends that population censuses be taken at least every ten years. UN recommendations also cover census topics to be collected, official definitions, classifications and other useful information to co-ordinate international practices. The UN's Food and Agriculture Organization (FAO), in turn, defines the census of agriculture as "a statistical operation for collecting, processing and disseminating data on the structure of agriculture, covering the whole or a significant part of a country." "In a census of agriculture, data are collected at the holding level." The word is of Latin origin: during the Roman Republic, the census was a list that kept track of all adult males fit for military service. The modern census is essential to international comparisons of any kind of statistics, and censuses collect data on many attributes of a population, not just how many people there are. Censuses typically began as the only method of collecting national demographic data and are now part of a larger system of different surveys. Although population estimates remain an important function of a census, including the exact geographic distribution of the population or the agricultural population, statistics can be produced about combinations of attributes, e.g., education by age and sex in different regions. Current administrative data systems allow for other approaches to enumeration with the same level of detail but raise concerns about privacy and the possibility of biasing estimates. A census can be contrasted with sampling, in which information is obtained only from a subset of a population; typically, main population estimates are updated by such intercensal estimates. Modern census data are commonly used for research, business marketing, and planning, and as a baseline for designing sample surveys by providing a sampling frame such as an address register. Census counts are necessary to adjust samples to be representative of a population by weighting them, as is common in opinion polling. Similarly, stratification requires knowledge of the relative sizes of different population strata, which can be derived from census enumerations. In some countries, the census provides the official counts used to apportion the number of elected representatives to regions (sometimes controversially – e.g., Utah v. Evans). In many cases, a carefully chosen random sample can provide more accurate information than attempts to get a population census. Sampling A census is often construed as the opposite of a sample, as its intent is to count everyone in a population rather than a fraction. However, population censuses do rely on a sampling frame to count the population. This is the only way to be sure that everyone has been included, as otherwise those not responding would not be followed up on and individuals could be missed. The fundamental premise of a census is that the population is not known, and a new estimate is to be made by the analysis of primary data.
The use of a sampling frame is counterintuitive as it suggests that the population size is already known. However, a census is also used to collect attribute data on the individuals in the nation, not only to assess population size. This process of sampling marks the difference between a historical census, which was a house-to-house process or the product of an imperial decree, and the modern statistical project. The sampling frame used by a census is almost always an address register. Thus, it is not known if there is anyone resident or how many people there are in each household. Depending on the mode of enumeration, a form is sent to the householder, an enumerator calls, or administrative records for the dwelling are accessed. As a preliminary to the dispatch of forms, census workers will check any address problems on the ground. While it may seem straightforward to use the postal service file for this purpose, this can be out of date and some dwellings may contain a number of independent households. A particular problem is what are termed "communal establishments", a category that includes student residences, religious orders, homes for the elderly, people in prisons etc. As these are not easily enumerated by a single householder, they are often treated differently and visited by special teams of census workers to ensure they are classified appropriately. Residence definitions Individuals are normally counted within households, and information is typically collected about the household structure and the housing. For this reason, international documents refer to censuses of population and housing. Normally the census response is made by a household, indicating details of individuals resident there. An important aspect of census enumerations is determining which individuals can be counted and which cannot be counted. Broadly, three definitions can be used: de facto residence; de jure residence; and permanent residence. This is important in considering individuals who have multiple or temporary addresses. Every person should be identified uniquely as resident in one place; but the place where they happen to be on Census Day, their de facto residence, may not be the best place to count them. Where an individual uses services may be more useful, and this is at their usual residence. An individual may be recorded at a "permanent" address, which might be a family home for students or long-term migrants. A precise definition of residence is needed, to decide whether visitors to a country should be included in the population count. This is becoming more important as students travel abroad for education for a period of several years. Other groups causing problems of enumeration are new-born babies, refugees, people away on holiday, people moving home around census day, and people without a fixed address. People with second homes because they are working in another part of the country or have a holiday cottage are difficult to fix at a particular address; this sometimes causes double counting or houses being mistakenly identified as vacant. Another problem is where people use a different address at different times e.g. students living at their place of education in term time but returning to a family home during vacations, or children whose parents have separated who effectively have two family homes. 
Census enumeration has always been based on finding people where they live, as there is no systematic alternative: any list used to find people is likely to be derived from census activities in the first place. Recent UN guidelines provide recommendations on enumerating such complex households. In the census of agriculture, data are collected at the agricultural holding unit. An agricultural holding is an economic unit of agricultural production under single management comprising all livestock kept and all land used wholly or partly for agricultural production purposes, without regard to title, legal form, or size. Single management may be exercised by an individual or household, jointly by two or more individuals or households, by a clan or tribe, or by a juridical person such as a corporation, cooperative or government agency. The holding's land may consist of one or more parcels, located in one or more separate areas or in one or more territorial or administrative divisions, providing the parcels share the same production means, such as labour, farm buildings, machinery or draught animals. Enumeration strategies Historical censuses used crude enumeration assuming absolute accuracy. Modern approaches take into account the problems of overcount and undercount, and the coherence of census enumerations with other official sources of data. This reflects a realist approach to measurement, acknowledging that under any definition of residence there is a true value of the population, but this can never be measured with complete accuracy. An important aspect of the census process is to evaluate the quality of the data. Many countries use a post-enumeration survey to adjust the raw census counts. This works in a similar manner to capture-recapture estimation for animal populations. Among census experts this method is called dual system enumeration (DSE). A sample of households is visited by interviewers who record the details of the household as at census day. These data are then matched to census records, and the number of people missed can be estimated by considering the numbers of people who are included in one count but not the other. In the simplest case, if the census counts n1 people, the survey counts n2, and m people are matched across both lists, the total population is estimated as n1 × n2 / m, the classic Lincoln–Petersen capture-recapture estimator. This allows adjustments to the count for non-response, varying between different demographic groups. An explanation using a fishing analogy can be found in "Trout, Catfish and Roach...", which won an award from the Royal Statistical Society for excellence in official statistics in 2011. Triple system enumeration has been proposed as an improvement, as it would allow evaluation of the statistical dependence of pairs of sources. However, as the matching process is the most difficult aspect of census estimation, this has never been implemented for a national enumeration. It would also be difficult to identify three different sources that were sufficiently different to make the triple system effort worthwhile. The DSE approach has another weakness in that it assumes there is no person counted twice (overcount). Under de facto residence definitions this would not be a problem, but under de jure definitions individuals risk being recorded on more than one form, leading to double counting. A particular problem here is students, who often have both a term-time and a family address. Several countries have used a system known as short form/long form. This is a sampling strategy that randomly chooses a proportion of people to send a more detailed questionnaire to (the long form). Everyone receives the short form questions.
This means more data are collected, but without imposing a burden on the whole population. This also reduces the burden on the statistical office. Indeed, in the UK until 2001, all residents were required to fill in the whole form, but only a 10% sample were coded and analysed in detail. New technology means that all data are now scanned and processed. During the 2011 Canadian census there was controversy about the cessation of the mandatory long-form census; the head of Statistics Canada, Munir Sheikh, resigned upon the federal government's decision to do so. The use of alternative enumeration strategies is increasing, but these are not as simple as many people assume, and they are only used in developed countries. The Netherlands has been most advanced in adopting a census using administrative data. This allows a simulated census to be conducted by linking several different administrative databases at an agreed time. Data can be matched, and an overall enumeration established, allowing for discrepancies between different data sources. A validation survey is still conducted in a similar way to the post-enumeration survey employed in a traditional census. Other countries that have a population register use this as a basis for all the census statistics needed by users. This is most common among Nordic countries, but requires many distinct registers to be combined, including population, housing, employment and education. These registers are then combined and brought up to the standard of a statistical register by comparing the data in different sources and ensuring the quality is sufficient for official statistics to be produced. A recent innovation is France's introduction of a rolling census programme, with different regions enumerated each year, so that the whole country is completely enumerated every 5 to 10 years. In Europe, in connection with the 2010 census round, many countries adopted alternative census methodologies, often based on the combination of data from registers, surveys and other sources. Technology Censuses have evolved in their use of technology: censuses in 2010 used many new types of computing. In Brazil, handheld devices were used by enumerators to locate residences on the ground. In many countries, census returns could be made via the Internet as well as in paper form. DSE is facilitated by computer matching techniques that can be automated, such as propensity score matching. In the UK, all census formats are scanned and stored electronically before being destroyed, replacing the need for physical archives. The record linking needed to perform an administrative census would not be possible without large databases being stored on computer systems. There are sometimes problems in introducing new technology. The US census had been intended to use handheld computers, but costs escalated and this was abandoned, with the contract being sold to Brazil. Online response has some advantages, but one of the functions of the census is to make sure everyone is counted accurately. A system that allowed people to enter their address without verification would be open to abuse. Therefore, households have to be verified on the ground, typically by an enumerator visit or post out. Paper forms are still necessary for those without access to the internet. It is also possible that the hidden nature of an administrative census means that users are not engaged with the importance of contributing their data to official statistics.
Alternatively, population estimations may be carried out remotely with geographic information system (GIS) and remote sensing technologies. Development According to the United Nations Population Fund (UNFPA), "The information generated by a population and housing census – numbers of people, their distribution, their living conditions and other key data – is critical for development." This is because this type of data is essential for policymakers so that they know where to invest. Unfortunately, many countries have outdated or inaccurate data about their populations and thus have difficulty in addressing the needs of the population. The UNFPA said: "The unique advantage of the census is that it represents the entire statistical universe, down to the smallest geographical units, of a country or region. Planners need this information for all kinds of development work, including: assessing demographic trends; analysing socio-economic conditions; designing evidence-based poverty-reduction strategies; monitoring and evaluating the effectiveness of policies; and tracking progress toward national and internationally agreed development goals." In addition to making policymakers aware of population issues, the census is also an important tool for identifying forms of social, demographic or economic exclusion, such as inequalities relating to race, ethnicity, and religion, as well as disadvantaged groups such as those with disabilities and the poor. An accurate census can empower local communities by providing them with the necessary information to participate in local decision-making and ensuring they are represented. The importance of the census of agriculture for development is that it gives a snapshot of the structure of the agricultural sector in a country and, when compared with previous censuses, provides an opportunity to identify trends and structural transformations of the sector, and points towards areas for policy intervention. Census data are used as a benchmark for current statistics and their value is increased when they are employed together with other data sources. Uses of census data Early censuses in the 19th and 20th centuries collected paper documents which had to be collated by hand, so the statistical information obtained was quite basic. The government that owned the data could publish statistics on the state of the nation. The results were used to measure changes in the population and apportion representation. Population estimates could be compared to those of other countries. By the beginning of the 20th century, censuses were recording households and some indications of their employment. In some countries, census archives are released for public examination after many decades, allowing genealogists to track the ancestry of interested people. Archives provide a substantial historical record which may challenge established views. Information such as job titles and arrangements for the destitute and sick may also shed light on the historical structure of society. Political considerations influence the census in many countries. In Canada in 2010, for example, the government under the leadership of Stephen Harper abolished the mandatory long-form census. This abolition was a response to protests from some Canadians who resented the personal questions. The long-form census was reinstated by the Justin Trudeau government in 2016.
Census data and research As governments assumed responsibility for schooling and welfare, large government research departments made extensive use of census data. Population projections could be made, to help plan for provision in local government and regions. Central government could also use census data to allocate funding. Even in the mid 20th century, census data was only directly accessible to large government departments. However, computers meant that tabulations could be used directly by university researchers, large businesses and local government offices. They could use the detail of the data to answer new questions and add to local and specialist knowledge. Nowadays, census data are published in a wide variety of formats to be accessible to business, all levels of government, media, students and teachers, charities, and any citizen who is interested; researchers in particular have an interest in the role of Census Field Officers (CFO) and their assistants. Data can be represented visually or analysed in complex statistical models, to show the difference between certain areas, or to understand the association between different personal characteristics. Census data offer a unique insight into small areas and small demographic groups which sample data would be unable to capture with precision. In the census of agriculture, users need census data to: support and contribute to evidence-based agricultural planning and policy-making. The census information is essential, for example, to monitor the performance of a policy or programme designed for crop diversification or to address food security issues; provide data to facilitate research, investment and business decisions both in the public and private sector; contribute to monitoring environmental changes and evaluating the impact of agricultural practices on the environment such as tillage practices, crop rotation or sources of greenhouse gas (GHG) emissions; provide relevant data on work inputs and main work activities, as well as on the labour force in the agriculture sector; provide an important information base for monitoring some key indicators of the Sustainable Development Goals (SDGs), in particular those goals related to food security in agricultural holdings, the role of women in agricultural activities and rural poverty; provide baseline data both at the national and small administrative and geographical levels for formulating, monitoring and evaluating programmes and projects interventions; provide essential information on subsistence agriculture and for the estimation of the non-observed economy, which plays an important role in the compilation of the national accounts and the economic accounts for agriculture. Privacy and data stewardship Although the census provides useful statistical information about a population, the availability of this information could sometimes lead to abuses, political or otherwise, by the linking of individuals' identities to anonymous census data. This is particularly important when individuals' census responses are made available in microdata form, but even aggregate-level data can result in privacy breaches when dealing with small areas and/or rare subpopulations. For instance, when reporting data from a large city, it might be appropriate to give the average income for black males aged between 50 and 60. 
However, doing this for a town that only has two black males in this age group would be a breach of privacy because either of those persons, knowing his own income and the reported average, could determine the other man's income. Typically, census data are processed to obscure such individual information. Some agencies do this by intentionally introducing small statistical errors to prevent the identification of individuals in marginal populations; others swap variables for similar respondents. Whatever is done to reduce the privacy risk, new and improved electronic analysis of data can threaten to reveal sensitive individual information; the discipline of protecting against such disclosure is known as statistical disclosure control. Another possibility is to present survey results by means of statistical models in the form of a multivariate distribution mixture. The statistical information in the form of conditional distributions (histograms) can be derived interactively from the estimated mixture model without any further access to the original database. As the final product does not contain any protected microdata, the model-based interactive software can be distributed without any confidentiality concerns. Another method is simply to release no data at all, except very large scale data directly to the central government. Differing release strategies of governments have led to an international project (IPUMS) to co-ordinate access to microdata and corresponding metadata. Projects such as SDMX also promote standardising metadata, so that best use can be made of the minimal data available. History of censuses Egypt Censuses in Egypt first appeared in the late Middle Kingdom and developed in the New Kingdom. Pharaoh Amasis, according to Herodotus, required every Egyptian to declare annually to the nomarch, "whence he gained his living". Under the Ptolemies and the Romans, several censuses were conducted in Egypt by government officials. Ancient Greece There are several accounts of ancient Greek city states carrying out censuses. Israel Censuses are mentioned in the Bible. God commands a per capita tax to be paid with the census for the upkeep of the Tabernacle. The Book of Numbers is named after the counting of the Israelite population according to the house of the Fathers after the exodus from Egypt. A second census was taken while the Israelites were camped in the "plains of Moab". King David performed a census that produced disastrous results. His son, King Solomon, had all of the foreigners in Israel counted. When the Romans took over Judea in AD 6, the legate Publius Sulpicius Quirinius organised a census for tax purposes. The Gospel of Luke links the birth of Jesus either to this event, or to an otherwise unknown census conducted prior to Quirinius' tenure. China One of the world's earliest preserved censuses was held in China in AD 2 during the Han dynasty, and is still considered by scholars to be quite accurate. The population was registered as having 57,671,400 individuals in 12,366,470 households, but on this occasion only taxable families had been taken into account, indicating the income and the number of soldiers who could be mobilized. Another census was held in AD 144. India The oldest recorded census in India is thought to have occurred around 330 BC during the reign of Emperor Chandragupta Maurya under the leadership of Chanakya and Ashoka. Rome The English term is taken directly from the Latin census, from censere ("to estimate").
The census played a crucial role in the administration of the Roman government, as it was used to determine the class a citizen belonged to for both military and tax purposes. Beginning in the middle republic, it was usually carried out every five years. It provided a register of citizens and their property from which their duties and privileges could be listed. It is said to have been instituted by the Roman king Servius Tullius in the 6th century BC, at which time the number of arms-bearing citizens was supposedly counted at around 80,000. The AD 6 "census of Quirinius", undertaken following the imposition of direct Roman rule in Judea, was partially responsible for the development of the Zealot movement and several failed rebellions against Rome that ended in the Diaspora. The 15-year indiction cycle established by Diocletian in AD 297 was based on quindecennial censuses and formed the basis for dating in late antiquity and under the Byzantine Empire. Rashidun and Umayyad Caliphates In the Middle Ages, the Caliphate began conducting regular censuses soon after its formation, beginning with the one ordered by the second Rashidun caliph, Umar. Medieval Europe The Domesday Book was undertaken in AD 1086 by William I of England so that he could properly tax the land he had recently conquered. In 1183, a census was taken of the crusader Kingdom of Jerusalem, to ascertain the number of men and amount of money that could possibly be raised against an invasion by Saladin, sultan of Egypt and Syria. 1328: First national census of France (L'État des paroisses et des feux), mostly for fiscal purposes. It estimated the French population at 16 to 17 million. Inca Empire In the 15th century, the Inca Empire had a unique way to record census information. The Incas did not have any written language but recorded information collected during censuses and other numeric information as well as non-numeric data on quipus, strings from llama or alpaca hair or cotton cords with numeric and other values encoded by knots in a base-10 positional system. Spanish Empire On May 25, 1577, King Philip II of Spain ordered by royal cédula the preparation of a general description of Spain's holdings in the Indies. Instructions and a questionnaire, issued in 1577 by the Office of the Cronista Mayor, were distributed to local officials in the Viceroyalties of New Spain and Peru to direct the gathering of information. The questionnaire, composed of fifty items, was designed to elicit basic information about the nature of the land and the life of its peoples. The replies, known as "relaciones geográficas", were written between 1579 and 1585 and were returned to the Cronista Mayor in Spain by the Council of the Indies. World population estimates The earliest estimate of the world population was made by Giovanni Battista Riccioli in 1661; the next by Johann Peter Süssmilch in 1741, revised in 1762; the third by Karl Friedrich Wilhelm Dieterici in 1859. In 1931, Walter Willcox published a table in his book, International Migrations: Volume II Interpretations, that estimated the 1929 world population to be roughly 1.8 billion. Impact of COVID-19 on census Impact The UNFPA predicts that the COVID-19 pandemic will threaten the successful conduct of censuses of population and housing in many countries through delays, interruptions that compromise quality, or complete cancellation of census projects. Domestic and donor financing for census may be diverted to address COVID-19, leaving census without crucial funds.
Several countries have already taken decisions to postpone the census, with many others yet to announce the way forward; in some countries, diversion of census funding is already happening. The pandemic has also affected the planning and implementation of censuses of agriculture in all world regions. The extent of the impact has varied according to the stage each census is at, ranging from planning (i.e. staffing, procurement, preparation of frames, questionnaires) to fieldwork (field training and enumeration) to data processing/analysis. The census of agriculture's reference period is the agricultural year. Thus, a delay in any census activity may be critical and can result in a full-year postponement of the enumeration if the agricultural season is missed. Some publications have discussed the impact of COVID-19 on national censuses of agriculture. Adaptation The United Nations Population Fund (UNFPA) has requested a global effort to assure that even where census is delayed, census planning and preparations are not cancelled, but continue in order to assure that implementation can proceed safely when the pandemic is under control. While new census methods, including online, register-based, and hybrid approaches, are being used across the world, these demand extensive planning and preconditions that cannot be created at short notice. The continuing low supply of personal protective equipment to protect against COVID-19 has immediate implications for conducting census in communities at risk of transmission. The UNFPA Procurement Office is partnering with other agencies to explore new supply chains and resources. Modern implementation See also List of national and international statistical services Sources Notes References Alterman, Hyman, (1969). Counting People: The Census in History. Harcourt, Brace & Company. Behrisch, Lars. (2016) "Statistics and Politics in the 18th Century." Historical Social Research/Historische Sozialforschung (2016): 238–57. Bielenstein, Hans, (1978). "Wang Mang, the restoration of the Han dynasty, and Later Han." In The Cambridge History of China, vol. 1, eds. Denis Twitchett and John K. Fairbank, pp. 223–90, Cambridge: Cambridge University Press. Krüger, Stephen, (Fall 1991). "The Decennial Census", 19 Western State University Law Review 1; available at HeinOnline. Effects of UK 'Jedi' hoax on 2001 UK census from ONS. U.S. Census Press Release on 1930 Census. U.S. Census Press Release on Soundex and WPA. External links Census of Ireland 1911. Online Historical Population Reports Project (OHPR). PR as a function of census management: comparative analysis of fifteen census experiences Population Survey methodology Sampling (statistics) Latin words and phrases
https://en.wikipedia.org/wiki/Cotswolds
Cotswolds
The Cotswolds is a region in central-southwest England, along a range of rolling hills that rise from the meadows of the upper Thames to an escarpment above the Severn Valley and Evesham Vale. The area is defined by the bedrock of Jurassic limestone that creates a type of grassland habitat rare in the UK and that is quarried for the golden-coloured Cotswold stone. The predominantly rural landscape contains stone-built villages, towns, and stately homes and gardens featuring the local stone. Designated as an Area of Outstanding Natural Beauty (AONB) in 1966, the Cotswolds is the largest AONB in England and Wales. It is the third largest protected landscape in England after the Lake District and Yorkshire Dales national parks. Its boundaries stretch southwest from just south of Stratford-upon-Avon to just south of Bath near Radstock. It lies across the boundaries of several English counties; mainly Gloucestershire and Oxfordshire, and parts of Wiltshire, Somerset, Worcestershire, and Warwickshire. The highest point of the region is Cleeve Hill, just east of Cheltenham. The hills give their name to the Cotswold local government district, formed on 1 April 1974, which is within the county of Gloucestershire. Its main town is Cirencester, where the Cotswold District Council offices are located. The population of the District was about 83,000 in 2011. The much larger area referred to as the Cotswolds encompasses parts of five counties: Gloucestershire, Oxfordshire, Warwickshire, Wiltshire, and Worcestershire. The population of the Area of Outstanding Natural Beauty was 139,000 in 2016. History The largest excavation of Jurassic-era echinoderm fossils, including rare and previously unknown species, occurred at a quarry in the Cotswolds in 2021. There is evidence of Neolithic settlement from burial chambers on Cotswold Edge, and there are remains of Bronze and Iron Age forts. Later the Romans built villas, such as at Chedworth, and settlements such as Gloucester, and paved the Celtic path later known as the Fosse Way. During the Middle Ages, thanks to the breed of sheep known as the Cotswold Lion, the Cotswolds became prosperous from the wool trade with the continent, with much of the money made from wool directed towards the building of churches. The most successful era for the wool trade was 1250–1350; much of the wool at that time was sold to Italian merchants. The area still preserves numerous large, handsome Cotswold Stone "wool churches". The affluent area in the 21st century has attracted wealthy Londoners and others who own second homes there or have chosen to retire to the Cotswolds. Etymology The name Cotswold is popularly attributed the meaning "sheep enclosure in rolling hillsides", incorporating the term wold, meaning hills. Compare also the Weald, from the Saxon/German word Wald meaning 'forest'. However, the English Place-Name Society has for many years accepted that the term Cotswold is derived from Codesuualt of the 12th century or other variations on this form, the etymology of which was given as 'Cod's-wold', which is 'Cod's high open land'. Cod was interpreted as an Old English personal name, which may be recognised in further names: Cutsdean, Codeswellan, and Codesbyrig, some of which date back to the eighth century AD. It has subsequently been noticed that "Cod" could derive philologically from a Brittonic female cognate "Cuda", a hypothetical mother goddess in Celtic mythology postulated to have been worshipped in the Cotswold region.
Geography The spine of the Cotswolds runs southwest to northeast through six counties, particularly Gloucestershire, west Oxfordshire and southwestern Warwickshire. The northern and western edges of the Cotswolds are marked by steep escarpments down to the Severn valley and the Warwickshire Avon. This feature, known as the Cotswold escarpment, or sometimes the Cotswold Edge, is a result of the uplifting (tilting) of the limestone layer, exposing its broken edge. This is a cuesta, in geological terms. The dip slope is to the southeast. On the eastern boundary lies the city of Oxford and on the west is Stroud. To the southeast, the upper reaches of the Thames Valley and towns such as Lechlade, Tetbury, and Fairford are often considered to mark the limit of this region. To the south the Cotswolds, with the characteristic uplift of the Cotswold Edge, reach beyond Bath, and towns such as Chipping Sodbury and Marshfield share elements of Cotswold character. The area is characterised by attractive small towns and villages built of the underlying Cotswold stone (a yellow oolitic limestone). This limestone is rich in fossils, particularly of fossilised sea urchins. Cotswold towns include Bourton-on-the-Water, Broadway, Burford, Chalford, Chipping Campden, Chipping Norton, Cricklade, Dursley, Malmesbury, Minchinhampton, Moreton-in-Marsh, Nailsworth, Northleach, Painswick, Stow-on-the-Wold, Stroud, Tetbury, Witney, Winchcombe and Wotton-under-Edge. In addition, much of Box lies in the Cotswolds. Bath, Cheltenham, Cirencester, Gloucester, Stroud, and Swindon are larger urban centres that border on, or are virtually surrounded by, the Cotswold AONB. The town of Chipping Campden is notable for being the home of the Arts and Crafts movement, founded by William Morris at the end of the 19th and beginning of the 20th centuries. William Morris lived occasionally in Broadway Tower, a folly, now part of a country park. Chipping Campden is also known for the annual Cotswold Olimpick Games, a celebration of sports and games dating back to the early 17th century. Roughly eighty percent of the Cotswolds is farmland, crossed by many miles of footpaths and bridleways and divided by historic stone walls. Economy A 2017 report on employment within the Area of Outstanding Natural Beauty stated that the main sources of income were real estate, renting and business activities, manufacturing, and wholesale and retail trade and repairs. Some 44% of residents were employed in these sectors. Agriculture is also important. Some 86% of the land in the AONB is used for this purpose. The primary crops include barley, beans, rapeseed and wheat, while the raising of sheep is also important; cows and pigs are also reared. The livestock sector has been declining since 2002, however. According to the 2011 Census data for the Cotswolds, the wholesale and retail trade was the largest employer (15.8% of the workforce), followed by education (9.7%) and health and social work (9.3%). The report also indicates that a relatively higher proportion of residents were working in agriculture, forestry and fishing, accommodation and food services as well as in professional, scientific and technical activities. Unemployment in the Cotswold District was among the lowest in the country. A report in August 2017 showed only 315 unemployed persons, a slight decrease of five from a year earlier. Tourism Tourism is a significant part of the economy.
In 2016, the Cotswold District area alone gained over £373 million from visitor spending on accommodation, £157 million on local attractions and entertainments, and about £100 million on travel. In the larger Cotswolds Tourism area, including Stroud, Cheltenham, Gloucester and Tewkesbury, tourism generated about £1 billion in 2016, providing 200,000 jobs. Some 38 million day visits were made to the Cotswolds Tourism area that year. Many travel guides direct tourists to Chipping Campden, Stow-on-the-Wold, Bourton-on-the-Water, Broadway, Bibury, and Stanton. Some of these locations can be very crowded at times: roughly 300,000 people visit Bourton per year, for example, with about half staying for a day or less. The area also has numerous public walking trails and footpaths that attract visitors, including the Cotswold Way (a National Trail) from Bath to Chipping Campden. Housing development In August 2018, the final decision was made for a Local Plan that would lead to the building of nearly 7,000 additional homes by 2031, in addition to over 3,000 already built. Areas for development include Cirencester, Bourton-on-the-Water, Down Ampney, Fairford, Kemble, Lechlade, Northleach, South Cerney, Stow-on-the-Wold, Tetbury and Moreton-in-Marsh. Some of the money received from developers will be earmarked for new infrastructure to support the increasing population. Cotswold stone Cotswold stone is a yellow oolitic Jurassic limestone, rich in fossils, particularly of fossilised sea urchins. When weathered, the colour of buildings made or faced with this stone is often described as honey or golden. The stone varies in colour from north to south, being honey-coloured in the north and north east of the region, as shown in Cotswold villages such as Stanton and Broadway; golden-coloured in the central and southern areas, as shown in Dursley and Cirencester; and pearly white in Bath. The rock outcrops at places on the Cotswold Edge; small quarries are common. The exposures are rarely sufficiently compact to be good for rock-climbing, but an exception is Castle Rock, on Cleeve Hill, near Cheltenham. Because of the rapid expansion of the Cotswolds as nearby areas seek to capitalize on increased house prices, well-known ironstone villages such as Hook Norton have even been claimed by some to be in the Cotswolds, despite lacking key features of Cotswold villages such as Cotswold stone; they are instead built using a deep red/orange ironstone, known locally as Hornton Stone. In his 1934 book English Journey, J. B. Priestley made this comment about Cotswold buildings made of the local stone: "The truth is that it has no colour that can be described. Even when the sun is obscured and the light is cold, these walls are still faintly warm and luminous, as if they knew the trick of keeping the lost sunlight of centuries glimmering about them." Area of Outstanding Natural Beauty The Cotswolds were designated as an Area of Outstanding Natural Beauty (AONB) in 1966, with an expansion on 21 December 1990. In 1991, all AONBs were measured again using modern methods, and the official area of the Cotswolds AONB was increased. In 2000, the government confirmed that AONBs have the same landscape quality and status as National Parks.
The Cotswolds AONB, which is the largest in England and Wales, stretches from the border regions of South Warwickshire and Worcestershire, through West Oxfordshire and Gloucestershire, and takes in parts of Wiltshire and of Bath and North East Somerset in the south. Gloucestershire County Council is responsible for sixty-three percent of the AONB. The Cotswolds Conservation Board has the task of conserving and enhancing the AONB. Established under statute in 2004 as an independent public body, the Board carries out a range of work, from securing funding for 'on the ground' conservation projects to providing a strategic overview of the area for key decision makers, such as planning officials. The Board is funded by Natural England and the seventeen local authorities that are covered by the AONB. The Cotswolds AONB Management Plan 2018–2023 was adopted by the Board in September 2018. The landscape of the AONB is varied, including escarpment outliers, escarpments, rolling hills and valleys, enclosed limestone valleys, settled valleys, ironstone hills and valleys, high wolds and high wold valleys, high wold dip-slopes, dip-slope lowland and valleys, a low limestone plateau, cornbrash lowlands, farmed slopes, a broad floodplain valley, a large pastoral lowland vale, a settled unwooded vale, and an unwooded vale. While the beauty of the Cotswolds AONB is intertwined with that of the villages that seem almost to grow out of the landscape, the Cotswolds were primarily designated an Area of Outstanding Natural Beauty for the rare limestone grassland habitats as well as the old growth beech woodlands that typify the area. These habitat areas are also the last refuge for many other flora and fauna, with some so endangered that they are protected under the Wildlife and Countryside Act 1981. Cleeve Hill, and its associated commons, is a fine example of a limestone grassland, and it is one of the few locations where the Duke of Burgundy butterfly may still be found in abundance. A June 2018 report stated that the AONB receives "23 million visitors a year, the third largest of any protected landscape". Earlier that year, Environment Secretary Michael Gove announced that a panel would be formed to consider making some of the AONBs into National Parks; the review was due to report in 2019. In April 2018, the Cotswolds Conservation Board had written to Natural England "requesting that consideration be given to making the Cotswolds a National Park", according to Liz Eyre, Chairman. This has led to some concern, as stated by one member of the Cotswold District Council: "National Park designation is a significant step further and raises the prospect of key decision making powers being taken away from democratically elected councillors". In other words, Cotswold District Council would no longer have the authority to grant and refuse housing applications. The uniqueness and value of the Cotswolds is shown by the fact that five European Special Areas of Conservation, three national nature reserves and more than 80 Sites of Special Scientific Interest lie within the Cotswolds AONB. The Cotswold Voluntary Wardens Service was established in 1968 to help conserve and enhance the area, and now has more than 300 wardens. The Cotswold Way is a long-distance footpath running the length of the AONB, mainly on the edge of the Cotswold escarpment, with views over the Severn Valley and the Vale of Evesham.
In September 2020, the Cotswolds AONB rebranded itself as the "Cotswolds National Landscape", adopting the proposed replacement name for "Areas of Outstanding Natural Beauty". Places of interest Sudeley Castle at Winchcombe, with its noted garden, was built in the 15th century and may stand on the site of a 12th-century castle. It lies north of the spa town of Cheltenham, which has much Georgian architecture of some merit. Further south, towards Tetbury, is the ancient fortress known as Beverston Castle, founded in 1229 by Maurice de Gaunt. In the same area is Calcot Manor, a manor house with origins in about 1300 as a tithe barn. Tetbury Market House was built in 1655; during the Middle Ages, Tetbury became an important market for Cotswold wool and yarn. Chavenage House is an Elizabethan-era manor house northwest of Tetbury. Chedworth Roman Villa, where several mosaic floors are on display, is near the Roman road known as the Fosse Way, north of the important town of Corinium Dobunnorum (Cirencester). Cirencester Abbey was founded as an Augustinian monastery in 1117, and Malmesbury Abbey was one of the few English houses with a continual history from the 7th century through to the Dissolution of the Monasteries. An unusual house in this area is Quarwood, a Victorian Gothic house in Stow-on-the-Wold whose grounds include parkland, fish ponds, paddocks, garages, woodlands and seven cottages. Another is Woodchester Mansion, an unfinished Gothic revival mansion house in Woodchester Park near Nympsfield. Newark Park is a Grade I listed country house of Tudor origins near the village of Ozleworth, Wotton-under-Edge; the house sits in an estate at the southern end of the Cotswold escarpment. Another of the many manor houses in the area, Owlpen Manor in the village of Owlpen in the Stroud district, is also Tudor and also Grade I listed. Further north, Broadway Tower is a folly on Broadway Hill, near the village of Broadway, Worcestershire. To the south of the Cotswolds is Corsham Court, a country house in a park designed by Capability Brown in the town of Corsham, west of Chippenham, Wiltshire. Top attractions According to users of the worldwide TripAdvisor travel site, in 2018 the following were among the best attractions in the Cotswolds:
Walks With Hawks, Cheltenham
Cotswolds Distillery, Stourton
Cotswold Falconry Centre, Moreton-in-Marsh
Mechanical Music Museum, Northleach
Chavenage House, Tetbury
Tewkesbury Abbey, Tewkesbury
Gloucestershire Warwickshire Steam Railway, Cheltenham
Gloucester Cathedral, Gloucester
The Royal Gardens at Highgrove, Tetbury
Jet Age Museum, Gloucester
Cotswold Wildlife Park, Burford
Hook Norton Brewery, Hook Norton
Transport The Cotswolds lie between the M5, M40 and M4 motorways. The main A-roads through the area are: the A46: Bath – Stroud – Cheltenham – Evesham; the A419: Swindon – Cirencester – Stroud; the A417: Lechlade – Cirencester – Gloucester; the A429: Malmesbury – Cirencester – Stow-on-the-Wold – Moreton-in-Marsh; the A44: Chipping Norton – Moreton-in-Marsh – Evesham; and the A40: Oxford – Burford – Cheltenham – Gloucester. These all roughly follow the routes of ancient roads, some laid down by the Romans, such as Ermin Way and the Fosse Way. There are local bus services across the area, but some are infrequent. The River Thames flows from the Cotswolds and is navigable from Inglesham and Lechlade-on-Thames downstream to Oxford. West of Inglesham,
the Thames and Severn Canal and the Stroudwater Navigation connected the Thames to the River Severn; this route is mostly disused nowadays, but several parts are in the process of being restored. Railways The area is bounded by two major rail routes: in the south by the main Bristol–Bath–London line (including the South Wales main line) and in the west by the Bristol–Birmingham main line. In addition, the Cotswold line runs through the Cotswolds from Oxford to Worcester, and the Golden Valley line runs across the hills from Swindon via Stroud to Gloucester, carrying fast and local services. Mainline rail services to the big cities run from railway stations such as Bath, Swindon, Oxford, Cheltenham, and Worcester. Mainline trains run by Great Western Railway to London Paddington are also available from Kemble station near Cirencester, Kingham station near Stow-on-the-Wold, Charlbury station, and Moreton-in-Marsh station. Additionally, there is the Gloucestershire Warwickshire Railway, a steam heritage railway over part of the closed Stratford–Cheltenham line, running from Cheltenham Racecourse through Gotherington, Winchcombe, and Hayles Abbey Halt to Toddington and Laverton. The preserved line has been extended to Broadway. In culture The Cotswold region has inspired several notable English composers. In the early 1900s, Herbert Howells and Ivor Gurney used to take long walks together over the hills, and Gurney urged Howells to make the landscape, including the nearby Malvern Hills, the inspiration for his future work. In 1916, Howells wrote his first major piece, the Piano Quartet in A minor, inspired by the magnificent view of the Malverns; he dedicated it to "the hill at Chosen (Churchdown) and Ivor Gurney who knows it". Another contemporary of theirs, Gerald Finzi, lived in nearby Painswick. Gustav Holst, who was born in Cheltenham, spent many of his early years playing the organ in Cotswold village churches, including at Cranham, after which he named his tune for In the Bleak Midwinter. He also titled his Symphony in F major, Op. 8 (H47), The Cotswolds. Holst's friend, the composer Ralph Vaughan Williams, was born at Down Ampney in the Cotswolds and, though he moved to Surrey as a boy, he gave the name of his native village to the tune for Come Down, O Love Divine. He also composed his opera Hugh the Drover from 1913 to 1924, which depicts life in a Cotswold village and incorporates local folk melodies. In 1988, the 6th symphony (Op. 109) of composer Derek Bourgeois was titled "A Cotswold Symphony". The Cotswolds are a popular location for filming scenes for movies and television programmes. The film Better Things (2008), directed by Duane Hopkins, is set in a small Cotswold village. The fictional detective Agatha Raisin lives in the invented village of Carsely in the Cotswolds. Other movies filmed in the Cotswolds or nearby, at least in part, include some of the Harry Potter series (Gloucester Cathedral), Bridget Jones's Diary (Snowshill), Pride and Prejudice (Cheltenham Town Hall), and Braveheart (Cotswold Farm Park). In 2014, some scenes of the 2016 movie Alice Through the Looking Glass were filmed at the Gloucester Docks, just outside the Cotswold District; some scenes for the 2006 movie Amazing Grace were also filmed at the Docks. The television series Father Brown was almost entirely filmed in the Cotswolds, and scenes and buildings at Sudeley Castle were often featured in the series.
The vicarage in Blockley was used for the main character's residence, and the Anglican church of St Peter and St Paul served as the Roman Catholic St Mary's in the series. Other filming locations included Guiting Power, the former hospital in Moreton-in-Marsh, the Winchcombe railway station, Lower Slaughter, and St Peter's Church in Upper Slaughter. In the 2010s BBC TV series Poldark, the location for Ross Poldark's family home "Trenwith" is Chavenage House, Tetbury, which is open to the public. Many exterior shots of village life in the Downton Abbey TV series were filmed in Bampton, Oxfordshire; other filming locations in that county included Swinbrook, Cogges, and Shilton. The city of Bath hosted crews that filmed parts of the movies Vanity Fair, Persuasion, Dracula, and The Duchess. Gloucester and other places in Gloucestershire, some within the Area of Outstanding Natural Beauty, have been popular locations for filming period films and television programmes over the years; Gloucester Cathedral has been particularly popular. The sighting of peregrine falcons in the landscape of the Cotswolds is mentioned in The Peregrine by John Alec Baker. The agriculture-themed television documentary series Clarkson's Farm was filmed at various locations around Chipping Norton. See also Chilterns Cotswold architecture Geology of Great Britain Further reading Brace, Catherine. "Looking back: the Cotswolds and English national identity, c. 1890–1950." Journal of Historical Geography 25.4 (1999): 502–516. Brace, Catherine. "A pleasure ground for the noisy herds? Incompatible encounters with the Cotswolds and England, 1900–1950." Rural History 11.1 (2000): 75–94. Briggs, Katharine Mary. The Folklore of the Cotswolds (B. T. Batsford, 1974). Hilton, R. H. "The Cotswolds and Regional History." History Today (July 1953) 3#7, pp. 490–499. Verey, David Cecil Wynter. The Buildings of England: Gloucestershire. I. The Cotswolds (Penguin Books, 1979). External links National Character Area profile – Natural England Cotswolds Area of Outstanding Natural Beauty – Cotswolds Conservation Board Cotswolds Tourism Partnership Independent tourist guides: cotswolds.org thecotswolds.com icotswolds.com Explore Gloucestershire
3,143
6,905
https://en.wikipedia.org/wiki/Carnatic
Carnatic
Carnatic most often refers to:
Carnatic region, Southern India
Carnatic music, the classical music of Southern India
Carnatic may also refer to:
Carnatic Wars, a series of military conflicts in India during the 18th century
a Bangor-class minesweeper of the Royal Indian Navy that served in World War II
a 74-gun third rate ship of the line of the Royal Navy, launched at Deptford in 1783
a 74-gun third rate ship of the line of the Royal Navy, launched at Portsmouth Dockyard in 1823
one of several vessels of that name
Carnatic Hall, built by a slave trader, now a closed university residence
3,146
6,938
https://en.wikipedia.org/wiki/Classical%20order
Classical order
An order in architecture is a certain assemblage of parts subject to uniform established proportions, regulated by the office that each part has to perform. Coming down to the present from Ancient Greek and Ancient Roman civilization, the architectural orders are the styles of classical architecture, each distinguished by its proportions and characteristic profiles and details, and most readily recognizable by the type of column employed. The three orders of architecture—the Doric, Ionic, and Corinthian—originated in Greece. To these the Romans added, in practice if not in name, the Tuscan, which they made simpler than the Doric, and the Composite, which was more ornamental than the Corinthian. The architectural order of a classical building is akin to the mode or key of classical music, or the grammar or rhetoric of a written composition. It is established by certain modules, like the intervals of music, and it raises certain expectations in an audience attuned to its language. Whereas the orders were essentially structural in Ancient Greek architecture, which made little use of the arch until its late period, in Roman architecture, where the arch was often dominant, the orders became increasingly decorative elements except in porticos and similar uses. Columns shrank into half-columns emerging from walls or turned into pilasters. This treatment continued after the conscious and "correct" use of the orders, initially following exclusively Roman models, returned in the Italian Renaissance. Greek Revival architecture, inspired by increasing knowledge of Greek originals, returned to more authentic models, including ones from relatively early periods. Elements Each style has distinctive capitals at the top of columns and horizontal entablatures which it supports, while the rest of the building does not in itself vary between the orders. The column shaft and base also vary with the order, and are sometimes articulated with vertical hollow grooves known as fluting. The shaft is wider at the bottom than at the top, because its entasis, beginning a third of the way up, imperceptibly makes the column slightly more slender at the top, although some Doric columns, especially early Greek ones, are visibly "flared", with straight profiles that narrow going up the shaft. The capital rests on the shaft. It has a load-bearing function, which concentrates the weight of the entablature on the supportive column, but it primarily serves an aesthetic purpose. The necking is the continuation of the shaft, but is visually separated by one or more grooves. The echinus lies atop the necking. It is a circular block that bulges outwards towards the top to support the abacus, which is a square or shaped block that in turn supports the entablature. The entablature consists of three horizontal layers, all of which are visually separated from each other using moldings or bands. In Roman and post-Renaissance work, the entablature may be carried from column to column in the form of an arch that springs from the column that bears its weight, retaining its divisions and sculptural enrichment, if any. There are names for all the many parts of the orders. Measurement The height of columns is calculated in terms of a ratio between the diameter of the shaft at its base and the height of the column, as the short worked example below illustrates.
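To make the convention concrete, here is a minimal worked example; the dimensions are illustrative assumptions, not measurements of any particular building. Writing $D$ for the lower diameter of the shaft and $n$ for the stated ratio, the column height is $h = n \cdot D$, so a column with a lower diameter of $D = 0.6\,\mathrm{m}$ described as nine diameters high stands $h = 9 \times 0.6\,\mathrm{m} = 5.4\,\mathrm{m}$ tall.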
A Doric column can be described as seven diameters high, an Ionic column as eight diameters high, and a Corinthian column as nine diameters high, although the actual ratios used vary considerably in both ancient and revived examples, while keeping to the trend of increasing slimness between the orders. Sometimes this is phrased as "lower diameters high", to establish which part of the shaft has been measured. Greek orders There are three distinct orders in Ancient Greek architecture: Doric, Ionic, and Corinthian. These three were adopted by the Romans, who modified their capitals. The Roman adoption of the Greek orders took place in the 1st century BC. The three ancient Greek orders have since been consistently used in European Neoclassical architecture. Sometimes the Doric order is considered the earliest order, but there is no evidence to support this. Rather, the Doric and Ionic orders seem to have appeared at around the same time, the Ionic in eastern Greece and the Doric in the west and mainland. Both the Doric and the Ionic order appear to have originated in wood. The Temple of Hera in Olympia, built just after 600 BC, is the oldest well-preserved temple of Doric architecture. The Doric order later spread across Greece and into Sicily, where it was the chief order for monumental architecture for 800 years. Early Greeks were no doubt aware of the use of stone columns with bases and capitals in ancient Egyptian architecture, and that of other Near Eastern cultures, although there they were mostly used in interiors, rather than as a dominant feature of all or part of exteriors, in the Greek style. Doric order The Doric order originated on the mainland and in western Greece. It is the simplest of the orders, characterized by short, organized, heavy columns with plain, round capitals (tops) and no base. With a height that is only four to eight times its diameter, the Doric column is the most squat of all the orders. The shaft of the Doric order is channeled with 20 flutes. The capital consists of a necking, or annulet, which is a simple ring; the echinus, a convex, cushion-like circular stone; and the abacus, a square slab of stone that connects the capital to the entablature. The entablature is divided into three horizontal registers, the lower part of which is either smooth or divided by horizontal lines. The upper half is distinctive for the Doric order. The frieze of the Doric entablature is divided into triglyphs and metopes. A triglyph is a unit consisting of three vertical bands which are separated by grooves. Metopes are the plain or carved reliefs between two triglyphs. The Greek forms of the Doric order come without an individual base; they are instead placed directly on the stylobate. Later forms, however, came with the conventional base consisting of a plinth and a torus. The Roman versions of the Doric order have smaller proportions, and as a result appear lighter than the Greek orders. Ionic order The Ionic order came from eastern Greece, where its origins are entwined with the similar but little known Aeolic order. It is distinguished by slender, fluted pillars with a large base and two opposed volutes (also called "scrolls") in the echinus of the capital. The echinus itself is decorated with an egg-and-dart motif. The Ionic shaft comes with four more flutes than the Doric counterpart (totalling 24). The Ionic base has two convex moldings called tori, which are separated by a scotia.
The Ionic order is also marked by an entasis, a curved tapering in the column shaft. A column of the Ionic order is nine times its lower diameter; the shaft itself is eight diameters high. The architrave of the entablature commonly consists of three stepped bands (fasciae). The frieze comes without the Doric triglyph and metope; it sometimes carries a continuous ornament instead, such as carved figures. Corinthian order The Corinthian order is the most elaborate of the Greek orders, characterized by a slender fluted column having an ornate capital decorated with two rows of acanthus leaves and four scrolls. The shaft of the Corinthian order has 24 flutes. The column is commonly ten diameters high. The Roman writer Vitruvius credited the invention of the Corinthian order to Callimachus, a Greek sculptor of the 5th century BC. The oldest known building built according to this order is the Choragic Monument of Lysicrates in Athens, constructed from 335 to 334 BC. The Corinthian order was raised in rank by the writings of Vitruvius in the 1st century BC. Roman orders The Romans adapted all the Greek orders and also developed two orders of their own, basically modifications of Greek orders. However, it was not until the Renaissance that these were named and formalized as the Tuscan and Composite, respectively the plainest and most ornate of the orders. The Romans also invented the superposed order, in which successive stories of a building have different orders. The heaviest orders were at the bottom, whilst the lightest came at the top: the Doric order was used for the ground floor, the Ionic order for the middle story, and the Corinthian or Composite order for the top story. The Giant order, characterized by columns that extend the height of two or more stories, was invented by architects in the Renaissance. Tuscan order The Tuscan order has a very plain design, with a plain shaft and a simple capital, base, and frieze. It is a simplified adaptation of the Greek Doric order, characterized by an unfluted shaft and a capital that consists only of an echinus and an abacus. In proportions it is similar to the Doric order, but overall it is significantly plainer. The column is normally seven diameters high. Compared to the other orders, the Tuscan order looks the most solid. Composite order The Composite order is a mixed order, combining the volutes of the Ionic with the leaves of the Corinthian order. Until the Renaissance it was not ranked as a separate order, but was considered a late Roman form of the Corinthian order. The column of the Composite order is typically ten diameters high. Historical development The Renaissance period saw renewed interest in the literary sources of the ancient cultures of Greece and Rome, and the fertile development of a new architecture based on classical principles. The treatise De architectura, by the Roman theoretician, architect and engineer Vitruvius, is the only architectural writing that survived from Antiquity. Rediscovered in the 15th century, Vitruvius was instantly hailed as the authority on architecture. However, in his text the word order is not to be found. To describe the four species of columns he mentions (Tuscan, Doric, Ionic and Corinthian), he in fact uses various words such as genus (gender), mos (habit, fashion, manner), and opera (work).
The term order, as well as the idea of redefining the canon, started circulating in Rome at the beginning of the 16th century, probably during the studies of Vitruvius' text conducted and shared by Peruzzi, Raphael, and Sangallo. Ever since, the definition of the canon has been a collective endeavor involving several generations of European architects of the Renaissance and Baroque periods, who based their theories both on the study of Vitruvius' writings and on the observation of Roman ruins (the Greek ruins became available only after Greek Independence, 1821–23). What was added were rules for the use of the architectural orders, and their exact proportions down to the most minute detail. Commentary on the appropriateness of the orders for temples devoted to particular deities (Vitruvius I.2.5) was elaborated by Renaissance theorists, with Doric characterized as bold and manly, Ionic as matronly, and Corinthian as maidenly. Vignola defining the concept of "order" Following the example of Vitruvius and the five books of the Regole generali di architettura sopra le cinque maniere de gli edifici by Sebastiano Serlio, published from 1537 onwards, Giacomo Barozzi da Vignola produced an architecture rule book that was not only more practical than the previous two treatises, but also systematically and consistently adopted, for the first time, the term 'order' to define each of the five different species of columns inherited from antiquity. A first publication of the various plates, as separate sheets, appeared in Rome in 1562, with the title Regola delli cinque ordini d'architettura ("Canon of the Five Orders of Architecture"). As David Watkin has pointed out, Vignola's book "was to have an astonishing publishing history of over 500 editions in 400 years in ten languages, Italian, Dutch, English, Flemish, French, German, Portuguese, Russian, Spanish, Swedish, during which it became perhaps the most influential book of all times". The book consisted simply of an introduction followed by 32 annotated plates highlighting the proportional system, with all the minute details of the five architectural orders. According to Christof Thoenes, a leading expert on Renaissance architectural treatises, "in accordance with Vitruvius's example, Vignola chose a 'module' equal to a half-diameter which is the base of the system. All the other measurements are expressed in fractions or in multiples of this module. The result is an arithmetical model, and with its help each order, harmoniously proportioned, can easily be adapted to any given height, of a façade or an interior. From this point of view, Vignola's Regola is a remarkable intellectual achievement". In America, The American Builder's Companion, written in the early 19th century by the architect Asher Benjamin, influenced many builders in the eastern states, particularly those who developed what became known as the Federal style. The last American re-interpretation of Vignola's Regola was edited in 1904 by William Robert Ware. The break from the classical mode came first with Gothic Revival architecture, then with the development of modernism during the 19th century. The Bauhaus promoted pure functionalism, stripped of superfluous ornament, and that has become one of the defining characteristics of modern architecture. There are some exceptions: Postmodernism introduced an ironic use of the orders as a cultural reference, divorced from the strict rules of composition.
On the other hand, a number of practitioners, such as Quinlan Terry in England and Michael Dwyer, Richard Sammons, and Duncan Stroik in the United States, continue the classical tradition and use the classical orders in their work. Nonce orders Several orders, usually based upon the composite order and varying only in the design of the capitals, have been invented under the inspiration of specific occasions but have not been used again. They are termed "nonce orders" by analogy to nonce words; several examples follow below. These nonce orders all express the "speaking architecture" (architecture parlante) that was taught in the Paris courses, most explicitly by Étienne-Louis Boullée, in which sculptural details of classical architecture could be enlisted to speak symbolically, the better to express the purpose of the structure and enrich its visual meaning with specific appropriateness. This idea was taken up strongly in the training of Beaux-Arts architecture. French order The Hall of Mirrors in the Palace of Versailles contains pilasters with bronze capitals in the "French order". Designed by Charles Le Brun, the capitals display the national emblems of the Kingdom of France: the royal sun between two Gallic roosters above a fleur-de-lis. British orders Robert Adam's brother James was in Rome in 1762, drawing antiquities under the direction of Clérisseau; he invented a "British order" and published an engraving of it. In its capital, the heraldic lion and unicorn take the place of the Composite's volutes, a Byzantine or Romanesque conception, but expressed in terms of neoclassical realism. Adam's ink-and-wash rendering with red highlighting is at the Avery Library, Columbia University. In 1789 George Dance invented an Ammonite order, a variant of Ionic substituting volutes in the form of fossil ammonites, for John Boydell's Shakespeare Gallery in Pall Mall, London. An adaptation of the Corinthian order by William Donthorne that used turnip leaves and mangelwurzel is termed the Agricultural order. Sir Edwin Lutyens, who from 1912 laid out New Delhi as the new seat of government for the British Empire in India, designed a Delhi order having a capital displaying a band of vertical ridges, with bells hanging at each corner as a replacement for volutes. His design for the new city's central palace, Viceroy's House, now the Presidential residence Rashtrapati Bhavan, was a thorough integration of elements of Indian architecture into a building of classical forms and proportions, and made use of the order throughout. The Delhi order reappears in some later Lutyens buildings, including Campion Hall, Oxford. American orders In the United States Benjamin Latrobe, the architect of the Capitol building in Washington, DC, designed a series of botanical American orders. Most famous is the Corinthian order substituting corncobs and their husks for the acanthus leaves, which was executed by Giuseppe Franzoni and used in the small domed vestibule of the Senate. Only this vestibule survived the Burning of Washington in 1814 nearly intact. With peace restored, Latrobe designed an American order that substituted tobacco leaves for the acanthus, of which he sent a sketch to Thomas Jefferson in a letter of 5 November 1816. He was encouraged to send a model of it, which remains at Monticello. In the 1830s Alexander Jackson Davis admired it enough to make a drawing of it.
In 1809 Latrobe invented a second American order, employing magnolia flowers constrained within the profile of classical mouldings, as his drawing demonstrates. It was intended for "the Upper Columns in the Gallery of the Entrance of the Chamber of the Senate". See also Temple (Greek) Temple (Roman) References Summerson, John. The Classical Language of Architecture (Thames and Hudson, World of Art series, 1980 edition). Lemerle, Frédérique, and Yves Pauwels (eds.). Histoires d'ordres: le langage européen de l'architecture (Turnhout: Brepols, 2021). Further reading Barletta, Barbara A. The Origins of the Greek Architectural Orders (Cambridge University Press, 2001). Barozzi da Vignola, Giacomo. Canon of the Five Orders. Translated into English, with an introduction and commentary by Branko Mitrovic (Acanthus Press, N.Y., 1999). Barozzi da Vignola, Giacomo. Canon of the Five Orders. Translated by John Leeke (1669), with an introduction by David Watkin (Dover Publications, N.Y., 2011).
3,164
6,941
https://en.wikipedia.org/wiki/Colin%20Kapp
Colin Kapp
Derek Ivor Colin Kapp (3 April 1928 – 3 August 2007), known as Colin Kapp, was a British science fiction author best known for his stories about the Unorthodox Engineers. As an electronic engineer, he began his career with Mullard Electronics, then specialised in electroplating techniques, eventually becoming a freelance consultant engineer. He was born in Southwark, south London, on 3 April 1928 to John L. F. Kapp and Annie M. A. (née Towner) Kapp. Works Cageworld series Search for the Sun! (1982) (also published as Cageworld) The Lost Worlds of Cronus (1982) The Tyrant of Hades (1984) Star Search (1984) Chaos series The Patterns of Chaos (1972) The Chaos Weapon (1977) Standalone novels The Dark Mind (1964) (also published as Transfinite Man) The Wizard of Anharitte (1973) The Survival Game (1976) Manalone (1977) The Ion War (1978) The Timewinders (1980) Short stories Unorthodox Engineers "The Railways Up on Cannis" (1959) "The Subways of Tazoo" (1964) "The Pen and the Dark" (1966) "Getaway from Getawehi" (1969) "The Black Hole of Negrav" (1975) Collected in The Unorthodox Engineers (1979) Other stories "Breaking Point" (1959) "Survival Problem" (1959) "Lambda I" (1962) "The Night-Flame" (1964) "Hunger Over Sweet Waters" (1965) "Ambassador to Verdammt" (1967) "The Imagination Trap" (1967) "The Cloudbuilders" (1968) "I Bring You Hands" (1968) "Gottlos" (1969), notable for having (along with Keith Laumer's Bolo series) inspired Steve Jackson's classic game of 21st-century tank warfare, Ogre. "The Teacher" (1969) "Letter from an Unknown Genius" (1971) "What the Thunder Said" (1972) "Which Way Do I Go For Jericho?" (1972) "The Old King's Answers" (1973) "Crimescan" (1973) "What The Thunder Said" (1973) "Mephisto and the Ion Explorer" (1974) "War of the Wastelife" (1974) "Cassius and the Mind-Jaunt" (1975) "Something in the City" (1984) "An Alternative to Salt" (1986) External links Bibliography kept by Jarl Totland Bibliography at SciFan
3,165
6,948
https://en.wikipedia.org/wiki/Crossbow
Crossbow
A crossbow is a ranged weapon using an elastic launching device consisting of a bow-like assembly called a prod, mounted horizontally on a main frame called a tiller, which is hand-held in a similar fashion to the stock of a long firearm. Crossbows shoot arrow-like projectiles called bolts or quarrels. A person who shoots a crossbow is called a crossbowman or an arbalist (after the arbalest, a European crossbow variant used during the 12th century). Crossbows and bows use the same launch principle, but an archer must maintain a bow's draw by gripping the bowstring with the fingers, pulling it back with arm and back muscles and then holding that form in order to aim; this demands significant physical strength. A crossbow has a locking mechanism to maintain the draw, limiting the shooter's exertion to pulling the string into the lock and then releasing the shot by depressing a lever/trigger. This enables a crossbowman to handle more draw weight, and to hold it with significantly less physical strain, thus potentially achieving better precision. The earliest known crossbows were made in the first millennium BC, not later than the 7th century BC in ancient China, and not later than the 4th century BC in Greece (as the gastraphetes); each civilization developed the weapon independently. Crossbows brought about a major shift in the role of projectile weaponry in wars, such as during Qin's unification wars and the later Han campaigns against northern nomads and western states. The medieval European crossbow was called by many names, including "crossbow" itself; most of these names derived from the word ballista, an ancient Greek torsion siege engine similar in appearance but different in design principle. In modern times, firearms have largely supplanted bows and crossbows as weapons of warfare. However, crossbows remain widely used for competitive shooting sports and hunting, or for relatively silent shooting. Terminology A crossbowman or crossbow-maker is sometimes called an arbalista, arbalist or arbalest; the last two terms are also used to refer to the crossbow itself. Arrow, bolt and quarrel are all suitable terms for crossbow projectiles. The lath, also called the prod, is the bow of the crossbow. According to W. F. Peterson, the term prod came into usage in the 19th century as a result of a mistranslation of rodd in a 16th-century list of crossbow effects. The stock is the wooden body on which the bow is mounted, although the medieval term tiller is also used. The lock refers to the release mechanism, including the string, sears, trigger lever, and housing. Construction A crossbow is essentially a bow mounted on an elongated frame (called a tiller or stock) with a built-in mechanism that holds the drawn bowstring, as well as a trigger mechanism which is used to release the string. Chinese vertical trigger lock The Chinese trigger was a mechanism typically composed of three cast bronze pieces housed inside a hollow bronze enclosure. The entire mechanism is dropped into a carved slot within the tiller and secured by two bronze rods. The string catch (nut) is shaped like a "J" because it usually has a tall erect rear spine that protrudes above the housing, which serves the function of both a cocking lever (the drawn string is pushed onto it) and a primitive rear sight. It is held stationary against tension by the second piece, which is shaped like a flattened "C" and acts as the sear. The sear cannot move as it is trapped by the third piece, i.e.
the actual trigger blade, which hangs vertically below the enclosure and catches the sear via a notch. The two bearing surfaces between the three trigger pieces each offer a mechanical advantage, which allows significant draw weights to be handled with a much smaller pull weight (a rough worked example of this force reduction is given at the end of this section). During shooting, the user holds the crossbow at eye level by a vertical handle and aims along the arrow, using the sighting spine for elevation, similar to how a modern rifleman shoots with iron sights. When the trigger blade is pulled, its notch disengages from the sear and allows the latter to drop downwards, which in turn frees the nut to pivot forward and release the bowstring. European rolling nut lock The earliest European designs featured a transverse slot in the top surface of the frame, down into which the string was placed. To shoot this design, a vertical rod is thrust up through a hole in the bottom of the notch, forcing the string out. This rod is usually attached perpendicular to a rear-facing lever called a tickler. A later design implemented a rolling cylindrical pawl called a nut to retain the string. This nut has a perpendicular centre slot for the bolt, and an intersecting axial slot for the string, along with a lower face or slot against which the internal trigger sits. They often also have some form of strengthening internal sear or trigger face, usually of metal. These roller nuts were either free-floating in their close-fitting hole across the stock, tied in with a binding of sinew or other strong cording, or mounted on a metal axle or pins. Removable or integral plates of wood, ivory, or metal on the sides of the stock kept the nut in place laterally. Nuts were made of antler, bone, or metal. Bows could be kept taut and ready to shoot for some time with little physical strain, allowing crossbowmen to aim better without fatiguing. Bow Chinese crossbow bows were made of composite material from the start. European crossbows from the 10th to 12th centuries used wood for the bow, also called the prod or lath, which tended to be ash or yew. Composite bows started appearing in Europe during the 13th century and could be made from layers of different material, often wood, horn, and sinew glued together and bound with animal tendon. These composite bows made of several layers are much stronger and more efficient in releasing energy than simple wooden bows. As steel became more widely available in Europe around the 14th century, steel prods came into use. Traditionally, the prod was often lashed to the stock with rope, whipcord, or other strong cording; this binding is called the bridle. Spanning mechanism The Chinese used winches for large crossbows mounted on fortifications or wagons, known as "bedded crossbows" (床弩). Winches may have been used for handheld crossbows during the Han dynasty (202 BC–9 AD, 25–220 AD), but there is only one known depiction of it. The 11th-century Chinese military text Wujing Zongyao mentions types of crossbows using winch mechanisms, but it is not known if these were actually handheld crossbows or mounted crossbows. Another drawing method involved the shooters sitting on the ground, using the combined strength of leg, waist, back and arm muscles to help span much heavier crossbows, which were aptly called "waist-spanned crossbows" (腰張弩). During the medieval period, both Chinese and European crossbows used stirrups as well as belt hooks.
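To give a rough sense of the force reduction provided by the multi-piece locks described above, here is a minimal worked example; the draw weight and leverage ratios are illustrative assumptions, not measurements from any surviving mechanism, and friction is ignored. If the two bearing surfaces provide mechanical advantages $MA_1$ and $MA_2$, the pull required at the trigger is $F_{\text{trigger}} = F_{\text{draw}} / (MA_1 \cdot MA_2)$. Assuming a draw weight of $F_{\text{draw}} = 380\ \mathrm{lb}$ with $MA_1 = 5$ and $MA_2 = 8$, the trigger pull is only $380 / 40 = 9.5\ \mathrm{lb}$, light enough to loose a heavy bow without disturbing the aim.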
In the 13th century, European crossbows started using winches, and from the 14th century an assortment of spanning mechanisms was used, such as winch pulleys, cord pulleys, gaffles (such as gaffe levers, goat's foot levers, and rarer internal lever-action mechanisms), cranequins, and even screws. Variants The smallest crossbows are pistol crossbows. Others are simple long stocks with the crossbow mounted on them; these could be shot from under the arm. The next step in development was stocks of the shape that would later be used for firearms, which allowed better aiming. The arbalest was a heavy crossbow that required special systems for pulling the string back via windlasses. For siege warfare, the size of crossbows was further increased to hurl large projectiles, such as rocks, at fortifications; these required a massive base frame and powerful windlass devices. Projectiles The arrow-like projectiles of a crossbow are called crossbow bolts. These are usually much shorter than arrows, but can be several times heavier. There is an optimum weight for bolts to achieve maximum kinetic energy, which varies depending on the strength and characteristics of the crossbow, but most could pass through common mail. Crossbow bolts can be fitted with a variety of heads, some with sickle-shaped heads to cut rope or rigging, but the most common today is a four-sided point called a quarrel. A highly specialized type of bolt is employed to collect blubber biopsy samples used in biology research. Even relatively small differences in arrow weight can have a considerable impact on its drop and, consequently, its flight trajectory. Bullet-shooting crossbows are modified crossbows that use bullets or stones as projectiles. Accessories The ancient Chinese crossbow often included a metal (i.e. bronze or steel) grid serving as iron sights. Modern crossbow sights often use similar technology to modern firearm sights, such as red dot sights and telescopic sights. Many crossbow scopes feature multiple crosshairs to compensate for the significant effects of gravity over different ranges. In most cases, a newly bought crossbow will need to be sighted in for accurate shooting. A major cause of the sound of shooting a crossbow is the vibration of various components. Crossbow silencers are components placed on high-vibration parts, such as the string and limbs, to dampen vibration and suppress the sound of loosing the bolt. History China In terms of archaeological evidence, crossbow locks made of cast bronze have been found in China dating to around 650 BC. They have also been found in Tombs 3 and 12 at Qufu, Shandong, previously the capital of Lu, and date to the 6th century BC. Bronze crossbow bolts dating from the mid-5th century BC have been found at a Chu burial site in Yutaishan, Jiangling County, Hubei Province. Other early finds of crossbows were discovered in Tomb 138 at Saobatang, Hunan Province, and date to the mid-4th century BC. It is possible that these early crossbows used spherical pellets for ammunition. A Western-Han mathematician and music theorist, Jing Fang (78–37 BC), compared the moon to the shape of a round crossbow bullet. The Zhuangzi also mentions crossbow bullets. The earliest Chinese documents mentioning a crossbow were texts from the 4th to 3rd centuries BC attributed to the followers of Mozi. This source refers to the use of a giant crossbow between the 6th and 5th centuries BC, corresponding to the late Spring and Autumn Period.
Sun Tzu's The Art of War (first appearance dated between 500 and 300 BC) refers to the characteristics and use of crossbows in chapters 5 and 12 respectively, and compares a drawn crossbow to "might". The Huainanzi advises its readers not to use crossbows in marshland, where the surface is soft and it is hard to arm the crossbow with the foot. The Records of the Grand Historian, completed in 94 BC, mentions that Sun Bin defeated Pang Juan by ambushing him with a body of crossbowmen at the Battle of Maling in 342 BC. The Book of Han, finished in 111 AD, lists two military treatises on crossbows. Handheld crossbows with complex bronze trigger mechanisms have also been found with the Terracotta Army in the tomb of Qin Shihuang (r. 221–210 BC) that are similar to specimens from the subsequent Han dynasty (202 BC–220 AD). Crossbowmen of the Qin and Han dynasties trained in drill formations, some were even mounted as charioteers and cavalry units, and Han dynasty writers attributed the success of numerous battles against the Xiongnu and the Western Regions city-states to massed crossbow volleys. The bronze triggers were designed in such a way that they were able to store a large amount of energy within the bow when drawn, but could be shot with little resistance and recoil when the trigger was pulled. The trigger nut also had a long vertical spine that could be used like a primitive rear sight for elevation adjustment, which allowed precision shooting over longer distances. The Qin/Han-era crossbow was also an early example of modular design, as the bronze trigger components were mass-produced with relatively precise tolerances, so that the parts were interchangeable between different crossbows: the trigger mechanism from one crossbow could be installed in another simply by dropping it into a tiller slot of the same specifications and securing it with dowel pins. Some crossbow designs have also been found fitted with bronze buttplates and trigger guards. It is clear from surviving inventory lists in Gansu and Xinjiang that the crossbow was greatly favored by the Han dynasty. For example, in one batch of slips there are only two mentions of bows, but thirty mentions of crossbows. Crossbows were mass-produced in state armories, with designs improving as time went on, such as the use of a mulberry wood stock and brass; a crossbow in 1068 could pierce a tree at 140 paces. Crossbows were used in numbers as large as 50,000 starting from the Qin dynasty and upwards of several hundred thousand during the Han. According to one authority, the crossbow had become "nothing less than the standard weapon of the Han armies" by the second century BC. Han soldiers were required to pull a crossbow of a minimum draw weight to qualify as an entry-level crossbowman, while it was claimed that a few elite troops were capable of bending crossbows by the hands-and-feet method with a draw weight in excess of 750 lb. After the Han dynasty, the crossbow lost favor during the Six Dynasties until it experienced a mild resurgence during the Tang dynasty, under which the ideal expeditionary army of 20,000 included 2,200 archers and 2,000 crossbowmen. Li Jing and Li Quan prescribed 20 percent of the infantry to be armed with crossbows. During the Song dynasty, the crossbow received a huge upsurge in military usage and often outnumbered the bow two to one. During this time period, a stirrup was added for ease of loading.
The Song government attempted to restrict the public use of crossbows and sought ways to keep both body armor and crossbows out of civilian ownership. Despite the ban on certain types of crossbows, the weapon experienced an upsurge in civilian usage as both a hunting weapon and a pastime. The "romantic young people from rich families, and others who had nothing particular to do" formed crossbow shooting clubs as a way to pass time. During the late Ming dynasty, no crossbows were mentioned to have been produced in the three-year period from 1619 to 1622: with 21,188,366 taels, the Ming manufactured 25,134 cannons, 8,252 small guns, 6,425 muskets, 4,090 culverins, 98,547 polearms and swords, 26,214 great "horse decapitator" swords, 42,800 bows, 1,000 great axes, 2,284,000 arrows, 180,000 fire arrows, 64,000 bow strings, and hundreds of transport carts. Military crossbows were armed by treading, or basically placing the feet on the bow stave and drawing it using one's arms and back muscles. During the Song dynasty, stirrups were added for ease of drawing and to mitigate damage to the bow. Alternatively, the bow could also be drawn by a belt claw attached to the waist, but this was done lying down, as was the case for all large crossbows. Winch-drawing was used for the large mounted crossbows, but evidence for its use in Chinese hand-crossbows is scant. Other sorts of crossbows also existed, such as the repeating crossbow, the multi-shot crossbow, larger field artillery crossbows, and the repeating multi-shot crossbow. Southeast Asia Around the third century BC, King An Dương of Âu Lạc (spanning modern-day northern Vietnam and parts of southern China) commissioned a man named Cao Lỗ (or Cao Thông) to construct a crossbow, and christened it "Saintly Crossbow of the Supernaturally Luminous Golden Claw" (nỏ thần), said to be capable of killing 300 men in one shot. According to historian Keith Taylor, the crossbow, along with the word for it, seems to have been introduced into China from Austroasiatic peoples in the south around the fourth century BC; however, this is contradicted by crossbow locks found in ancient Chinese Zhou dynasty tombs dating to the 600s BC. In 315 AD, Nu Wen taught the Chams how to build fortifications and use crossbows. The Chams would later give the Chinese crossbows as presents on at least one occasion. Technology for crossbows with more than one prod was transferred from the Chinese to Champa, which used it in its invasion of the Khmer Empire's Angkor in 1177; when the Chams sacked Angkor they used the Chinese siege crossbow. The Chinese had taught the Chams how to use crossbows and mounted archery in 1171. The Khmer also had double bow crossbows mounted on elephants, which Michel Jacq-Hergoualc'h suggests were elements of Cham mercenaries in Jayavarman VII's army. The native Montagnards of Vietnam's Central Highlands were also known to have used crossbows, as both a tool for hunting and, later, an effective weapon against the Viet Cong during the Vietnam War. Montagnard fighters armed with crossbows proved a highly valuable asset to US Special Forces operating in Vietnam, and it was not uncommon for the Green Berets to integrate Montagnard crossbowmen into their strike teams. Ancient Greece The earliest crossbow-like weapons in Europe probably emerged around the late 5th century BC, when the gastraphetes, an ancient Greek crossbow, appeared.
The device was described by the Greek author Heron of Alexandria in his Belopoeica ("On Catapult-making"), which draws on an earlier account by his compatriot engineer Ctesibius (fl. 285–222 BC). According to Heron, the gastraphetes was the forerunner of the later catapult, which places its invention some unknown time prior to 399 BC. The gastraphetes was a crossbow mounted on a stock divided into a lower and an upper section. The lower was a case fixed to the bow, while the upper was a slider with the same dimensions as the case. The name means "belly-bow": the concave withdrawal rest at one end of the stock was placed against the stomach of the operator, who could press against it to withdraw the slider before attaching a string to the trigger and loading the bolt; the weapon could thus store more energy than regular Greek bows. It was used in the siege of Motya, a key Carthaginian stronghold in Sicily, in 397 BC. Other arrow-shooting machines, such as the larger ballista and the smaller Scorpio, also existed starting from around 338 BC, but these are torsion catapults and not considered crossbows. Arrow-shooting machines (katapeltai) are briefly mentioned by Aeneas Tacticus in his treatise on siegecraft written around 350 BC. An Athenian inventory from 330–329 BC includes catapult bolts with heads and flights. Arrow-shooting machines in action are reported from Philip II's siege of Perinthos in Thrace in 340 BC. At the same time, Greek fortifications began to feature high towers with shuttered windows in the top, presumably to house anti-personnel arrow shooters, as in Aigosthena. Ancient Rome The late 4th-century author Vegetius, in his De Re Militari, describes arcubalistarii (crossbowmen) working together with archers and artillerymen. However, it is disputed whether arcuballistas were crossbows or torsion-powered weapons. The idea that the arcuballista was a crossbow is based on the fact that Vegetius refers to it and the manuballista, which was torsion powered, separately; therefore, if the arcuballista was not like the manuballista, it may have been a crossbow. The etymology is not clear and the definitions are obscure. According to Vegetius, these were well-known devices, and hence he did not describe them in depth. Joseph Needham argued against the existence of Roman crossbowmen. On the other hand, Arrian's earlier Ars Tactica, written around 136 AD, also mentions 'missiles shot not from a bow but from a machine' and that this machine was used on horseback while in full gallop; it is presumed that this was a crossbow. The only pictorial evidence of Roman arcuballistas comes from sculptural reliefs in Roman Gaul depicting them in hunting scenes. These are aesthetically similar to both the Greek and Chinese crossbows, but it is not clear what kind of release mechanism they used; archaeological evidence suggests it was similar to the rolling nut mechanism of medieval Europe. Medieval Europe References to the crossbow are basically nonexistent in Europe from the 5th century until the 10th century. There is, however, a depiction of a crossbow as a hunting weapon on four Pictish stones from early medieval Scotland (6th to 9th centuries): St Vigeans no. 1, Glenferness, Shandwick, and Meigle. The crossbow reappeared in 947 as a French weapon during the siege of Senlis, and again in 984 at the siege of Verdun.
Crossbows were used at the Battle of Hastings in 1066, and by the 12th century they had become common battlefield weapons. The earliest extant European crossbow remains found to date come from Lake Paladru and have been dated to the 11th century. The crossbow superseded hand bows in many European armies during the 12th century, except in England, where the longbow was more popular. Later crossbows (sometimes referred to as arbalests), utilizing all-steel prods, were able to achieve power close (and sometimes superior) to that of longbows, but were more expensive to produce and slower to reload because they required the aid of mechanical devices such as the cranequin or windlass to draw back their extremely heavy bows. Usually these could shoot only two bolts per minute versus twelve or more for a skilled archer, often necessitating a pavise (shield) to protect the operator from enemy fire. Along with polearms made from farming equipment, the crossbow was also a weapon of choice for insurgent peasants such as the Taborites. Genoese crossbowmen were famous mercenaries hired throughout medieval Europe, and the crossbow also played an important role in the anti-personnel defense of ships.

Crossbows were eventually replaced in warfare by gunpowder weapons. Early hand cannons had slower rates of fire and much worse accuracy than contemporary crossbows, but the arquebus (which proliferated in the mid to late 15th century) matched their rate of fire while being far more powerful. The Battle of Cerignola in 1503 was largely won by Spain through the use of matchlock arquebuses, marking the first time a major battle was won through the use of hand-held firearms. Later, similar competing tactics would feature harquebusiers or musketeers in formation with pikemen, pitted against cavalry firing pistols or carbines. While the military crossbow had largely been supplanted by firearms on the battlefield by 1525, the sporting crossbow in various forms remained a popular hunting weapon in Europe until the eighteenth century. Crossbows saw irregular use throughout the rest of the 16th century; for example, Maria Pita's husband was killed by a crossbowman of the English Armada in 1589.

Islamic world
There are no references to crossbows in Islamic texts earlier than the 14th century. Arabs in general were averse to the crossbow and considered it a foreign weapon. They called it qaus al-rijl (foot-drawn bow), qaus al-zanbūrak (bolt bow) and qaus al-faranjīyah (Frankish bow). Although Muslims did have crossbows, there seems to have been a split between eastern and western types: Muslims in Spain used the typical European trigger, while eastern Muslim crossbows had a more complex trigger mechanism. Mamluk cavalry used crossbows.

Elsewhere
Oyumi were ancient Japanese artillery pieces that first appeared in the seventh century (during the Asuka period). According to Japanese records, the oyumi was different from the hand-held crossbow also in use during the same period. A quote from a seventh-century source suggests that the oyumi may have been able to fire multiple arrows at once: "the oyumi were lined up and fired at random, the arrows fell like rain". A ninth-century Japanese artisan named Shimaki no Fubito claimed to have improved on a version of the weapon used by the Chinese; his version could rotate and fire projectiles in multiple directions. The last recorded use of the oyumi was in 1189.
In Western Africa and Central Africa, crossbows served as a scouting weapon and for hunting, and African slaves brought this technology to natives in the Americas. In the southern United States, the crossbow was used for hunting and warfare when firearms or gunpowder were unavailable because of economic hardship or isolation. In northern North America, light hunting crossbows were traditionally used by the Inuit. These are technologically similar to the African-derived crossbows, but arrived by a different route of influence. Spanish conquistadors continued to use crossbows in the Americas long after they had been replaced on European battlefields by firearms. Only in the 1570s did firearms become completely dominant among the Spanish in the Americas.

The French and the British used a Sauterelle (French for grasshopper) in World War I. It was lighter and more portable than the Leach Trench Catapult, but less powerful. It could throw an F1 grenade or Mills bomb. The Sauterelle replaced the Leach Catapult in British service, and was in turn replaced in 1916 by the 2-inch Medium Trench Mortar and the Stokes mortar.

Modern use
Hunting, leisure and science
Crossbows are used for shooting sports and bowhunting in modern archery, and for taking blubber biopsy samples in scientific research. In some countries, such as Canada or the United Kingdom, they may be less heavily regulated than firearms, and thus more popular for hunting; some jurisdictions have bow and/or crossbow-only seasons.

Modern military and paramilitary use
In modern times, crossbows are no longer used for war, but there are still some applications. For example, in the Americas, the Peruvian army (Ejército) equips some soldiers with crossbows and rope to establish zip-lines in difficult terrain. In Brazil, the CIGS (Jungle Warfare Training Center) also trains soldiers in the use of crossbows. In the United States, SAA International Ltd manufactures a crossbow-launched version of the U.S. Army type-classified Launched Grapnel Hook (LGH), among other mine countermeasure solutions designed for the Middle Eastern theatre. It has been successfully evaluated in Cambodia and Bosnia. It is used to probe for and detonate tripwire-initiated mines and booby traps from a standoff distance. The concept is similar to the LGH device originally fired from a rifle, as a plastic retrieval line is attached. Reusable up to 20 times, the line can be reeled back in without the operator exposing themselves. The device is of particular use in tactical situations where noise discipline is important.

In Europe, Barnett International sold crossbows to Serbian forces, which according to The Guardian were later used "in ambushes and as a counter-sniper weapon" against the Kosovo Liberation Army during the Kosovo War in the areas of Peć and Đakovica, in south-western Kosovo. Whitehall launched an investigation, though the Department of Trade and Industry established that, not being "on the military list", crossbows were not covered by such export regulations. Paul Beaver of Jane's Defence Publications commented, "They are not only a silent killer, they also have a psychological effect". On 15 February 2008, Serbian Minister of Defence Dragan Šutanovac was pictured testing a Barnett crossbow during a public exercise of the Serbian Army's Special Forces in Niš, south of Belgrade. Special forces in both Greece and Turkey also continue to employ the crossbow, as do Spain's Green Berets.
In Asia, some Chinese armed forces use crossbows, including the special force Snow Leopard Commando Unit of the People's Armed Police and the People's Liberation Army. One justification for this is the crossbow's ability to stop persons carrying explosives without risk of causing detonation. During the Xinjiang riots of July 2009, crossbows were used alongside modern military hardware to quell protests. The Indian Navy's Marine Commando Force was equipped until the late 1980s with crossbows firing cyanide-tipped bolts, as an alternative to suppressed handguns.

Comparison to conventional bows
With a crossbow, archers could release a draw force far in excess of what they could have handled with a bow. Furthermore, the crossbow could hold the tension for a long time, whereas even the strongest longbowman could only hold a drawn bow for a short period. The ease of use of a crossbow allows it to be used effectively with little training, while other types of bow take far more skill to shoot accurately. The disadvantages are the greater weight and clumsiness to reload compared to a bow, as well as the slower rate of shooting and the lower efficiency of the acceleration system; on the other hand, the crossbow suffers less from elastic hysteresis, making it a more accurate weapon. Medieval European crossbows had a much smaller draw length than bows. This means that for the same energy to be imparted to the arrow (or bolt), the crossbow had to have a much higher draw weight (a worked example appears at the end of this article). A direct comparison between a fast hand-drawn replica crossbow and a longbow shows a 6:10 rate of shooting, or a 4:9 rate within 30 seconds, with comparable weapons.

Legal issues
Today, the crossbow often has a complicated legal status due to the possibility of lethal use and its similarities to both firearms and archery weapons. While some jurisdictions regard crossbows the same as firearms, many others do not require any sort of license to own a crossbow. The legality of using a crossbow for hunting varies widely around the world, and even within different jurisdictions of some federal countries.

In popular culture
The Star Wars franchise features Wookiees, including Chewbacca, wielding bowcasters, crossbow-themed blasters. In The Walking Dead, the character Daryl Dixon wields a crossbow. In George R. R. Martin's fantasy novel series A Song of Ice and Fire, crossbows are a common weapon; one is famously used by Tyrion Lannister to kill his father. That particular crossbow is featured even more prominently in the derivative HBO TV show Game of Thrones.

See also
Arbalist (crossbowman)
Bow and arrow
Crossbow bolt
History of crossbows
Master of Crossbowmen
Match crossbow
Modern competitive archery and target archery for bows
Sauterelle
Shooting sport

References
Citations
Sources
Payne-Gallwey, Ralph, Sir. The Crossbow: Mediaeval and Modern, Military and Sporting; its Construction, History & Management with a Treatise on the Balista and Catapult of the Ancients and An Appendix on the Catapult, Balista & the Turkish Bow. New York: Bramhall House, 1958.

External links
International Crossbow Shooting Union (IAU)
World Crossbow Shooting Association (WCSA)
The Crossbow by Sir Ralph Payne-Gallwey, BT

Ancient weapons Medieval weapons Chinese inventions Greek inventions Heraldic charges Bows (archery) Renaissance-era weapons Weapons of China
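The trade-off noted in the comparison section, where a short powerstroke forces a much higher draw weight for the same bolt energy, can be illustrated with a back-of-the-envelope calculation. The sketch below treats each weapon as an ideal linear spring, so stored energy is roughly half the peak draw force times the powerstroke length; the specific figures (a 70 lbf longbow with a 0.6 m powerstroke versus a 1,000 lbf medieval crossbow with a 0.15 m powerstroke) are illustrative assumptions, not measurements from the article.

```python
# Back-of-the-envelope comparison of energy stored by a longbow and a
# heavy medieval crossbow, modelling each as an ideal linear spring:
#   E = 1/2 * F_peak * d   (peak draw force times powerstroke length)
# All figures are illustrative assumptions, not historical measurements.

LBF_TO_N = 4.448  # pounds-force to newtons

def stored_energy_joules(peak_draw_lbf: float, powerstroke_m: float) -> float:
    """Energy (J) stored by an idealised linear-spring bow."""
    return 0.5 * peak_draw_lbf * LBF_TO_N * powerstroke_m

longbow = stored_energy_joules(peak_draw_lbf=70, powerstroke_m=0.60)
crossbow = stored_energy_joules(peak_draw_lbf=1000, powerstroke_m=0.15)

print(f"longbow : {longbow:6.1f} J")   # ~93 J
print(f"crossbow: {crossbow:6.1f} J")  # ~334 J

# Despite a draw weight roughly 14x the longbow's, the crossbow's short
# powerstroke means it stores only about 3.6x the energy, which is why
# crossbows needed such extreme draw weights to rival longbows.
```

Real bows are not ideal linear springs (steel prods in particular store energy less efficiently per unit of draw weight), so these numbers indicate only the direction and rough scale of the effect.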
https://en.wikipedia.org/wiki/Carbamazepine
Carbamazepine
Carbamazepine, sold under the brand name Tegretol among others, is an anticonvulsant medication used in the treatment of epilepsy and neuropathic pain. It is not a benzodiazepine. It is used as an adjunctive treatment in schizophrenia along with other medications, and as a second-line agent in bipolar disorder. Carbamazepine appears to work as well as phenytoin and valproate for focal and generalized seizures. It is not effective for absence or myoclonic seizures.

Carbamazepine was discovered in 1953 by Swiss chemist Walter Schindler. It was first marketed in 1962. It is available as a generic medication. It is on the World Health Organization's List of Essential Medicines. In 2020, it was the 185th most commonly prescribed medication in the United States, with more than 2 million prescriptions.

Medical uses
Carbamazepine is typically used for the treatment of seizure disorders and neuropathic pain. It is used off-label as a second-line treatment for bipolar disorder, and in combination with an antipsychotic in some cases of schizophrenia when treatment with a conventional antipsychotic alone has failed; however, evidence does not support the latter usage. It is not effective for absence seizures or myoclonic seizures.

Although carbamazepine may have similar effectiveness (as measured by people continuing to use a medication) and efficacy (as measured by the medicine reducing seizure recurrence and improving remission) when compared to phenytoin and valproate, the choice of medication should be evaluated on an individual basis, as further research is needed to determine which medication is most helpful for people with new-onset seizures.

In the United States, carbamazepine is indicated for the treatment of epilepsy (including partial seizures, generalized tonic-clonic seizures and mixed seizures) and trigeminal neuralgia. Carbamazepine is the only medication approved by the Food and Drug Administration for the treatment of trigeminal neuralgia. As of 2014, a controlled-release formulation was available, for which there is tentative evidence of fewer side effects and unclear evidence as to whether there is a difference in efficacy.

Adverse effects
In the US, the label for carbamazepine contains warnings concerning:
effects on the body's production of red blood cells, white blood cells, and platelets: rarely, there are major effects such as aplastic anemia and agranulocytosis, and more commonly there are minor changes, such as decreased white blood cell or platelet counts, that do not progress to more serious problems
increased risk of suicide
increased risk of hyponatremia and SIADH
risk of seizures if the person stops taking the drug abruptly
risks to the fetus in women who are pregnant, specifically congenital malformations such as spina bifida, and developmental disorders
pancreatitis
hepatitis
dizziness
bone marrow suppression
Stevens–Johnson syndrome

Common adverse effects may include drowsiness, dizziness, headaches and migraines, motor coordination impairment, nausea, vomiting, and/or constipation. Alcohol use while taking carbamazepine may lead to enhanced depression of the central nervous system. Less common side effects may include increased risk of seizures in people with mixed seizure disorders, abnormal heart rhythms, and blurry or double vision.
Also, rare case reports exist of an auditory side effect, whereby patients perceive sounds about a semitone lower than previously; this unusual side effect is usually not noticed by most people, and disappears after the person stops taking carbamazepine.

Pharmacogenetics
Serious skin reactions such as Stevens–Johnson syndrome (SJS) or toxic epidermal necrolysis (TEN) due to carbamazepine therapy are more common in people with a particular human leukocyte antigen gene variant (allele), HLA-B*1502. Odds ratios for the development of SJS or TEN in people who carry the allele can be in the double, triple or even quadruple digits, depending on the population studied. HLA-B*1502 occurs almost exclusively in people with ancestry across broad areas of Asia, but has a very low or absent frequency in European, Japanese, Korean and African populations. However, the HLA-A*31:01 allele has been shown to be a strong predictor of both mild and severe adverse reactions to carbamazepine, such as the DRESS form of severe cutaneous reactions, among Japanese, Chinese, Korean, and European people. It has been suggested that carbamazepine acts as a potent antigen that binds to the antigen-presenting area of HLA-B*1502, triggering a sustained activation signal on immature CD8 T cells and thus resulting in widespread cytotoxic reactions like SJS/TEN.

Interactions
Carbamazepine has a potential for drug interactions. Drugs that decrease the breakdown of carbamazepine, or otherwise increase its levels, include erythromycin, cimetidine, propoxyphene, and calcium channel blockers. Grapefruit juice raises the bioavailability of carbamazepine by inhibiting the enzyme CYP3A4 in the gut wall and in the liver. Lower levels of carbamazepine are seen when it is administered with phenobarbital, phenytoin, or primidone, which can result in breakthrough seizure activity. Valproic acid and valnoctamide both inhibit microsomal epoxide hydrolase (mEH), the enzyme responsible for the breakdown of the active metabolite carbamazepine-10,11-epoxide into inactive metabolites. By inhibiting mEH, valproic acid and valnoctamide cause a build-up of the active metabolite, prolonging the effects of carbamazepine and delaying its excretion.

Carbamazepine, as an inducer of cytochrome P450 enzymes, may increase the clearance of many drugs, decreasing their concentration in the blood to subtherapeutic levels and reducing their desired effects. Drugs that are metabolized more rapidly with carbamazepine include warfarin, lamotrigine, phenytoin, theophylline, valproic acid, many benzodiazepines, and methadone. Carbamazepine also increases the metabolism of the hormones in birth control pills and can reduce their effectiveness, potentially leading to unexpected pregnancies.

Pharmacology
Mechanism of action
Carbamazepine is a sodium channel blocker. It binds preferentially to voltage-gated sodium channels in their inactive conformation, which prevents repetitive and sustained firing of action potentials. Carbamazepine has effects on serotonin systems, but the relevance of these to its antiseizure effects is uncertain. There is evidence that it is a serotonin releasing agent and possibly even a serotonin reuptake inhibitor. It has been suggested that carbamazepine can also block voltage-gated calcium channels, which would reduce neurotransmitter release.

Pharmacokinetics
Carbamazepine is absorbed relatively slowly but practically completely after administration by mouth.
Highest concentrations in the blood plasma are reached after 4 to 24 hours, depending on the dosage form. Slow-release tablets result in about 15% lower absorption and 25% lower peak plasma concentrations than ordinary tablets, as well as less fluctuation of the concentration, but not in significantly lower minimum concentrations. 20 to 30% of the substance circulates as carbamazepine itself; the rest are metabolites. 70 to 80% is bound to plasma proteins. Concentrations in breast milk are 25 to 60% of those in the blood plasma.

Carbamazepine itself is not pharmacologically active. It is activated, mainly by CYP3A4, to carbamazepine-10,11-epoxide, which is solely responsible for the drug's anticonvulsant effects. The epoxide is then inactivated by microsomal epoxide hydrolase (mEH) to carbamazepine-trans-10,11-diol and further to its glucuronides. Other metabolites include various hydroxyl derivatives and carbamazepine-N-glucuronide.

The plasma half-life is about 35 to 40 hours when carbamazepine is given as a single dose, but it is a strong inducer of liver enzymes, and the plasma half-life shortens to about 12 to 17 hours when it is given repeatedly. The half-life can be shortened further, to 9–10 hours, by other enzyme inducers such as phenytoin or phenobarbital (an illustrative calculation appears at the end of this article). About 70% is excreted via the urine, almost exclusively in the form of its metabolites, and 30% via the faeces.

History
Carbamazepine was discovered by the chemist Walter Schindler at J.R. Geigy AG (now part of Novartis) in Basel, Switzerland, in 1953. It was first marketed as a drug to treat epilepsy in Switzerland in 1963 under the brand name Tegretol; its use for trigeminal neuralgia (formerly known as tic douloureux) was introduced at the same time. It has been used as an anticonvulsant and antiepileptic in the United Kingdom since 1965, and has been approved in the United States since 1968. Carbamazepine was studied for bipolar disorder throughout the 1970s.

Society and culture
Environmental impact
Carbamazepine and its bio-transformation products have been detected in wastewater treatment plant effluent and in streams receiving treated wastewater. Field and laboratory studies have been conducted to understand the accumulation of carbamazepine in food plants grown in soil treated with sludge; results vary with the concentration of carbamazepine present in the sludge and with the concentration of sludge in the soil. Taking into account only studies that used concentrations commonly found in the environment, a 2014 review concluded that "the accumulation of carbamazepine into plants grown in soil amended with biosolids poses a de minimis risk to human health according to the approach."

Brand names
Carbamazepine is available worldwide under many brand names, including Tegretol.

Research
The drug has been claimed to be effective for ADHD.

References
Further reading
External links
Carbamazepine. UK National Health Service

Anticonvulsants Antidiuretics CYP3A4 inducers Dermatoxins Dibenzazepines Novartis brands GABAA receptor positive allosteric modulators Hepatotoxins Mood stabilizers Prodrugs Ureas World Health Organization essential medicines
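The half-life figures in the pharmacokinetics section translate directly into first-order elimination arithmetic, C(t) = C0 * 0.5^(t / t_half). The sketch below compares how much drug remains after 24 hours under the three half-life regimes given above; the starting concentration of 10 mg/L is an arbitrary illustrative value, not a dosing recommendation.

```python
# First-order elimination: C(t) = C0 * 0.5 ** (t / t_half).
# Half-lives are taken from the pharmacokinetics section above;
# the starting concentration is an arbitrary illustrative value.

def remaining_concentration(c0: float, t_half_h: float, t_h: float) -> float:
    """Plasma concentration after t_h hours of first-order elimination."""
    return c0 * 0.5 ** (t_h / t_half_h)

C0 = 10.0  # mg/L, arbitrary starting concentration
scenarios = {
    "single dose (t1/2 ~ 37.5 h)": 37.5,
    "repeated dosing, auto-induced (t1/2 ~ 14.5 h)": 14.5,
    "with phenytoin/phenobarbital (t1/2 ~ 9.5 h)": 9.5,
}

for label, t_half in scenarios.items():
    c24 = remaining_concentration(C0, t_half, 24.0)
    print(f"{label}: {c24:.1f} mg/L left after 24 h")

# Approximate output: 6.4, 3.2 and 1.7 mg/L respectively. Enzyme
# induction roughly halves what remains after a day, which is one reason
# maintenance dosing differs from initial dosing.
```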
https://en.wikipedia.org/wiki/Car%20Talk
Car Talk
Car Talk is a radio talk show that was broadcast weekly on National Public Radio (NPR) stations and elsewhere. Its subjects were automobiles and automotive repair, often discussed humorously. It was hosted by brothers Tom and Ray Magliozzi, also known as Click and Clack, the Tappet Brothers. The show won a Peabody Award in 1992.

The show ran from 1977 until October 2012, when the Magliozzi brothers retired. Edited reruns (introduced as The Best of Car Talk) continued to be available for weekly airing on NPR's national schedule through September 30, 2017, and some NPR affiliates have continued to broadcast reruns. Past episodes are otherwise available in podcast format. On June 11, 2021, it was announced that radio distribution of Car Talk would officially end on October 1, 2021, and that NPR would begin distribution of a twice-weekly podcast, 35–40 minutes in length, including early versions of every show in sequential order.

Premise
Car Talk was presented in the form of a call-in radio show: listeners called in with questions related to motor vehicle maintenance and repair. Most of the advice sought was diagnostic, with callers describing symptoms and demonstrating sounds of an ailing vehicle while the Magliozzis attempted to identify the malfunction over the telephone and give advice on how to fix it. While the hosts peppered their call-in sessions with jokes directed at both the caller and themselves, the Magliozzis were usually able to arrive at a diagnosis. When they were stumped, they offered an answer anyway, one they claimed was "unencumbered by the thought process", the official motto of the show. Edited reruns are carried on XM Satellite Radio via both the Public Radio and NPR Now channels. The Car Talk theme music was "Dawggy Mountain Breakdown" by bluegrass artist David Grisman.

Call-in procedure
Throughout the program, listeners were encouraged to dial the toll-free telephone number, 1-888-CAR-TALK (1-888-227-8255), which connected to a 24-hour answering service. Although the approximately 2,000 queries received each week were screened by the Car Talk staff, the questions were unknown to the Magliozzis in advance, as "that would entail researching the right answer, which is what? ... Work."

Features
The show originally consisted of two segments with a break in between, but was changed to three segments. After the shift to the three-segment format, it became a running joke to refer to the last segment as "the third half" of the program. The show opened with a short comedy segment, typically jokes sent in by listeners, followed by eight call-in sessions. The hosts ran a contest called the "Puzzler", in which a riddle, sometimes car-related, was presented. The answer to the previous week's "Puzzler" was given at the beginning of the "second half" of the show, and a new "Puzzler" was given at the start of the "third half". The hosts instructed listeners to write answers addressed to "Puzzler Tower" on some non-existent or expensive object, such as a "$26 bill" or an advanced digital SLR camera; this gag started as suggestions that the answers be written "on the back of a $20 bill". A running gag concerned Tom's inability to remember the previous week's "Puzzler" without heavy prompting from Ray. During a tribute show following Tom's death in 2014 from complications of Alzheimer's disease, Ray joked, "I guess he wasn't joking about not being able to remember the puzzler all those years."
For each puzzler, one correct answer was chosen at random, with the winner receiving a $26 gift certificate to the Car Talk store, referred to as the "Shameless Commerce Division". The prize was originally $25, but was increased for inflation after a few years. Originally the winner received a specific item from the store, but this soon changed to a gift certificate, allowing winners to choose the item they wanted (though Tom often made a suggestion).

A recurring feature was "Stump the Chumps", in which the hosts revisited a caller from a previous show to determine the accuracy and the effect, if any, of their advice. A similar feature, "Where Are They Now, Tommy?", began in May 2001. It opened with a comical musical theme, with a sputtering, backfiring car engine and a horn as a backdrop. Tom then announced who the previous caller was, followed by a short replay of the essence of the previous call, preceded and followed by the kind of harp music often used in audiovisual media to indicate entering and leaving a dream. The hosts then greeted the previous caller, confirmed that they had not spoken since their previous appearance, and asked whether there had been any influences on the answer they were about to relate, such as arcane bribes by the NPR staff. The repair story was then discussed, followed by a fanfare and applause if the Tappet Brothers' diagnosis had been correct, or by wah-wah-wah music mixed with the sound of a starter cranking on a weak battery (an engine that would not start) if the diagnosis had been wrong. The hosts then thanked the caller for their return appearance.

The brothers also had an official Animal-Vehicle Biologist and Wildlife Guru, Kieran Lindsey. She answered questions like "How do I remove a snake from my car?" and offered advice on how those living in cities and suburbs could reconnect with wildlife. The hosts also sometimes relied on Harvard University professors Wolfgang Rueckner and Jim E. Davis for questions concerning physics and chemistry, respectively.

There were numerous appearances by NPR personalities, including Bob Edwards, Susan Stamberg, Scott Simon, Ray Suarez, Will Shortz, Sylvia Poggioli, and commentator and author Daniel Pinkwater. On one occasion, the show featured Martha Stewart as an in-studio guest, whom the Magliozzis twice during the segment referred to as "Margaret". Celebrities and public figures were featured as "callers" as well, including Geena Davis, Ashley Judd, Morley Safer, Gordon Elliott, former Major League Baseball pitcher Bill Lee, and astronaut John M. Grunsfeld.

Space program calls
Astronaut and engineer John Grunsfeld called into the show during Space Shuttle mission STS-81 in January 1997, during which Atlantis docked with the Mir space station. In this call he complained about the performance of his serial-numbered, Rockwell-manufactured "government van": it would run very loud and rough for about two minutes, quieter and smoother for another six and a half, and then the engine would stop with a jolt. He went on to state that the brakes of the vehicle, when applied, would glow red-hot, and that the vehicle's odometer displayed "about 60 million miles". This created some consternation for the hosts, until they noticed that the audio of Grunsfeld's voice, relayed from Mir via TDRS satellite, sounded similar to that of Tom Hanks in the then-recent film Apollo 13, after which they realized the call was from space and the government van in question was, in fact, the Space Shuttle.
In addition to the on-orbit call, the brothers once received a call asking advice on winterizing an electric car. When they asked what kind of car, the caller stated it was a "kit car", a $400 million "kit car". It was a joke call from NASA's Jet Propulsion Laboratory concerning the preparation of the Mars rover Opportunity for the oncoming Martian winter, during which temperatures drop to several hundred degrees below freezing. Click and Clack have also been featured in editorial cartoons, including one in which a befuddled NASA engineer calls them to ask how to fix the Space Shuttle.

Humor
Humor and wisecracking pervaded the program. Tom and Ray were known for their self-deprecating humor, often joking about the supposedly poor quality of their advice and of the show in general. They also commented at the end of each show: "Well, it's happened again—you've wasted another perfectly good hour listening to Car Talk." At some point in almost every show, usually when giving the address for the Puzzler answers or fan mail, Ray mentioned Cambridge, Massachusetts (where the show originated), at which point Tom reverently interjected, with a tone of civic pride, "Our fair city". Ray invariably mocked "Cambridge, MA", the United States Postal Service's two-letter abbreviation for Massachusetts, by pronouncing the "MA" as a word. Preceding each break in the show, one of the hosts led up to the network identification with a humorous take on the disgusted reaction of some, usually famous, person to hearing that identification. The full line went along the pattern of, for example, "And even though Roger Clemens stabs his radio with a syringe whenever he hears us say it, this is NPR: National Public Radio" (later just "... this is NPR"). At one point in the show, often after the break, Ray would typically state, "Support for this show is provided by," followed by an absurd fundraiser.

The ending credits of the show started with thanks to the colorfully nicknamed actual staffers: producer Doug "the subway fugitive, not a slave to fashion, bongo boy frogman" Berman; "John 'Bugsy' Lawlor, just back from the ..." each week a different eating event with rhyming foodstuff names; David "Calves of Belleville" Greene; Catherine "Frau Blücher" Fenollosa, whose name caused a horse to neigh and gallop (an allusion to a running gag in the movie Young Frankenstein); and Carly "High Voltage" Nix, among others. Following the real staff was a lengthy list of pun-filled fictional staffers and sponsors, such as statistician Marge Innovera ("margin of error"), customer care representative Haywood Jabuzoff ("Hey, would ya buzz off"), meteorologist Claudio Vernight ("cloudy overnight"), optometric firm C. F. Eye Care ("see if I care"), Russian chauffeur Picov Andropov ("pick up and drop off"), Leo Tolstoy biographer Warren Peace ("War and Peace"), hygiene officer and chief of the Tokyo office Oteka Shawa ("oh, take a shower"), Swedish snowboard instructor Soren Derkeister ("sore in the keister"), law firm Dewey, Cheetham & Howe ("Do we cheat 'em? And how!"), Greek tailor Euripides Eumenades ("You rip-a these, you mend-a these"), cloakroom attendant Mahatma Coate ("My hat, my coat"), seat cushion tester Mike Easter ("my keister") and many, many others, usually concluding with Erasmus B. Dragon ("Her ass must be draggin'"), whose job title varied but who was often said to be head of the show's working mothers' support group.
They sometimes advised that "our chief counsel from the law firm of Dewey, Cheetham & Howe is Hugh Louis Dewey, known to a group of people in Harvard Square as Huey Louie Dewey." Huey, Louie, and Dewey were the juvenile nephews being raised by Donald Duck in Walt Disney's Comics and Stories. Guest accommodations were provided by The Horseshoe Road Inn ("the horse you rode in").

At the end of the show, Ray warned the audience, "Don't drive like my brother!" to which Tom replied, "And don't drive like my brother!" The original tag line was "Don't drive like a knucklehead!" There were variations such as "Don't drive like my brother ..." "And don't drive like his brother!" and "Don't drive like my sister ..." "And don't drive like my sister!" The tagline was heard in the Pixar film Cars, in which Tom and Ray voiced anthropomorphized vehicles (Rusty and Dusty Rust-eze, respectively a 1963 Dodge Dart and a 1963 Dodge A100 van, Lightning McQueen's racing sponsors) with personalities similar to their on-air personae. Tom notoriously once owned a "convertible, green with large areas of rust!" Dodge Dart, known jokingly on the program by the faux-elegant name "Dartre".

History
In 1977, radio station WBUR-FM in Boston scheduled a panel of local car mechanics to discuss car repairs on one of its programs, but only Tom Magliozzi showed up. He did so well that he was asked to return as a guest, and he invited his younger brother Ray (who was actually more of a car repair expert) to join him. The brothers were soon asked to host their own radio show on WBUR, which they continued to do every week. In 1986, NPR decided to distribute their show nationally.

In 1989, the brothers started a newspaper column, Click and Clack Talk Cars, which, like the radio show, mixed serious advice with humor. King Features distributes the column. Ray Magliozzi has continued to write the column, retitled Car Talk, since his brother's death in 2014, knowing he would have wanted the advice and humor to continue.

In 1992, Car Talk won a Peabody Award, with the citation saying, "Each week, master mechanics Tom and Ray Magliozzi provide useful information about preserving and protecting our cars. But the real core of this program is what it tells us about human mechanics ... The insight and laughter provided by Messrs. Magliozzi, in conjunction with their producer Doug Berman, provide a weekly mental tune-up for a vast and ever-growing public radio audience."

In 2005, Tom and Ray Magliozzi founded the Car Talk Vehicle Donation Program, "as a way to give back to the stations that were our friends and partners for decades — and whose programs we listen to every day." Since the program was founded, over 40,000 vehicles have been donated to support local NPR stations and programs, raising over $40 million. Approximately 70% of the proceeds go directly toward funding local NPR affiliates and programs.

In May 2007, the program, which previously had been available digitally only as a paid subscription from Audible.com, became a free podcast distributed by NPR, after a two-month test period during which only a "call of the week" was available via podcast. As of 2012, it had 3.3 million listeners each week, on about 660 stations.

On June 8, 2012, the brothers announced that they would stop broadcasting new episodes as of October. Executive producer Doug Berman said the best material from 25 years of past shows would be used to put together "repurposed" shows for NPR to broadcast.
Berman estimated the archives contained enough material for eight years' worth of shows before anything would have to be repeated. Ray Magliozzi, however, would occasionally record new taglines and sponsor announcements that aired at the end of the show. The show was inducted into the National Radio Hall of Fame in 2014. Ray hosted a special Car Talk memorial episode for his brother Tom after Tom died in November 2014, and he continued to write their syndicated newspaper column, saying that his brother would want him to. The Best of Car Talk episodes ended their weekly broadcast on NPR on September 30, 2017, although past episodes remained available online and via podcasts. 120 of the 400 stations intended to continue airing the show, and NPR announced that one option for the time slot would be its new news-talk program It's Been a Minute. Radio distribution officially ended on October 1, 2021, when NPR shifted to the twice-weekly podcast of early shows described above.

Hosts
The Magliozzis were long-time auto mechanics. Ray Magliozzi has a bachelor of science degree in humanities and science from MIT, while Tom had a bachelor of science degree in economics from MIT, an MBA from Northeastern University, and a DBA from the Boston University School of Management. The Magliozzis operated a do-it-yourself garage together in the 1970s, which became more of a conventional repair shop in the 1980s. Ray continued to have a hand in the day-to-day operations of the shop for years, while Tom semi-retired, often joking on Car Talk about his distaste for doing "actual work". The show's offices were located near their shop at the corner of JFK Street and Brattle Street in Harvard Square, marked as "Dewey, Cheetham & Howe", the imaginary law firm to which they referred on-air. DC&H doubled as the business name of Tappet Brothers Associates, the corporation established to manage the business end of Car Talk; initially a joke, the company was incorporated after the show expanded from a single station to national syndication. The two were commencement speakers at MIT in 1999.

Executive producer Doug Berman said in 2012, "The guys are culturally right up there with Mark Twain and the Marx Brothers. They will stand the test of time. People will still be enjoying them years from now. They're that good." Tom Magliozzi died on November 3, 2014, at age 77, of complications from Alzheimer's disease.

Adaptations
The show was the inspiration for the short-lived The George Wendt Show, which briefly aired on CBS in the 1994–95 season as a mid-season replacement. In July 2007, PBS announced that it had green-lit an animated adaptation of Car Talk, to air in prime time in 2008. The show, titled Click and Clack's As the Wrench Turns, is based on the adventures of the fictional "Click and Clack" brothers' garage at "Car Talk Plaza". The ten episodes aired in July and August 2008. Car Talk: The Musical!!! was written and directed by Wesley Savick, with music composed by Michael Wartofsky. The adaptation was presented by Suffolk University and opened on March 31, 2011, at the Modern Theatre in Boston, Massachusetts. The play was not officially endorsed by the Magliozzis, but they participated in the production, lending their voices to a central puppet character named "The Wizard of Cahs".
References
Further reading
External links
Click and Clack's As the Wrench Turns official site (archived)
Transcript of the Magliozzis' commencement address at MIT, 1999

1970s American radio programs 1977 radio programme debuts 1980s American radio programs 1990s American radio programs 2000s American radio programs 2010s American radio programs 2012 radio programme endings American talk radio programs Cambridge, Massachusetts Mass media in Boston Motor vehicle maintenance NPR programs Peabody Award-winning radio programs
https://en.wikipedia.org/wiki/Cyclic%20adenosine%20monophosphate
Cyclic adenosine monophosphate
Cyclic adenosine monophosphate (cAMP, cyclic AMP, or 3',5'-cyclic adenosine monophosphate) is a second messenger, or cellular signal occurring within cells, that is important in many biological processes. cAMP is a derivative of adenosine triphosphate (ATP) and is used for intracellular signal transduction in many different organisms, conveying the cAMP-dependent pathway.

History
Earl Sutherland of Vanderbilt University won a Nobel Prize in Physiology or Medicine in 1971 "for his discoveries concerning the mechanisms of the action of hormones", especially epinephrine, via second messengers such as cyclic adenosine monophosphate (cyclic AMP).

Synthesis
Cyclic AMP is synthesized from ATP by adenylate cyclase, which is located on the inner side of the plasma membrane and anchored at various locations in the interior of the cell. Adenylate cyclase is activated by a range of signaling molecules through the activation of adenylate cyclase stimulatory G (Gs)-protein-coupled receptors, and is inhibited by agonists of adenylate cyclase inhibitory G (Gi)-protein-coupled receptors. Liver adenylate cyclase responds more strongly to glucagon, and muscle adenylate cyclase responds more strongly to adrenaline. cAMP decomposition into AMP is catalyzed by the enzyme phosphodiesterase (a toy kinetic model of this synthesis/degradation balance appears at the end of this article).

Functions
cAMP is a second messenger used for intracellular signal transduction, such as transferring into cells the effects of hormones like glucagon and adrenaline, which cannot pass through the plasma membrane. It is also involved in the activation of protein kinases. In addition, cAMP binds to and regulates the function of ion channels such as the HCN channels, and a few other cyclic nucleotide-binding proteins such as Epac1 and RAPGEF2.

Role in eukaryotic cells
cAMP and its associated kinases function in several biochemical processes, including the regulation of glycogen, sugar, and lipid metabolism. In eukaryotes, cyclic AMP works by activating protein kinase A (PKA, or cAMP-dependent protein kinase). PKA is normally inactive as a tetrameric holoenzyme consisting of two catalytic and two regulatory units (C2R2), with the regulatory units blocking the catalytic centers of the catalytic units. Cyclic AMP binds to specific locations on the regulatory units of the protein kinase and causes dissociation of the regulatory and catalytic subunits, enabling the catalytic units to phosphorylate substrate proteins.

The active subunits catalyze the transfer of phosphate from ATP to specific serine or threonine residues of protein substrates. The phosphorylated proteins may act directly on the cell's ion channels, or may become activated or inhibited enzymes. Protein kinase A can also phosphorylate specific proteins that bind to promoter regions of DNA, causing increases in transcription. Not all protein kinases respond to cAMP: several classes of protein kinases, including protein kinase C, are not cAMP-dependent. Further effects depend mainly on the cAMP-dependent protein kinase and vary based on the type of cell. Still, there are some minor PKA-independent functions of cAMP, e.g., activation of calcium channels, providing a minor pathway by which growth hormone-releasing hormone causes a release of growth hormone. However, the view that the majority of the effects of cAMP are controlled by PKA is an outdated one. In 1998, a family of cAMP-sensitive proteins with guanine nucleotide exchange factor (GEF) activity was discovered.
These are termed exchange proteins activated by cAMP (Epac), and the family comprises Epac1 and Epac2. The mechanism of activation is similar to that of PKA: the GEF domain is usually masked by the N-terminal region containing the cAMP binding domain. When cAMP binds, the domain dissociates and exposes the now-active GEF domain, allowing Epac to activate small Ras-like GTPase proteins such as Rap1.

Additional role of secreted cAMP in social amoebae
In the species Dictyostelium discoideum, cAMP acts outside the cell as a secreted signal. The chemotactic aggregation of cells is organized by periodic waves of cAMP that propagate between cells over distances as large as several centimetres. The waves are the result of regulated production and secretion of extracellular cAMP and a spontaneous biological oscillator that initiates the waves at the centers of territories.

Role in bacteria
In bacteria, the level of cAMP varies depending on the medium used for growth. In particular, cAMP is low when glucose is the carbon source. This occurs through inhibition of the cAMP-producing enzyme, adenylate cyclase, as a side effect of glucose transport into the cell. The transcription factor cAMP receptor protein (CRP), also called CAP (catabolite gene activator protein), forms a complex with cAMP and is thereby activated to bind to DNA. CRP-cAMP increases the expression of a large number of genes, including some encoding enzymes that can supply energy independently of glucose.

cAMP, for example, is involved in the positive regulation of the lac operon. In an environment with a low glucose concentration, cAMP accumulates and binds to the allosteric site on CRP (cAMP receptor protein), a transcription activator protein. The protein assumes its active shape and binds to a specific site upstream of the lac promoter, making it easier for RNA polymerase to bind to the adjacent promoter and start transcription of the lac operon, increasing the rate of lac operon transcription. With a high glucose concentration, the cAMP concentration decreases and CRP disengages from the lac operon.

Pathology
Since cyclic AMP is a second messenger and plays a vital role in cell signalling, it has been implicated in various disorders, including but not restricted to those below.

Role in human carcinoma
Some research has suggested that a deregulation of cAMP pathways and an aberrant activation of cAMP-controlled genes is linked to the growth of some cancers.

Role in prefrontal cortex disorders
Recent research suggests that cAMP affects the function of higher-order thinking in the prefrontal cortex through its regulation of ion channels called hyperpolarization-activated cyclic nucleotide-gated (HCN) channels. When cAMP stimulates the HCN channels, they open, closing the brain cell to communication and thus interfering with the function of the prefrontal cortex. This research, especially as it bears on cognitive deficits in age-related illnesses and ADHD, is of interest to researchers studying the brain. cAMP is also involved in the activation of the trigeminocervical system, leading to neurogenic inflammation and causing migraine.

Role in infectious disease agents' pathogenesis
Disrupted functioning of cAMP signalling has been noted as one of the mechanisms of several bacterial exotoxins, which can be subgrouped into two distinct categories: toxins that interfere with ADP-ribosyl-transferase enzymes, and invasive adenylate cyclases.

ADP-ribosyl-transferase-related toxins
Cholera toxin is an AB toxin that has five B subunits and one A subunit.
The toxin acts by the following mechanism: first, the B subunit ring of the cholera toxin binds to GM1 gangliosides on the surface of target cells. If a cell lacks GM1, the toxin most likely binds instead to other types of glycans attached to proteins rather than lipids, such as Lewis Y and Lewis X.

Uses
Forskolin is commonly used as a tool in biochemistry to raise levels of cAMP in the study and research of cell physiology.

See also
Cyclic guanosine monophosphate (cGMP)
8-Bromoadenosine 3',5'-cyclic monophosphate (8-Br-cAMP)
Acrasin, specific to chemotactic use in Dictyostelium discoideum
Phosphodiesterase 4 (PDE4), which degrades cAMP

References

Nucleotides Signal transduction Cell signaling Cyclic nucleotides
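The balance described in the synthesis section, production of cAMP by adenylate cyclase versus breakdown by phosphodiesterase, is often summarized as a one-variable kinetic model, d[cAMP]/dt = k_syn - k_deg * [cAMP]. The sketch below is a minimal illustrative simulation of that balance; the rate constants are arbitrary toy values, not published measurements.

```python
# Minimal toy model of cAMP turnover:
#   d[cAMP]/dt = k_syn - k_deg * [cAMP]
# k_syn: production by adenylate cyclase; k_deg: breakdown by
# phosphodiesterase. Rate constants are arbitrary illustrative values.

def simulate(k_syn: float, k_deg: float, c0: float = 0.0,
             dt: float = 0.01, steps: int = 2000) -> float:
    """Integrate with forward Euler and return the final concentration."""
    c = c0
    for _ in range(steps):
        c += (k_syn - k_deg * c) * dt
    return c

basal = simulate(k_syn=1.0, k_deg=0.5)       # -> ~2.0 (= k_syn / k_deg)
stimulated = simulate(k_syn=5.0, k_deg=0.5)  # Gs-coupled receptor active
pde_boosted = simulate(k_syn=1.0, k_deg=2.5) # more phosphodiesterase

print(f"basal steady state:     {basal:.2f}")
print(f"stimulated (Gs active): {stimulated:.2f}")
print(f"PDE-boosted:            {pde_boosted:.2f}")

# The steady state is k_syn / k_deg, so receptor-driven changes in
# synthesis and changes in phosphodiesterase activity both shift cAMP.
```

Because the steady state is simply k_syn / k_deg, receptor-driven changes in synthesis (Gs or Gi signaling) and agents acting on phosphodiesterase both shift resting cAMP levels, consistent with the roles of forskolin and PDE4 mentioned in this article.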
https://en.wikipedia.org/wiki/Control%20engineering
Control engineering
Control engineering or control systems engineering is an engineering discipline that deals with control systems, applying control theory to design equipment and systems with desired behaviors in control environments. The discipline overlaps with, and is usually taught along with, electrical engineering and mechanical engineering at many institutions around the world.

The practice uses sensors and detectors to measure the output performance of the process being controlled; these measurements are used to provide corrective feedback that helps achieve the desired performance. Systems designed to perform without requiring human input are called automatic control systems (such as cruise control for regulating the speed of a car). Multi-disciplinary in nature, control systems engineering activities focus on the implementation of control systems, mainly derived through the mathematical modeling of a diverse range of systems.

Overview
Modern-day control engineering is a relatively new field of study that gained significant attention during the 20th century with the advancement of technology. It can be broadly defined as the practical application of control theory. Control engineering plays an essential role in a wide range of control systems, from simple household washing machines to high-performance F-16 fighter aircraft. It seeks to understand physical systems, using mathematical modelling, in terms of inputs, outputs and various components with different behaviors; to use control system design tools to develop controllers for those systems; and to implement controllers in physical systems employing available technology. A system can be mechanical, electrical, fluid, chemical, financial or biological, and its mathematical modelling, analysis and controller design uses control theory in one or many of the time, frequency and complex-s domains, depending on the nature of the design problem.

History
Automatic control systems were first developed over two thousand years ago. The first feedback control device on record is thought to be the water clock of Ktesibios in Alexandria, Egypt, around the third century BCE. It kept time by regulating the water level in a vessel and, therefore, the water flow from that vessel. This was certainly a successful device, as water clocks of similar design were still being made in Baghdad when the Mongols captured the city in 1258 CE. A variety of automatic devices have been used over the centuries to accomplish useful tasks or simply to entertain. The latter include the automata, popular in Europe in the 17th and 18th centuries, featuring dancing figures that would repeat the same task over and over again; these automata are examples of open-loop control. Milestones among feedback, or "closed-loop", automatic control devices include the temperature regulator of a furnace attributed to Drebbel, circa 1620, and the centrifugal flyball governor used for regulating the speed of steam engines by James Watt in 1788.

In his 1868 paper "On Governors", James Clerk Maxwell was able to explain instabilities exhibited by the flyball governor, using differential equations to describe the control system. This demonstrated the importance and usefulness of mathematical models and methods in understanding complex phenomena, and it signaled the beginning of mathematical control and systems theory. Elements of control theory had appeared earlier, but not as dramatically and convincingly as in Maxwell's analysis. Control theory made significant strides over the next century.
New mathematical techniques, as well as advances in electronic and computer technologies, made it possible to control significantly more complex dynamical systems than the original flyball governor could stabilize. These techniques included developments in optimal control in the 1950s and 1960s, followed by progress in stochastic, robust, adaptive and nonlinear control methods in the 1970s and 1980s. Applications of control methodology have helped make possible space travel and communication satellites, safer and more efficient aircraft, cleaner automobile engines, and cleaner and more efficient chemical processes.

Before it emerged as a unique discipline, control engineering was practiced as a part of mechanical engineering, and control theory was studied as a part of electrical engineering, since electrical circuits can often be easily described using control theory techniques. In the very first control relationships, a current output was represented by a voltage control input. However, not having adequate technology to implement electrical control systems, designers were left with the option of less efficient and slow-responding mechanical systems. A very effective mechanical controller that is still widely used in some hydro plants is the governor. Later on, prior to modern power electronics, process control systems for industrial applications were devised by mechanical engineers using pneumatic and hydraulic control devices, many of which are still in use today.

Control theory
There are two major divisions in control theory, namely classical and modern, which have direct implications for control engineering applications.

Classical SISO System Design
The scope of classical control theory is limited to single-input, single-output (SISO) system design, except when analyzing for disturbance rejection using a second input. The system analysis is carried out in the time domain using differential equations, in the complex-s domain with the Laplace transform, or in the frequency domain by transforming from the complex-s domain. Many systems may be assumed to have a second-order, single-variable response in the time domain. A controller designed using classical theory often requires on-site tuning due to design approximations. Yet, due to the easier physical implementation of classical controller designs as compared to systems designed using modern control theory, these controllers are preferred in most industrial applications. The most common controllers designed using classical control theory are PID controllers (a simulation sketch appears at the end of this article); a less common implementation may include either or both a lead or lag filter. The end goal is to meet requirements typically provided in the time domain, called the step response, or at times in the frequency domain, called the open-loop response. Step response characteristics applied in a specification are typically percent overshoot, settling time, etc.; open-loop response characteristics applied in a specification are typically gain and phase margin and bandwidth. These characteristics may be evaluated through simulation, including a dynamic model of the system under control coupled with the compensation model.

Modern MIMO System Design
Modern control theory is carried out in the state space, and can deal with multiple-input, multiple-output (MIMO) systems.
This overcomes the limitations of classical control theory in more sophisticated design problems, such as fighter aircraft control, with the limitation that no frequency-domain analysis is possible. In modern design, a system is represented to the greatest advantage as a set of decoupled first-order differential equations defined using state variables. Nonlinear, multivariable, adaptive and robust control theories come under this division. Matrix methods are significantly limited for MIMO systems in which linear independence in the relationship between inputs and outputs cannot be assured. Being fairly new, modern control theory has many areas yet to be explored. Scholars like Rudolf E. Kálmán and Aleksandr Lyapunov are well known among those who have shaped modern control theory.

Control systems
Control engineering is the engineering discipline that focuses on the modeling of a diverse range of dynamic systems (e.g. mechanical systems) and the design of controllers that will cause these systems to behave in the desired manner. Although such controllers need not be electrical, many are, and hence control engineering is often viewed as a subfield of electrical engineering. Electrical circuits, digital signal processors and microcontrollers can all be used to implement control systems. Control engineering has a wide range of applications, from the flight and propulsion systems of commercial airliners to the cruise control present in many modern automobiles.

In most cases, control engineers utilize feedback when designing control systems. This is often accomplished using a PID controller system. For example, in an automobile with cruise control, the vehicle's speed is continuously monitored and fed back to the system, which adjusts the motor's torque accordingly. Where there is regular feedback, control theory can be used to determine how the system responds to such feedback. In practically all such systems stability is important, and control theory can help ensure stability is achieved.

Although feedback is an important aspect of control engineering, control engineers may also work on the control of systems without feedback. This is known as open-loop control. A classic example of open-loop control is a washing machine that runs through a pre-determined cycle without the use of sensors.

Control engineering education
At many universities around the world, control engineering courses are taught primarily in electrical engineering and mechanical engineering, though some courses are offered in mechatronics engineering and aerospace engineering. In others, control engineering is connected to computer science, as most control techniques today are implemented through computers, often as embedded systems (as in the automotive field). The field of control within chemical engineering is often known as process control. It deals primarily with the control of variables in a chemical process in a plant. It is taught as part of the undergraduate curriculum of any chemical engineering program and employs many of the same principles as control engineering. Other engineering disciplines also overlap with control engineering, as it can be applied to any system for which a suitable model can be derived.
However, specialised control engineering departments do exist; for example, in Italy there are several master's programmes in Automation & Robotics that are fully specialised in control engineering, and there are the Department of Automatic Control and Systems Engineering at the University of Sheffield, the Department of Robotics and Control Engineering at the United States Naval Academy, and the Department of Control and Automation Engineering at the Istanbul Technical University.

Control engineering has diversified applications that include science, finance management, and even human behavior. Students of control engineering may start with a linear control system course dealing with the time and complex-s domains, which requires a thorough background in elementary mathematics and the Laplace transform; this is called classical control theory. In linear control, the student does frequency- and time-domain analysis. Digital control and nonlinear control courses require the Z-transform and algebra respectively, and could be said to complete a basic control education.

Control engineering careers
A control engineer's career starts with a bachelor's degree and can continue through graduate study. Control engineering degrees pair well with an electrical or mechanical engineering degree. Control engineers usually get jobs in technical management, where they typically lead interdisciplinary projects. There are many job opportunities in aerospace companies, manufacturing companies, automobile companies, power companies, and government agencies. Employers of control engineers include companies such as Rockwell Automation, NASA, Ford, and Goodrich. Control engineers reportedly can earn around $66k annually at Lockheed Martin Corp., and up to $96k annually at General Motors Corporation.

According to a Control Engineering survey, most of the respondents were control engineers in various forms of their own career. Few careers are classified simply as "control engineer"; most are specific careers that bear a resemblance to the overarching field of control engineering. A majority of the control engineers who took the survey in 2019 are system or product designers, or control or instrument engineers. Most of the jobs involve process engineering, production or maintenance; all are variations of control engineering.

Recent advancement
Originally, control engineering dealt exclusively with continuous systems. The development of computer control tools created a need for discrete control system engineering, because the communications between the computer-based digital controller and the physical system are governed by a computer clock. The equivalent of the Laplace transform in the discrete domain is the Z-transform. Today, many control systems are computer controlled and consist of both digital and analog components. Therefore, at the design stage, either the digital components are mapped into the continuous domain and the design is carried out in the continuous domain, or the analog components are mapped into the discrete domain and the design is carried out there. The first of these two methods is more commonly encountered in practice, because many industrial systems have many continuous-system components, including mechanical, fluid, biological and analog electrical components, with a few digital controllers.
Similarly, the design technique has progressed from paper-and-ruler manual design to computer-aided design, and now to computer-automated design (CAutoD), which has been made possible by evolutionary computation. CAutoD can be applied not just to tuning a predefined control scheme, but also to controller structure optimisation, system identification and the invention of novel control systems, based purely upon a performance requirement, independent of any specific control scheme.

Resilient control systems extend the traditional focus on planned disturbances to frameworks that also address multiple types of unexpected disturbance, in particular by adapting and transforming the behavior of the control system in response to malicious actors, abnormal failure modes, undesirable human action, etc.

See also

Electrical engineering
Communications engineering
Satellite navigation
Outline of control engineering
Advanced process control
Building automation
Computer-automated design (CAutoD, CAutoCSD)
Control reconfiguration
Feedback
H-infinity
Lead–lag compensator
List of control engineering topics
Quantitative feedback theory
Robotic unicycle
State space
Sliding mode control
Systems engineering
Testing controller
VisSim
Control Engineering (magazine)
EICASLAB
Time series
Process control system
Robotic control
Mechatronics

External links

Control Labs Worldwide
The Michigan Chemical Engineering Process Dynamics and Controls Open Textbook
Control System Integrators Association List of control systems integrators
Institution of Mechanical Engineers - Mechatronics, Informatics and Control Group (MICG)
Systems Science & Control Engineering: An Open Access Journal
https://en.wikipedia.org/wiki/Christiaan%20Barnard
Christiaan Barnard
Christiaan Neethling Barnard (8 November 1922 – 2 September 2001) was a South African cardiac surgeon who performed the world's first human-to-human heart transplant operation. On 3 December 1967, Barnard transplanted the heart of accident victim Denise Darvall into the chest of 54-year-old Louis Washkansky. Washkansky regained full consciousness and was able to talk easily with his wife before dying eighteen days later of pneumonia, largely brought on by the anti-rejection drugs that suppressed his immune system. Barnard had told Mr. and Mrs. Washkansky that the operation had an 80% chance of success, an assessment which has been criticised as misleading. Barnard's second transplant patient, Philip Blaiberg, whose operation was performed at the beginning of 1968, returned home from the hospital and lived for a year and a half.

Born in Beaufort West, Cape Province, Barnard studied medicine and practised for several years in his native South Africa. As a young doctor experimenting on dogs, Barnard developed a remedy for the infant defect of intestinal atresia. His technique saved the lives of ten babies in Cape Town and was adopted by surgeons in Britain and the United States. In 1955, he travelled to the United States and was initially assigned further gastrointestinal work by Owen Harding Wangensteen at the University of Minnesota. There he was introduced to the heart-lung machine, and he was allowed to transfer to the service run by open-heart surgery pioneer Walt Lillehei. Upon returning to South Africa in 1958, Barnard was appointed head of the Department of Experimental Surgery at the Groote Schuur Hospital, Cape Town.

He retired as head of the Department of Cardiothoracic Surgery in Cape Town in 1983 after rheumatoid arthritis in his hands ended his surgical career. He became interested in anti-aging research, and in 1986 his reputation suffered when he promoted Glycel, an expensive "anti-aging" skin cream, whose approval was withdrawn by the United States Food and Drug Administration soon thereafter. During his remaining years, he established the Christiaan Barnard Foundation, dedicated to helping underprivileged children throughout the world. He died in 2001 at the age of 78 after an asthma attack.

Early life

Barnard grew up in Beaufort West, Cape Province, Union of South Africa. His father, Adam Barnard, was a minister in the Dutch Reformed Church who served as a missionary to mixed-race people. His mother, the former Maria Elisabeth de Swart, instilled in the surviving brothers the belief that they could do anything they set their minds to. One of his four brothers, Abraham, was a "blue baby" who died of a heart problem at the age of three (Barnard would later guess that it was tetralogy of Fallot). The family also experienced the loss of a daughter who was stillborn and who had been the fraternal twin of Barnard's older brother Johannes, who was twelve years older than Chris. Barnard matriculated from the Beaufort West High School in 1940, and went on to study medicine at the University of Cape Town Medical School, where he obtained his MB ChB in 1945.

Career

Barnard did his internship and residency at the Groote Schuur Hospital in Cape Town, after which he worked as a general practitioner in Ceres, a rural town in the Cape Province. In 1951, he returned to Cape Town, where he worked at the City Hospital as a Senior Resident Medical Officer and in the Department of Medicine at Groote Schuur as a registrar.
He completed his master's degree, receiving a Master of Medicine in 1953 from the University of Cape Town. In the same year he obtained a doctorate in medicine (MD) from the same university for a dissertation titled "The treatment of tuberculous meningitis".

Soon after qualifying as a doctor, Barnard performed experiments on dogs while investigating intestinal atresia, a congenital, life-threatening obstruction of the intestines. He followed a medical hunch that this was caused by inadequate blood flow to the fetus. After nine months and forty-three attempts, Barnard was able to reproduce the condition in a fetal puppy by tying off some of the blood supply to the puppy's intestines and then placing the animal back in the womb; it was born some two weeks later with intestinal atresia. He was also able to cure the condition by removing the piece of intestine with inadequate blood supply. The mistake of previous surgeons had been attempting to reconnect ends of intestine which themselves still had inadequate blood supply. To be successful, it was typically necessary to remove between 15 and 20 centimeters (6 to 8 inches) of intestine. Jannie Louw used this innovation in a clinical setting, and Barnard's method saved the lives of ten babies in Cape Town. The technique was also adopted by surgeons in Britain and the US. In addition, Barnard analyzed 259 cases of tubercular meningitis.

Owen Wangensteen at the University of Minnesota in the United States had been impressed by the work of Alan Thal, a young South African doctor working in Minnesota. Wangensteen asked the Groote Schuur Head of Medicine, John Brock, whether he could recommend any similarly talented South Africans, and Brock recommended Barnard. In December 1955, Barnard travelled to Minneapolis, Minnesota to begin a two-year scholarship under Chief of Surgery Wangensteen, who assigned Barnard more work on the intestines, which Barnard accepted even though he wanted to move on to something new. As luck would have it, whenever Barnard needed a break from this work he could wander across the hall and talk with Vince Gott, who ran the lab for open-heart surgery pioneer Walt Lillehei. Gott had begun to develop a technique of running blood backwards through the veins of the heart so Lillehei could more easily operate on the aortic valve (McRae writes, "It was the type of inspired thinking that entranced Barnard"). In March 1956, Gott asked Barnard to help him run the heart-lung machine for an operation. Shortly thereafter, Wangensteen agreed to let Barnard switch to Lillehei's service. It was during this time that Barnard became acquainted with fellow future heart transplantation surgeon Norman Shumway. Barnard also became friendly with Gil Campbell, who had demonstrated that a dog's lung could be used to oxygenate blood during open-heart surgery. (The year before Barnard arrived, Lillehei and Campbell had used this procedure for twenty minutes during surgery on a 13-year-old boy with a ventricular septal defect, and the boy had made a full recovery.) Barnard and Campbell met regularly for early breakfast.

In 1958, Barnard received a Master of Science in Surgery for a thesis titled "The aortic valve – problems in the fabrication and testing of a prosthetic valve". The same year he was awarded a PhD for his dissertation titled "The aetiology of congenital intestinal atresia". Barnard described the two years he spent in the United States as "the most fascinating time in my life."
Upon returning to South Africa in 1958, Barnard was appointed head of the Department of Experimental Surgery at Groote Schuur Hospital, as well as holding a joint post at the University of Cape Town. He was promoted to full-time lecturer and Director of Surgical Research at the University of Cape Town. In 1960, he flew to Moscow to meet Vladimir Demikhov, a top expert on organ transplants (he later credited Demikhov's accomplishments, saying that "if there is a father of heart and lung transplantation then Demikhov certainly deserves this title"). In 1961 he was appointed Head of the Division of Cardiothoracic Surgery at the teaching hospitals of the University of Cape Town. He rose to Associate Professor in the Department of Surgery at the University of Cape Town in 1962. Barnard's younger brother Marius, who also studied medicine, eventually became Barnard's right-hand man at the Department of Cardiac Surgery. Over time, Barnard became known as a brilliant surgeon with many contributions to the treatment of cardiac diseases, such as tetralogy of Fallot and Ebstein's anomaly. He was promoted to Professor of Surgical Science in the Department of Surgery at the University of Cape Town in 1972. In 1981, Barnard became a founding member of the World Cultural Council. Among the recognition he received over the years, he was named Professor Emeritus in 1984.

Historical context

Following the first successful kidney transplant, performed in the United States in 1954, Barnard performed South Africa's second kidney transplant in October 1967, the first having been done in Johannesburg the previous year. On 23 January 1964, James Hardy at the University of Mississippi Medical Center in Jackson, Mississippi, had performed the world's first heart transplant and the world's first cardiac xenotransplant by transplanting the heart of a chimpanzee into a desperately ill and dying man. This heart did beat in the patient's chest for approximately 60 to 90 minutes; the patient, Boyd Rush, died without regaining consciousness.

Barnard had experimentally transplanted forty-eight hearts into dogs, about a fifth the number Adrian Kantrowitz had performed at Maimonides Medical Center in New York and about a sixth the number Norman Shumway had performed at Stanford University in California. None of Barnard's dogs had survived longer than ten days, unlike those of Kantrowitz and Shumway, who had had dogs survive for more than a year. With the availability of new breakthroughs introduced by several pioneers, including Richard Lower at the Medical College of Virginia, several surgical teams were in a position to prepare for a human heart transplant. Barnard had a patient willing to undergo the procedure, but, as with other surgeons, he needed a suitable donor.

During the apartheid era in South Africa, non-white persons and citizens were not given equal opportunities in the medical professions. At Groote Schuur Hospital, Hamilton Naki was an informally taught surgeon. He started out as a gardener and cleaner; one day he was asked to help out with an experiment on a giraffe. From this modest beginning, Naki became principal lab technician, taught hundreds of surgeons, and assisted with Barnard's organ transplant program. Barnard said, "Hamilton Naki had better technical skills than I did. He was a better craftsman than me, especially when it came to stitching, and had very good hands in the theatre".
A popular myth, propagated principally by a widely discredited documentary film called Hidden Heart and an erroneous newspaper article, maintains incorrectly that Naki was present during the Washkansky transplant.

First human-to-human heart transplant

Barnard performed the world's first human-to-human heart transplant operation in the early morning hours of Sunday 3 December 1967. The patient was Louis Washkansky, a 54-year-old grocer who was suffering from diabetes and incurable heart disease. Barnard was assisted by his brother Marius Barnard, as well as a team of thirty staff members. The operation lasted approximately five hours. Barnard stated to Washkansky and his wife Ann Washkansky that the transplant had an 80% chance of success. This has been criticised by the ethicists Peter Singer and Helga Kuhse as making claims for chances of success to the patient and family which were "unfounded" and "misleading". Barnard later wrote, "For a dying man it is not a difficult decision because he knows he is at the end. If a lion chases you to the bank of a river filled with crocodiles, you will leap into the water, convinced you have a chance to swim to the other side."

The donor heart came from a young woman, Denise Darvall, who had been rendered brain dead in an accident on 2 December 1967, while crossing a street in Cape Town. On examination at Groote Schuur Hospital, Darvall had two serious fractures in her skull, no detectable electrical activity in her brain, and no sign of pain when ice water was poured into her ear. Coert Venter and Bertie Bosman requested permission from Darvall's father for Denise's heart to be used in the transplant attempt.

The afternoon before his first transplant, Barnard dozed at his home while listening to music. When he awoke, he decided to modify Shumway and Lower's technique: instead of cutting straight across the back of the atrial chambers of the donor heart, he would avoid damage to the septum and instead cut two small holes for the venae cavae and pulmonary veins. Prior to the transplant, rather than wait for Darvall's heart to stop beating, at his brother Marius Barnard's urging, Christiaan injected potassium into her heart to paralyse it and render her technically dead by the whole-body standard. Twenty years later, Marius Barnard recounted, "Chris stood there for a few moments, watching, then stood back and said, 'It works.'" Washkansky survived the operation and lived for 18 days before succumbing to pneumonia, possibly due to the immunosuppressive drugs he was taking.

Additional heart transplants

Barnard and his patient received worldwide publicity. As a 2017 BBC retrospective article describes, "Journalists and film crews flooded into Cape Town's Groote Schuur Hospital, soon making Barnard and Washkansky household names." Barnard himself was described as "charismatic" and "photogenic", and the operation was initially reported as "successful" even though Washkansky lived only a further 18 days. Worldwide, approximately 100 transplants were performed by various doctors during 1968, but only a third of these patients lived longer than three months, and many medical centers stopped performing transplants. A U.S. National Institutes of Health publication states, "Within several years, only Shumway's team at Stanford was attempting transplants."

Barnard's second transplant operation was conducted on 2 January 1968, and the patient, Philip Blaiberg, survived for 19 months.
Blaiberg's heart was donated by Clive Haupt, a 24-year-old black man who had suffered a stroke, inciting controversy (especially in the African-American press) during the time of South African apartheid. Dirk van Zyl, who received a new heart in 1971, was the longest-lived recipient, surviving over 23 years.

Between December 1967 and November 1974, ten heart transplants were performed at Groote Schuur Hospital in Cape Town, South Africa, as well as a heart-and-lung transplant in 1971. Of these ten patients, four lived longer than 18 months, with two becoming long-term survivors: one patient, Dorothy Fischer, lived for over thirteen years and another for over twenty-four years.

Full recovery of donor heart function often takes place over hours or days, during which time considerable damage can occur. Other patient deaths can result from preexisting conditions. For example, in pulmonary hypertension the patient's right ventricle has often adapted to the higher pressure over time and, although diseased and hypertrophied, is often capable of maintaining circulation to the lungs. Barnard therefore conceived the heterotopic (or "piggyback") transplant, in which the patient's diseased heart is left in place while the donor heart is added, essentially forming a "double heart". Barnard performed the first such heterotopic heart transplant in 1974. From November 1974 through December 1983, 49 consecutive heterotopic heart transplants on 43 patients were performed at Groote Schuur. The survival rate for patients at one year was over 60%, as compared to less than 40% with standard transplants, and the survival rate at five years was over 36%, as compared to less than 20% with standard transplants.

Many surgeons gave up cardiac transplantation due to poor results, often caused by rejection of the transplanted heart by the patient's immune system. Barnard persisted until the advent of cyclosporine, an effective immunosuppressive drug, which helped revive the operation throughout the world. He also attempted xenotransplantation in two human patients, utilizing a baboon heart and a chimpanzee heart, respectively.

Public life

Barnard was an outspoken opponent of South Africa's laws of apartheid and was not afraid to criticise his nation's government, although he had to temper his remarks to some extent to travel abroad. Rather than leaving his homeland, he used his fame to campaign for a change in the law. Christiaan's brother, Marius Barnard, went into politics and was elected to the legislature for the Progressive Federal Party. Barnard later stated that the reason he never won the Nobel Prize in Physiology or Medicine was probably because he was a "white South African". Shortly before his visit to Kenya in 1978, the following was written about his views on race relations in South Africa: "While he believes in the participation of Africans in the political process of South Africa, he is opposed to a one-man-one-vote system in South Africa". In answering a hypothetical question on how he would solve the race problem were he a "benevolent dictator in South Africa", Barnard stated in a long interview in the Weekly Review that while "I would abolish Social discrimination", political discrimination would continue. He favoured the total division of the country along racial lines. His words were: "I somehow feel ... but we may have to divide South Africa into two equal divisions".
In a follow-up question about where the coloured people would end up in that scenario, he replied, "I would include them in the white South Africa". He further claimed:

That coloured people have "always been accepted" among whites.
That "the black man will not accept this view" of universal suffrage.
That "we are still out of the Olympic games" despite the fact that "in the field of sports ... we have virtually integrated completely."

Regarding the Soweto uprising, he claimed "there was ... a lot of external stirring up of turbulence". Regarding the anger from the black population when Steve Biko was murdered, he said that "I think that something like $50,000 came in from outside to work up feelings at that funeral." He stated that the National Party members were as upset about Biko's murder as were blacks: "The white community was thoroughly upset, let me tell you. The nationalists themselves were very upset." The interview ended with Barnard's own summary: "I often say that, like King Lear, South Africa is a country more sinned against than sinning."

Personal life

Barnard's first marriage was to Aletta Gertruida Louw, a nurse, whom he married in 1948 while practising medicine in Ceres. The couple had two children: Deirdre (born 1950) and Andre (1951–1984). International fame took a toll on his personal life, and in 1969 Barnard and his wife divorced. In 1970, he married heiress Barbara Zoellner when she was 19, the same age as his son, and they had two children: Frederick (born 1972) and Christiaan Jr. (born 1974). He divorced Zoellner in 1982. Barnard married for a third time in 1988, to Karin Setzkorn, a young model. They also had two children, Armin (born 1989) and Lara (born 1997). This last marriage also ended in divorce, in 2000.

Barnard described in his autobiography The Second Life a one-night extramarital affair with Italian film star Gina Lollobrigida that occurred in January 1968. During that visit to Rome he received an audience with Pope Paul VI. In October 2016, U.S. Congresswoman Ann McLane Kuster (D-NH) stated that Barnard sexually assaulted her when she was 23 years old. According to Kuster, he attempted to grope her under her skirt while seated at a business luncheon with Rep. Pete McCloskey (R-CA), for whom she worked at the time.

Retirement

Barnard retired as Head of the Department of Cardiothoracic Surgery in Cape Town in 1983 after developing rheumatoid arthritis in his hands, which ended his surgical career. He had struggled with arthritis since 1956, when it was diagnosed during his postgraduate work in the United States. After retirement, he spent two years as Scientist-in-Residence at the Oklahoma Transplantation Institute in the United States and as an acting consultant for various institutions. He had by this time become very interested in anti-aging research, and his reputation suffered in 1986 when he promoted Glycel, an expensive "anti-aging" skin cream, whose approval was withdrawn by the United States Food and Drug Administration soon thereafter. He also spent time as a research advisor to the Clinique la Prairie in Switzerland, where the controversial "rejuvenation therapy" was practised. Barnard divided the remainder of his years between Austria, where he established the Christiaan Barnard Foundation, dedicated to helping underprivileged children throughout the world, and his game farm in Beaufort West, South Africa. In his later years he had basal-cell carcinoma (skin cancer) on his face, for which he was treated in Parow, South Africa.
Death

Christiaan Barnard died on 2 September 2001, while on holiday in Paphos, Cyprus. Early reports stated that he had died of a heart attack, but an autopsy showed his death was caused by a severe asthma attack.

Books

Barnard wrote two autobiographies. His first book, One Life, was published in 1969 and sold copies worldwide. Some of the proceeds were used to set up the Chris Barnard Fund for research into heart disease and heart transplants in Cape Town. His second autobiography, The Second Life, was published in 1993, eight years before his death.

Apart from his autobiographies, Barnard's books include:

The Donor
Your Healthy Heart
In The Night Season
The Best Medicine
Arthritis Handbook: How to Live With Arthritis
Good Life Good Death: A Doctor's Case for Euthanasia and Suicide
South Africa: Sharp Dissection
50 Ways to a Healthy Heart
Body Machine

See also

Bartley P. Griffith
René Favaloro
Pierre Grondin
Hamilton Naki
Geoffrey Tovey

External links

Christiaan Barnard: his first transplants and their impact on concepts of death
To Transplant and Beyond: First Human Heart Transplant
In Memoriam: Christiaan Neethling Barnard
40th anniversary of first human heart transplant
Official Heart Transplant Museum – Heart Of Cape Town
https://en.wikipedia.org/wiki/Cold%20Chisel
Cold Chisel
Cold Chisel are an Australian pub rock band, formed in Adelaide in 1973 by mainstay members Ian Moss on guitar and vocals, Steve Prestwich on drums and Don Walker on piano and keyboards. They were soon joined by Jimmy Barnes (at the time known as Jim Barnes) on lead vocals and, in 1975, Phil Small became their bass guitarist. The group disbanded in late 1983 but subsequently reformed several times. Musicologist Ian McFarlane wrote that they became "one of Australia's best-loved groups" as well as "one of the best live bands", fusing "a combination of rockabilly, hard rock and rough-house soul'n'blues that was defiantly Australian in outlook."

Eight of their studio albums have reached the Australian top five: Breakfast at Sweethearts (February 1979), East (June 1980), Circus Animals (March 1982, No. 1), Twentieth Century (April 1984, No. 1), The Last Wave of Summer (October 1998, No. 1), No Plans (April 2012), The Perfect Crime (October 2015) and Blood Moon (December 2019, No. 1). Their top 10 singles are "Forever Now" (1982), "Hands Out of My Pocket" (1994) and "The Things I Love in You" (1998). At the ARIA Music Awards of 1993 they were inducted into the Hall of Fame. In 2001 the Australasian Performing Right Association (APRA) listed their single "Khe Sanh" (May 1978) at No. 8 of the all-time best Australian songs. Circus Animals was listed at No. 4 in the book 100 Best Australian Albums (October 2010), while East appeared at No. 53. They won the Ted Albert Award for Outstanding Services to Australian Music at the APRA Music Awards of 2016.

Cold Chisel's popularity is largely restricted to Australia and New Zealand, with their songs and musicianship highlighting working-class life. Their early bass guitarist (1973–75), Les Kaczmarek, died in December 2008; Steve Prestwich died of a brain tumour in January 2011.

History

1973–1978: Beginnings

Cold Chisel was originally formed as Orange, a heavy metal band, in Adelaide in 1973 by Ted Broniecki on keyboards, Les Kaczmarek on bass guitar, Ian Moss on guitar and vocals, Steve Prestwich on drums and Don Walker on piano. Their early material included cover versions of Free and Deep Purple songs. Broniecki left by September 1973 and seventeen-year-old singer Jimmy Barnes – called Jim Barnes during their initial career – joined in December. The group changed its name several times before settling on Cold Chisel in 1974, after Walker's song of that title. Barnes' relationship with the others was volatile: he often came to blows with Prestwich and left the band several times. During these periods Moss would handle vocals until Barnes returned. Walker emerged as the group's primary songwriter and spent 1974 in Armidale, completing his studies in quantum mechanics. Barnes' older brother, John Swan, was a member of Cold Chisel around this time, providing backing vocals and percussion. After several violent incidents, including beating up a roadie, he was fired. In mid-1975 Barnes left to join Fraternity as Bon Scott's replacement on lead vocals, alongside Swan on drums and vocals. Kaczmarek left Cold Chisel during 1975 and was replaced by Phil Small on bass guitar. In November of that year, without Barnes, they recorded their early demos. In May 1976 Cold Chisel relocated to Melbourne but, "frustrated by their lack of progress," they moved on to Sydney in early 1977. In May 1977, Barnes told his fellow members that he would leave again.
From July he joined Feather for a few weeks, on co-lead vocals with Swan – they were a Sydney-based hard rock group which had evolved from Blackfeather. A farewell performance for Cold Chisel, with Barnes aboard, went so well that the singer changed his mind and returned. In the following month the Warner Music Group signed the band.

1978–1979: Cold Chisel and Breakfast at Sweethearts

In the early months of 1978 Cold Chisel recorded their self-titled debut album with their manager and producer, Peter Walker (ex-Bakery). All tracks were written by Don Walker, except "Juliet", for which Barnes composed the melody and Walker the lyrics. Cold Chisel was released in April and included guest studio musicians: Dave Blight on harmonica (who became a regular on-stage guest) and saxophonists Joe Camilleri and Wilbur Wilde (from Jo Jo Zep & The Falcons). Australian musicologist Ian McFarlane described how "[it] failed to capture the band's renowned live firepower, despite the presence of such crowd favourites as 'Khe Sanh', 'Home and Broken Hearted' and 'One Long Day'." It reached the top 40 on the Kent Music Report and was certified gold.

In May 1978 "Khe Sanh" was released as their debut single, but it was declared too offensive for commercial radio due to the sexual implication of the lyrics, "Their legs were often open/But their minds were always closed." However, it was played regularly on the Sydney youth radio station Double J, which was not subject to the restrictions as it was part of the Australian Broadcasting Corporation (ABC). The producers of another ABC program, Countdown, asked them to change the lyric, but they refused. Despite such setbacks, "Khe Sanh" reached No. 41 on the Kent Music Report singles chart. It became Cold Chisel's signature tune and was popular among their fans. They later remixed the track, with re-recorded vocals, for inclusion on the international version of their third album, East (June 1980).

The band's next release was a live five-track extended play, You're Thirteen, You're Beautiful, and You're Mine, in November 1978. McFarlane observed, "It captured the band in its favoured element, fired by raucous versions of Walker's 'Merry-Go-Round' and Chip Taylor's 'Wild Thing'." It was recorded at the Regent Theatre, Sydney in 1977, when they had Midnight Oil as one of the support acts. Australian writer Ed Nimmervoll described a typical performance by Cold Chisel: "Everybody was talking about them anyway, drawn by the songs, and Jim Barnes' presence on stage, crouched, sweating, as he roared his vocals into the microphone at the top of his lungs." The EP peaked at No. 35 on the Kent Music Report Singles Chart.

"Merry Go Round" was re-recorded for their second studio album, Breakfast at Sweethearts (February 1979), recorded between July 1978 and January 1979 with producer Richard Batchens, who had previously worked with Richard Clapton, Sherbet and Blackfeather. Batchens smoothed out the band's rough edges and attempted to give their songs a sophisticated sound; the band were unsatisfied with this approach and with the finished product. The album peaked at No. 4 and was the top-selling album in Australia by a locally based artist for that year; it was certified platinum. The majority of its tracks were written by Walker, with Barnes and Walker co-writing the lead single, "Goodbye (Astrid, Goodbye)" (September 1978), and Moss contributing to "Dresden". "Goodbye (Astrid, Goodbye)" became a live favourite and was covered by U2 during Australian tours in the 1980s.
1979–1980: East

Cold Chisel had gained national chart success and a growing fan base without significant commercial radio airplay. The members developed reputations for wild behaviour, particularly Barnes, who claimed to have had sex with over 1000 women and who consumed more than a bottle of vodka each night while performing. In late 1979, severing their relationship with Batchens, Cold Chisel chose Mark Opitz to produce the next single, "Choirgirl" (November), a Walker composition dealing with a young woman's experience of abortion. Despite the subject matter it reached No. 14.

"Choirgirl" paved the way for the group's third studio album, East (June 1980), with Opitz producing. Recorded over two months in early 1980, East reached No. 2 and is the second-highest-selling album by an Australian artist for that year. The Australian Women's Weekly's Gregg Flynn noted, "[they are] one of the few Australian bands in which each member is capable of writing hit songs." Despite the continued dominance of Walker, the other members contributed more tracks to their play list, and this was their first album to have songs written by each one. McFarlane described it as "a confident, fully realised work of tremendous scope." Nimmervoll explained how "This time everything fell into place, the sound, the songs, the playing... East was a triumph. [The group] were now the undisputed No. 1 rock band in Australia." The album varied from straight-ahead rock tracks, "Standing on the Outside" and "My Turn to Cry", to rockabilly-flavoured work-outs ("Rising Sun", written about Barnes' relationship with his then-girlfriend Jane Mahoney) and pop-laced love songs ("My Baby", featuring Joe Camilleri on saxophone) to a poignant piano ballad about prison life, "Four Walls". The cover art showed Barnes reclined in a bathtub, wearing a kamikaze bandanna, in a room littered with junk, and was inspired by Jacques-Louis David's 1793 painting The Death of Marat. The Ian Moss-penned "Never Before" was chosen as the first song to air on the ABC's youth radio station, Triple J, when it switched to the FM band that year.

Supporting the release of East, Cold Chisel embarked on the Youth in Asia Tour from May 1980, which took its name from a lyric in "Star Hotel". In late 1980, the Aboriginal rock reggae band No Fixed Address supported the band on its "Summer Offensive" tour of the east coast, with the final concert on 20 December at the University of Adelaide.

1981–1982: Swingshift to Circus Animals

The Youth in Asia Tour performances were used for Cold Chisel's double live album, Swingshift (March 1981). Nimmervoll declared, "[the group] rammed what they were all about with [this album]." In March 1981 the band won seven categories at the Countdown/TV Week pop music awards for 1980: Best Australian Album, Most Outstanding Achievement, Best Recorded Song Writer, Best Australian Producer, Best Australian Record Cover Design, Most Popular Group and Most Popular Record. They attended the ceremony at the Sydney Entertainment Centre and were due to perform; however, as a protest against a TV magazine's involvement, they refused to accept any trophy and finished the night with "My Turn to Cry". After one verse and chorus, they smashed up the set and left the stage. Swingshift debuted at No. 1, which demonstrated their status as the highest-selling local act. With a slightly different track listing, East was issued in the United States, and they undertook their first US tour in mid-1981.
Ahead of the tour they had issued "My Baby" for the North American market, and it reached the top 40 on Billboard's Mainstream Rock chart. They were generally popular as a live act there, but the US branch of their label did little to promote the album. According to Barnes' biographer Toby Creswell, at one point they were ushered into an office to listen to the US master tape, only to find it had substantial hiss and other ambient noise, which made it almost unreleasable. Nevertheless, the album reached the lower region of the Billboard 200 in July. The group were booed off stage after a lacklustre performance in Dayton, Ohio in May 1981 opening for Ted Nugent. Other support slots were for Cheap Trick, Joe Walsh, Heart and the Marshall Tucker Band. European audiences were more accepting of the Australian band, and they developed a fan base in Germany.

In August 1981 Cold Chisel began work on a fourth studio album, Circus Animals (March 1982), again with Opitz producing. To launch the album, the band performed under a circus tent at Wentworth Park in Sydney and toured heavily once more, including a show in Darwin that attracted more than 10 percent of the city's population. It peaked at No. 1 in both Australia and on the Official New Zealand Music Chart. In October 2010 it was listed at No. 4 in the book 100 Best Australian Albums, by music journalists Creswell, Craig Mathieson and John O'Donnell. Its lead single, "You Got Nothing I Want" (November 1981), is an aggressive Barnes-penned hard rock track which attacked the US industry for its handling of the band on their recent tour. The song caused problems for Barnes when he later attempted to break into the US market as a solo performer; senior music executives there continued to hold it against him.

Like its predecessor, Circus Animals contained songs of contrasting styles, with harder-edged tracks like "Bow River" and "Hound Dog" alongside more expansive ballads such as the next two singles, "Forever Now" (March 1982) and "When the War Is Over" (August), both written by Prestwich. "Forever Now" is their highest-charting single in two Australasian markets: No. 4 on the Kent Music Report Singles Chart and No. 2 on the Official New Zealand Music Chart. "When the War Is Over" is the most-covered Cold Chisel track – Uriah Heep included a version on their 1989 album, Raging Silence, and John Farnham recorded it while he and Prestwich were members of Little River Band in the mid-1980s and again for his 1990 solo album, Age of Reason. The song was also a No. 1 hit for former Australian Idol contestant Cosima De Vito in 2004 and was performed by Bobby Flynn during that show's 2006 season. "Forever Now" was covered, as a country waltz, by the Australian band the Reels.

1983: Break-up

Success outside Australasia continued to elude Cold Chisel, and friction developed between the members. According to McFarlane, "[the] failed attempts to break into the American market represented a major blow... [their] earthy, high-energy rock was overlooked." In early 1983 they toured Germany, but the shows went so badly that in the middle of the tour Walker up-ended his keyboard and stormed off stage during one show. After returning to Australia, Prestwich was fired and replaced by Ray Arnott, formerly of the 1970s progressive rockers Spectrum and the country rockers the Dingoes. After this, Barnes requested a large advance from management: now married with a young child, reckless spending had left him almost broke.
His request was refused, as there was a standing arrangement that any advance to one band member had to be paid to all the others. After a meeting on 17 August, during which Barnes quit the band, it was decided that the group would split up. A farewell concert series, The Last Stand, was planned and a final studio album, Twentieth Century (February 1984), was recorded. Prestwich returned for the tour, which began in October. Before the last four scheduled shows in Sydney, Barnes lost his voice and those dates were postponed to mid-December. The band's final performances were at the Sydney Entertainment Centre from 12 to 15 December 1983 – ten years since their first live appearance as Cold Chisel in Adelaide – and the group then disbanded. The Sydney shows formed the basis of a concert film, The Last Stand (July 1984), which became the biggest-selling cinema-released concert documentary by an Australian band to that time. Other recordings from the tour were used on a live album, The Barking Spiders Live: 1983 (1984), whose title references the pseudonym the group occasionally used when playing warm-up shows before tours. Some were also used as B-sides for a three-CD singles package, Three Big XXX Hits, issued ahead of the release of their 1994 compilation album, Teenage Love.

During breaks in the tour, Twentieth Century was recorded. It was a fragmentary process, spread across various studios and sessions, as the individual members often refused to work together – both Arnott (on ten tracks) and Prestwich (on three tracks) are credited as drummers. The album reached No. 1 and provided the singles "Saturday Night" (March 1984) and "Flame Trees" (August), both of which remain radio staples. "Flame Trees", co-written by Prestwich and Walker, took its title from the BBC series The Flame Trees of Thika, although it was lyrically inspired by Walker's hometown of Grafton. Barnes later recorded an acoustic version for his 1993 solo album, Flesh and Wood, and it was also covered by Sarah Blasko in 2006.

1984–1996: Aftermath and ARIA Hall of Fame

Barnes launched his solo career in January 1984; it has provided nine Australian number-one studio albums and an array of hit singles, including "Too Much Ain't Enough Love", which peaked at No. 1. He has recorded with INXS, Tina Turner, Joe Cocker and John Farnham and become one of the country's most popular male rock singers. Prestwich joined Little River Band in 1984 and appeared on the albums Playing to Win and No Reins before departing in 1986 to join Farnham's touring band. Moss, Small and Walker took extended breaks from music. Small maintained a low profile as a member of a variety of minor groups: Pound, the Earls of Duke and the Outsiders. Walker formed Catfish in 1988, ostensibly a solo band with a variable membership, which included Moss, Charlie Owen and Dave Blight at times. Catfish's recordings during this phase attracted little commercial success. During 1988 and 1989 Walker wrote several tracks for Moss, including the singles "Tucker's Daughter" (November 1988) and "Telephone Booth" (June 1989), which appeared on Moss' debut solo album, Matchbook (August 1989). Both the album and "Tucker's Daughter" peaked at No. 1. Moss won five trophies at the ARIA Music Awards of 1990; his other solo albums met with less chart or award success.
Throughout the 1980s and most of the 1990s, Cold Chisel were courted to re-form but refused, at one point reportedly turning down a $5 million offer to play a single show in each of the major Australian state capitals. Moss and Walker often collaborated on projects; neither worked with Barnes until Walker wrote "Stone Cold" for the singer's sixth studio album, Heat (October 1993). The pair recorded an acoustic version for Flesh and Wood (December). Thanks primarily to continued radio airplay and Barnes' solo success, Cold Chisel's legacy remained solidly intact. By the early 1990s the group had surpassed 3 million album sales, most sold since 1983. The 1991 compilation album Chisel was re-issued and re-packaged several times, once with the long-deleted 1978 EP as a bonus disc and a second time, in 2001, as a double album. The Last Stand soundtrack album was finally released in 1992. In 1994 a complete album of previously unreleased demo and rare live recordings, Teenage Love, was released, which provided three singles.

1997–2010: Reunited

Cold Chisel reunited in October 1997, with the line-up of Barnes, Moss, Prestwich, Small and Walker. They recorded their sixth studio album, The Last Wave of Summer (October 1998), from February to July, with the band members co-producing, and supported it with a national tour. The album debuted at No. 1 on the ARIA Albums Chart. In 2003 they re-grouped for the Ringside Tour, and in 2005 again to perform at a benefit for the victims of the Boxing Day tsunami at the Myer Music Bowl in Melbourne.

Founding bass guitarist Les Kaczmarek died of liver failure on 5 December 2008, aged 53. Walker described him as "a wonderful and beguiling man in every respect."

On 10 September 2009 Cold Chisel announced they would reform for a one-off performance at the Sydney 500 V8 Supercars event on 5 December. The band performed at Stadium Australia to the largest crowd of its career, with more than 45,000 fans in attendance. They played a single live show in 2010, at the Deniliquin ute muster in October. In December, Moss confirmed that Cold Chisel were working on new material for an album.

2011–2019: Death of Steve Prestwich & The Perfect Crime

In January 2011 Steve Prestwich was diagnosed with a brain tumour; he underwent surgery on 14 January but never regained consciousness and died two days later, aged 56.

All six of Cold Chisel's studio albums were re-released in digital and CD formats in mid-2011. Three digital-only albums were released – Never Before, Besides and Covered – as well as a new compilation album, The Best of Cold Chisel: All for You, which peaked at No. 2 on the ARIA Charts. The thirty-date Light the Nitro Tour was announced in July, along with the news that former Divinyls and Catfish drummer Charley Drayton had replaced Prestwich. Most shows on the tour sold out within days and new dates were later announced for early 2012.

No Plans, their seventh studio album, was released in April 2012, with Kevin Shirley producing; it peaked at No. 2. The Australian's Stephen Fitzpatrick rated it four-and-a-half out of five and found that its lead track, "All for You", "speaks of redemption; of a man's ability to make something of himself through love." The track "I Got Things to Do" was written and sung by Prestwich; Fitzpatrick described it as "the bittersweet finale", a song with "a vocal track the other band members did not know existed until after [Prestwich's] death."
Midway through 2012 they embarked on a short UK tour and played with Soundgarden and the Mars Volta at Hard Rock Calling in London's Hyde Park. The group's eighth studio album, The Perfect Crime, appeared in October 2015, again with Shirley producing; it peaked at No. 2. Martin Boulton of The Sydney Morning Herald rated it four out of five stars and explained that the album does what Cold Chisel always does: "work incredibly hard, not take any shortcuts and play the hell out of the songs." The album, Boulton writes, "delves further back to their rock'n'roll roots with chief songwriter [Walker] carving up the keys, guitarist [Moss] both gritty and sublime and the [Small/Drayton] engine room firing on every cylinder. Barnes' voice sounds worn, wonderful and better than ever."

The band's latest album, Blood Moon, was released in December 2019. It debuted at No. 1 on the ARIA Album Chart, the band's fifth to reach the top. Half of the songs had lyrics written by Barnes and music by Walker, a new combination for Cold Chisel, with Barnes noting his increased confidence after writing two autobiographies.

Musical style and lyrical themes

McFarlane described Cold Chisel's early career in his Encyclopedia of Australian Rock and Pop (1999): "after ten years on the road, [they] called it a day. Not that the band split up for want of success; by that stage [they] had built up a reputation previously uncharted in Australian rock history. By virtue of the profound effect the band's music had on the many thousands of fans who witnessed its awesome power, Cold Chisel remains one of Australia's best-loved groups. As one of the best live bands of its day, [they] fused a combination of rockabilly, hard rock and rough-house soul'n'blues that was defiantly Australian in outlook." The Canberra Times' Luis Feliu, in July 1978, observed, "This is not just another Australian rock band, no mediocrity here, and their honest, hard-working approach looks like paying off," adding that "the range of styles tackled and done convincingly, from hard rock to blues, boogie, rhythm and blues, is where the appeal lies."

Influences from blues and early rock'n'roll were broadly apparent, fostered by Moss, Barnes and Walker's love of those styles, while Small and Prestwich contributed strong pop sensibilities. This allowed volatile rock songs like "You Got Nothing I Want" and "Merry-Go-Round" to stand beside thoughtful ballads like "Choirgirl", pop-flavoured love songs like "My Baby" and caustic political statements like "Star Hotel", an attack on the late-1970s government of Malcolm Fraser inspired by the Star Hotel riot in Newcastle. The songs were not overtly political but rather observations of everyday life within Australian society and culture, which the members, with their various backgrounds (Moss was from Alice Springs, Walker grew up in rural New South Wales, Barnes and Prestwich were working-class immigrants from the UK), were well placed to provide. Cold Chisel's songs were about distinctly Australian experiences, a factor often cited as a major reason for the band's lack of international appeal. "Saturday Night" and "Breakfast at Sweethearts" were observations of the urban experience of Sydney's Kings Cross district, where Walker lived for many years. "Misfits", the B-side of "My Baby", was about homeless kids in the suburbs surrounding Sydney.
Songs like "Shipping Steel" and "Standing on The Outside" were working class anthems and many others featured characters trapped in mundane, everyday existences, yearning for the good times of the past ("Flame Trees") or for something better from life ("Bow River"). Reputation and recognition Alongside contemporaries like The Angels and Midnight Oil, Cold Chisel was renowned as one of the most dynamic live acts of their day and from early in their career concerts routinely became sell-out events. But the band was also famous for its wild lifestyle, particularly the hard-drinking Barnes, who played his role as one of the wild men of Australian rock to the hilt, never seen on stage without at least one bottle of vodka and often so drunk he could barely stand upright. Despite this, by 1982 he was a devoted family man who refused to tour without his wife and daughter. All the other band members were also settled or married; Ian Moss had a long-term relationship with the actress, Megan Williams, (she even sang on Twentieth Century) whose own public persona could have hardly been more different. It was the band's public image that often had them compared less favourably with other important acts like Midnight Oil, whose music and politics (while rather more overt) were often similar but whose image and reputation was more clean-cut. Cold Chisel remained hugely popular however and by the mid-1990s they continued to sell records at such a consistent rate they became the first Australian band to achieve higher sales after their split than during their active years. At the ARIA Music Awards of 1993 they were inducted into the Hall of Fame. While repackages and compilations accounted for much of these sales, 1994's Teenage Love provided two of its singles, which were top ten hits. When the group finally reformed in 1998 the resultant album was also a major hit and the follow-up tour sold out almost immediately. In 2001 Australasian Performing Right Association (APRA), listed their single, "Khe Sanh" (May 1978), at No. 8 of the all-time best Australian songs. Cold Chisel were one of the first Australian acts to have become the subject of a major tribute album. In 2007, Standing on the Outside: The Songs of Cold Chisel was released, featuring a collection of the band's songs as performed by artists including The Living End, Evermore, Something for Kate, Pete Murray, Katie Noonan, You Am I, Paul Kelly, Alex Lloyd, Thirsty Merc and Ben Lee, many of whom were children when Cold Chisel first disbanded and some, like the members of Evermore, had not even been born. Circus Animals was listed at No. 4 in the book, 100 Best Australian Albums (October 2010), while East appeared at No. 53. They won The Ted Albert Award for Outstanding Services to Australian Music at the APRA Music Awards of 2016. In March 2021, a previously unnamed lane off Burnett Street (off Currie Street) in the Adelaide central business district, near where the band had its first residency in the 1970s, was officially named Cold Chisel Lane. On one of its walls, there is a mural by Adelaide artist James Dodd, inspired by the band. 
Members

Current members

Ian Moss – lead guitar, backing and lead vocals (1973–1983, 1997–1999, 2003, 2009–present)
Don Walker – keyboards, backing vocals (1973–1983, 1997–1999, 2003, 2009–present)
Jimmy Barnes – lead and backing vocals, occasional guitar (1973–1975, 1976–1977, 1978–1984, 1997–1999, 2003, 2009–present)
Phil Small – bass guitar, backing vocals (1975–1984, 1997–1999, 2003, 2009–present)
Charley Drayton – drums, backing vocals (2011–present)

Former members

Steve Prestwich – drums, backing vocals (1973–1983, 1997–1999, 2003, 2009–2011; died 2011)
Ted Broniecki – keyboards (1973)
Les Kaczmarek – bass guitar (1973–1975; died 2008)
John Swan – percussion, backing vocals (1975)
Ray Arnott – drums (1983–1984)

Additional musicians

Dave Blight – harmonica
Billy Rodgers – saxophone
Jimmy Sloggett – saxophone
Andy Bickers – saxophone
Renée Geyer – backing vocals
Venetta Fields – backing vocals
Megan Williams – backing vocals
Peter Walker – acoustic guitar
Joe Camilleri – saxophone
Wilbur Wilde – saxophone

Discography

Cold Chisel (1978)
Breakfast at Sweethearts (1979)
East (1980)
Circus Animals (1982)
Twentieth Century (1984)
The Last Wave of Summer (1998)
No Plans (2012)
The Perfect Crime (2015)
Blood Moon (2019)

Awards and nominations

APRA Awards

The APRA Awards have been presented annually since 1982 by the Australasian Performing Right Association (APRA), "honouring composers and songwriters".

2012: "All for You" (Don Walker) – Song of the Year
2016: "Lost" (Don Walker, Wes Carr) – Song of the Year
2021: "Getting the Band Back Together" (Don Walker) – Most Performed Rock Work
2021: "Getting the Band Back Together" (Don Walker) – Song of the Year

ARIA Music Awards

The ARIA Music Awards is an annual awards ceremony that recognises excellence, innovation, and achievement across all genres of Australian music. It commenced in 1987. Cold Chisel was inducted into the Hall of Fame in 1993.

1992: Chisel – Highest Selling Album
1993: Cold Chisel – ARIA Hall of Fame
1999: The Last Wave of Summer – Best Rock Album
1999: The Last Wave of Summer – Highest Selling Album
2012: No Plans – Best Rock Album
2012: No Plans – Best Group
2012: Light the Nitro Tour – Best Australian Live Act
2020: Blood Moon – Best Rock Album
2020: Kevin Shirley for Blood Moon by Cold Chisel – Producer of the Year
2020: Blood Moon Tour – Best Australian Live Act

Helpmann Awards

The Helpmann Awards is an awards show celebrating live entertainment and performing arts in Australia, presented by industry group Live Performance Australia since 2001. (The 2020 and 2021 ceremonies were cancelled due to the COVID-19 pandemic.)

2012: Light the Nitro Tour – Helpmann Award for Best Australian Contemporary Concert

South Australian Music Awards

The South Australian Music Awards are annual awards that recognise, promote and celebrate excellence in the South Australian contemporary music industry. They commenced in 2012. The South Australian Music Hall of Fame celebrates the careers of successful music industry personalities.

2016: Cold Chisel – Hall of Fame

TV Week / Countdown Awards

Countdown was an Australian pop music TV series on national broadcaster ABC-TV from 1974 to 1987. It presented music awards from 1979 to 1987, initially in conjunction with the magazine TV Week. The TV Week / Countdown Awards were a combination of popular-voted and peer-voted awards.
1979: Breakfast at Sweethearts – Best Australian Album
1979: Breakfast at Sweethearts – Best Australian Record Cover Design
1979: Don Walker for "Choirgirl" – Best Recorded Songwriter
1980: East – Best Australian Album
1980: East – Best Australian Record Cover Design
1980: East – Most Popular Australian Album
1980: Cold Chisel – Most Outstanding Achievement
1980: Cold Chisel – Most Popular Group
1980: Jimmy Barnes (Cold Chisel) – Most Popular Male Performer
1980: Don Walker (Cold Chisel) – Best Recorded Songwriter
1980: Mark Opitz for East – Best Australian Producer
1981: Cold Chisel – Most Consistent Live Act
1982: Circus Animals – Best Australian Album
1984: "Saturday Night" – Best Video

See also

Timeline of trends in Australian music
ARIA Hall of Fame
https://en.wikipedia.org/wiki/Confederate%20States%20of%20America
Confederate States of America
The Confederate States of America (CSA), commonly referred to as the Confederate States or the Confederacy, was an unrecognized breakaway herrenvolk republic in the Southern United States that existed from February 8, 1861, to May 9, 1865. The Confederacy comprised U.S. states that declared secession and warred against the United States during the American Civil War: South Carolina, Mississippi, Florida, Alabama, Georgia, Louisiana, Texas, Virginia, Arkansas, Tennessee, and North Carolina. Kentucky and Missouri also declared secession and had full representation in the Confederate Congress, though their territory was largely controlled by Union forces after 1862. The Confederacy was formed on February 8, 1861, by seven slave states: South Carolina, Mississippi, Florida, Alabama, Georgia, Louisiana, and Texas. All seven were in the Deep South region of the United States, whose economy was heavily dependent upon agriculture—particularly cotton—and a plantation system that relied upon enslaved Americans of African descent for labor. Convinced that white supremacy and slavery were threatened by the November 1860 election of Abraham Lincoln to the U.S. presidency on a platform that opposed the expansion of slavery into the western territories, the seven slave states seceded from the United States, with the loyal states becoming known as the Union during the ensuing American Civil War. In the Cornerstone Speech, Confederate Vice President Alexander H. Stephens described its ideology as centrally based "upon the great truth that the negro is not equal to the white man; that slavery, subordination to the superior race, is his natural and normal condition." Before Lincoln took office on March 4, 1861, a provisional Confederate government was established on February 8, 1861. It was considered illegal by the United States federal government, and Northerners thought of the Confederates as traitors. After war began in April, four slave states of the Upper South—Virginia, Arkansas, Tennessee, and North Carolina—also joined the Confederacy. Four slave states — Delaware, Maryland, Kentucky, and Missouri — remained in the Union and became known as the border states. The Confederacy nevertheless recognized two of them — Missouri and Kentucky — as members, accepting rump state assembly declarations of secession as authorization for full delegations of representatives and senators in the Confederate Congress. In the early part of the war the Confederacy controlled and governed more than half of Kentucky and the southern portion of Missouri, but neither state was substantially controlled by Confederate forces after 1862, despite the efforts of Confederate shadow governments, which were eventually defeated and expelled from both states. The Union rejected the claims of secession as illegitimate, while the Confederacy maintained they were lawful. The Civil War began on April 12, 1861, when the Confederates attacked Fort Sumter, a Union fort in the harbor of Charleston, South Carolina. No foreign government ever recognized the Confederacy as an independent country, although Great Britain and France granted it belligerent status, which allowed Confederate agents to contract with private concerns for weapons and other supplies. By 1865, the Confederacy's civilian government had dissolved into chaos: the Confederate States Congress adjourned sine die, effectively ceasing to exist as a legislative body on March 18.
After four years of heavy fighting, nearly all Confederate land and naval forces either surrendered or otherwise ceased hostilities by May 1865. The war lacked a clean end, with Confederate forces surrendering or disbanding sporadically throughout most of 1865. The most significant capitulation was Confederate general Robert E. Lee's surrender to Ulysses S. Grant at Appomattox on April 9, after which any doubt about the war's outcome or the Confederacy's survival was extinguished, although another large army under Confederate general Joseph E. Johnston did not formally surrender to William T. Sherman until April 26. Contemporaneously, President Lincoln was assassinated by Confederate sympathizer John Wilkes Booth on April 15. Confederate President Jefferson Davis's administration declared the Confederacy dissolved on May 5, and acknowledged in later writings that the Confederacy "disappeared" in 1865. On May 9, 1865, U.S. president Andrew Johnson officially called an end to the armed resistance in the South.

After the war, Confederate states were readmitted to the Congress during the Reconstruction era, after each ratified the 13th Amendment to the U.S. Constitution outlawing slavery. Lost Cause ideology, an idealized view of the Confederacy valiantly fighting for a just cause, emerged in the decades after the war among former Confederate generals and politicians, as well as organizations such as the United Daughters of the Confederacy and the Sons of Confederate Veterans. Intense periods of Lost Cause activity developed around the turn of the 20th century, and during the civil rights movement of the 1950s and 1960s in reaction to growing support for racial equality. Advocates sought to ensure future generations of Southern whites would continue to support white supremacist policies such as the Jim Crow laws through activities such as building Confederate monuments and influencing textbooks to present Lost Cause ideology. The modern display of Confederate flags primarily started during the 1948 presidential election, when the battle flag was used by the Dixiecrats. During the civil rights movement, segregationists used it for demonstrations.

Span of control
On February 22, 1862, the Confederate States Constitution of seven state signatories – Mississippi, South Carolina, Florida, Alabama, Georgia, Louisiana, and Texas – replaced the Provisional Constitution of February 8, 1861; its preamble stated a desire for a "permanent federal government". Four additional slave-holding states – Virginia, Arkansas, Tennessee, and North Carolina – declared their secession and joined the Confederacy following a call by U.S. President Abraham Lincoln for troops from each state to recapture Sumter and other seized federal properties in the South. Missouri and Kentucky were represented by partisan factions adopting the forms of state governments without control of substantial territory or population in either case. The antebellum state governments in both maintained their representation in the Union. Also fighting for the Confederacy were two of the "Five Civilized Tribes" – the Choctaw and the Chickasaw – in Indian Territory, and a new, but uncontrolled, Confederate Territory of Arizona. Efforts by certain factions in Maryland to secede were halted by federal imposition of martial law; Delaware, though of divided loyalty, did not attempt it.
A Unionist government was formed in opposition to the secessionist state government in Richmond and administered the western parts of Virginia that had been occupied by Federal troops. The Restored Government of Virginia later recognized the new state of West Virginia, which was admitted to the Union during the war on June 20, 1863; the Restored Government then relocated to Alexandria for the rest of the war. Confederate control over its claimed territory and population in congressional districts steadily shrank from three-quarters to a third during the American Civil War due to the Union's successful overland campaigns, its control of inland waterways into the South, and its blockade of the southern coast. With the Emancipation Proclamation on January 1, 1863, the Union made abolition of slavery a war goal (in addition to reunion). As Union forces moved southward, large numbers of plantation slaves were freed. Many joined the Union lines, enrolling in service as soldiers, teamsters and laborers. The most notable advance was Sherman's "March to the Sea" in late 1864. Much of the Confederacy's infrastructure was destroyed, including telegraphs, railroads, and bridges. Plantations in the path of Sherman's forces were severely damaged. Internal movement within the Confederacy became increasingly difficult, weakening its economy and limiting army mobility. These losses created an insurmountable disadvantage in men, materiel, and finance. Public support for Confederate President Jefferson Davis's administration eroded over time due to repeated military reverses, economic hardships, and allegations of autocratic government. After four years of campaigning, Richmond was captured by Union forces in April 1865. A few days later General Robert E. Lee surrendered to Union General Ulysses S. Grant, effectively signaling the collapse of the Confederacy. President Davis was captured on May 10, 1865, and imprisoned on a charge of treason, but no trial was ever held.

History
The Confederacy was established at the Montgomery Convention in February 1861 by seven states (South Carolina, Mississippi, Alabama, Florida, Georgia, and Louisiana, with Texas added in March before Lincoln's inauguration), expanded in May–July 1861 (with Virginia, Arkansas, Tennessee, North Carolina), and disintegrated in April–May 1865. It was formed by delegations from seven slave states of the Lower South that had proclaimed their secession from the Union. After the fighting began in April, four additional slave states seceded and were admitted. Later, two slave states (Missouri and Kentucky) and two territories were given seats in the Confederate Congress. Rising Southern nationalism and pride supported the new founding. Confederate nationalism prepared men to fight for "The Southern Cause". For the duration of its existence, the Confederacy underwent trial by war. The Southern Cause transcended the ideology of states' rights, tariff policy, and internal improvements. This "Cause" supported, or derived from, cultural and financial dependence on the South's slavery-based economy. The convergence of race and slavery, politics, and economics raised almost all South-related policy questions to the status of moral questions over way of life, merging love of things Southern and hatred of things Northern. Not only did political parties split, but national churches and interstate families divided along sectional lines as the war approached. According to historian John M. Coski, Southern Democrats had chosen John Breckinridge as their candidate during the U.S.
presidential election of 1860, but in no Southern state (other than South Carolina, where the legislature chose the electors) was support for him unanimous, as all of the other states recorded at least some popular votes for one or more of the other three candidates (Abraham Lincoln, Stephen A. Douglas and John Bell). Support for these candidates, collectively, ranged from significant to an outright majority, with extremes running from 25% in Texas to 81% in Missouri. There were minority views everywhere, especially in the upland and plateau areas of the South, particularly concentrated in western Virginia and eastern Tennessee. Following South Carolina's unanimous 1860 secession vote, no other Southern states considered the question until 1861, and when they did none had a unanimous vote. All had residents who cast significant numbers of Unionist votes in either the legislature, conventions, popular referendums, or in all three. Voting to remain in the Union did not necessarily mean that individuals were sympathizers with the North. Once fighting began, many of those who had voted to remain in the Union, particularly in the Deep South, accepted the majority decision and supported the Confederacy. Many writers have evaluated the Civil War as an American tragedy—a "Brothers' War", pitting "brother against brother, father against son, kin against kin of every degree".

A revolution in disunion
According to historian Avery O. Craven, writing in 1950, the Confederate States of America was created, as a state power, by secessionists in Southern slave states who believed that the federal government was making them second-class citizens. They judged the agents of change to be abolitionists and anti-slavery elements in the Republican Party, whom they believed used repeated insult and injury to subject them to intolerable "humiliation and degradation". The "Black Republicans" (as the Southerners called them) and their allies soon dominated the U.S. House, Senate, and Presidency. On the U.S. Supreme Court, Chief Justice Roger B. Taney (a presumed supporter of slavery) was 83 years old and ailing.

During the campaign for president in 1860, some secessionists, including William L. Yancey, threatened disunion should Lincoln (who opposed the expansion of slavery into the territories) be elected. Yancey toured the North calling for secession, while Stephen A. Douglas toured the South calling for union if Lincoln were elected. To the secessionists the Republican intent was clear: to contain slavery within its present bounds and, eventually, to eliminate it entirely. A Lincoln victory presented them with a momentous choice (as they saw it), even before his inauguration – "the Union without slavery, or slavery without the Union".

Causes of secession
The immediate catalyst for secession was the victory of the Republican Party and the election of Abraham Lincoln as president in the 1860 elections. American Civil War historian James M. McPherson suggested that, for Southerners, the most ominous feature of the Republican victories in the congressional and presidential elections of 1860 was the magnitude of those victories: Republicans captured over 60 percent of the Northern vote and three-fourths of its Congressional delegations. The Southern press said that such Republicans represented the anti-slavery portion of the North, "a party founded on the single sentiment ... of hatred of African slavery", and now the controlling power in national affairs. The "Black Republican party" could overwhelm conservative Yankees.
The New Orleans Delta said of the Republicans, "It is in fact, essentially, a revolutionary party" to overthrow slavery. By 1860, sectional disagreements between North and South concerned primarily the maintenance or expansion of slavery in the United States. Historian Drew Gilpin Faust observed that "leaders of the secession movement across the South cited slavery as the most compelling reason for southern independence". Although most white Southerners did not own slaves, the majority supported the institution of slavery and benefited indirectly from the slave society. For struggling yeomen and subsistence farmers, the slave society provided a large class of people ranked lower in the social scale than themselves. Secondary differences related to issues of free speech, runaway slaves, expansion into Cuba, and states' rights.

Historian Emory Thomas assessed the Confederacy's self-image by studying correspondence sent by the Confederate government in 1861–62 to foreign governments. He found that Confederate diplomacy projected multiple, contradictory self-images.

In what later became known as the Cornerstone Speech, Confederate Vice President Alexander H. Stephens declared that the "cornerstone" of the new government "rest[ed] upon the great truth that the negro is not equal to the white man; that slavery – subordination to the superior race – is his natural and normal condition. This, our new government, is the first, in the history of the world, based upon this great physical, philosophical, and moral truth". After the war Stephens tried to qualify his remarks, claiming they were extemporaneous, metaphorical, and intended to refer to public sentiment rather than "the principles of the new Government on this subject".

Four of the seceding states, the Deep South states of South Carolina, Mississippi, Georgia, and Texas, issued formal declarations of the causes of their decision; each identified the threat to slaveholders' rights as the cause of, or a major cause of, secession. Georgia also claimed a general Federal policy of favoring Northern over Southern economic interests. Texas mentioned slavery 21 times, but also listed the failure of the federal government to live up to its obligations, in the original annexation agreement, to protect settlers along the exposed western frontier. The Texas resolutions further stated that governments of the states and the nation were established "exclusively by the white race, for themselves and their posterity". They also stated that although equal civil and political rights applied to all white men, they did not apply to those of the "African race", further opining that the end of racial enslavement would "bring inevitable calamities upon both [races] and desolation upon the fifteen slave-holding states".

Alabama did not provide a separate declaration of causes. Instead, the Alabama ordinance stated "the election of Abraham Lincoln ... by a sectional party, avowedly hostile to the domestic institutions and to the peace and security of the people of the State of Alabama, preceded by many and dangerous infractions of the Constitution of the United States by many of the States and people of the northern section, is a political wrong of so insulting and menacing a character as to justify the people of the State of Alabama in the adoption of prompt and decided measures for their future peace and security".
The ordinance invited "the slaveholding States of the South, who may approve such purpose, in order to frame a provisional as well as a permanent Government upon the principles of the Constitution of the United States" to participate in a February 4, 1861 convention in Montgomery, Alabama.

The secession ordinances of the remaining two states, Florida and Louisiana, simply declared their severing ties with the federal Union, without stating any causes. Afterward, the Florida secession convention formed a committee to draft a declaration of causes, but the committee was discharged before completing the task. Only an undated, untitled draft remains.

Four of the Upper South states (Virginia, Arkansas, Tennessee, and North Carolina) rejected secession until after the clash at Fort Sumter. Virginia's ordinance stated a kinship with the slave-holding states of the Lower South, but did not name the institution itself as a primary reason for its course. Arkansas's secession ordinance cited as its motivating reason a strong objection to the use of military force to preserve the Union. Before the outbreak of war, the Arkansas Convention had on March 20 given as its first resolution: "The people of the Northern States have organized a political party, purely sectional in its character, the central and controlling idea of which is hostility to the institution of African slavery, as it exists in the Southern States; and that party has elected a President ... pledged to administer the Government upon principles inconsistent with the rights and subversive of the interests of the Southern States." North Carolina and Tennessee limited their ordinances to simply withdrawing, although Tennessee went so far as to make clear it wished to make no comment at all on the "abstract doctrine of secession". In a message to the Confederate Congress on April 29, 1861, Jefferson Davis cited both the tariff and slavery as causes of the South's secession.

Secessionists and conventions
The pro-slavery "Fire-Eaters" group of Southern Democrats, calling for immediate secession, was opposed by two factions. "Cooperationists" in the Deep South wanted to delay secession until several states had left the Union, perhaps acting together in a Southern Convention. Under the influence of men such as Texas Governor Sam Houston, delay would have the effect of sustaining the Union. "Unionists", especially in the Border South, often former Whigs, appealed to sentimental attachment to the United States. Southern Unionists' favorite presidential candidate was John Bell of Tennessee, sometimes running under an "Opposition Party" banner.

Many secessionists were active politically. Governor William Henry Gist of South Carolina corresponded secretly with other Deep South governors, and most southern governors exchanged clandestine commissioners. Charleston's secessionist "1860 Association" published over 200,000 pamphlets to persuade the youth of the South. The most influential were: "The Doom of Slavery" and "The South Alone Should Govern the South", both by John Townsend of South Carolina; and James D. B. De Bow's "The Interest of Slavery of the Southern Non-slaveholder".

Developments in South Carolina started a chain of events. The foreman of a jury refused to acknowledge the legitimacy of federal courts, so Federal Judge Andrew Magrath ruled that U.S. judicial authority in South Carolina was vacated. A mass meeting in Charleston celebrating the Charleston and Savannah railroad and state cooperation led the South Carolina legislature to call for a Secession Convention. U.S.
Senator James Chesnut, Jr. resigned, as did Senator James Henry Hammond.

Elections for secessionist conventions were heated to "an almost raving pitch, no one dared dissent", according to historian William W. Freehling. Even once-respected voices, including the Chief Justice of South Carolina, John Belton O'Neall, lost election to the Secession Convention on a Cooperationist ticket. Across the South, mobs expelled Yankees and (in Texas) executed German-Americans suspected of loyalty to the United States. Generally, the seceding conventions which followed did not call for a referendum to ratify, although Texas, Arkansas, and Tennessee did, as did Virginia's second convention. Kentucky declared neutrality, while Missouri had its own civil war until the Unionists took power and drove the Confederate legislators out of the state.

Attempts to thwart secession
In the antebellum months, the Corwin Amendment was an unsuccessful attempt by the Congress to bring the seceding states back to the Union and to convince the border slave states to remain. It was a proposed amendment to the United States Constitution by Ohio Congressman Thomas Corwin that would shield "domestic institutions" of the states (which in 1861 included slavery) from the constitutional amendment process and from abolition or interference by Congress. It was passed by the 36th Congress on March 2, 1861. The House approved it by a vote of 133 to 65 and the United States Senate adopted it, with no changes, on a vote of 24 to 12. It was then submitted to the state legislatures for ratification. In his inaugural address Lincoln endorsed the proposed amendment. The text was as follows: "No amendment shall be made to the Constitution which will authorize or give to Congress the power to abolish or interfere, within any State, with the domestic institutions thereof, including that of persons held to labor or service by the laws of said State." Had it been ratified by the required number of states prior to 1865, it would have made institutionalized slavery immune to the constitutional amendment procedures and to interference by Congress.

Inauguration and response
The first secession state conventions from the Deep South sent representatives to meet at the Montgomery Convention in Montgomery, Alabama, on February 4, 1861. There the fundamental documents of government were promulgated, a provisional government was established, and a representative Congress met for the Confederate States of America. The new "provisional" Confederate President Jefferson Davis issued a call for 100,000 men from the various states' militias to defend the newly formed Confederacy. All Federal property was seized, along with gold bullion and coining dies at the U.S. mints in Charlotte, North Carolina; Dahlonega, Georgia; and New Orleans. The Confederate capital was moved from Montgomery to Richmond, Virginia, in May 1861. On February 22, 1862, Davis was inaugurated as president for a six-year term.

The newly inaugurated Confederate administration pursued a policy of national territorial integrity, continuing earlier state efforts in 1860 and early 1861 to remove U.S. government presence from within their boundaries. These efforts included taking possession of U.S. courts, custom houses, post offices, and most notably, arsenals and forts. But after the Confederate attack and capture of Fort Sumter in April 1861, Lincoln called up 75,000 of the states' militia to muster under his command. The stated purpose was to re-occupy U.S. properties throughout the South, as the U.S. Congress had not authorized their abandonment. The resistance at Fort Sumter signaled his change of policy from that of the Buchanan administration. Lincoln's response ignited a firestorm of emotion.
The people of both North and South demanded war, and soldiers rushed to their colors in the hundreds of thousands. Four more states (Virginia, North Carolina, Tennessee, and Arkansas) refused Lincoln's call for troops and declared secession, while Kentucky maintained an uneasy "neutrality".

Secession
Secessionists argued that the United States Constitution was a contract among sovereign states that could be abandoned at any time without consultation and that each state had a right to secede. After intense debates and statewide votes, seven Deep South cotton states passed secession ordinances by February 1861 (before Abraham Lincoln took office as president), while secession efforts failed in the other eight slave states. Delegates from those seven formed the CSA in February 1861, selecting Jefferson Davis as the provisional president. Unionist talk of reunion failed and Davis began raising a 100,000-man army.

States
Initially, some secessionists may have hoped for a peaceful departure. Moderates in the Confederate Constitutional Convention included a provision against importation of slaves from Africa to appeal to the Upper South. Non-slave states might join, but the radicals secured a two-thirds requirement in both houses of Congress to accept them.

Seven states declared their secession from the United States before Lincoln took office on March 4, 1861. After the Confederate attack on Fort Sumter on April 12, 1861, and Lincoln's subsequent call for troops on April 15, four more states declared their secession. Kentucky declared neutrality, but after Confederate troops moved in, the state government asked for Union troops to drive them out. The splinter Confederate state government relocated to accompany western Confederate armies and never controlled the state population. By the end of the war, 90,000 Kentuckians had fought on the side of the Union, compared to 35,000 for the Confederate States.

In Missouri, a constitutional convention was approved and delegates elected by voters. The convention rejected secession 89–1 on March 19, 1861. The governor maneuvered to take control of the St. Louis Arsenal and restrict Federal movements. This led to confrontation, and in June Federal forces drove him and the General Assembly from Jefferson City. The executive committee of the constitutional convention called the members together in July. The convention declared the state offices vacant and appointed a Unionist interim state government. The exiled governor called a rump session of the former General Assembly together in Neosho and, on October 31, 1861, passed an ordinance of secession. It is still a matter of debate whether a quorum existed for this vote. The Confederate state government was unable to control much Missouri territory. It had its capital first at Neosho, then at Cassville, before being driven out of the state. For the remainder of the war, it operated as a government in exile at Marshall, Texas.

Neither Kentucky nor Missouri was declared in rebellion in Lincoln's Emancipation Proclamation. The Confederacy recognized the pro-Confederate claimants in both Kentucky (December 10, 1861) and Missouri (November 28, 1861) and laid claim to those states, granting them Congressional representation and adding two stars to the Confederate flag. Voting for the representatives was mostly done by Confederate soldiers from Kentucky and Missouri.

The order and dates of the secession resolutions were:
1. South Carolina (December 20, 1860)
2. Mississippi (January 9, 1861)
3. Florida (January 10)
4. Alabama (January 11)
5. Georgia (January 19)
6. Louisiana (January 26)
7. Texas (February 1; referendum February 23)
Inauguration of President Lincoln, March 4
Bombardment of Fort Sumter (April 12) and President Lincoln's call-up (April 15)
8. Virginia (April 17; referendum May 23, 1861)
9. Arkansas (May 6)
10. Tennessee (May 7; referendum June 8)
11. North Carolina (May 20)

In Virginia, the populous counties along the Ohio and Pennsylvania borders rejected the Confederacy. Unionists held a convention in Wheeling in June 1861, establishing a "restored government" with a rump legislature, but sentiment in the region remained deeply divided. In the 50 counties that would make up the state of West Virginia, voters from 24 counties had voted for disunion in Virginia's May 23 referendum on the ordinance of secession. In the 1860 presidential election "Constitutional Democrat" Breckinridge had outpolled "Constitutional Unionist" Bell in the 50 counties by 1,900 votes, 44% to 42%. Regardless of scholarly disputes over election procedures and results county by county, altogether the counties simultaneously supplied over 20,000 soldiers to each side of the conflict. Representatives for most of the counties were seated in both state legislatures, at Wheeling and at Richmond, for the duration of the war.

Attempts to secede from the Confederacy by some counties in East Tennessee were checked by martial law. Although slave-holding Delaware and Maryland did not secede, citizens from those states exhibited divided loyalties. Regiments of Marylanders fought in Lee's Army of Northern Virginia. Overall, 24,000 men from Maryland joined the Confederate armed forces, compared to 63,000 who joined Union forces. Delaware never produced a full regiment for the Confederacy, but neither did it emancipate slaves as did Missouri and West Virginia. District of Columbia citizens made no attempts to secede, and through the war years referendums sponsored by President Lincoln approved systems of compensated emancipation and slave confiscation from "disloyal citizens".

Territories
Citizens at Mesilla and Tucson in the southern part of New Mexico Territory formed a secession convention, which voted to join the Confederacy on March 16, 1861, and appointed Dr. Lewis S. Owings as the new territorial governor. They won the Battle of Mesilla and established a territorial government with Mesilla serving as its capital. The Confederacy proclaimed the Confederate Arizona Territory on February 14, 1862, north to the 34th parallel. Marcus H. MacWillie served in both Confederate Congresses as Arizona's delegate. In 1862, the Confederate New Mexico Campaign to take the northern half of the U.S. territory failed and the Confederate territorial government in exile relocated to San Antonio, Texas.

Confederate supporters in the trans-Mississippi west also claimed portions of the Indian Territory after the United States evacuated the federal forts and installations. Over half of the American Indian troops participating in the Civil War from the Indian Territory supported the Confederacy; troops and one general were enlisted from each tribe. On July 12, 1861, the Confederate government signed a treaty with both the Choctaw and Chickasaw Indian nations. After several battles, Union armies took control of the territory. The Indian Territory never formally joined the Confederacy, but it did receive representation in the Confederate Congress. Many Indians from the Territory were integrated into regular Confederate Army units.
After 1863, the tribal governments sent representatives to the Confederate Congress: Elias Cornelius Boudinot representing the Cherokee and Samuel Benton Callahan representing the Seminole and Creek. The Cherokee Nation aligned with the Confederacy. They practiced and supported slavery, opposed abolition, and feared their lands would be seized by the Union. After the war, the Indian Territory was disestablished, their black slaves were freed, and the tribes lost some of their lands.

Capitals
Montgomery, Alabama, served as the capital of the Confederate States of America from February 4 until May 29, 1861, in the Alabama State Capitol. Six states created the Confederate States of America there on February 8, 1861. The Texas delegation was seated at the time, so it is counted in the "original seven" states of the Confederacy; it had no roll call vote until after its referendum made secession "operative". Two sessions of the Provisional Congress were held in Montgomery, adjourning May 21. The Permanent Constitution was adopted there on March 12, 1861.

The Confederate Constitution provided for a permanent capital, calling for a state cession of a ten-mile square (100 square mile) district to the central government. Atlanta, which had not yet supplanted Milledgeville as Georgia's state capital, put in a bid noting its central location and rail connections, as did Opelika, Alabama, noting its strategically interior situation, rail connections and nearby deposits of coal and iron.

Richmond, Virginia, was chosen for the interim capital at the Virginia State Capitol. The move was used by Vice President Stephens and others to encourage other border states to follow Virginia into the Confederacy. In the political moment it was a show of "defiance and strength". The war for Southern independence was surely to be fought in Virginia, but Virginia also had the largest Southern military-aged white population, with the infrastructure, resources, and supplies required to sustain a war. The Davis administration's policy was that, "It must be held at all hazards."

The naming of Richmond as the new capital took place on May 30, 1861, and the last two sessions of the Provisional Congress were held in the new capital. The Permanent Confederate Congress and President were elected in the states and army camps on November 6, 1861. The First Congress met in four sessions in Richmond from February 18, 1862, to February 17, 1864. The Second Congress met there in two sessions, from May 2, 1864, to March 18, 1865.

As war dragged on, Richmond became crowded with training and transfers, logistics and hospitals. Prices rose dramatically despite government efforts at price regulation. A movement in Congress led by Henry S. Foote of Tennessee argued for moving the capital from Richmond. At the approach of Federal armies in mid-1862, the government's archives were readied for removal. As the Wilderness Campaign progressed, Congress authorized Davis to remove the executive department and call Congress to session elsewhere in 1864 and again in 1865. Shortly before the end of the war, the Confederate government evacuated Richmond, planning to relocate farther south. Little came of these plans before Lee's surrender at Appomattox Court House, Virginia on April 9, 1865. Davis and most of his cabinet fled to Danville, Virginia, which served as their headquarters for eight days.
Diplomacy

United States, a foreign power
During the four years of its existence under trial by war, the Confederate States of America asserted its independence and appointed dozens of diplomatic agents abroad. None were ever officially recognized by a foreign government. The United States government regarded the Southern states as being in rebellion or insurrection and so refused any formal recognition of their status. Even before Fort Sumter, U.S. Secretary of State William H. Seward issued formal instructions to the American minister to Britain, Charles Francis Adams, that if the British government seemed inclined to recognize the Confederacy, or even waver in that regard, it was to receive a sharp warning, with a strong hint of war.

The United States government never declared war on those "kindred and countrymen" in the Confederacy, but conducted its military efforts beginning with a presidential proclamation issued April 15, 1861. It called for troops to recapture forts and suppress what Lincoln later called an "insurrection and rebellion". Mid-war parleys between the two sides occurred without formal political recognition, though the laws of war predominantly governed military relationships on both sides of uniformed conflict. On the part of the Confederacy, immediately following Fort Sumter the Confederate Congress proclaimed that "war exists between the Confederate States and the Government of the United States, and the States and Territories thereof". A state of war was not to formally exist between the Confederacy and those states and territories in the United States allowing slavery, although Confederate Rangers were compensated for destruction they could effect there throughout the war.

Concerning the international status and nationhood of the Confederate States of America, in 1869 the United States Supreme Court, in Texas v. White, ruled that Texas's declaration of secession was legally null and void. Jefferson Davis, former President of the Confederacy, and Alexander H. Stephens, its former vice-president, both wrote postwar arguments in favor of secession's legality and the international legitimacy of the Government of the Confederate States of America, most notably Davis's The Rise and Fall of the Confederate Government.

International diplomacy
Once war with the United States began, the Confederacy pinned its hopes for survival on military intervention by Great Britain or France, or both. The Confederate government sent James M. Mason to London and John Slidell to Paris. On their way to Europe in 1861, the U.S. Navy intercepted their ship, the Trent, and forcibly took them to Boston, an international episode known as the Trent Affair. The diplomats were eventually released and continued their voyage to Europe. However, their mission was unsuccessful; historians give them low marks for their poor diplomacy. Neither secured diplomatic recognition for the Confederacy, much less military assistance.

The Confederates who had believed that "cotton is king", that is, that Britain had to support the Confederacy to obtain cotton, proved mistaken. The British had stocks to last over a year and had been developing alternative sources of cotton, most notably India and Egypt. Britain had so much cotton that it was exporting some to France. England was not about to go to war with the U.S. to acquire more cotton at the risk of losing the large quantities of food imported from the North. Aside from the purely economic questions, there was also the clamorous ethical debate.
Great Britain took pride in being a leader in ending the transatlantic enslavement of Africans, phasing the practice out within its empire starting in 1833 and deploying the Royal Navy to patrol the waters of the Middle Passage to prevent additional slave ships from reaching the Western Hemisphere. Confederate diplomats found little support for American slavery, cotton trade or not. A series of slave narratives about American slavery was being published in London. It was in London that the first World Anti-Slavery Convention had been held in 1840; it was followed by regular smaller conferences. A string of eloquent and sometimes well-educated Black abolitionist speakers crisscrossed not just England but Scotland and Ireland as well. In addition to exposing the reality of America's shameful and sinful chattel slavery—some were fugitive slaves—they rebutted the Confederate position that negroes were "unintellectual, timid, and dependent", and "not equal to the white man...the superior race," as it was put by Confederate Vice-president Alexander H. Stephens in his famous Cornerstone Speech. Frederick Douglass, Henry Highland Garnet, Sarah Parker Remond, her brother Charles Lenox Remond, James W. C. Pennington, Martin Delany, Samuel Ringgold Ward, and William G. Allen all spent years in Britain, where fugitive slaves were safe and, as Allen said, there was an "absence of prejudice against color. Here the colored man feels himself among friends, and not among enemies". One speaker alone, William Wells Brown, gave more than 1,000 lectures on the shame of American chattel slavery.

Throughout the early years of the war, British foreign secretary Lord John Russell, Emperor Napoleon III of France, and, to a lesser extent, British Prime Minister Lord Palmerston, showed interest in recognition of the Confederacy or at least mediation of the war. British Chancellor of the Exchequer William Gladstone, convinced of the necessity of intervention on the Confederate side by the successful diplomatic intervention in the Second Italian War of Independence against Austria, attempted unsuccessfully to convince Lord Palmerston to intervene. In September 1862 the Union victory at the Battle of Antietam, Lincoln's preliminary Emancipation Proclamation, and abolitionist opposition in Britain put an end to these possibilities. The cost to Britain of a war with the U.S. would have been high: the immediate loss of American grain shipments, the end of British exports to the U.S., and the seizure of billions of pounds invested in American securities. War would have meant higher taxes in Britain, another invasion of Canada, and full-scale worldwide attacks on the British merchant fleet. Outright recognition would have meant certain war with the United States. In mid-1862, fears of a race war (as had transpired in the Haitian Revolution of 1791–1804) led Britain to consider intervention for humanitarian reasons. Lincoln's Emancipation Proclamation did not lead to interracial violence, let alone a bloodbath, but it did give the friends of the Union strong talking points in the arguments that raged across Britain.

John Slidell, the Confederate States emissary to France, did succeed in negotiating a loan of $15,000,000 from Erlanger and other French capitalists. The money went to buy ironclad warships, as well as military supplies that came in with blockade runners.
The British government did allow the construction of blockade runners in Britain; they were owned and operated by British financiers and ship owners; a few were owned and operated by the Confederacy. The British investors' goal was to get highly profitable cotton.

Several European nations maintained diplomats in place who had been appointed to the U.S., but no country appointed any diplomat to the Confederacy. Those nations recognized the Union and Confederate sides as belligerents. In 1863 the Confederacy expelled European diplomatic missions for advising their resident subjects to refuse to serve in the Confederate army. Both Confederate and Union agents were allowed to work openly in British territories. Some state governments in northern Mexico negotiated local agreements to cover trade on the Texas border.

The Confederacy appointed Ambrose Dudley Mann as special agent to the Holy See on September 24, 1863. But the Holy See never released a formal statement supporting or recognizing the Confederacy. In November 1863, Mann met Pope Pius IX in person and received a letter supposedly addressed "to the Illustrious and Honorable Jefferson Davis, President of the Confederate States of America"; Mann had mistranslated the address. In his report to Richmond, Mann claimed a great diplomatic achievement for himself, asserting the letter was "a positive recognition of our Government". The letter was indeed used in propaganda, but Confederate Secretary of State Judah P. Benjamin told Mann it was "a mere inferential recognition, unconnected with political action or the regular establishment of diplomatic relations" and thus did not assign it the weight of formal recognition.

Nevertheless, the Confederacy was seen internationally as a serious attempt at nationhood, and European governments sent military observers, both official and unofficial, to assess whether there had been a de facto establishment of independence. These observers included Arthur Lyon Fremantle of the British Coldstream Guards, who entered the Confederacy via Mexico, Fitzgerald Ross of the Austrian Hussars, and Justus Scheibert of the Prussian Army. European travelers visited and wrote accounts for publication. Importantly, in 1862 the Frenchman Charles Girard's Seven Months in the Rebel States During the North American War testified that "this government ... is no longer a trial government ... but really a normal government, the expression of popular will". Fremantle went on to record his observations in his book Three Months in the Southern States. French Emperor Napoleon III assured Confederate diplomat John Slidell that he would make "direct proposition" to Britain for joint recognition. The Emperor made the same assurance to British Members of Parliament John A. Roebuck and John A. Lindsay. Roebuck in turn publicly prepared a bill to submit to Parliament on June 30 supporting joint Anglo-French recognition of the Confederacy. "Southerners had a right to be optimistic, or at least hopeful, that their revolution would prevail, or at least endure."

Following the double disasters at Vicksburg and Gettysburg in July 1863, the Confederates "suffered a severe loss of confidence in themselves" and withdrew into an interior defensive position. There would be no help from the Europeans.

By December 1864, Davis considered sacrificing slavery in order to enlist recognition and aid from Paris and London; he secretly sent Duncan F.
Kenner to Europe with a message that the war was fought solely for "the vindication of our rights to self-government and independence" and that "no sacrifice is too great, save that of honor". The message stated that if the French or British governments made their recognition conditional on anything at all, the Confederacy would consent to such terms. Davis's message could not explicitly acknowledge that slavery was on the bargaining table due to still-strong domestic support for slavery among the wealthy and politically influential. European leaders all saw that the Confederacy was on the verge of total defeat.

Cuba and Brazil
The Confederacy's biggest foreign policy successes were with Cuba and Brazil; militarily, this meant little during the war. Brazil represented the "peoples most identical to us in Institutions", a country in which slavery remained legal until the 1880s. Cuba was a Spanish colony, and the Captain-General of Cuba declared in writing that Confederate ships were welcome and would be protected in Cuban ports. They were also welcome in Brazilian ports; slavery was legal throughout Brazil, and the abolitionist movement was small. After the end of the war, Brazil was the primary destination of those Southerners who wanted to continue living in a slave society, where, as one immigrant remarked, slaves were cheap (see Confederados). Historians speculate that if the Confederacy had achieved independence, it probably would have tried to acquire Cuba as a base of expansion.

Confederacy at war

Motivations of soldiers
Most soldiers who joined Confederate national or state military units joined voluntarily. Perman (2010) says historians are of two minds on why millions of soldiers seemed so eager to fight, suffer and die over four years.

Military strategy
Civil War historian E. Merton Coulter wrote that for those who would secure its independence, "The Confederacy was unfortunate in its failure to work out a general strategy for the whole war". Aggressive strategy called for offensive force concentration. Defensive strategy sought dispersal to meet the demands of locally minded governors. The controlling philosophy evolved into a combination of "dispersal with a defensive concentration around Richmond". The Davis administration considered the war purely defensive, a "simple demand that the people of the United States would cease to war upon us". Historian James M. McPherson is a critic of Lee's offensive strategy: "Lee pursued a faulty military strategy that ensured Confederate defeat".

As the Confederate government lost control of territory in campaign after campaign, it was said that "the vast size of the Confederacy would make its conquest impossible". The enemy would be struck down by the same elements which so often debilitated or destroyed visitors and transplants in the South. Heat exhaustion, sunstroke, and endemic diseases such as malaria and typhoid would match the destructive effectiveness of the Moscow winter on the invading armies of Napoleon.

Early in the war both sides believed that one great battle would decide the conflict; the Confederates won a surprise victory at the First Battle of Bull Run, also known as First Manassas (the name used by Confederate forces). It drove the Confederate people "insane with joy"; the public demanded a forward movement to capture Washington, relocate the Confederate capital there, and admit Maryland to the Confederacy.
A council of war by the victorious Confederate generals decided not to advance against larger numbers of fresh Federal troops in defensive positions. Davis did not countermand it. Following the Confederate incursion into Maryland halted at the Battle of Antietam in September 1862, generals proposed concentrating forces from state commands to re-invade the North. Nothing came of it. Again in mid-1863, during his incursion into Pennsylvania, Lee requested that Davis have Beauregard simultaneously attack Washington with troops taken from the Carolinas. But the troops there remained in place during the Gettysburg Campaign.

The eleven states of the Confederacy were outnumbered by the North about four-to-one in military manpower. It was overmatched far more in military equipment, industrial facilities, railroads for transport, and wagons supplying the front. Confederates slowed the Yankee invaders, at heavy cost to the Southern infrastructure. The Confederates burned bridges, laid land mines in the roads, and made harbors, inlets, and inland waterways unusable with sunken mines (called "torpedoes" at the time).

Coulter reports that the Confederacy relied on external sources for war materials. The first came from trade with the enemy. "Vast amounts of war supplies" came through Kentucky, and thereafter western armies were "to a very considerable extent" provisioned with illicit trade via Federal agents and northern private traders. But that trade was interrupted in the first year of war by Admiral Porter's river gunboats as they gained dominance along navigable rivers north–south and east–west. Overseas blockade running then came to be of "outstanding importance". On April 17, President Davis called on privateer raiders, the "militia of the sea", to wage war on U.S. seaborne commerce. Despite noteworthy effort, over the course of the war the Confederacy proved unable to match the Union in ships and seamanship, materials and marine construction.

An inescapable obstacle to success in the warfare of mass armies was the Confederacy's lack of manpower and of sufficient numbers of disciplined, equipped troops in the field at the point of contact with the enemy. During the winter of 1862–63, Lee observed that none of his famous victories had resulted in the destruction of the opposing army. He lacked reserve troops to exploit an advantage on the battlefield as Napoleon had done. Lee explained, "More than once have most promising opportunities been lost for want of men to take advantage of them, and victory itself had been made to put on the appearance of defeat, because our diminished and exhausted troops have been unable to renew a successful struggle against fresh numbers of the enemy."

Armed forces
The military armed forces of the Confederacy comprised three branches: Army, Navy and Marine Corps. The Confederate military leadership included many veterans from the United States Army and United States Navy who had resigned their Federal commissions and were appointed to senior positions. Many had served in the Mexican–American War (including Robert E. Lee and Jefferson Davis), but some such as Leonidas Polk (who graduated from West Point but did not serve in the Army) had little or no experience. The Confederate officer corps consisted of men from both slave-owning and non-slave-owning families. The Confederacy appointed junior and field grade officers by election from the enlisted ranks.
Although no army service academy was established for the Confederacy, some colleges (such as The Citadel and Virginia Military Institute) maintained cadet corps that trained Confederate military leadership. A naval academy was established at Drewry's Bluff, Virginia, in 1863, but no midshipmen graduated before the Confederacy's end.

Most soldiers were white males aged between 16 and 28. The median year of birth was 1838, so half the soldiers were 23 or older by 1861. In early 1862, the Confederate Army was allowed to disintegrate for two months following the expiration of short-term enlistments. Most of those in uniform would not re-enlist following their one-year commitment, so on April 16, 1862, the Confederate Congress enacted the first mass conscription on the North American continent. (The U.S. Congress followed a year later, on March 3, 1863, with the Enrollment Act.) Rather than a universal draft, the initial program was a selective service with physical, religious, professional and industrial exemptions. These were narrowed as the war progressed. Initially substitutes were permitted, but by December 1863 these were disallowed. In September 1862 the age limit was increased from 35 to 45, and by February 1864 all men under 18 and over 45 were conscripted to form a reserve for state defense inside state borders. By March 1864, the Superintendent of Conscription reported that all across the Confederacy, every officer in constituted authority, man and woman, "engaged in opposing the enrolling officer in the execution of his duties". Although conscription was challenged in the state courts, the Confederate state supreme courts routinely rejected the legal challenges.

Many thousands of slaves served as personal servants to their owners, or were hired as laborers, cooks, and pioneers. Some freed blacks and men of color served in local state militia units of the Confederacy, primarily in Louisiana and South Carolina, but their officers deployed them for "local defense, not combat". Depleted by casualties and desertions, the military suffered chronic manpower shortages. In early 1865, the Confederate Congress, influenced by General Lee's public support, approved the recruitment of black infantry units. Contrary to Lee's and Davis's recommendations, the Congress refused "to guarantee the freedom of black volunteers". No more than two hundred black combat troops were ever raised.

Raising troops
The immediate onset of war meant that it was fought by the "Provisional" or "Volunteer" Army. State governors resisted concentrating a national effort. Several wanted a strong state army for self-defense. Others feared large "Provisional" armies answering only to Davis. In filling the Confederate government's call for 100,000 men, another 200,000 were turned away, because only those enlisted "for the duration" or twelve-month volunteers who brought their own arms or horses were accepted.

It was important to raise troops; it was just as important to provide capable officers to command them. With few exceptions the Confederacy secured excellent general officers. Efficiency in the lower officers was "greater than could have been reasonably expected". As with the Federals, political appointees could be indifferent. Otherwise, the officer corps was governor-appointed or elected by unit enlisted men. Promotion to fill vacancies was made internally regardless of merit, even if better officers were immediately available.
Anticipating the need for more "duration" men, in January 1862 Congress provided for company-level recruiters to return home for two months, but their efforts met little success on the heels of Confederate battlefield defeats in February. Congress then allowed Davis to require numbers of recruits from each governor to supply the volunteer shortfall. States responded by passing their own draft laws.

The veteran Confederate army of early 1862 was composed mostly of twelve-month volunteers with terms about to expire. Enlisted reorganization elections disintegrated the army for two months. Officers pleaded with the ranks to re-enlist, but a majority did not. Those remaining elected majors and colonels whose performance led to officer review boards in October. The boards caused a "rapid and widespread" thinning out of 1,700 incompetent officers. Troops thereafter would elect only second lieutenants.

In early 1862, the popular press suggested the Confederacy required a million men under arms. But veteran soldiers were not re-enlisting, and the earlier secessionist volunteers did not reappear to serve in the war. One Macon, Georgia, newspaper asked how two million brave fighting men of the South were about to be overcome by four million northerners who were said to be cowards.

Conscription
The Confederacy passed the first American law of national conscription on April 16, 1862. The white males of the Confederate States from 18 to 35 were declared members of the Confederate army for three years, and all men then enlisted had their terms extended to three years. They would serve only in units and under officers of their state. Those under 18 and over 35 could substitute for conscripts; in September, those from 35 to 45 became conscripts. The cry of "rich man's war and a poor man's fight" led Congress to abolish the substitute system altogether in December 1863. All principals benefiting earlier were made eligible for service. By February 1864, the age bracket was made 17 to 50, with those under eighteen and over forty-five limited to in-state duty.

Confederate conscription was not universal; it was a selective service. The First Conscription Act of April 1862 exempted occupations related to transportation, communication, industry, ministers, teaching and physical fitness. The Second Conscription Act of October 1862 expanded exemptions in industry, agriculture and conscientious objection. Exemption fraud proliferated in medical examinations, army furloughs, churches, schools, apothecaries and newspapers.

Rich men's sons were appointed to the socially outcast "overseer" occupation, but the measure was received in the country with "universal odium". The legislative vehicle was the controversial Twenty Negro Law, which specifically exempted one white overseer or owner for every plantation with at least 20 slaves. Backpedaling six months later, Congress provided that overseers under 45 could be exempted only if they had held the occupation before the first Conscription Act. The number of officials under state exemptions appointed by state governor patronage expanded significantly. By law, substitutes could not be subject to conscription, but instead of adding to Confederate manpower, the system drained it: unit officers in the field reported that over-50 and under-17-year-old substitutes made up to 90% of the desertions.

The Conscription Act of February 1864 "radically changed the whole system" of selection. It abolished industrial exemptions, placing detail authority in President Davis.
As the shame of conscription was greater than that of a felony conviction, the system brought in "about as many volunteers as it did conscripts." Many men in otherwise "bombproof" positions were enlisted in one way or another; nearly 160,000 additional volunteers and conscripts were put in uniform. Still there was shirking. To administer the draft, a Bureau of Conscription was set up to use state officers, as far as state governors would allow. It had a checkered career of "contention, opposition and futility". Armies appointed alternative military "recruiters" to bring in the out-of-uniform 17–50-year-old conscripts and deserters. Nearly 3,000 officers were tasked with the job. By late 1864, Lee was calling for more troops. "Our ranks are constantly diminishing by battle and disease, and few recruits are received; the consequences are inevitable." By March 1865 conscription was to be administered by generals of the state reserves calling out men over 45 and under 18 years old. All exemptions were abolished. These reserve regiments were assigned to recruit conscripts ages 17–50, recover deserters, and repel enemy cavalry raids. The service retained in home guards men who had lost an arm or a leg. Ultimately, conscription was a failure, and its main value was in goading men to volunteer.
The survival of the Confederacy depended on a strong base of civilians and soldiers devoted to victory. The soldiers performed well, though increasing numbers deserted in the last year of fighting, and the Confederacy never succeeded in replacing casualties as the Union could. The civilians, although enthusiastic in 1861–62, seem to have lost faith in the future of the Confederacy by 1864, and instead looked to protect their homes and communities. As Rable explains, "This contraction of civic vision was more than a crabbed libertarianism; it represented an increasingly widespread disillusionment with the Confederate experiment."
Victories: 1861
The American Civil War broke out in April 1861 with a Confederate victory at the Battle of Fort Sumter in Charleston. In January, President James Buchanan had attempted to resupply the garrison with the steamship Star of the West, but Confederate artillery drove it away. In March, President Lincoln notified South Carolina Governor Pickens that without Confederate resistance to the resupply there would be no military reinforcement without further notice, but Lincoln prepared to force resupply if it were not allowed. Confederate President Davis, in cabinet, decided to seize Fort Sumter before the relief fleet arrived, and on April 12, 1861, General Beauregard forced its surrender.
Following Sumter, Lincoln directed states to provide 75,000 troops for three months to recapture the Charleston Harbor forts and all other federal property. This emboldened secessionists in Virginia, Arkansas, Tennessee and North Carolina to secede rather than provide troops to march into neighboring Southern states. In May, Federal troops crossed into Confederate territory along the entire border from the Chesapeake Bay to New Mexico. The first battles were Confederate victories at Big Bethel (Bethel Church, Virginia), at First Bull Run (First Manassas) in Virginia in July, and at Wilson's Creek (Oak Hills) in Missouri in August. At all three, Confederate forces could not follow up their victory due to inadequate supply and shortages of fresh troops to exploit their successes. Following each battle, Federals maintained a military presence and occupied Washington, DC; Fort Monroe, Virginia; and Springfield, Missouri.
Both North and South began training armies for major fighting the next year. Union General George B. McClellan's forces gained possession of much of northwestern Virginia in mid-1861, concentrating on towns and roads; the interior was too large to control and became the center of guerrilla activity. General Robert E. Lee was defeated at Cheat Mountain in September, and no serious Confederate advance in western Virginia occurred until the next year.
Meanwhile, the Union Navy seized control of much of the Confederate coastline from Virginia to South Carolina. It took over plantations and the abandoned slaves. Federals there began a war-long policy of burning grain supplies up rivers into the interior wherever they could not occupy. The Union Navy began a blockade of the major southern ports and prepared an invasion of Louisiana to capture New Orleans in early 1862.
Incursions: 1862
The victories of 1861 were followed by a series of defeats east and west in early 1862. To restore the Union by military force, the Federal strategy was to (1) secure the Mississippi River, (2) seize or close Confederate ports, and (3) march on Richmond. To secure independence, the Confederate intent was to (1) repel the invader on all fronts, costing him blood and treasure, and (2) carry the war into the North by two offensives in time to affect the mid-term elections.
Much of northwestern Virginia was under Federal control. In February and March, most of Missouri and Kentucky were Union "occupied, consolidated, and used as staging areas for advances further South". Following the repulse of the Confederate counter-attack at the Battle of Shiloh, Tennessee, permanent Federal occupation expanded west, south and east. Confederate forces repositioned south along the Mississippi River to Memphis, Tennessee, where at the naval Battle of Memphis its River Defense Fleet was sunk. Confederates withdrew from northern Mississippi and northern Alabama. New Orleans was captured on April 29 by a combined Army-Navy force under U.S. Admiral David Farragut, and the Confederacy lost control of the mouth of the Mississippi River. It had to concede extensive agricultural resources that had supported the Union's sea-supplied logistics base.
Although Confederates had suffered major reverses everywhere, as of the end of April the Confederacy still controlled territory holding 72% of its population. Federal forces disrupted Missouri and Arkansas; they had broken through in western Virginia, Kentucky, Tennessee and Louisiana. Along the Confederacy's shores, Union forces had closed ports and made garrisoned lodgments in every coastal Confederate state except Alabama and Texas. Although scholars sometimes assess the Union blockade as ineffectual under international law until the last few months of the war, from the first months it disrupted Confederate privateers, making it "almost impossible to bring their prizes into Confederate ports". British firms developed small fleets of blockade-running companies, such as John Fraser and Company and S. Isaac, Campbell & Company, while the Ordnance Department secured its own blockade runners for dedicated munitions cargoes.
During the Civil War, fleets of armored warships were deployed for the first time in sustained blockades at sea. After some success against the Union blockade in March, the ironclad CSS Virginia was forced into port and burned by the Confederates at their retreat. Despite several attempts mounted from their port cities, CSA naval forces were unable to break the Union blockade.
Attempts were made in 1862 by Commodore Josiah Tattnall III's ironclads from Savannah, including the CSS Atlanta. Secretary of the Navy Stephen Mallory placed his hopes in a European-built ironclad fleet, but they were never realized. On the other hand, four new English-built commerce raiders served the Confederacy, and several fast blockade runners were sold in Confederate ports; they were converted into commerce-raiding cruisers, and manned by their British crews.
In the east, Union forces could not close on Richmond. General McClellan landed his army on the Lower Peninsula of Virginia. Lee subsequently ended that threat from the east, then Union General John Pope attacked overland from the north only to be repulsed at Second Bull Run (Second Manassas). Lee's strike north was turned back at Antietam, Maryland, then Union Major General Ambrose Burnside's offensive was disastrously ended at Fredericksburg, Virginia, in December. Both armies then turned to winter quarters to recruit and train for the coming spring.
In an attempt to seize the initiative, reprovision, protect farms in mid-growing season and influence U.S. Congressional elections, two major Confederate incursions into Union territory had been launched in August and September 1862. Both Braxton Bragg's invasion of Kentucky and Lee's invasion of Maryland were decisively repulsed, leaving the Confederacy in control of just 63% of its population. Civil War scholar Allan Nevins argues that 1862 was the strategic high-water mark of the Confederacy. The failures of the two invasions were attributed to the same irrecoverable shortcomings: lack of manpower at the front, lack of supplies including serviceable shoes, and exhaustion after long marches without adequate food. Also in September, Confederate General William W. Loring pushed Federal forces from Charleston, Virginia, and the Kanawha Valley in western Virginia, but lacking reinforcements Loring abandoned his position, and by November the region was back in Federal control.
Anaconda: 1863–1864
The failed Middle Tennessee campaign ended on January 2, 1863, at the inconclusive Battle of Stones River (Murfreesboro), where both sides lost the largest percentage of casualties suffered during the war. It was followed by another strategic withdrawal by Confederate forces. The Confederacy won a significant victory in April 1863, repulsing the Federal advance on Richmond at Chancellorsville, but the Union consolidated positions along the Virginia coast and the Chesapeake Bay.
Without an effective answer to Federal gunboats, river transport and supply, the Confederacy lost the Mississippi River following the capture of Vicksburg, Mississippi, and Port Hudson in July, ending Southern access to the trans-Mississippi West. July brought short-lived counters, Morgan's Raid into Ohio and the New York City draft riots. Robert E. Lee's strike into Pennsylvania was repulsed at Gettysburg, Pennsylvania, despite Pickett's famous charge and other acts of valor. Southern newspapers assessed the campaign as "The Confederates did not gain a victory, neither did the enemy."
September and November left the Confederates yielding Chattanooga, Tennessee, the gateway to the lower South. For the remainder of the war, fighting was restricted to inside the South, resulting in a slow but continuous loss of territory. In early 1864, the Confederacy still controlled 53% of its population, but it withdrew further to reestablish defensive positions.
Union offensives continued with Sherman's March to the Sea to take Savannah and Grant's Wilderness Campaign to encircle Richmond and besiege Lee's army at Petersburg.
In April 1863, the C.S. Congress authorized a uniformed Volunteer Navy, many of whose members were British. The Confederacy had altogether eighteen commerce-destroying cruisers, which seriously disrupted Federal commerce at sea and increased shipping insurance rates by 900%. Commodore Tattnall again unsuccessfully attempted to break the Union blockade on the Savannah River in Georgia with an ironclad in 1863. Beginning in April 1864, the ironclad CSS Albemarle engaged Union gunboats for six months on the Roanoke River in North Carolina. The Federals closed Mobile Bay by sea-based amphibious assault in August, ending Gulf coast trade east of the Mississippi River. In December, the Battle of Nashville ended Confederate operations in the western theater.
Large numbers of families relocated to safer places, usually remote rural areas, bringing along household slaves if they had any. Mary Massey argues these elite exiles introduced an element of defeatism into the southern outlook.
Collapse: 1865
The first three months of 1865 saw the Federal Carolinas Campaign, devastating a wide swath of the remaining Confederate heartland. The "breadbasket of the Confederacy" in the Great Valley of Virginia was occupied by Philip Sheridan. Union forces captured Fort Fisher in North Carolina, and Sherman finally took Charleston, South Carolina, by land attack.
The Confederacy controlled no ports, harbors or navigable rivers. Railroads were captured or had ceased operating. Its major food-producing regions had been war-ravaged or occupied. Its administration survived in only three pockets of territory holding only one-third of its population. Its armies were defeated or disbanding. At the February 1865 Hampton Roads Conference with Lincoln, senior Confederate officials rejected his invitation to restore the Union with compensation for emancipated slaves. The three pockets of unoccupied Confederacy were southern Virginia – North Carolina, central Alabama – Florida, and Texas; the latter two areas held out less from any notion of resistance than from the disinterest of Federal forces in occupying them. The Davis policy was independence or nothing, while Lee's army was wracked by disease and desertion, barely holding the trenches defending Jefferson Davis' capital.
The Confederacy's last remaining blockade-running port, Wilmington, North Carolina, was lost. When the Union broke through Lee's lines at Petersburg, Richmond fell immediately. Lee surrendered a remnant of 50,000 from the Army of Northern Virginia at Appomattox Court House, Virginia, on April 9, 1865. "The Surrender" marked the end of the Confederacy. The CSS Stonewall sailed from Europe to break the Union blockade in March; on making Havana, Cuba, it surrendered. Some high officials escaped to Europe, but President Davis was captured on May 10; all remaining Confederate land forces surrendered by June 1865. The U.S. Army took control of the Confederate areas without post-surrender insurgency or guerrilla warfare against them, but peace was subsequently marred by a great deal of local violence, feuding and revenge killings. The last Confederate military unit, the commerce raider CSS Shenandoah, surrendered on November 6, 1865, in Liverpool.
Historian Gary Gallagher concluded that the Confederacy capitulated in early 1865 because northern armies crushed "organized southern military resistance".
The Confederacy's population, soldier and civilian, had suffered material hardship and social disruption. They had expended and extracted a profusion of blood and treasure until collapse; "the end had come". Jefferson Davis's assessment in 1890 was, "With the capture of the capital, the dispersion of the civil authorities, the surrender of the armies in the field, and the arrest of the President, the Confederate States of America disappeared ... their history henceforth became a part of the history of the United States."
Postwar history
Amnesty and treason issue
When the war ended, over 14,000 Confederates petitioned President Johnson for a pardon; he was generous in giving them out. He issued a general amnesty to all Confederate participants in the "late Civil War" in 1868. Congress had passed additional Amnesty Acts in May 1866 with restrictions on office holding, and the Amnesty Act in May 1872 lifted those restrictions. There was a great deal of discussion in 1865 about bringing treason trials, especially against Jefferson Davis. There was no consensus in President Johnson's cabinet, and no one was charged with treason. An acquittal of Davis would have been humiliating for the government. Davis was indicted for treason but never tried; he was released from prison on bail in May 1867. The amnesty of December 25, 1868, by President Johnson eliminated any possibility of Jefferson Davis (or anyone else associated with the Confederacy) standing trial for treason.
Henry Wirz, the commandant of a notorious prisoner-of-war camp near Andersonville, Georgia, was tried and convicted by a military court and executed on November 10, 1865. The charges against him involved conspiracy and cruelty, not treason.
The U.S. government began a decade-long process known as Reconstruction which attempted to resolve the political and constitutional issues of the Civil War. The priorities were: to guarantee that Confederate nationalism and slavery were ended; and to ratify and enforce the Thirteenth Amendment, which outlawed slavery; the Fourteenth, which guaranteed dual U.S. and state citizenship to all native-born residents, regardless of race; and the Fifteenth, which made it illegal to deny the right to vote because of race.
The Compromise of 1877 ended Reconstruction in the former Confederate states. Federal troops were withdrawn from the South, where conservative white Democrats had already regained political control of state governments, often through extreme violence and fraud to suppress black voting. The prewar South had many rich areas; the war left the entire region economically devastated by military action, ruined infrastructure, and exhausted resources. Still dependent on an agricultural economy and resisting investment in infrastructure, it remained dominated by the planter elite into the next century. Confederate veterans had been temporarily disenfranchised by Reconstruction policy, and Democrat-dominated legislatures passed new constitutions and amendments that excluded most blacks and many poor whites. This exclusion and a weakened Republican Party remained the norm until the Voting Rights Act of 1965. The Solid South of the early 20th century did not achieve national levels of prosperity until long after World War II.
Texas v. White
In Texas v. White, the United States Supreme Court ruled – by a 5–3 majority – that Texas had remained a state ever since it first joined the Union, despite claims that it joined the Confederate States of America.
In this case, the court held that the Constitution did not permit a state to unilaterally secede from the United States. Further, it held that the ordinances of secession, and all the acts of the legislatures within seceding states intended to give effect to such ordinances, were "absolutely null" under the Constitution. This case settled the law that applied to all questions regarding state legislation during the war. Furthermore, it decided one of the "central constitutional questions" of the Civil War: the Union is perpetual and indestructible, as a matter of constitutional law. In declaring that no state could leave the Union, "except through revolution or through consent of the States", it was "explicitly repudiating the position of the Confederate states that the United States was a voluntary compact between sovereign states".
Theories regarding the Confederacy's demise
"Died of states' rights"
Historian Frank Lawrence Owsley argued that the Confederacy "died of states' rights". The central government was denied requisitioned soldiers and money by governors and state legislatures because they feared that Richmond would encroach on the rights of the states. Georgia's governor Joseph Brown warned of a secret conspiracy by Jefferson Davis to destroy states' rights and individual liberty. The first conscription act in North America, authorizing Davis to draft soldiers, was said to be the "essence of military despotism". Vice President Alexander H. Stephens feared losing the very form of republican government. Allowing President Davis to threaten "arbitrary arrests" to draft hundreds of governor-appointed "bomb-proof" bureaucrats conferred "more power than the English Parliament had ever bestowed on the king. History proved the dangers of such unchecked authority." The abolition of draft exemptions for newspaper editors was interpreted as an attempt by the Confederate government to muzzle presses such as the Raleigh (North Carolina) Standard, to control elections and to suppress the peace meetings there. As Rable concludes, "For Stephens, the essence of patriotism, the heart of the Confederate cause, rested on an unyielding commitment to traditional rights" without considerations of military necessity, pragmatism or compromise.
In 1863, Governor Pendleton Murrah of Texas determined that state troops were required for defense against Plains Indians and Union forces that might attack from Kansas. He refused to send his soldiers to the East. Governor Zebulon Vance of North Carolina showed intense opposition to conscription, limiting recruitment success. Vance's faith in states' rights drove him into repeated, stubborn opposition to the Davis administration.
Despite political differences within the Confederacy, no national political parties were formed because they were seen as illegitimate. "Anti-partyism became an article of political faith." Without a system of political parties building alternate sets of national leaders, electoral protests tended to be narrowly state-based, "negative, carping and petty". The 1863 mid-term elections became mere expressions of futile and frustrated dissatisfaction. According to historian David M. Potter, the lack of a functioning two-party system caused "real and direct damage" to the Confederate war effort since it prevented the formulation of any effective alternatives to the conduct of the war by the Davis administration.
"Died of Davis"
The enemies of President Davis proposed that the Confederacy "died of Davis".
He was unfavorably compared to George Washington by critics such as Edward Alfred Pollard, editor of the most influential newspaper in the Confederacy, the Richmond (Virginia) Examiner. E. Merton Coulter summarizes, "The American Revolution had its Washington; the Southern Revolution had its Davis ... one succeeded and the other failed." Beyond the early honeymoon period, Davis was never popular. He unwittingly caused much internal dissension from early on. His ill health and temporary bouts of blindness disabled him for days at a time.
Coulter, viewed by today's historians as a Confederate apologist, says Davis was heroic and his will was indomitable. But his "tenacity, determination, and will power" stirred up lasting opposition from enemies that Davis could not shake. He failed to overcome "petty leaders of the states" who made the term "Confederacy" into a label for tyranny and oppression, preventing the "Stars and Bars" from becoming a symbol of larger patriotic service and sacrifice. Instead of campaigning to develop nationalism and gain support for his administration, he rarely courted public opinion, assuming an aloofness, "almost like an Adams".
Escott argues that Davis was unable to mobilize Confederate nationalism in support of his government effectively, and especially failed to appeal to the small farmers who comprised the bulk of the population. In addition to the problems caused by states' rights, Escott also emphasizes that the widespread opposition to any strong central government, combined with the vast difference in wealth between the slave-owning class and the small farmers, created insoluble dilemmas when Confederate survival presupposed a strong central government backed by a united populace. The prewar claim that white solidarity was necessary to provide a unified Southern voice in Washington no longer held. Davis failed to build a network of supporters who would speak up when he came under criticism, and he repeatedly alienated governors and other state-based leaders by demanding centralized control of the war effort.
According to Coulter, Davis was not an efficient administrator: he attended to too many details, protected his friends after their failures were obvious, and spent too much time on military affairs at the expense of his civic responsibilities. Coulter concludes he was not the ideal leader for the Southern Revolution, but he showed "fewer weaknesses than any other" contemporary character available for the role. Robert E. Lee's assessment of Davis as president was, "I knew of none that could have done as well."
Government and politics
Political divisions
Constitution
The Southern leaders met in Montgomery, Alabama, to write their constitution. Much of the Confederate States Constitution replicated the United States Constitution verbatim, but it contained several explicit protections of the institution of slavery, including provisions for the recognition and protection of slavery in any territory of the Confederacy. It maintained the ban on international slave-trading, though it made the ban's application explicit to "Negroes of the African race", in contrast to the U.S. Constitution's reference to "such Persons as any of the States now existing shall think proper to admit". It protected the existing internal trade of slaves among slaveholding states.
In certain areas, the Confederate Constitution gave greater powers to the states (or curtailed the powers of the central government more) than the U.S.
Constitution of the time did, but in other areas, the states lost rights they had under the U.S. Constitution. Although the Confederate Constitution, like the U.S. Constitution, contained a commerce clause, the Confederate version prohibited the central government from using revenues collected in one state to fund internal improvements in another state. The Confederate Constitution's equivalent to the U.S. Constitution's general welfare clause prohibited protective tariffs (but allowed tariffs for providing domestic revenue), and spoke of "carry[ing] on the Government of the Confederate States" rather than providing for the "general welfare". State legislatures had the power to impeach officials of the Confederate government in some cases. On the other hand, the Confederate Constitution contained a Necessary and Proper Clause and a Supremacy Clause that essentially duplicated the respective clauses of the U.S. Constitution. The Confederate Constitution also incorporated each of the 12 amendments to the U.S. Constitution that had been ratified up to that point.
The Confederate Constitution did not specifically include a provision allowing states to secede; the Preamble spoke of each state "acting in its sovereign and independent character" but also of the formation of a "permanent federal government". During the debates on drafting the Confederate Constitution, one proposal would have allowed states to secede from the Confederacy. The proposal was tabled, with only the South Carolina delegates voting in favor of considering the motion. The Confederate Constitution also explicitly denied states the power to bar slaveholders from other parts of the Confederacy from bringing their slaves into any state of the Confederacy or to interfere with the property rights of slave owners traveling between different parts of the Confederacy. In contrast with the secular language of the United States Constitution, the Confederate Constitution overtly asked God's blessing ("... invoking the favor and guidance of Almighty God ..."). Some historians have referred to the Confederacy as a form of Herrenvolk democracy.
Executive
The Montgomery Convention to establish the Confederacy and its executive met on February 4, 1861. Each state as a sovereignty had one vote, with the same delegation size as it held in the U.S. Congress, and generally 41 to 50 members attended. Offices were "provisional", limited to a term not to exceed one year. One name was placed in nomination for president, one for vice president. Both were elected unanimously, 6–0.
Jefferson Davis was elected provisional president. His U.S. Senate resignation speech had greatly impressed listeners with its clear rationale for secession and its plea for a peaceful departure from the Union to independence. Although he had made it known that he wanted to be commander-in-chief of the Confederate armies, when elected he assumed the office of Provisional President. Three candidates for provisional vice president were under consideration the night before the February 9 election. All were from Georgia, and the various delegations, meeting in different places, determined that two of them would not do, so Alexander H. Stephens was elected unanimously provisional Vice President, though with some privately held reservations. Stephens was inaugurated February 11, Davis February 18.
Davis and Stephens were elected president and vice president, unopposed, on November 6, 1861. They were inaugurated on February 22, 1862. Coulter stated, "No president of the U.S.
ever had a more difficult task." Washington was inaugurated in peacetime. Lincoln inherited an established government of long standing. The creation of the Confederacy was accomplished by men who saw themselves as fundamentally conservative. Although they referred to their "Revolution", it was in their eyes more a counter-revolution against changes away from their understanding of U.S. founding documents. In his inauguration speech, Davis explained that the Confederacy was not a revolution in the French sense, but a transfer of rule. The Montgomery Convention had assumed all the laws of the United States until superseded by the Confederate Congress.
The Permanent Constitution provided for a President of the Confederate States of America, elected to serve a six-year term but without the possibility of re-election. Unlike the United States Constitution, the Confederate Constitution gave the president the ability to subject a bill to a line item veto, a power also held by some state governors. The Confederate Congress could overturn either the general or the line item vetoes with the same two-thirds votes required in the U.S. Congress. In addition, appropriations not specifically requested by the executive branch required passage by a two-thirds vote in both houses of Congress. The only person to serve as president was Jefferson Davis, as the Confederacy was defeated before the completion of his term.
Administration and cabinet
Legislative
The only two "formal, national, functioning, civilian administrative bodies" in the Civil War South were the Jefferson Davis administration and the Confederate Congresses. The Confederacy was begun by the Provisional Congress in Convention at Montgomery, Alabama on February 28, 1861. The Provisional Confederate Congress was a unicameral assembly; each state received one vote.
The Permanent Confederate Congress was elected and began its first session February 18, 1862. The Permanent Congress for the Confederacy followed the United States forms with a bicameral legislature. The Senate had two members per state, twenty-six senators in all. The House numbered 106 representatives apportioned by free and slave populations within each state. Two Congresses sat in six sessions until March 18, 1865.
The political influences of the civilian and soldier vote, and of appointed representatives, reflected the divisions of political geography of a diverse South. These in turn changed over time relative to Union occupation and disruption, the war's impact on the local economy, and the course of the war. Without political parties, key candidate identification related to adopting secession before or after Lincoln's call for volunteers to retake Federal property. Previous party affiliation played a part in voter selection, predominantly secessionist Democrat or unionist Whig. The absence of political parties made individual roll call voting all the more important, as the Confederate "freedom of roll-call voting [was] unprecedented in American legislative history." Key issues throughout the life of the Confederacy related to (1) suspension of habeas corpus, (2) military concerns such as control of state militia, conscription and exemption, (3) economic and fiscal policy including impressment of slaves, goods and scorched earth, and (4) support of the Jefferson Davis administration in its foreign affairs and negotiating peace.
Provisional Congress
For the first year, the unicameral Provisional Confederate Congress functioned as the Confederacy's legislative branch.
President of the Provisional Congress
Howell Cobb, Sr.
of Georgia, February 4, 1861 – February 17, 1862
Presidents pro tempore of the Provisional Congress
Robert Woodward Barnwell of South Carolina, February 4, 1861
Thomas Stanhope Bocock of Virginia, December 10–21, 1861 and January 7–8, 1862
Josiah Abigail Patterson Campbell of Mississippi, December 23–24, 1861 and January 6, 1862
Sessions of the Confederate Congress
Provisional Congress
1st Congress
2nd Congress
Tribal Representatives to Confederate Congress
Elias Cornelius Boudinot 1862–65, Cherokee
Samuel Benton Callahan (years unknown), Creek, Seminole
Burton Allen Holder 1864–65, Chickasaw
Robert McDonald Jones 1863–65, Choctaw
Judicial
The Confederate Constitution outlined a judicial branch of the government, but the ongoing war and resistance from states-rights advocates, particularly on the question of whether it would have appellate jurisdiction over the state courts, prevented the creation or seating of the "Supreme Court of the Confederate States"; the state courts generally continued to operate as they had done, simply recognizing the Confederate States as the national government.
Confederate district courts were authorized by Article III, Section 1, of the Confederate Constitution, and President Davis appointed judges within the individual states of the Confederate States of America. In many cases, the same U.S. federal district judges were appointed as Confederate States district judges. Confederate district courts began reopening in early 1861, handling many of the same types of cases as before. Prize cases, in which Union ships were captured by the Confederate Navy or raiders and sold through court proceedings, were heard until the blockade of southern ports made this impossible. After a Sequestration Act was passed by the Confederate Congress, the Confederate district courts heard many cases in which enemy aliens (typically Northern absentee landlords owning property in the South) had their property sequestered (seized) by Confederate receivers. When the matter came before the Confederate court, the property owner could not appear because he was unable to travel across the front lines between Union and Confederate forces. Thus, the district attorney won the case by default, the property was typically sold, and the money was used to further the Southern war effort. Eventually, because there was no Confederate Supreme Court, sharp attorneys like South Carolina's Edward McCrady began filing appeals. This prevented their clients' property from being sold until a supreme court could be constituted to hear the appeal, which never occurred. Where Federal troops gained control over parts of the Confederacy and re-established civilian government, U.S. district courts sometimes resumed jurisdiction.
Supreme Court – not established.
District Courts – judges
Alabama – William Giles Jones 1861–1865
Arkansas – Daniel Ringo 1861–1865
Florida – Jesse J. Finley 1861–1862
Georgia – Henry R. Jackson 1861, Edward J. Harden 1861–1865
Louisiana – Edwin Warren Moise 1861–1865
Mississippi – Alexander Mosby Clayton 1861–1865
North Carolina – Asa Biggs 1861–1865
South Carolina – Andrew G. Magrath 1861–1864, Benjamin F. Perry 1865
Tennessee – West H. Humphreys 1861–1865
Texas-East – William Pinckney Hill 1861–1865
Texas-West – Thomas J. Devine 1861–1865
Virginia-East – James D. Halyburton 1861–1865
Virginia-West – John W.
Brockenbrough 1861–1865
Post Office
When the Confederacy was formed and its seceding states broke from the Union, it was at once confronted with the arduous task of providing its citizens with a mail delivery system, and, in the midst of the American Civil War, the newly formed Confederacy created and established the Confederate Post Office. One of the first undertakings in establishing the Post Office was the appointment of John H. Reagan to the position of Postmaster General by Jefferson Davis in 1861, making him the first Postmaster General of the Confederate Post Office as well as a member of Davis' presidential cabinet. Writing in 1906, historian Walter Flavius McCaleb praised Reagan's "energy and intelligence... in a degree scarcely matched by any of his associates".
When the war began, the U.S. Post Office still delivered mail from the secessionist states for a brief period of time. Mail that was postmarked after the date of a state's admission into the Confederacy through May 31, 1861, and bearing U.S. postage, was still delivered. After this time, private express companies still managed to carry some of the mail across enemy lines. Later, mail that crossed lines had to be sent by 'Flag of Truce' and was allowed to pass at only two specific points. Mail sent from the Confederacy to the U.S. was received, opened and inspected at Fortress Monroe on the Virginia coast before being passed on into the U.S. mail stream. Mail sent from the North to the South passed at City Point, also in Virginia, where it was likewise inspected before being sent on.
With the chaos of the war, a working postal system was more important than ever for the Confederacy. The Civil War had divided family members and friends, and consequently letter writing increased dramatically across the entire divided nation, especially to and from the men who were away serving in an army. Mail delivery was also important for the Confederacy for myriad business and military reasons. Because of the Union blockade, basic supplies were always in demand, so getting mailed correspondence out of the country to suppliers was imperative to the successful operation of the Confederacy. Volumes of material have been written about the blockade runners who evaded Union ships on blockade patrol, usually at night, and who moved cargo and mail in and out of the Confederate States throughout the course of the war. Of particular interest to students and historians of the American Civil War is prisoner-of-war mail and blockade mail, as these items were often involved with a variety of military and other wartime activities. The postal history of the Confederacy, along with surviving Confederate mail, has helped historians document the various people, places and events that were involved in the American Civil War as it unfolded.
Civil liberties
The Confederacy actively used the army to arrest people suspected of loyalty to the United States. Historian Mark Neely found 4,108 names of men arrested and estimated a much larger total. The Confederacy arrested pro-Union civilians in the South at about the same rate as the Union arrested pro-Confederate civilians in the North. Neely argues:
Economy
Slaves
Across the South, widespread rumors alarmed the whites by predicting the slaves were planning some sort of insurrection. Patrols were stepped up. The slaves did become increasingly independent, and resistant to punishment, but historians agree there were no insurrections.
In the invaded areas, insubordination was more the norm than was loyalty to the old master; Bell Wiley says, "It was not disloyalty, but the lure of freedom." Many slaves became spies for the North, and large numbers ran away to federal lines. Lincoln's Emancipation Proclamation, an executive order of the U.S. government issued on January 1, 1863, changed the legal status of three million slaves in designated areas of the Confederacy from "slave" to "free". The long-term effect was that the Confederacy could not preserve the institution of slavery, and lost the use of the core element of its plantation labor force. Slaves were legally freed by the Proclamation, and became free in fact by escaping to federal lines or through the advances of federal troops. Over 200,000 freed slaves were hired by the federal army as teamsters, cooks, launderers and laborers, and eventually as soldiers. Plantation owners, realizing that emancipation would destroy their economic system, sometimes moved their slaves as far as possible out of reach of the Union army. Though the concept was promoted within certain circles of the Union hierarchy during and immediately following the war, no program of reparations for freed slaves was ever attempted. Unlike other Western countries, such as Britain and France, the U.S. government never paid compensation to Southern slave owners for their "lost property".
Political economy
Most whites were subsistence farmers who traded their surpluses locally. The plantations of the South, with white ownership and an enslaved labor force, produced substantial wealth from cash crops. The South supplied two-thirds of the world's cotton, which was in high demand for textiles, along with tobacco, sugar, and naval stores (such as turpentine). These raw materials were exported to factories in Europe and the Northeast. Planters reinvested their profits in more slaves and fresh land, as cotton and tobacco depleted the soil. There was little manufacturing or mining; shipping was controlled by non-southerners.
The plantations that enslaved over three million black people were the principal source of wealth. Most were concentrated in "black belt" plantation areas (because few white families in the poor regions owned slaves). For decades, there had been widespread fear of slave revolts. During the war, extra men were assigned to "home guard" patrol duty and governors sought to keep militia units at home for protection. Historian William Barney reports, "no major slave revolts erupted during the Civil War." Nevertheless, slaves took the opportunity to enlarge their sphere of independence, and when Union forces were nearby, many ran off to join them.
Slave labor was applied in industry in a limited way in the Upper South and in a few port cities. One reason for the regional lag in industrial development was top-heavy income distribution. Mass production requires mass markets, and slaves living in small cabins, using self-made tools and outfitted each year with one suit of work clothes of inferior fabric, did not generate the consumer demand needed to sustain local manufactures of any description in the way a mechanized family farm of free labor in the North did. The Southern economy was "pre-capitalist" in that slaves were put to work in the largest revenue-producing enterprises, not free labor markets. That labor system, as practiced in the American South, encompassed paternalism, whether abusive or indulgent, which meant that labor was managed with considerations apart from productivity.
Approximately 85% of the white populations of both North and South lived on family farms, both regions were predominantly agricultural, and mid-century industry in both was mostly domestic. But the Southern economy was pre-capitalist in its overwhelming reliance on the agriculture of cash crops to produce wealth, while the great majority of farmers fed themselves and supplied a small local market. Southern cities and industries grew faster than ever before, but the thrust of the country's exponential growth elsewhere was toward urban industrial development along transportation systems of canals and railroads. The South was following the dominant currents of the American economic mainstream, but at a "great distance", as it lagged in the all-weather modes of transportation that brought cheaper, speedier freight shipment and forged new, expanding inter-regional markets.
A third respect in which the Southern economy was pre-capitalist relates to the cultural setting. The South and southerners did not adopt the work ethic, nor the habits of thrift, that marked the rest of the country. The South had access to the tools of capitalism, but it did not adopt its culture. The Southern Cause as a national economy in the Confederacy was grounded in "slavery and race, planters and patricians, plain folk and folk culture, cotton and plantations".
National production
The Confederacy started its existence as an agrarian economy exporting cotton and, to a lesser extent, tobacco and sugarcane to a world market. Local food production included grains, hogs, cattle, and gardens. The cash came from exports, but the Southern people spontaneously stopped exports in early 1861 to hasten the impact of "King Cotton", a failed strategy to coerce international support for the Confederacy through its cotton exports. When the blockade was announced, commercial shipping practically ended (the ships could not get insurance), and only a trickle of supplies came via blockade runners. The cutoff of exports was an economic disaster for the South, rendering useless its most valuable properties, its plantations and their enslaved workers. Many planters kept growing cotton, which piled up everywhere, but most turned to food production. All across the region, the lack of repair and maintenance wasted away the physical assets.
The eleven states had produced $155 million in manufactured goods in 1860, chiefly from local grist-mills and from lumber, processed tobacco, cotton goods and naval stores such as turpentine. The main industrial areas were border cities such as Baltimore, Wheeling, Louisville and St. Louis, which were never under Confederate control. The government did set up munitions factories in the Deep South. Combined with captured munitions and those coming via blockade runners, the armies were kept minimally supplied with weapons. The soldiers suffered from reduced rations, lack of medicines, and the growing shortages of uniforms, shoes and boots. Shortages were much worse for civilians, and the prices of necessities steadily rose.
The Confederacy adopted a tariff or tax on imports of 15%, and imposed it on all imports from other countries, including the United States. The tariff mattered little; the Union blockade minimized commercial traffic through the Confederacy's ports, and very few people paid taxes on goods smuggled from the North. The Confederate government in its entire history collected only $3.5 million in tariff revenue.
The lack of adequate financial resources led the Confederacy to finance the war through printing money, which led to high inflation. The Confederacy underwent an economic revolution of centralization and standardization, but it was too little too late, as its economy was systematically strangled by blockade and raids.
Transportation systems
In peacetime, the South's extensive and connected systems of navigable rivers and coastal access allowed for cheap and easy transportation of agricultural products. The railroad system in the South had developed as a supplement to the navigable rivers to enhance the all-weather shipment of cash crops to market. Railroads tied plantation areas to the nearest river or seaport, and so made supply more dependable, lowered costs and increased profits. In the event of invasion, the vast geography of the Confederacy made logistics difficult for the Union. Wherever Union armies invaded, they assigned many of their soldiers to garrison captured areas and to protect rail lines.
At the onset of the Civil War, the South had a disjointed rail network plagued by changes in track gauge and a lack of interchange. Locomotives and freight cars had fixed axles and could not use tracks of different gauges (widths). Railroads of different gauges leading to the same city required all freight to be off-loaded onto wagons for transport to the connecting railroad station, where it had to await freight cars and a locomotive before proceeding. Centers requiring off-loading included Vicksburg, New Orleans, Montgomery, Wilmington and Richmond. In addition, most rail lines led from coastal or river ports to inland cities, with few lateral railroads. Because of this design limitation, the relatively primitive railroads of the Confederacy could not make up for the Union naval blockade of the South's crucial intra-coastal and river routes.
The Confederacy had no plan to expand, protect or encourage its railroads. Southerners' refusal to export the cotton crop in 1861 left railroads bereft of their main source of income. Many lines had to lay off employees; many critical skilled technicians and engineers were permanently lost to military service. In the early years of the war, the Confederate government had a hands-off approach to the railroads. Only in mid-1863 did the Confederate government initiate a national policy, and it was confined solely to aiding the war effort. Railroads came under the de facto control of the military. In contrast, the U.S. Congress had authorized military administration of Union-controlled railroad and telegraph systems in January 1862, imposed a standard gauge, and built railroads into the South using that gauge. Confederate armies successfully reoccupying territory could not be resupplied directly by rail as they advanced. The C.S. Congress formally authorized military administration of railroads in February 1865.
In the last year before the end of the war, the Confederate railroad system stood permanently on the verge of collapse. There was no new equipment, and raids on both sides systematically destroyed key bridges, as well as locomotives and freight cars. Spare parts were cannibalized; feeder lines were torn up to get replacement rails for trunk lines, and rolling stock wore out through heavy use.
Horses and mules
The Confederate army experienced a persistent shortage of horses and mules, and requisitioned them with dubious promissory notes given to local farmers and breeders. Union forces paid in real money and found ready sellers in the South.
Both armies needed horses for cavalry and for artillery. Mules pulled the wagons. The supply was undermined by an unprecedented epidemic of glanders, a fatal disease that baffled veterinarians. After 1863 the invading Union forces had a policy of shooting all the local horses and mules that they did not need, in order to keep them out of Confederate hands. The Confederate armies and farmers experienced a growing shortage of horses and mules, which hurt the Southern economy and the war effort. The South lost half of its 2.5 million horses and mules; many farmers ended the war with none left. Army horses were used up by hard work, malnourishment, disease and battle wounds; they had a life expectancy of about seven months.
Financial instruments
Both the individual Confederate states and later the Confederate government printed Confederate States of America dollars as paper currency in various denominations, with a total face value of $1.5 billion. Much of it was signed by Treasurer Edward C. Elmore. Inflation became rampant as the paper money depreciated and eventually became worthless. The state governments and some localities printed their own paper money, adding to the runaway inflation. Many bills still exist, although in recent years counterfeit copies have proliferated.
The Confederate government initially wanted to finance its war mostly through tariffs on imports, export taxes, and voluntary donations of gold. After the spontaneous imposition of an embargo on cotton sales to Europe in 1861, these sources of revenue dried up and the Confederacy increasingly turned to issuing debt and printing money to pay for war expenses. Confederate politicians were worried about angering the general population with hard taxes. A tax increase might disillusion many Southerners, so the Confederacy resorted to printing more money. As a result, inflation increased and remained a problem for the southern states throughout the rest of the war. By April 1863, for example, the cost of flour in Richmond had risen to $100 a barrel and housewives were rioting.
The Confederate government took over the three national mints in its territory: the Charlotte Mint in North Carolina, the Dahlonega Mint in Georgia, and the New Orleans Mint in Louisiana. During 1861 all of these facilities produced small amounts of gold coinage, and the New Orleans Mint produced half dollars as well. Since the mints used the current dies on hand, all appear to be U.S. issues. However, by comparing slight differences in the dies, specialists can distinguish 1861-O half dollars that were minted under the authority of the U.S. government, the State of Louisiana, or finally the Confederate States. Unlike the gold coins, this issue was produced in significant numbers (over 2.5 million) and is inexpensive in lower grades, although fakes have been made for sale to the public. However, before the New Orleans Mint ceased operation in May 1861, the Confederate government used its own reverse design to strike four half dollars, making them one of the great rarities of American numismatics. A lack of silver and gold precluded further coinage. The Confederacy apparently also experimented with issuing one-cent coins, although only 12 were produced by a jeweler in Philadelphia, who was afraid to send them to the South. Like the half dollars, copies were later made as souvenirs.
U.S. coinage was hoarded and did not have any general circulation. U.S.
coinage was admitted as legal tender up to $10, as were British sovereigns, French Napoleons and Spanish and Mexican doubloons, at a fixed rate of exchange. Confederate money was paper and postage stamps.
Food shortages and riots
By mid-1861, the Union naval blockade virtually shut down the export of cotton and the import of manufactured goods. Food that formerly came overland was cut off. As the ones who remained at home, women had to make do with the lack of food and supplies. They cut back on purchases, used old materials, and planted more flax and peas to provide clothing and food. They used ersatz substitutes when possible, but there was no real coffee, only okra and chicory substitutes. The households were severely hurt by inflation in the cost of everyday items like flour, and by the shortages of food, fodder for the animals, and medical supplies for the wounded.
State governments requested that planters grow less cotton and more food, but most refused. When cotton prices soared in Europe, expectations were that Europe would soon intervene to break the blockade and make them rich, but Europe remained neutral. The Georgia legislature imposed cotton quotas, making it a crime to grow an excess. But food shortages only worsened, especially in the towns.
The overall decline in food supplies, made worse by the inadequate transportation system, led to serious shortages and high prices in urban areas. When bacon reached a dollar a pound in 1863, the poor women of Richmond, Atlanta and many other cities began to riot; they broke into shops and warehouses to seize food, angry at ineffective state relief efforts, speculators, and merchants. As wives and widows of soldiers, they were hurt by the inadequate welfare system.
Devastation by 1865
By the end of the war, deterioration of the Southern infrastructure was widespread. The number of civilian deaths is unknown. Every Confederate state was affected, but most of the war was fought in Virginia and Tennessee, while Texas and Florida saw the least military action. Part of the damage was caused by direct military action, but most was caused by lack of repairs and upkeep, and by deliberately using up resources. Historians have recently estimated how much of the devastation was caused by military action. Paul Paskoff calculates that Union military operations were conducted in 56% of 645 counties in nine Confederate states (excluding Texas and Florida). These counties contained 63% of the 1860 white population and 64% of the slaves. By the time the fighting took place, undoubtedly some people had fled to safer areas, so the exact population exposed to war is unknown.
The eleven Confederate states in the 1860 United States Census had 297 towns and cities with 835,000 people; of these, 162 places with 681,000 people were at one point occupied by Union forces. Eleven were destroyed or severely damaged by war action, including Atlanta (with an 1860 population of 9,600), Charleston, Columbia, and Richmond (with prewar populations of 40,500, 8,100, and 37,900, respectively); the eleven contained 115,900 people in the 1860 census, or 14% of the urban South. Historians have not estimated what their actual population was when Union forces arrived. The number of people (as of 1860) who lived in the destroyed towns represented just over 1% of the Confederacy's 1860 population. In addition, 45 court houses were burned (out of 830). The South's agriculture was not highly mechanized.
The value of farm implements and machinery in the 1860 Census was $81 million; by 1870, there was 40% less, worth just $48 million. Many old tools had broken through heavy use; new tools were rarely available; even repairs were difficult.
The economic losses affected everyone. Banks and insurance companies were mostly bankrupt. Confederate currency and bonds were worthless. The billions of dollars invested in slaves vanished. Most debts were also left behind. Most farms were intact, but most had lost their horses, mules and cattle; fences and barns were in disrepair. Paskoff shows the loss of farm infrastructure was about the same whether or not fighting took place nearby. The loss of infrastructure and productive capacity meant that rural widows throughout the region faced not only the absence of able-bodied men, but a depleted stock of material resources that they could manage and operate themselves. During four years of warfare, disruption, and blockades, the South used up about half its capital stock. The North, by contrast, absorbed its material losses so effortlessly that it appeared richer at the end of the war than at the beginning.
The rebuilding took years and was hindered by the low price of cotton after the war. Outside investment was essential, especially in railroads. One historian has summarized the collapse of the transportation infrastructure needed for economic recovery:
Effect on women and families
More than 250,000 Confederate soldiers died during the war. Some widows abandoned their family farms and merged into the households of relatives, or even became refugees living in camps with high rates of disease and death. In the Old South, being an "old maid" was something of an embarrassment to the woman and her family, but after the war, it became almost a norm. Some women welcomed the freedom of not having to marry. Divorce, while never fully accepted, became more common. The concept of the "New Woman" emerged – she was self-sufficient and independent, and stood in sharp contrast to the "Southern Belle" of antebellum lore.
National flags
The first official flag of the Confederate States of America – called the "Stars and Bars" – originally had seven stars, representing the first seven states that initially formed the Confederacy. As more states joined, more stars were added, until the total was 13 (two stars were added for the divided states of Kentucky and Missouri). During the First Battle of Bull Run (First Manassas), it sometimes proved difficult to distinguish the Stars and Bars from the Union flag. To rectify the situation, a separate "Battle Flag" was designed for use by troops in the field. Also known as the "Southern Cross", it spawned many variations of its original square configuration. Although it was never officially adopted by the Confederate government, the popularity of the Southern Cross among both soldiers and the civilian population was a primary reason why it was made the main color feature when a new national flag was adopted in 1863. This new standard – known as the "Stainless Banner" – consisted of a lengthened white field with a Battle Flag canton. This flag too had its problems when used in military operations as, on a windless day, it could easily be mistaken for a flag of truce or surrender. Thus, in 1865, a modified version of the Stainless Banner was adopted. This final national flag of the Confederacy kept the Battle Flag canton, but shortened the white field and added a vertical red bar to the fly end.
Because of its depiction in 20th-century popular media, many people consider the rectangular battle flag with the dark blue bars to be synonymous with "the Confederate Flag", but this flag was never adopted as a Confederate national flag. The "Confederate Flag" has a color scheme similar to that of the most common Battle Flag design, but is rectangular, not square. The "Confederate Flag" is a highly recognizable symbol of the South in the United States today, and continues to be a controversial icon. Southern Unionism Unionism, meaning opposition to the Confederacy, was strong in certain areas within the Confederate States. Southern Unionists (white Southerners who were opposed to the Confederacy) were widespread in the mountain regions of Appalachia and the Ozarks. Unionists, led by Parson Brownlow and Senator Andrew Johnson, took control of eastern Tennessee in 1863. Unionists also moved to take control of western Virginia, but never effectively held more than half of the counties that formed the new state of West Virginia. Union forces captured parts of coastal North Carolina, and at first were largely welcomed by local unionists. That view would change for some, as the occupiers became perceived as oppressive, callous, radical, and favorable to the Freedmen. Occupiers pillaged, freed slaves, and evicted those who refused to swear loyalty oaths to the Union. Support for the Confederacy was also low in certain areas of Texas, where Unionism persisted. Claude Elliott estimates that only a third of the population actively supported the Confederacy. Many Unionists supported the Confederacy after the war began, but many others clung to their Unionism throughout the war, especially in the northern counties, the German districts of the Texas Hill Country, and majority-Mexican areas. According to Ernest Wallace: "This account of a dissatisfied Unionist minority, although historically essential, must be kept in its proper perspective, for throughout the war the overwhelming majority of the people zealously supported the Confederacy ..." Randolph B. Campbell states, "In spite of terrible losses and hardships, most Texans continued throughout the war to support the Confederacy as they had supported secession". Dale Baum, in his analysis of Texas politics of the era, counters: "This idea of a Confederate Texas united politically against northern adversaries was shaped more by nostalgic fantasies than by wartime realities." He characterizes Texas Civil War history as "a morose story of intragovernmental rivalries coupled with wide-ranging disaffection that prevented effective implementation of state wartime policies". In Texas, local officials harassed and murdered Unionists and Germans during the Civil War. In Cooke County, Texas, 150 suspected Unionists were arrested; 25 were lynched without trial and 40 more were hanged after a summary trial. Draft resistance was widespread, especially among Texans of German or Mexican descent; many of the latter left for Mexico. Confederate officials attempted to hunt down and kill potential draftees who had gone into hiding. Civil liberties were of small concern in both the North and South. Lincoln and Davis both took a hard line against dissent. Neely explores how the Confederacy became a virtual police state with guards and patrols all about, and a domestic passport system whereby everyone needed official permission each time they wanted to travel. Over 4,000 suspected Unionists were imprisoned in the Confederate States without trial. 
Southern Unionists were also known as Union Loyalists or Lincoln's Loyalists. Within the eleven Confederate states, states such as Tennessee (especially East Tennessee), Virginia (which included West Virginia at the time), and North Carolina were home to the largest populations of Unionists. Many areas of Southern Appalachia harbored pro-Union sentiment as well. As many as 100,000 men living in states under Confederate control would serve in the Union Army or pro-Union guerrilla groups. Although Southern Unionists came from all classes, most differed socially, culturally, and economically from the region's dominant pre-war planter class. Geography Region and climate The Confederate States of America claimed thousands of miles of coastline; thus a large part of its territory lay on the seacoast, with level and often sandy or marshy ground. Most of the interior portion consisted of arable farmland, though much was also hilly and mountainous, and the far western territories were deserts. The lower reaches of the Mississippi River bisected the country, with the western half often referred to as the Trans-Mississippi. The highest point (excluding Arizona and New Mexico) was Guadalupe Peak in Texas, at 8,751 feet (2,667 m). Climate Much of the area claimed by the Confederate States of America had a humid subtropical climate with mild winters and long, hot, humid summers. The climate and terrain varied from vast swamps (such as those in Florida and Louisiana) to semi-arid steppes and arid deserts west of longitude 100 degrees west. The subtropical climate made winters mild but allowed infectious diseases to flourish. Consequently, on both sides more soldiers died from disease than were killed in combat, a fact hardly atypical of pre-World War I conflicts. Demographics Population The United States Census of 1860 gives a picture of the overall 1860 population for the areas that had joined the Confederacy. Note that the population numbers exclude non-assimilated Indian tribes. In 1860, the areas that later formed the eleven Confederate states (and including the future West Virginia) had 132,760 free blacks, or 1.46% of the population. Males made up 49.2% of the total population and females 50.8% (whites: 48.60% male, 51.40% female; slaves: 50.15% male, 49.85% female; free blacks: 47.43% male, 52.57% female). Rural and urban population The CSA was overwhelmingly rural. Few towns had populations of more than 1,000 – the typical county seat had a population of fewer than 500. Cities were rare; of the twenty largest U.S. cities in the 1860 census, only New Orleans lay in Confederate territory – and the Union captured it in 1862. Only 13 Confederate-controlled cities ranked among the top 100 U.S. cities in 1860, most of them ports whose economic activities vanished or suffered severely in the Union blockade. The population of Richmond swelled after it became the Confederate capital, reaching an estimated 128,000 in 1864. Other Southern cities in the border slave-holding states such as Baltimore, Washington, D.C., Wheeling, Alexandria, Louisville, and St. Louis never came under the control of the Confederate government. For the most prominent cities of the Confederacy, see Atlanta in the Civil War; Charleston, South Carolina, in the Civil War; Nashville in the Civil War; New Orleans in the Civil War; Wilmington, North Carolina, in the American Civil War; and Richmond in the Civil War. Religion The CSA was overwhelmingly Protestant. 
Both free and enslaved populations identified with evangelical Protestantism. Baptists and Methodists together formed majorities of both the white and the slave population (see Black church). Freedom of religion and separation of church and state were fully ensured by Confederate laws. Church attendance was very high and chaplains played a major role in the Army. Most large denominations experienced a North–South split in the prewar era on the issue of slavery. The creation of a new country necessitated independent structures. For example, the Presbyterian Church in the United States split, with much of the new leadership provided by Joseph Ruggles Wilson (father of President Woodrow Wilson). In 1861, he organized the meeting that formed the General Assembly of the Southern Presbyterian Church and served as its chief executive for 37 years. Baptists and Methodists both broke off from their Northern coreligionists over the slavery issue, forming the Southern Baptist Convention and the Methodist Episcopal Church, South, respectively. Elites in the southeast favored the Protestant Episcopal Church in the Confederate States of America, which had reluctantly split from the Episcopal Church in 1861. Other elites were Presbyterians belonging to the 1861-founded Presbyterian Church in the United States. Catholics included an Irish working class element in coastal cities and an old French element in southern Louisiana. Other smaller and scattered religious populations included Lutherans, the Holiness movement, other Reformed groups, other Christian fundamentalists, the Stone-Campbell Restoration Movement, the Churches of Christ, the Latter Day Saint movement, Adventists, Muslims, Jews, Native American animists, deists, and irreligious people. The southern churches met the shortage of Army chaplains by sending missionaries. The Southern Baptists started in 1862 and had a total of 78 missionaries. Presbyterians were even more active, with 112 missionaries in January 1865. Other missionaries were funded and supported by the Episcopalians, Methodists, and Lutherans. One result was wave after wave of revivals in the Army. Military leaders Military leaders of the Confederacy (with their state or country of birth and highest rank) included: Robert E. Lee (Virginia) – General & General in Chief P. G. T. Beauregard (Louisiana) – General Braxton Bragg (North Carolina) – General Samuel Cooper (New York) – General Albert Sidney Johnston (Kentucky) – General Joseph E. Johnston (Virginia) – General Edmund Kirby Smith (Florida) – General Simon Bolivar Buckner, Sr. (Kentucky) – Lieutenant-General Jubal Early (Virginia) – Lieutenant-General Richard S. Ewell (Virginia) – Lieutenant-General Nathan Bedford Forrest (Tennessee) – Lieutenant-General Wade Hampton III (South Carolina) – Lieutenant-General William J. Hardee (Georgia) – Lieutenant-General A. P. Hill (Virginia) – Lieutenant-General Theophilus H. Holmes (North Carolina) – Lieutenant-General John Bell Hood (Kentucky) – Lieutenant-General (temporary General) Thomas J. "Stonewall" Jackson (Virginia) – Lieutenant-General Stephen D. Lee (South Carolina) – Lieutenant-General James Longstreet (South Carolina) – Lieutenant-General John C. Pemberton (Pennsylvania) – Lieutenant-General Leonidas Polk (North Carolina) – Lieutenant-General Alexander P. Stewart (North Carolina) – Lieutenant-General Richard Taylor (Kentucky) – Lieutenant-General (son of U.S. President Zachary Taylor) Joseph Wheeler (Georgia) – Lieutenant-General John C. 
Breckinridge (Kentucky) – Major-General & Secretary of War Richard H. Anderson (South Carolina) – Major-General (temporary Lieutenant-General) Patrick Cleburne (County Cork, Ireland) – Major-General John Brown Gordon (Georgia) – Major-General Henry Heth (Virginia) – Major-General Daniel Harvey Hill (South Carolina) – Major-General Edward Johnson (Virginia) – Major-General Joseph B. Kershaw (South Carolina) – Major-General Fitzhugh Lee (Virginia) – Major-General George Washington Custis Lee (Virginia) – Major-General William Henry Fitzhugh Lee (Virginia) – Major-General William Mahone (Virginia) – Major-General George Pickett (Virginia) – Major-General Camillus J. Polignac (France) – Major-General Sterling Price (Missouri) – Major-General Stephen Dodson Ramseur (North Carolina) – Major-General Thomas L. Rosser (Virginia) – Major-General J. E. B. Stuart (Virginia) – Major-General Earl Van Dorn (Mississippi) – Major-General John A. Wharton (Tennessee) – Major-General Edward Porter Alexander (Georgia) – Brigadier-General Francis Marion Cockrell (Missouri) – Brigadier-General Clement A. Evans (Georgia) – Brigadier-General John Hunt Morgan (Kentucky) – Brigadier-General William N. Pendleton (Virginia) – Brigadier-General Stand Watie (Georgia) – Brigadier-General (last to surrender) Lawrence Sullivan Ross (Texas) – Brigadier-General John S. Mosby, the "Grey Ghost of the Confederacy" (Virginia) – Colonel Franklin Buchanan (Maryland) – Admiral Raphael Semmes (Maryland) – Rear Admiral See also American Civil War prison camps Cabinet of the Confederate States of America Commemoration of the American Civil War Commemoration of the American Civil War on postage stamps Confederate colonies Confederate Patent Office Confederate war finance C.S.A.: The Confederate States of America History of the Southern United States Knights of the Golden Circle List of Confederate arms manufacturers List of Confederate arsenals and armories List of Confederate monuments and memorials List of treaties of the Confederate States of America List of historical separatist movements List of civil wars National Civil War Naval Museum Notes References Bowman, John S. (ed.), The Civil War Almanac, New York: Bison Books, 1983 Eicher, John H., & Eicher, David J., Civil War High Commands, Stanford University Press, 2001 Martis, Kenneth C. The Historical Atlas of the Congresses of the Confederate States of America 1861–1865 (1994) Further reading Overviews and reference American Annual Cyclopaedia for 1861 (N.Y.: Appleton's, 1864), an encyclopedia of events in the U.S. and CSA (and other countries); covers each state in detail Appletons' annual cyclopedia and register of important events: Embracing political, military, and ecclesiastical affairs; public documents; biography, statistics, commerce, finance, literature, science, agriculture, and mechanical industry, Volume 3 1863 (1864), thorough coverage of the events of 1863 Beringer, Richard E., Herman Hattaway, Archer Jones, and William N. Still Jr. Why the South Lost the Civil War. Athens: University of Georgia Press, 1986. Boritt, Gabor S., and others, Why the Confederacy Lost (1992) Coulter, E. Merton. The Confederate States of America, 1861–1865, 1950 Current, Richard N., ed. Encyclopedia of the Confederacy (4 vol), 1993. 1900 pages, articles by scholars. Eaton, Clement. A History of the Southern Confederacy, 1954 Faust, Patricia L., ed. Historical Times Illustrated History of the Civil War. New York: Harper & Row, 1986. Gallagher, Gary W. The Confederate War. Cambridge, MA: Harvard University Press, 1997. 
Heidler, David S., and Jeanne T. Heidler, eds. Encyclopedia of the American Civil War: A Political, Social, and Military History. New York: W. W. Norton & Company, 2000. 2740 pages. McPherson, James M. Battle Cry of Freedom: The Civil War Era. Oxford History of the United States. New York: Oxford University Press, 1988. Standard military history of the war; Pulitzer Prize. Nevins, Allan. The War for the Union. Vol. 1, The Improvised War 1861–1862. New York: Charles Scribner's Sons, 1959; The War for the Union. Vol. 2, War Becomes Revolution 1862–1863. New York: Charles Scribner's Sons, 1960; The War for the Union. Vol. 3, The Organized War 1863–1864. New York: Charles Scribner's Sons, 1971; The War for the Union. Vol. 4, The Organized War to Victory 1864–1865. New York: Charles Scribner's Sons, 1971. The most detailed history of the war. Roland, Charles P. The Confederacy (1960), brief survey Thomas, Emory M. The Confederate Nation, 1861–1865. New York: Harper & Row, 1979. Standard political-economic-social history Wakelyn, Jon L. Biographical Dictionary of the Confederacy (Greenwood Press) Weigley, Russell F. A Great Civil War: A Military and Political History, 1861–1865. Bloomington and Indianapolis: Indiana University Press, 2000. Historiography Boles, John B. and Evelyn Thomas Nolen, eds. Interpreting Southern History: Historiographical Essays in Honor of Sanford W. Higginbotham (1987) Foote, Lorien. "Rethinking the Confederate home front." Journal of the Civil War Era 7.3 (2017): 446–465 online. Grant, Susan-Mary, and Brian Holden Reid, eds. The American civil war: explorations and reconsiderations (Longman, 2000). Hettle, Wallace. Inventing Stonewall Jackson: A Civil War Hero in History and Memory (LSU Press, 2011). Link, Arthur S. and Rembert W. Patrick, eds. Writing Southern History: Essays in Historiography in Honor of Fletcher M. Green (1965) Sternhell, Yael A. "Revisionism Reinvented? The Antiwar Turn in Civil War Scholarship." Journal of the Civil War Era 3.2 (2013): 239–256 online. Woodworth, Steven E. ed. The American Civil War: A Handbook of Literature and Research (1996), 750 pages of historiography and bibliography State studies Tucker, Spencer, ed. American Civil War: A State-by-State Encyclopedia (2 vol 2015) 1019pp Border states Ash, Stephen V. Middle Tennessee society transformed, 1860–1870: war and peace in the Upper South (2006) Cooling, Benjamin Franklin. Fort Donelson's Legacy: War and Society in Kentucky and Tennessee, 1862–1863 (1997) Cottrell, Steve. Civil War in Tennessee (2001) 142pp Crofts, Daniel W. Reluctant Confederates: Upper South Unionists in the Secession Crisis (1989). Dollar, Kent, and others. Sister States, Enemy States: The Civil War in Kentucky and Tennessee (2009) Durham, Walter T. Nashville: The Occupied City, 1862–1863 (1985); Reluctant Partners: Nashville and the Union, 1863–1865 (1987) Mackey, Robert R. The Uncivil War: Irregular Warfare in the Upper South, 1861–1865 (University of Oklahoma Press, 2014) Temple, Oliver P. East Tennessee and the civil war (1899) 588pp online edition Alabama and Mississippi Fleming, Walter L. Civil War and Reconstruction in Alabama (1905), the most detailed study; Dunning School full text online from Project Gutenberg Rainwater, Percy Lee. Mississippi: storm center of secession, 1856–1861 (1938) Rigdon, John. A Guide to Alabama Civil War Research (2011) Smith, Timothy B. 
Mississippi in the Civil War: The Home Front (University Press of Mississippi, 2010) 265 pages; examines the declining morale of Mississippians as they witnessed extensive destruction and came to see victory as increasingly improbable Sterkx, H. E. Partners in Rebellion: Alabama Women in the Civil War (Fairleigh Dickinson University Press, 1970) Storey, Margaret M. "Civil War Unionists and the Political Culture of Loyalty in Alabama, 1860–1861". Journal of Southern History (2003): 71–106. in JSTOR Storey, Margaret M., Loyalty and Loss: Alabama's Unionists in the Civil War and Reconstruction. Baton Rouge: Louisiana State University Press, 2004. Towns, Peggy Allen. Duty Driven: The Plight of North Alabama's African Americans During the Civil War (2012) Florida and Georgia Bryan, T. Conn. Confederate Georgia (1953), the standard scholarly survey DeCredico, Mary A. Patriotism for Profit: Georgia's Urban Entrepreneurs and the Confederate War Effort (1990) Fowler, John D. and David B. Parker, eds. Breaking the Heartland: The Civil War in Georgia (2011) Hill, Louise Biles. Joseph E. Brown and the Confederacy (1972); a study of Georgia's wartime governor Johns, John Edwin. Florida During the Civil War (University of Florida Press, 1963) Johnson, Michael P. Toward A Patriarchal Republic: The Secession of Georgia (1977) Mohr, Clarence L. On the Threshold of Freedom: Masters and Slaves in Civil War Georgia (1986) Nulty, William H. Confederate Florida: The Road to Olustee (University of Alabama Press, 1994) Parks, Joseph H. Joseph E. Brown of Georgia (LSU Press, 1977) 612 pages; a biography of the governor Wetherington, Mark V. Plain Folk's Fight: The Civil War and Reconstruction in Piney Woods Georgia (2009) Louisiana, Texas, Arkansas, and West Bailey, Anne J., and Daniel E. Sutherland, eds. Civil War Arkansas: beyond battles and leaders (Univ of Arkansas Pr, 2000) Ferguson, John Lewis, ed. Arkansas and the Civil War (Pioneer Press, 1965) Ripley, C. Peter. Slaves and Freedmen in Civil War Louisiana (LSU Press, 1976) Snyder, Perry Anderson. Shreveport, Louisiana, during the Civil War and Reconstruction (1979) Underwood, Rodman L. Waters of Discord: The Union Blockade of Texas During the Civil War (McFarland, 2003) Winters, John D. The Civil War in Louisiana (LSU Press, 1991) Woods, James M. Rebellion and Realignment: Arkansas's Road to Secession (1987) Wooster, Ralph A. Civil War Texas (Texas A&M University Press, 2014) North and South Carolina Barrett, John G. The Civil War in North Carolina (1995) Carbone, John S. The Civil War in Coastal North Carolina (2001) Cauthen, Charles Edward; Power, J. Tracy. South Carolina goes to war, 1860–1865 (1950) Hardy, Michael C. North Carolina in the Civil War (2011) Inscoe, John C. The Heart of Confederate Appalachia: Western North Carolina in the Civil War (2003) Lee, Edward J. and Ron Chepesiuk, eds. South Carolina in the Civil War: The Confederate Experience in Letters and Diaries (2004), primary sources Miller, Richard F., ed. States at War, Volume 6: The Confederate States Chronology and a Reference Guide for South Carolina in the Civil War (UP of New England, 2018). Virginia Ash, Stephen V. Rebel Richmond: Life and Death in the Confederate Capital (UNC Press, 2019). Ayers, Edward L. and others. Crucible of the Civil War: Virginia from Secession to Commemoration (2008) Davis, William C. and James I. Robertson, Jr., eds. Virginia at War 1861. Lexington, KY: University of Kentucky Press, 2005. 
; Virginia at War 1862 (2007); Virginia at War 1863 (2009); Virginia at War 1864 (2009); Virginia at War 1865 (2012) Snell, Mark A. West Virginia and the Civil War, Mountaineers Are Always Free (2011). Wallenstein, Peter, and Bertram Wyatt-Brown, eds. Virginia's Civil War (2008) Furgurson, Ernest B. Ashes of Glory: Richmond at War (1997) Social history, gender Bever, Megan L. "Prohibition, Sacrifice, and Morality in the Confederate States, 1861–1865." Journal of Southern History 85.2 (2019): 251–284 online. Brown, Alexis Girardin. "The Women Left Behind: Transformation of the Southern Belle, 1840–1880" (2000) Historian 62#4 pp 759–778. Cashin, Joan E. "Torn Bonnets and Stolen Silks: Fashion, Gender, Race, and Danger in the Wartime South." Civil War History 61#4 (2015): 338–361. online Chesson, Michael B. "Harlots or Heroines? A New Look at the Richmond Bread Riot." Virginia Magazine of History and Biography 92#2 (1984): 131–175. in JSTOR Clinton, Catherine, and Silber, Nina, eds. Divided Houses: Gender and the Civil War (1992) Elliot, Jane Evans. Diary of Mrs. Jane Evans Elliot, 1837–1882 (1908) Faust, Drew Gilpin. Mothers of Invention: Women of the Slaveholding South in the American Civil War (1996) Faust, Drew Gilpin. This Republic of Suffering: Death and the American Civil War (2008) Frank, Lisa Tendrich, ed. Women in the American Civil War (2008) Frank, Lisa Tendrich. The Civilian War: Confederate Women and Union Soldiers during Sherman's March (LSU Press, 2015). Gleeson, David T. The Green and the Gray: The Irish in the Confederate States of America (U of North Carolina Press, 2013); online review Glymph, Thavolia. The Women's Fight: The Civil War's Battles for Home, Freedom, and Nation (UNC Press, 2019). Hilde, Libra Rose. Worth a Dozen Men: Women and Nursing in the Civil War South (U of Virginia Press, 2012). Levine, Bruce. The Fall of the House of Dixie: The Civil War and the Social Revolution That Transformed the South (2013) Lowry, Thomas P. The Story the Soldiers Wouldn't Tell: Sex in the Civil War (Stackpole Books, 1994). Massey, Mary. Bonnet Brigades: American Women and the Civil War (1966), an excellent overview of North and South; reissued as Women in the Civil War (1994); see also "Bonnet Brigades at Fifty: Reflections on Mary Elizabeth Massey and Gender in Civil War History," Civil War History (2015) 61#4 pp 400–444. Massey, Mary Elizabeth. Refugee Life in the Confederacy (1964) Rable, George C. Civil Wars: Women and the Crisis of Southern Nationalism (1989) Slap, Andrew L. and Frank Towers, eds. Confederate Cities: The Urban South during the Civil War Era (U of Chicago Press, 2015). 302 pp. Stokes, Karen. South Carolina Civilians in Sherman's Path: Stories of Courage Amid Civil War Destruction (The History Press, 2012). Strong, Melissa J. "'The Finest Kind of Lady': Hegemonic Femininity in American Women’s Civil War Narratives." Women's Studies 46.1 (2017): 1–21 online. Swanson, David A., and Richard R. Verdugo. "The Civil War’s Demographic Impact on White Males in the Eleven Confederate States: An Analysis by State and Selected Age Groups." Journal of Political & Military Sociology 46.1 (2019): 1–26. Whites, LeeAnn. The Civil War as a Crisis in Gender: Augusta, Georgia, 1860–1890 (1995) Wiley, Bell Irwin Confederate Women (1975) online Wiley, Bell Irwin The Plain People of the Confederacy (1944) online Woodward, C. Vann, ed. 
Mary Chesnut's Civil War, 1981, detailed diary; primary source African Americans Andrews, William L. Slavery and Class in the American South: A Generation of Slave Narrative Testimony, 1840–1865 (Oxford UP, 2019). Ash, Stephen V. The Black Experience in the Civil War South (2010). Bartek, James M. "The Rhetoric of Destruction: Racial Identity and Noncombatant Immunity in the Civil War Era." (PhD Dissertation, University of Kentucky, 2010). online; Bibliography pp. 515–52. Frankel, Noralee. Freedom's Women: Black Women and Families in Civil War Era Mississippi (1999). Lang, Andrew F. In the Wake of War: Military Occupation, Emancipation, and Civil War America (LSU Press, 2017). Levin, Kevin M. Searching for Black Confederates: The Civil War’s Most Persistent Myth (UNC Press, 2019). Litwack, Leon F. Been in the Storm So Long: The Aftermath of Slavery (1979), on freed slaves Reidy, Joseph P. Illusions of Emancipation: The Pursuit of Freedom and Equality in the Twilight of Slavery (UNC Press, 2019). Wiley, Bell Irwin Southern Negroes: 1861–1865 (1938) Soldiers Broomall, James J. Private Confederacies: The Emotional Worlds of Southern Men as Citizens and Soldiers (UNC Press, 2019). Donald, David. "The Confederate as a Fighting Man." Journal of Southern History 25.2 (1959): 178–193. online Faust, Drew Gilpin. "Christian Soldiers: The Meaning of Revivalism in the Confederate Army." Journal of Southern History 53.1 (1987): 63–90 online. McNeill, William J. "A Survey of Confederate Soldier Morale During Sherman's Campaign Through Georgia and the Carolinas." Georgia Historical Quarterly 55.1 (1971): 1–25. Scheiber, Harry N. "The Pay of Confederate Troops and Problems of Demoralization: A Case of Administrative Failure." Civil War History 15.3 (1969): 226–236 online. Sheehan-Dean, Aaron. Why Confederates Fought: Family and Nation in Civil War Virginia (U of North Carolina Press, 2009). Watson, Samuel J. "Religion and combat motivation in the Confederate armies." Journal of Military History 58.1 (1994): 29+. Wiley, Bell Irwin. The life of Johnny Reb; the common soldier of the Confederacy (1971) online Wooster, Ralph A., and Robert Wooster. "'Rarin' for a Fight': Texans in the Confederate Army." Southwestern Historical Quarterly 84.4 (1981): 387–426 online. Intellectual history Bernath, Michael T. Confederate Minds: The Struggle for Intellectual Independence in the Civil War South (University of North Carolina Press; 2010) 412 pages. Examines the efforts of writers, editors, and other "cultural nationalists" to free the South from the dependence on Northern print culture and educational systems. Bonner, Robert E., "Proslavery Extremism Goes to War: The Counterrevolutionary Confederacy and Reactionary Militarism", Modern Intellectual History, 6 (August 2009), 261–85. Downing, David C. A South Divided: Portraits of Dissent in the Confederacy (2007). Faust, Drew Gilpin. The Creation of Confederate Nationalism: Ideology and Identity in the Civil War South (1988) Hutchinson, Coleman. Apples and Ashes: Literature, Nationalism, and the Confederate States of America. Athens, Georgia: University of Georgia Press, 2012. Lentz, Perry Carlton. Our Missing Epic: A Study in the Novels about the American Civil War, 1970 Rubin, Anne Sarah. A Shattered Nation: The Rise and Fall of the Confederacy, 1861–1868 (2005), a cultural study of Confederates' self-images Political history Alexander, Thomas B., and Beringer, Richard E. 
The Anatomy of the Confederate Congress: A Study of the Influences of Member Characteristics on Legislative Voting Behavior, 1861–1865 (1972) Cooper, William J., Jefferson Davis, American (2000), standard biography Davis, William C. A Government of Our Own: The Making of the Confederacy. New York: The Free Press, a division of Macmillan, Inc., 1994. Eckenrode, H. J., Jefferson Davis: President of the South, 1923 Levine, Bruce. Confederate Emancipation: Southern Plans to Free and Arm Slaves during the Civil War (2006) Martis, Kenneth C., "The Historical Atlas of the Congresses of the Confederate States of America 1861–1865" (1994) Neely, Mark E. Jr., Confederate Bastille: Jefferson Davis and Civil Liberties (1993) Neely, Mark E. Jr. Southern Rights: Political Prisoners and the Myth of Confederate Constitutionalism (1999) Rable, George C. The Confederate Republic: A Revolution against Politics, 1994 Patrick, Rembert W. Jefferson Davis and His Cabinet (1944). Williams, William M. Justice in Grey: A History of the Judicial System of the Confederate States of America (1941) Yearns, Wilfred Buck The Confederate Congress (1960) Foreign affairs Blumenthal, Henry. "Confederate Diplomacy: Popular Notions and International Realities", Journal of Southern History, Vol. 32, No. 2 (May 1966), pp. 151–171 in JSTOR Cleland, Beau. "The Confederate States of America and the British Empire: Neutral Territory and Civil Wars." Journal of Military and Strategic Studies 16.4 (2016): 171–181. online Daddysman, James W. The Matamoros Trade: Confederate Commerce, Diplomacy, and Intrigue (1984) online Foreman, Amanda. A World on Fire: Britain's Crucial Role in the American Civil War (2011), especially on Brits inside the Confederacy Hubbard, Charles M. The Burden of Confederate Diplomacy (1998) Jones, Howard. Blue and Gray Diplomacy: A History of Union and Confederate Foreign Relations (2009) online Jones, Howard. Union in Peril: The Crisis Over British Intervention in the Civil War. Lincoln, NE: University of Nebraska Press, Bison Books, 1997. Originally published: Chapel Hill: University of North Carolina Press, 1992. Mahin, Dean B. One War at a Time: The International Dimensions of the American Civil War. Washington, DC: Brassey's, 2000. Originally published: Washington, DC: Brassey's, 1999. Merli, Frank J. The Alabama, British Neutrality, and the American Civil War (2004). 225 pages. Owsley, Frank. King Cotton Diplomacy: Foreign Relations of the Confederate States of America (2nd ed. 1959) online Sainlaude, Steve. France and the American Civil War: A Diplomatic History (2019) excerpt Economic history Black, Robert C., III. The Railroads of the Confederacy. Chapel Hill: University of North Carolina Press, 1952, 1988. Bonner, Michael Brem. "Expedient Corporatism and Confederate Political Economy", Civil War History, 56 (March 2010), 33–65. Dabney, Virginius. Richmond: The Story of a City. Charlottesville: The University of Virginia Press, 1990 Grimsley, Mark. The Hard Hand of War: Union Military Policy toward Southern Civilians, 1861–1865, 1995 Hurt, R. Douglas. Agriculture and the Confederacy: Policy, Productivity, and Power in the Civil War South (2015) Massey, Mary Elizabeth. Ersatz in the Confederacy: Shortages and Substitutes on the Southern Homefront (1952) Paskoff, Paul F. "Measures of War: A Quantitative Examination of the Civil War's Destructiveness in the Confederacy", Civil War History (2008) 54#1 pp 35–62 in Project MUSE Ramsdell, Charles. Behind the Lines in the Southern Confederacy, 1994. 
Roark, James L. Masters without Slaves: Southern Planters in the Civil War and Reconstruction, 1977. Thomas, Emory M. The Confederacy as a Revolutionary Experience, 1992 Primary sources Carter, Susan B., ed. The Historical Statistics of the United States: Millennial Edition (5 vols), 2006 Commager, Henry Steele. The Blue and the Gray: The Story of the Civil War As Told by Participants. 2 vols. Indianapolis and New York: The Bobbs-Merrill Company, Inc., 1950. Many reprints. Davis, Jefferson. The Rise of the Confederate Government. New York: Barnes & Noble, 2010. Original edition: 1881. Davis, Jefferson. The Fall of the Confederate Government. New York: Barnes & Noble, 2010. Original edition: 1881. Harwell, Richard B., The Confederate Reader (1957) Hettle, Wallace, ed. The Confederate Homefront: A History in Documents (LSU Press, 2017) 214 pages Jones, John B. A Rebel War Clerk's Diary at the Confederate States Capital, edited by Howard Swiggert, [1935] 1993. 2 vols. Richardson, James D., ed. A Compilation of the Messages and Papers of the Confederacy, Including the Diplomatic Correspondence 1861–1865, 2 volumes, 1906. Yearns, W. Buck and Barret, John G., eds. North Carolina Civil War Documentary, 1980. Confederate official government documents major online collection of complete texts in HTML format, from University of North Carolina Journal of the Congress of the Confederate States of America, 1861–1865 (7 vols), 1904. Available online at the Library of Congress. External links Confederate offices Index of Politicians by Office Held or Sought Civil War Research & Discussion Group – Confederate States of America Army and Navy Uniforms, 1861 The Countryman, 1862–1866, published weekly by Turnwold, Ga., edited by J.A. Turner The Federal and the Confederate Constitution Compared Confederate Postage Stamps Photographs of the original Confederate Constitution and other Civil War documents owned by the Hargrett Rare Book and Manuscript Library at the University of Georgia Libraries. Photographic History of the Civil War, 10 vols., 1912. DocSouth: Documenting the American South – numerous online text, image, and audio collections. The Boston Athenæum has over 4000 Confederate imprints, including rare books, pamphlets, government documents, manuscripts, serials, broadsides, maps, and sheet music that have been conserved and digitized. Oklahoma Digital Maps: Digital Collections of Oklahoma and Indian Territory Confederate States of America Collection at the Library of Congress Religion in the CSA: Confederate Veteran Magazine, May 1922 1861 establishments in North America 1865 disestablishments in North America Federal constitutional republics Former confederations Former countries of the United States Former regions and territories of the United States Former unrecognized countries History of the Southern United States Separatism in the United States Slavery in the United States States and territories established in 1861 States and territories disestablished in 1865
Chemical formula
In chemistry, a chemical formula is a way of presenting information about the chemical proportions of atoms that constitute a particular chemical compound or molecule, using chemical element symbols, numbers, and sometimes also other symbols, such as parentheses, dashes, brackets, commas and plus (+) and minus (−) signs. These are limited to a single typographic line of symbols, which may include subscripts and superscripts. A chemical formula is not a chemical name, and it contains no words. Although a chemical formula may imply certain simple chemical structures, it is not the same as a full chemical structural formula. Chemical formulae can fully specify the structure of only the simplest of molecules and chemical substances, and are generally more limited in power than chemical names and structural formulae. The simplest types of chemical formulae are called empirical formulae, which use letters and numbers indicating the numerical proportions of atoms of each type. Molecular formulae indicate the simple numbers of each type of atom in a molecule, with no information on structure. For example, the empirical formula for glucose is CH2O (twice as many hydrogen atoms as carbon and oxygen), while its molecular formula is C6H12O6 (12 hydrogen atoms, six carbon atoms, and six oxygen atoms). Sometimes a chemical formula is complicated by being written as a condensed formula (or condensed molecular formula, occasionally called a "semi-structural formula"), which conveys additional information about the particular ways in which the atoms are chemically bonded together, either in covalent bonds, ionic bonds, or various combinations of these types. This is possible if the relevant bonding is easy to show in one dimension. An example is the condensed molecular/chemical formula for ethanol, which is CH3CH2OH or C2H5OH. However, even a condensed chemical formula is necessarily limited in its ability to show complex bonding relationships between atoms, especially atoms that have bonds to four or more different substituents. Since a chemical formula must be expressed as a single line of chemical element symbols, it often cannot be as informative as a true structural formula, which is a graphical representation of the spatial relationship between atoms in chemical compounds. For reasons of structural complexity, a single condensed chemical formula (or semi-structural formula) may correspond to different molecules, known as isomers. For example, glucose shares its molecular formula C6H12O6 with a number of other sugars, including fructose, galactose and mannose. Linear equivalent chemical names exist that can and do specify uniquely any complex structural formula (see chemical nomenclature), but such names must use many terms (words), rather than the simple element symbols, numbers, and simple typographical symbols that define a chemical formula. Chemical formulae may be used in chemical equations to describe chemical reactions and other chemical transformations, such as the dissolving of ionic compounds into solution. While, as noted, chemical formulae do not have the full power of structural formulae to show chemical relationships between atoms, they are sufficient to keep track of numbers of atoms and numbers of electrical charges in chemical reactions, thus balancing chemical equations so that these equations can be used in chemical problems involving conservation of atoms, and conservation of electric charge. 
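Because a chemical formula is confined to one typographic line, the atom counts it encodes can be recovered mechanically, which is what makes the bookkeeping behind balanced equations practical. The following is a minimal Python sketch of that bookkeeping, assuming formulas restricted to element symbols, integer subscripts, and parenthesized groups (charges, isotopes, and hydrates are ignored); the function name and the input format are illustrative only, not a standard library interface.

import re
from collections import Counter

TOKEN = re.compile(r'([A-Z][a-z]?)(\d*)|(\()|(\))(\d*)')

def atom_counts(formula):
    # One Counter per nesting level; the bottom entry collects the result.
    stack = [Counter()]
    for element, count, open_paren, close_paren, multiplier in TOKEN.findall(formula):
        if element:  # an element symbol, optionally followed by a subscript
            stack[-1][element] += int(count or 1)
        elif open_paren:  # start a new parenthesized group
            stack.append(Counter())
        elif close_paren:  # close the group and apply its subscript
            group = stack.pop()
            for symbol, n in group.items():
                stack[-1][symbol] += n * int(multiplier or 1)
    return stack[0]

print(dict(atom_counts("Co(NH3)6Cl3")))  # {'Co': 1, 'N': 6, 'H': 18, 'Cl': 3}

Comparing such counts on the two sides of a reaction is exactly the conservation-of-atoms check described above.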
Overview A chemical formula identifies each constituent element by its chemical symbol and indicates the proportionate number of atoms of each element. In empirical formulae, these proportions begin with a key element and then assign numbers of atoms of the other elements in the compound, by ratios to the key element. For molecular compounds, these ratio numbers can all be expressed as whole numbers. For example, the empirical formula of ethanol may be written C2H6O because the molecules of ethanol all contain two carbon atoms, six hydrogen atoms, and one oxygen atom. Some types of ionic compounds, however, cannot be written with entirely whole-number empirical formulae. An example is boron carbide, whose formula CBn has a variable non-whole-number ratio, with n ranging from over 4 to more than 6.5. When the chemical compound of the formula consists of simple molecules, chemical formulae often employ ways to suggest the structure of the molecule. These types of formulae are variously known as molecular formulae and condensed formulae. A molecular formula enumerates the number of atoms to reflect those in the molecule, so that the molecular formula for glucose is C6H12O6 rather than the glucose empirical formula, which is CH2O. However, except for very simple substances, molecular chemical formulae lack needed structural information, and are ambiguous. For simple molecules, a condensed (or semi-structural) formula is a type of chemical formula that may fully imply a correct structural formula. For example, ethanol may be represented by the condensed chemical formula CH3CH2OH, and dimethyl ether by the condensed formula CH3OCH3. These two molecules have the same empirical and molecular formulae (C2H6O), but may be differentiated by the condensed formulae shown, which are sufficient to represent the full structure of these simple organic compounds. Condensed chemical formulae may also be used to represent ionic compounds that do not exist as discrete molecules, but nonetheless do contain covalently bound clusters within them. These polyatomic ions are groups of atoms that are covalently bound together and have an overall ionic charge, such as the sulfate ion SO42−. Each polyatomic ion in a compound is written individually in order to illustrate the separate groupings. For example, the compound dichlorine hexoxide has an empirical formula ClO3 and molecular formula Cl2O6, but in liquid or solid forms, this compound is more correctly shown by an ionic condensed formula [ClO2]+[ClO4]−, which illustrates that this compound consists of [ClO2]+ ions and [ClO4]− ions. In such cases, the condensed formula only need be complex enough to show at least one of each ionic species. Chemical formulae as described here are distinct from the far more complex chemical systematic names that are used in various systems of chemical nomenclature. For example, one systematic name for glucose is (2R,3S,4R,5R)-2,3,4,5,6-pentahydroxyhexanal. This name, interpreted by the rules behind it, fully specifies glucose's structural formula, but the name is not a chemical formula as usually understood, and uses terms and words not used in chemical formulae. Such names, unlike basic formulae, may be able to represent full structural formulae without graphs. Types Empirical formula In chemistry, the empirical formula of a chemical is a simple expression of the relative number of each type of atom or ratio of the elements in the compound. Empirical formulae are the standard for ionic compounds, such as CaCl2, and for macromolecules, such as SiO2. 
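Going from a molecular formula back to the corresponding empirical formula is a purely arithmetic step: divide every atom count by the greatest common divisor of all the counts. A minimal Python sketch of this reduction, assuming the counts are already available as a symbol-to-integer mapping:

from math import gcd
from functools import reduce

def empirical(counts):
    # Divide every atom count by the GCD of all counts.
    divisor = reduce(gcd, counts.values())
    return {symbol: n // divisor for symbol, n in counts.items()}

print(empirical({"C": 6, "H": 12, "O": 6}))  # {'C': 1, 'H': 2, 'O': 1}, i.e. CH2O
print(empirical({"H": 2, "O": 2}))           # {'H': 1, 'O': 1}, i.e. HO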
An empirical formula makes no reference to isomerism, structure, or absolute number of atoms. The term empirical refers to the process of elemental analysis, a technique of analytical chemistry used to determine the relative percent composition of a pure chemical substance by element. For example, hexane has a molecular formula of C6H14, and (for one of its isomers, n-hexane) a structural formula CH3CH2CH2CH2CH2CH3, implying that it has a chain structure of 6 carbon atoms and 14 hydrogen atoms. However, the empirical formula for hexane is C3H7. Likewise the empirical formula for hydrogen peroxide, H2O2, is simply HO, expressing the 1:1 ratio of component elements. Formaldehyde and acetic acid have the same empirical formula, CH2O. This is the actual chemical formula for formaldehyde, but acetic acid has double the number of atoms. Molecular formula Molecular formulae indicate the simple numbers of each type of atom in a molecule of a molecular substance. They are the same as empirical formulae for molecules that only have one atom of a particular type, but otherwise may have larger numbers. An example of the difference is the empirical formula for glucose, which is CH2O (ratio 1:2:1), while its molecular formula is C6H12O6 (number of atoms 6:12:6). For water, both formulae are H2O. A molecular formula provides more information about a molecule than its empirical formula, but is more difficult to establish. A molecular formula shows the number of elements in a molecule, and determines whether it is a binary compound, ternary compound, quaternary compound, or has even more elements. Structural formula In addition to quantitative description of a molecule, a structural formula captures how the atoms are organized, and shows (or implies) the chemical bonds between the atoms. There are multiple types of structural formulas focused on different aspects of the molecular structure. Two structural isomers, for example, share the same molecular formula but have different structural formulas. Condensed formula The connectivity of a molecule often has a strong influence on its physical and chemical properties and behavior. Two molecules composed of the same numbers of the same types of atoms (i.e. a pair of isomers) might have completely different chemical and/or physical properties if the atoms are connected differently or in different positions. In such cases, a structural formula is useful, as it illustrates which atoms are bonded to which other ones. From the connectivity, it is often possible to deduce the approximate shape of the molecule. A condensed (or semi-structural) formula may represent the types and spatial arrangement of bonds in a simple chemical substance, though it does not necessarily specify isomers or complex structures. For example, ethane consists of two carbon atoms single-bonded to each other, with each carbon atom having three hydrogen atoms bonded to it. Its chemical formula can be rendered as CH3CH3. In ethylene there is a double bond between the carbon atoms (and thus each carbon only has two hydrogens), therefore the chemical formula may be written CH2CH2, and the fact that there is a double bond between the carbons is implicit because carbon has a valence of four. However, a more explicit method is to write H2C=CH2 or, less commonly, H2C::CH2. The two lines (or two pairs of dots) indicate that a double bond connects the atoms on either side of them. 
A triple bond may be expressed with three lines (HC≡CH) or three pairs of dots (HC:::CH), and if there may be ambiguity, a single line or pair of dots may be used to indicate a single bond. Molecules with multiple functional groups that are the same may be expressed by enclosing the repeated group in round brackets. For example, isobutane may be written (CH3)3CH. This condensed structural formula implies a different connectivity from other molecules that can be formed using the same atoms in the same proportions (isomers). The formula (CH3)3CH implies a central carbon atom connected to one hydrogen atom and three methyl groups (CH3). The same number of atoms of each element (10 hydrogens and 4 carbons, or C4H10) may be used to make a straight chain molecule, n-butane: CH3CH2CH2CH3. Law of composition In any given chemical compound, the elements always combine in the same proportion with each other. This is the law of constant composition. The law of constant composition says that, in any particular chemical compound, all samples of that compound will be made up of the same elements in the same proportion or ratio. For example, any water molecule is always made up of two hydrogen atoms and one oxygen atom in a 2:1 ratio. If we look at the relative masses of oxygen and hydrogen in a water molecule (one oxygen atom of about 16 atomic mass units and two hydrogen atoms of about 1 unit each, out of roughly 18 units in total), we see that about 89% of the mass of a water molecule is accounted for by oxygen and the remaining 11% is the mass of hydrogen. This mass proportion will be the same for any water molecule. Chemical names in answer to limitations of chemical formulae The alkene called but-2-ene has two isomers, which the chemical formula CH3CH=CHCH3 does not identify. The relative position of the two methyl groups must be indicated by additional notation denoting whether the methyl groups are on the same side of the double bond (cis or Z) or on the opposite sides from each other (trans or E). As noted above, in order to represent the full structural formulae of many complex organic and inorganic compounds, chemical nomenclature may be needed which goes well beyond the available resources used above in simple condensed formulae. See IUPAC nomenclature of organic chemistry and IUPAC nomenclature of inorganic chemistry 2005 for examples. In addition, linear naming systems such as International Chemical Identifier (InChI) allow a computer to construct a structural formula, and simplified molecular-input line-entry system (SMILES) allows a more human-readable ASCII input. However, all these nomenclature systems go beyond the standards of chemical formulae, and technically are chemical naming systems, not formula systems. Polymers in condensed formulae For polymers in condensed chemical formulae, parentheses are placed around the repeating unit. For example, a hydrocarbon molecule that is described as CH3(CH2)50CH3 is a molecule with fifty repeating units. If the number of repeating units is unknown or variable, the letter n may be used to indicate this formula: CH3(CH2)nCH3. Ions in condensed formulae For ions, the charge on a particular atom may be denoted with a right-hand superscript, for example Na+ or Cu2+. The total charge on a charged molecule or a polyatomic ion may also be shown in this way, such as for hydronium, H3O+, or sulfate, SO42−. Note that + and − are used in place of +1 and −1, respectively. For more complex ions, brackets [ ] are often used to enclose the ionic formula, as in [B12H12]2−, which is found in compounds such as caesium dodecaborate, Cs2[B12H12]. Parentheses ( ) can be nested inside brackets to indicate a repeating unit, as in hexamminecobalt(III) chloride, [Co(NH3)6]Cl3. 
Here, (NH3)6 indicates that the ion contains six ammine groups (NH3) bonded to cobalt, and [ ] encloses the entire formula of the ion with charge +3. This is strictly optional; a chemical formula is valid with or without ionization information, and hexamminecobalt(III) chloride may be written as [Co(NH3)6]Cl3 or Co(NH3)6Cl3. Brackets, like parentheses, behave in chemistry as they do in mathematics, grouping terms together; they are not specifically employed only for ionization states. In the latter case here, the parentheses indicate 6 groups all of the same shape, bonded to another group of size 1 (the cobalt atom), and then the entire bundle, as a group, is bonded to 3 chlorine atoms. In the former case, it is clearer that the bond connecting the chlorines is ionic, rather than covalent. Isotopes Although isotopes are more relevant to nuclear chemistry or stable isotope chemistry than to conventional chemistry, different isotopes may be indicated with a prefixed superscript in a chemical formula. For example, the phosphate ion containing radioactive phosphorus-32 is written 32PO43−, with the mass number 32 as a left-hand superscript. Also a study involving stable isotope ratios might include the molecule 18O16O. A left-hand subscript is sometimes used redundantly to indicate the atomic number. For example, dioxygen may be written with the atomic number 8 as a left-hand subscript on O2, and the most abundant isotopic species of dioxygen additionally carries the mass number 16 as a left-hand superscript. This is convenient when writing equations for nuclear reactions, in order to show the balance of charge more clearly. Trapped atoms The @ symbol (at sign) indicates an atom or molecule trapped inside a cage but not chemically bound to it. For example, a buckminsterfullerene (C60) with an atom (M) would simply be represented as MC60 regardless of whether M was inside the fullerene without chemical bonding or outside, bound to one of the carbon atoms. Using the @ symbol, this would be denoted M@C60 if M was inside the carbon network. A non-fullerene example is [As@Ni12As20]3−, an ion in which one arsenic (As) atom is trapped in a cage formed by the other 32 atoms. This notation was proposed in 1991 with the discovery of fullerene cages (endohedral fullerenes), which can trap atoms such as La to form, for example, La@C60 or La@C82. The choice of the symbol has been explained by the authors as being concise, readily printed and transmitted electronically (the at sign is included in ASCII, which most modern character encoding schemes are based on), and the visual aspects suggesting the structure of an endohedral fullerene. Non-stoichiometric chemical formulae Chemical formulae most often use integers for each element. However, there is a class of compounds, called non-stoichiometric compounds, that cannot be represented by small integers. Such a formula might be written using decimal fractions, as in Fe0.95O, or it might include a variable part represented by a letter, as in Fe1−xO, where x is normally much less than 1. General forms for organic compounds A chemical formula used for a series of compounds that differ from each other by a constant unit is called a general formula. It generates a homologous series of chemical formulae. For example, alcohols may be represented by the formula CnH2n+1OH (n ≥ 1), giving the homologs methanol, ethanol, and propanol for 1 ≤ n ≤ 3. Hill system The Hill system (or Hill notation) is a system of writing empirical chemical formulae, molecular chemical formulae and components of a condensed formula such that the number of carbon atoms in a molecule is indicated first, the number of hydrogen atoms next, and then the number of all other chemical elements subsequently, in alphabetical order of the chemical symbols. 
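A minimal Python sketch of these ordering rules follows (including the carbon-free case described below, in which every element, hydrogen included, is listed alphabetically); the function names and the symbol-to-count input format are assumptions of this example.

def hill_order(counts):
    # Carbon present: C first, then H, then the rest alphabetically.
    symbols = set(counts)
    if "C" in symbols:
        rest = sorted(symbols - {"C", "H"})
        return ["C"] + (["H"] if "H" in symbols else []) + rest
    # No carbon: every element, including hydrogen, sorted alphabetically.
    return sorted(symbols)

def hill_formula(counts):
    parts = []
    for symbol in hill_order(counts):
        n = counts[symbol]
        parts.append(symbol + (str(n) if n > 1 else ""))
    return "".join(parts)

print(hill_formula({"C": 2, "H": 5, "Br": 1}))  # C2H5Br
print(hill_formula({"H": 2, "S": 1, "O": 4}))   # H2O4S

The strings produced this way can themselves be compared lexicographically, which is what collating a compound index into Hill system order amounts to.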
When the formula contains no carbon, all the elements, including hydrogen, are listed alphabetically. Sorting formulae by the atoms of each element present according to these rules, with differences in earlier elements or numbers treated as more significant than differences in any later element or number (as when sorting text strings into lexicographical order), makes it possible to collate chemical formulae into what is known as Hill system order. The Hill system was first published by Edwin A. Hill of the United States Patent and Trademark Office in 1900. It is the most commonly used system in chemical databases and printed indexes to sort lists of compounds. A list of formulae in Hill system order is arranged alphabetically, as above, with single-letter elements coming before two-letter symbols when the symbols begin with the same letter (so "B" comes before "Be", which comes before "Br"). The following example formulae are written using the Hill system, and listed in Hill order: BrI, BrClH2Si, CCl4, CH3I, C2H5Br, H2O4S. See also Dictionary of chemical formulae Formula unit Nuclear notation Periodic table Skeletal formula Simplified molecular-input line-entry system Notes References External links Hill notation example, from the University of Massachusetts Lowell libraries, including how to sort into Hill system order Molecular formula calculation applying Hill notation; the library that calculates Hill notation is available on npm. Chemical nomenclature Notation
Beetle
Beetles are insects that form the order Coleoptera, in the superorder Endopterygota. Their front pair of wings is hardened into wing-cases, elytra, distinguishing them from most other insects. The Coleoptera, with about 400,000 described species, is the largest of all orders, constituting almost 40% of described insects and 25% of all known animal species; new species are discovered frequently, with estimates suggesting that there are between 0.9 and 2.1 million total species. Found in almost every habitat except the sea and the polar regions, they interact with their ecosystems in several ways: beetles often feed on plants and fungi, break down animal and plant debris, and eat other invertebrates. Some species are serious agricultural pests, such as the Colorado potato beetle, while others such as Coccinellidae (ladybirds or ladybugs) eat aphids, scale insects, thrips, and other plant-sucking insects that damage crops. Beetles typically have a particularly hard exoskeleton including the elytra, though some such as the rove beetles have very short elytra while blister beetles have softer elytra. The general anatomy of a beetle is quite uniform and typical of insects, although there are several examples of novelty, such as adaptations in water beetles which trap air bubbles under the elytra for use while diving. Beetles are endopterygotes, which means that they undergo complete metamorphosis, with a series of conspicuous and relatively abrupt changes in body structure between hatching and becoming adult after a relatively immobile pupal stage. Some, such as stag beetles, have a marked sexual dimorphism, the males possessing enormously enlarged mandibles which they use to fight other males. Many beetles are aposematic, with bright colors and patterns warning of their toxicity, while others are harmless Batesian mimics of such insects. Many beetles, including those that live in sandy places, have effective camouflage. Beetles are prominent in human culture, from the sacred scarabs of ancient Egypt to beetlewing art and use as pets or fighting insects for entertainment and gambling. Many beetle groups are brightly and attractively colored, making them objects of collection and decorative displays. Over 300 species are used as food, mostly as larvae; species widely consumed include mealworms and rhinoceros beetle larvae. However, the major impact of beetles on human life is as agricultural, forestry, and horticultural pests. Serious pests include the boll weevil of cotton, the Colorado potato beetle, the coconut hispine beetle, and the mountain pine beetle. Most beetles, however, do not cause economic damage, and many, such as the lady beetles and dung beetles, are beneficial by helping to control insect pests. Etymology The name of the taxonomic order, Coleoptera, comes from the Greek koleopteros (κολεόπτερος), given to the group by Aristotle for their elytra, hardened shield-like forewings, from koleos, sheath, and pteron, wing. The English name beetle comes from the Old English word bitela, little biter, related to bītan (to bite), leading to Middle English betylle. Another Old English name for beetle is ċeafor, chafer, used in names such as cockchafer, from the Proto-Germanic *kebrô ("beetle"; compare German Käfer, Dutch kever). Distribution and diversity Beetles are by far the largest order of insects: the roughly 400,000 species make up about 40% of all insect species so far described, and about 25% of all animal species. 
A 2015 study provided four independent estimates of the total number of beetle species, giving a mean estimate of some 1.5 million with a "surprisingly narrow range" spanning all four estimates from a minimum of 0.9 to a maximum of 2.1 million beetle species. The four estimates made use of host-specificity relationships (1.5 to 1.9 million), ratios with other taxa (0.9 to 1.2 million), plant:beetle ratios (1.2 to 1.3 million), and extrapolations based on body size by year of description (1.7 to 2.1 million).

Beetles are found in nearly all habitats, including freshwater and coastal habitats, wherever vegetative foliage is found, from trees and their bark to flowers, leaves, and underground near roots, and even inside plants in galls, in every plant tissue, including dead or decaying ones. Tropical forest canopies have a large and diverse fauna of beetles, including Carabidae, Chrysomelidae, and Scarabaeidae.

The heaviest beetle, indeed the heaviest insect stage, is the larva of the goliath beetle, Goliathus goliatus, which can attain a mass of at least 115 g and a length of 11.5 cm. Adult male goliath beetles are the heaviest beetles in their adult stage, and adult elephant beetles, Megasoma elephas and Megasoma actaeon, reach comparable sizes. The longest beetle is the Hercules beetle Dynastes hercules, with a maximum overall length of at least 16.7 cm (6.6 in) including the very long pronotal horn. The smallest recorded beetle, and the smallest free-living insect, is the featherwing beetle Scydosella musawasensis, which may measure as little as 325 μm in length.

Evolution

Late Paleozoic and Triassic
The oldest known beetle is Coleopsis, from the earliest Permian (Asselian) of Germany, around 295 million years ago. Early beetles from the Permian, which are collectively grouped into the "Protocoleoptera", are thought to have been xylophagous (wood eating) and wood boring. Fossils from this time have been found in Siberia and Europe, for instance in the red slate fossil beds of Niedermoschel near Mainz, Germany. Further fossils have been found in Obora, Czech Republic, and Tshekarda in the Ural mountains, Russia. However, there are only a few fossils from North America before the middle Permian, although both Asia and North America had been united to Euramerica. The first discoveries from North America, made in the Wellington Formation of Oklahoma, were published in 2005 and 2008. The earliest members of modern beetle lineages appeared during the Late Permian. In the Permian–Triassic extinction event at the end of the Permian, most "protocoleopteran" lineages became extinct. Beetle diversity did not recover to pre-extinction levels until the Middle Triassic.

Jurassic
During the Jurassic (about 201 to 145 million years ago), there was a dramatic increase in the diversity of beetle families, including the development and growth of carnivorous and herbivorous species. The Chrysomeloidea diversified around the same time, feeding on a wide array of plant hosts from cycads and conifers to angiosperms. Close to the Upper Jurassic, the Cupedidae decreased in diversity, but the diversity of the early plant-eating species increased. Most recent plant-eating beetles feed on flowering plants or angiosperms, whose success contributed to a doubling of plant-eating species during the Middle Jurassic. However, the increase in the number of beetle families during the Cretaceous does not correlate with the increase in the number of angiosperm species. Around the same time, numerous primitive weevils (e.g. Curculionoidea) and click beetles (e.g.
Elateroidea) appeared. The first jewel beetles (e.g. Buprestidae) are present, but they remained rare until the Cretaceous. The first scarab beetles were not coprophagous but presumably fed on rotting wood with the help of fungus; they are an early example of a mutualistic relationship.

There are more than 150 important fossil sites from the Jurassic, the majority in Eastern Europe and North Asia. Outstanding sites include Solnhofen in Upper Bavaria, Germany, Karatau in South Kazakhstan, the Yixian formation in Liaoning, North China, as well as the Jiulongshan formation and further fossil sites in Mongolia. In North America there are only a few sites with fossil records of insects from the Jurassic, namely the shell limestone deposits in the Hartford basin, the Deerfield basin and the Newark basin.

Cretaceous
The Cretaceous saw the fragmenting of the southern landmass, with the opening of the southern Atlantic Ocean and the isolation of New Zealand, while South America, Antarctica, and Australia grew more distant. The diversity of Cupedidae and Archostemata decreased considerably. Predatory ground beetles (Carabidae) and rove beetles (Staphylinidae) began to distribute into different patterns; the Carabidae predominantly occurred in the warm regions, while the Staphylinidae and click beetles (Elateridae) preferred temperate climates. Likewise, predatory species of Cleroidea and Cucujoidea hunted their prey under the bark of trees together with the jewel beetles (Buprestidae). The diversity of jewel beetles increased rapidly, as they were the primary consumers of wood, while longhorn beetles (Cerambycidae) were rather rare: their diversity increased only towards the end of the Upper Cretaceous. The first coprophagous beetles are from the Upper Cretaceous and may have lived on the excrement of herbivorous dinosaurs. The first species in which both larvae and adults are adapted to an aquatic lifestyle are found. Whirligig beetles (Gyrinidae) were moderately diverse, although other early beetles (e.g. Dytiscidae) were less so, with the most widespread being the species of Coptoclavidae, which preyed on aquatic fly larvae. A 2020 review of the palaeoecological interpretations of fossil beetles from Cretaceous ambers has suggested that saproxylicity was the most common feeding strategy, with fungivorous species in particular appearing to dominate.

Many fossil sites worldwide contain beetles from the Cretaceous. Most are in Europe and Asia and belong to the temperate climate zone during the Cretaceous. Lower Cretaceous sites include the Crato fossil beds in the Araripe basin in the Ceará, North Brazil, as well as the overlying Santana formation; the latter was near the equator at that time. In Spain, important sites are near Montsec and Las Hoyas. In Australia, the Koonwarra fossil beds of the Korumburra group, South Gippsland, Victoria, are noteworthy. Major sites from the Upper Cretaceous include Kzyl-Dzhar in South Kazakhstan and Arkagala in Russia.

Cenozoic
Beetle fossils are abundant in the Cenozoic; by the Quaternary (up to 1.6 mya), fossil species are identical to living ones, while from the Late Miocene (5.7 mya) the fossils are still so close to modern forms that they are most likely the ancestors of living species. The large oscillations in climate during the Quaternary caused beetles to change their geographic distributions so much that current location gives little clue to the biogeographical history of a species.
It is evident that geographic isolation of populations must often have been broken as insects moved under the influence of changing climate, causing mixing of gene pools, rapid evolution, and extinctions, especially in middle latitudes.

Phylogeny
The very large number of beetle species poses special problems for classification. Some families contain tens of thousands of species, and need to be divided into subfamilies and tribes. This immense number led the evolutionary biologist J. B. S. Haldane to quip, when some theologians asked him what could be inferred about the mind of the Christian God from the works of His Creation, "An inordinate fondness for beetles".

Polyphaga is the largest suborder, containing more than 300,000 described species in more than 170 families, including rove beetles (Staphylinidae), scarab beetles (Scarabaeidae), blister beetles (Meloidae), stag beetles (Lucanidae) and true weevils (Curculionidae). These polyphagan beetle groups can be identified by the presence of cervical sclerites (hardened parts of the head used as points of attachment for muscles) that are absent in the other suborders.

Adephaga contains about 10 families of largely predatory beetles, and includes ground beetles (Carabidae), water beetles (Dytiscidae) and whirligig beetles (Gyrinidae). In these insects, the testes are tubular and the first abdominal sternum (a plate of the exoskeleton) is divided by the hind coxae (the basal joints of the beetle's legs).

Archostemata contains four families of mainly wood-eating beetles, including reticulated beetles (Cupedidae) and the telephone-pole beetle. The Archostemata have an exposed plate called the metatrochantin in front of the basal segment or coxa of the hind leg.

Myxophaga contains about 65 described species in four families, mostly very small, including Hydroscaphidae and the genus Sphaerius. The myxophagan beetles are small and mostly alga-feeders. Their mouthparts are characteristic in lacking galeae and having a mobile tooth on their left mandible.

The consistency of beetle morphology, in particular their possession of elytra, has long suggested that Coleoptera is monophyletic, though there have been doubts about the arrangement of the suborders, namely the Adephaga, Archostemata, Myxophaga and Polyphaga, within that clade. The twisted-wing parasites, Strepsiptera, are thought to be a sister group to the beetles, having split from them in the Early Permian.

Molecular phylogenetic analysis confirms that the Coleoptera are monophyletic. Duane McKenna et al. (2015) used eight nuclear genes for 367 species from 172 of 183 coleopteran families. They split the Adephaga into two clades, Hydradephaga and Geadephaga, broke up the Cucujoidea into three clades, and placed the Lymexyloidea within the Tenebrionoidea. The Polyphaga appear to date from the Triassic. Most extant beetle families appear to have arisen in the Cretaceous. The cladogram is based on McKenna (2015). The number of species in each group (mainly superfamilies) is shown in parentheses, and boldface if over 10,000. English common names are given where possible. Dates of origin of major groups are shown in italics in millions of years ago (mya).

External morphology
Beetles are generally characterized by a particularly hard exoskeleton and hard forewings (elytra) not usable for flying. Almost all beetles have mandibles that move in a horizontal plane. The mouthparts are rarely suctorial, though they are sometimes reduced; the maxillae always bear palps.
The antennae usually have 11 or fewer segments, except in some groups like the Cerambycidae (longhorn beetles) and the Rhipiceridae (cicada parasite beetles). The coxae of the legs are usually recessed within a coxal cavity. The genitalic structures are telescoped into the last abdominal segment in all extant beetles. Beetle larvae can often be confused with those of other endopterygote groups. The beetle's exoskeleton is made up of numerous plates, called sclerites, separated by thin sutures. This design provides armored defenses while maintaining flexibility. The general anatomy of a beetle is quite uniform, although specific organs and appendages vary greatly in appearance and function between the many families in the order. Like all insects, beetles' bodies are divided into three sections: the head, the thorax, and the abdomen. Because there are so many species, identification is quite difficult, and relies on attributes including the shape of the antennae, the tarsal formulae and shapes of these small segments on the legs, the mouthparts, and the ventral plates (sterna, pleura, coxae). In many species, accurate identification can only be made by examination of the unique male genitalic structures.

Head
The head, having mouthparts projecting forward or sometimes downturned, is usually heavily sclerotized and is sometimes very large. The eyes are compound and may display remarkable adaptability, as in the case of the aquatic whirligig beetles (Gyrinidae), where they are split to allow a view both above and below the waterline. A few longhorn beetles (Cerambycidae) and weevils, as well as some fireflies (Rhagophthalmidae), have divided eyes, while many have eyes that are notched, and a few have ocelli, small, simple eyes usually farther back on the head (on the vertex); these are more common in larvae than in adults. The anatomical organization of the compound eyes may be modified and depends on whether a species is primarily crepuscular, or diurnally or nocturnally active. Ocelli are found in the adult carpet beetle (Dermestidae), some rove beetles (Omaliinae), and the Derodontidae.

Beetle antennae are primarily organs of sensory perception and can detect motion, odor and chemical substances, but may also be used to physically feel a beetle's environment. Beetle families may use antennae in different ways. For example, when moving quickly, tiger beetles may not be able to see very well and instead hold their antennae rigidly in front of them in order to avoid obstacles. Certain Cerambycidae use antennae to balance, and blister beetles may use them for grasping. Some aquatic beetle species may use antennae for gathering air and passing it under the body whilst submerged. Equally, some families use antennae during mating, and a few species use them for defense. In the cerambycid Onychocerus albitarsis, the antennae have venom-injecting structures used in defense, which is unique among arthropods. Antennae vary greatly in form, sometimes between the sexes, but are often similar within any given family. Antennae may be clubbed, threadlike, angled, shaped like a string of beads, comb-like (either on one side or both, bipectinate), or toothed. The physical variation of antennae is important for the identification of many beetle groups. The Curculionidae have elbowed or geniculate antennae. Feather-like flabellate antennae are a restricted form found in the Rhipiceridae and a few other families. The Silphidae have capitate antennae with a spherical head at the tip.
The Scarabaeidae typically have lamellate antennae with the terminal segments extended into long flat structures stacked together. The Carabidae typically have thread-like antennae. The antennae arise between the eyes and the mandibles; in the Tenebrionidae, the antennae rise in front of a notch that breaks the usually circular outline of the compound eye. They are segmented and usually consist of 11 parts; the first part is called the scape and the second the pedicel. The other segments are jointly called the flagellum.

Beetles have mouthparts like those of grasshoppers. The mandibles appear as large pincers on the front of some beetles. The mandibles are a pair of hard, often tooth-like structures that move horizontally to grasp, crush, or cut food or enemies (see defence, below). Two pairs of finger-like appendages, the maxillary and labial palpi, are found around the mouth in most beetles, serving to move food into the mouth. In many species, the mandibles are sexually dimorphic, with those of the males enlarged enormously compared with those of females of the same species.

Thorax
The thorax is segmented into two discernible parts, the pro- and pterothorax. The pterothorax is the fused meso- and metathorax, which are commonly separated in other insect species, although flexibly articulated with the prothorax. When viewed from below, the thorax is that part from which all three pairs of legs and both pairs of wings arise. The abdomen is everything posterior to the thorax. When viewed from above, most beetles appear to have three clear sections, but this is deceptive: on the beetle's upper surface, the middle section is a hard plate called the pronotum, which is only the front part of the thorax; the back part of the thorax is concealed by the beetle's wings. This further segmentation is usually best seen on the abdomen.

Legs
The multisegmented legs end in two to five small segments called tarsi. Like many other insect orders, beetles have claws, usually one pair, on the end of the last tarsal segment of each leg. While most beetles use their legs for walking, legs have been variously adapted for other uses. In aquatic beetles, including the Dytiscidae (diving beetles), Haliplidae, and many species of Hydrophilidae, the legs, often the last pair, are modified for swimming, typically with rows of long hairs. Male diving beetles have suctorial cups on their forelegs that they use to grasp females. Other beetles have fossorial legs, widened and often spined for digging. Species with such adaptations are found among the scarabs, ground beetles, and clown beetles (Histeridae). The hind legs of some beetles, such as flea beetles (within Chrysomelidae) and flea weevils (within Curculionidae), have enlarged femurs that help them leap.

Wings
The forewings of beetles are not used for flight, but form elytra which cover the hind part of the body and protect the hindwings. The elytra are usually hard shell-like structures which must be raised to allow the hindwings to move for flight. However, in the soldier beetles (Cantharidae), the elytra are soft, earning this family the name of leatherwings. Other soft-winged beetles include the net-winged beetle Calopteron discrepans, which has brittle wings that rupture easily in order to release chemicals for defense. Beetles' flight wings are crossed with veins and are folded after landing, often along these veins, and stored below the elytra. A fold (jugum) of the membrane at the base of each wing is characteristic.
Some beetles have lost the ability to fly. These include some ground beetles (Carabidae) and some true weevils (Curculionidae), as well as desert- and cave-dwelling species of other families. Many have the two elytra fused together, forming a solid shield over the abdomen. In a few families, both the ability to fly and the elytra have been lost, as in the glow-worms (Phengodidae), where the females resemble larvae throughout their lives. The presence of elytra and wings does not always indicate that the beetle will fly. For example, the tansy beetle walks between habitats despite being physically capable of flight.

Abdomen
The abdomen is the section behind the metathorax, made up of a series of rings, each with a breathing hole, or spiracle, and composed of three different sclerites: the tergum, pleura, and sternum. The tergum in almost all species is membranous, or usually soft and concealed by the wings and elytra when not in flight. The pleura are usually small or hidden in some species, with each pleuron having a single spiracle. The sternum is the most widely visible part of the abdomen, being a more or less sclerotized segment. The abdomen itself does not have any appendages, but some (for example, Mordellidae) have articulating sternal lobes.

Anatomy and physiology

Digestive system
The digestive system of beetles is primarily adapted for a herbivorous diet. Digestion takes place mostly in the anterior midgut, although in predatory groups like the Carabidae, most digestion occurs in the crop by means of midgut enzymes. In the Elateridae, the larvae are liquid feeders that extraorally digest their food by secreting enzymes. The alimentary canal basically consists of a short, narrow pharynx, a widened expansion, the crop, and a poorly developed gizzard. This is followed by the midgut, which varies in dimensions between species, with a large amount of cecum, and the hindgut, of varying length. There are typically four to six Malpighian tubules.

Nervous system
The nervous system in beetles contains all the types found in insects, varying between species from a form with three thoracic and seven or eight distinguishable abdominal ganglia, to one in which all the thoracic and abdominal ganglia are fused into a composite structure.

Respiratory system
Like most insects, beetles inhale air, for the oxygen it contains, and exhale carbon dioxide, via a tracheal system. Air enters the body through spiracles, and circulates within the haemocoel in a system of tracheae and tracheoles, through whose walls the gases can diffuse.

Diving beetles, such as the Dytiscidae, carry a bubble of air with them when they dive. Such a bubble may be contained under the elytra or against the body by specialized hydrophobic hairs. The bubble covers at least some of the spiracles, permitting air to enter the tracheae. The function of the bubble is not only to contain a store of air but to act as a physical gill. The air that it traps is in contact with oxygenated water, so as the animal's consumption depletes the oxygen in the bubble, more oxygen can diffuse in to replenish it. Carbon dioxide is more soluble in water than either oxygen or nitrogen, so it readily diffuses out of the bubble. Nitrogen is the most plentiful gas in the bubble, and the least soluble, so it constitutes a relatively static component of the bubble and acts as a stable medium for respiratory gases to accumulate in and pass through.
Occasional visits to the surface are sufficient for the beetle to re-establish the constitution of the bubble.

Circulatory system
Like other insects, beetles have open circulatory systems, based on hemolymph rather than blood. As in other insects, a segmented tube-like heart is attached to the dorsal wall of the haemocoel. It has paired inlets or ostia at intervals down its length, and circulates the hemolymph from the main cavity of the haemocoel out through the anterior cavity in the head.

Specialized organs
Different glands are specialized for different pheromones to attract mates. Pheromones from species of Rutelinae are produced from epithelial cells lining the inner surface of the apical abdominal segments; amino acid-based pheromones of Melolonthinae are produced from eversible glands on the abdominal apex. Other species produce different types of pheromones. Dermestids produce esters, and species of Elateridae produce fatty acid-derived aldehydes and acetates.

To attract a mate, fireflies (Lampyridae) use modified fat body cells with transparent surfaces backed with reflective uric acid crystals to produce light by bioluminescence. Light production is highly efficient, by oxidation of luciferin catalyzed by enzymes (luciferases) in the presence of adenosine triphosphate (ATP) and oxygen, producing oxyluciferin, carbon dioxide, and light.

Tympanal organs, or hearing organs, each consisting of a membrane (tympanum) stretched across a frame backed by an air sac and associated sensory neurons, are found in two families. Several species of the genus Cicindela (Carabidae) have hearing organs on the dorsal surfaces of their first abdominal segments beneath the wings; two tribes in the Dynastinae (within the Scarabaeidae) have hearing organs just beneath their pronotal shields or neck membranes. Both families are sensitive to ultrasonic frequencies, with strong evidence indicating they function to detect the presence of bats by their ultrasonic echolocation.

Reproduction and development
Beetles are members of the superorder Endopterygota, and accordingly most of them undergo complete metamorphosis. The typical form of metamorphosis in beetles passes through four main stages: the egg, the larva, the pupa, and the imago or adult. The larvae are commonly called grubs and the pupa is sometimes called the chrysalis. In some species, the pupa may be enclosed in a cocoon constructed by the larva towards the end of its final instar. Some beetles, such as typical members of the families Meloidae and Rhipiphoridae, go further, undergoing hypermetamorphosis in which the first instar takes the form of a triungulin.

Mating
Some beetles have intricate mating behaviour. Pheromone communication is often important in locating a mate. Different species use different pheromones. Scarab beetles such as the Rutelinae use pheromones derived from fatty acid synthesis, while other scarabs such as the Melolonthinae use amino acids and terpenoids. Another way beetles find mates is seen in the fireflies (Lampyridae), which are bioluminescent, with abdominal light-producing organs. The males and females engage in a complex dialog before mating; each species has a unique combination of flight patterns, duration, composition, and intensity of the light produced.

Before mating, males and females may stridulate, or vibrate the objects they are on. In the Meloidae, the male climbs onto the dorsum of the female and strokes his antennae on her head, palps, and antennae.
In Eupompha, the male draws his antennae along his longitudinal vertex. They may not mate at all if they do not perform the precopulatory ritual. This mating behavior may differ among dispersed populations of the same species. For example, the mating of a Russian population of tansy beetle (Chrysolina graminis) is preceded by an elaborate ritual involving the male tapping the female's eyes, pronotum and antennae with its antennae, which is not evident in the population of this species in the United Kingdom.

Competition can play a part in the mating rituals of species such as burying beetles (Nicrophorus), the insects fighting to determine which can mate. Many male beetles are territorial and fiercely defend their territories from intruding males. In such species, the male often has horns on the head or thorax, making its body length greater than that of a female. Copulation is generally quick, but in some cases lasts for several hours. During copulation, sperm cells are transferred to the female to fertilize the egg.

Life cycle

Egg
Essentially all beetles lay eggs, though some myrmecophilous Aleocharinae and some Chrysomelinae which live in mountains or the subarctic are ovoviviparous, laying eggs which hatch almost immediately. Beetle eggs generally have smooth surfaces and are soft, though the Cupedidae have hard eggs. Eggs vary widely between species: the eggs tend to be small in species with many instars (larval stages), and in those that lay large numbers of eggs. A female may lay from several dozen to several thousand eggs during her lifetime, depending on the extent of parental care. This ranges from the simple laying of eggs under a leaf, to the parental care provided by scarab beetles, which house, feed and protect their young. The Attelabidae roll leaves and lay their eggs inside the roll for protection.

Larva
The larva is usually the principal feeding stage of the beetle life cycle. Larvae tend to feed voraciously once they emerge from their eggs. Some feed externally on plants, such as those of certain leaf beetles, while others feed within their food sources. Examples of internal feeders are most Buprestidae and longhorn beetles. The larvae of many beetle families are predatory like the adults (ground beetles, ladybirds, rove beetles). The larval period varies between species, but can be as long as several years. The larvae of skin beetles undergo a degree of reversed development when starved, and later grow back to the previously attained level of maturity. The cycle can be repeated many times (see Biological immortality). Larval morphology is highly varied amongst species, with well-developed and sclerotized heads, and distinguishable thoracic and abdominal segments (usually the tenth, though sometimes the eighth or ninth).

Beetle larvae can be differentiated from other insect larvae by their hardened, often darkened heads, the presence of chewing mouthparts, and spiracles along the sides of their bodies. Like adult beetles, the larvae are varied in appearance, particularly between beetle families. Beetles with somewhat flattened, highly mobile larvae include the ground beetles and rove beetles; their larvae are described as campodeiform. Some beetle larvae resemble hardened worms with dark head capsules and minute legs. These are elateriform larvae, and are found in the click beetle (Elateridae) and darkling beetle (Tenebrionidae) families. Some elateriform larvae of click beetles are known as wireworms.
Beetles in the Scarabaeoidea have short, thick larvae described as scarabaeiform, more commonly known as grubs. All beetle larvae go through several instars, which are the developmental stages between each moult. In many species, the larvae simply increase in size with each successive instar as more food is consumed. In some cases, however, more dramatic changes occur. Among certain beetle families or genera, particularly those that exhibit parasitic lifestyles, the first instar (the planidium) is highly mobile in order to search out a host, while the following instars are more sedentary and remain on or within their host. This is known as hypermetamorphosis; it occurs in the Meloidae, Micromalthidae, and Ripiphoridae. The blister beetle Epicauta vittata (Meloidae), for example, has three distinct larval stages. Its first stage, the triungulin, has longer legs to go in search of the eggs of grasshoppers. After feeding for a week it moults to the second stage, called the caraboid stage, which resembles the larva of a carabid beetle. In another week it moults and assumes the appearance of a scarabaeid larva, the scarabaeidoid stage. Its penultimate larval stage is the pseudo-pupa or the coarcate larva, which will overwinter and pupate until the next spring.

The larval period can vary widely. The fungus-feeding staphylinid Phanerota fasciata undergoes three moults in 3.2 days at room temperature, while Anisotoma sp. (Leiodidae) completes its larval stage in the fruiting body of slime mold in 2 days, possibly making it among the fastest-growing beetles. The dermestid beetle Trogoderma inclusum can remain in an extended larval state under unfavourable conditions, even reducing its size between moults. A larva is reported to have survived for 3.5 years in an enclosed container.

Pupa and adult
As with all endopterygotes, beetle larvae pupate, and from these pupae emerge fully formed, sexually mature adult beetles, or imagos. Pupae never have mandibles (they are adecticous). In most pupae, the appendages are not attached to the body and are said to be exarate; in a few beetles (Staphylinidae, Ptiliidae etc.) the appendages are fused with the body (termed obtect pupae).

Adults have extremely variable lifespans, from weeks to years, depending on the species. Some wood-boring beetles can have extremely long life-cycles. It is believed that when furniture or house timbers are infested by beetle larvae, the timber already contained the larvae when it was first sawn up. A birch bookcase 40 years old released adult Eburia quadrigeminata (Cerambycidae), while Buprestis aurulenta and other Buprestidae have been documented as emerging as much as 51 years after manufacture of wooden items.

Behaviour

Locomotion
The elytra allow beetles both to fly and to move through confined spaces, doing so by folding the delicate wings under the elytra while not flying, and folding the wings out just before takeoff. The unfolding and folding of the wings is operated by muscles attached to the wing base; as long as the tension on the radial and cubital veins remains, the wings remain straight. Some beetle species (many Cetoniinae; some Scarabaeinae, Curculionidae and Buprestidae) fly with the elytra closed, with the metathoracic wings extended under the lateral elytra margins. The altitude reached by beetles in flight varies.
One study investigating the flight altitude of the ladybird species Coccinella septempunctata and Harmonia axyridis using radar showed that, whilst the majority in flight over a single location were at 150–195 m above ground level, some reached altitudes of over 1100 m.

Many rove beetles have greatly reduced elytra, and while they are capable of flight, they most often move on the ground: their soft bodies and strong abdominal muscles make them flexible, easily able to wriggle into small cracks.

Aquatic beetles use several techniques for retaining air beneath the water's surface. Diving beetles (Dytiscidae) hold air between the abdomen and the elytra when diving. Hydrophilidae have hairs on their under surface that retain a layer of air against their bodies. Adult crawling water beetles use both their elytra and their hind coxae (the basal segment of the back legs) in air retention, while whirligig beetles simply carry an air bubble down with them whenever they dive.

Communication
Beetles have a variety of ways to communicate, including the use of pheromones. The mountain pine beetle emits a pheromone to attract other beetles to a tree. The mass of beetles is then able to overcome the chemical defenses of the tree. After the tree's defenses have been exhausted, the beetles emit an anti-aggregation pheromone. This species can stridulate to communicate, but others may use sound to defend themselves when attacked.

Parental care
Parental care is found in a few families of beetle, perhaps for protection against adverse conditions and predators. The rove beetle Bledius spectabilis lives in salt marshes, so the eggs and larvae are endangered by the rising tide. The maternal beetle patrols the eggs and larvae, burrowing to keep them from flooding and asphyxiating, and protects them from the predatory carabid beetle Dicheirotrichus gustavi and from the parasitoidal wasp Barycnemis blediator, which kills some 15% of the larvae.

Burying beetles are attentive parents, and participate in cooperative care and feeding of their offspring. Both parents work to bury a small animal carcass to serve as a food resource for their young and build a brood chamber around it. The parents prepare the carcass and protect it from competitors and from early decomposition. After their eggs hatch, the parents keep the larvae clean of fungus and bacteria and help the larvae feed by regurgitating food for them.

Some dung beetles provide parental care, collecting herbivore dung and laying eggs within that food supply, an instance of mass provisioning. Some species do not leave after this stage, but remain to safeguard their offspring. Most species of beetles do not display parental care behaviors after the eggs have been laid. Subsociality, where females guard their offspring, is well-documented in two families of Chrysomelidae, Cassidinae and Chrysomelinae.

Eusociality
Eusociality involves cooperative brood care (including brood care of offspring from other individuals), overlapping generations within a colony of adults, and a division of labor into reproductive and non-reproductive groups. Few organisms outside Hymenoptera exhibit this behavior; the only beetle to do so is the weevil Austroplatypus incompertus. This Australian species lives in horizontal networks of tunnels, in the heartwood of Eucalyptus trees. It is one of more than 300 species of wood-boring ambrosia beetles which distribute the spores of ambrosia fungi.
The fungi grow in the beetles' tunnels, providing food for the beetles and their larvae; female offspring remain in the tunnels and maintain the fungal growth, probably never reproducing. Cooperative brood care is also found in the bess beetles (Passalidae), where the larvae feed on the semi-digested faeces of the adults.

Feeding
Beetles are able to exploit a wide diversity of food sources available in their many habitats. Some are omnivores, eating both plants and animals. Other beetles are highly specialized in their diet. Many species of leaf beetles, longhorn beetles, and weevils are very host-specific, feeding on only a single species of plant. Ground beetles and rove beetles (Staphylinidae), among others, are primarily carnivorous and catch and consume many other arthropods and small prey, such as earthworms and snails. While most predatory beetles are generalists, a few species have more specific prey requirements or preferences. In some species, digestive ability relies upon a symbiotic relationship with fungi: some beetles have yeasts living in their guts, including some yeasts previously undiscovered anywhere else.

Decaying organic matter is a primary diet for many species. This can range from dung, which is consumed by coprophagous species (such as certain scarab beetles in the Scarabaeidae), to dead animals, which are eaten by necrophagous species (such as the carrion beetles, Silphidae). Some beetles found in dung and carrion are in fact predatory. These include members of the Histeridae and Silphidae, preying on the larvae of coprophagous and necrophagous insects. Many beetles feed under bark; some feed on wood, while others feed on fungi growing on wood or leaf litter. Some beetles have special mycangia, structures for the transport of fungal spores.

Ecology

Anti-predator adaptations
Beetles, both adults and larvae, are the prey of many animal predators including mammals from bats to rodents, birds, lizards, amphibians, fishes, dragonflies, robberflies, reduviid bugs, ants, other beetles, and spiders. Beetles use a variety of anti-predator adaptations to defend themselves. These include camouflage and mimicry against predators that hunt by sight, toxicity, and defensive behaviour.

Camouflage
Camouflage is common and widespread among beetle families, especially those that feed on wood or vegetation, such as leaf beetles (Chrysomelidae, which are often green) and weevils. In some species, sculpturing or various colored scales or hairs cause beetles such as the avocado weevil Heilipus apiatus to resemble bird dung or other inedible objects. Many beetles that live in sandy environments blend in with the coloration of that substrate.

Mimicry and aposematism
Some longhorn beetles (Cerambycidae) are effective Batesian mimics of wasps. Beetles may combine coloration with behavioural mimicry, acting like the wasps they already closely resemble. Many other beetles, including ladybirds, blister beetles, and lycid beetles, secrete distasteful or toxic substances to make them unpalatable or poisonous, and are often aposematic, with bright or contrasting coloration warning off predators; many beetles and other insects mimic these chemically protected species.

Chemical defense is important in some species, usually being advertised by bright aposematic colors. Some Tenebrionidae use their posture for releasing noxious chemicals to warn off predators. Chemical defenses may serve purposes other than just protection from vertebrates, such as protection from a wide range of microbes.
Some species sequester chemicals from the plants they feed on, incorporating them into their own defenses. Other species have special glands to produce deterrent chemicals. The defensive glands of carabid ground beetles produce a variety of hydrocarbons, aldehydes, phenols, quinones, esters, and acids released from an opening at the end of the abdomen. African carabid beetles (for example, Anthia) employ the same chemicals as ants: formic acid. Bombardier beetles have well-developed pygidial glands that empty from the sides of the intersegment membranes between the seventh and eighth abdominal segments. The gland is made of two chambers: one holds hydroquinones and hydrogen peroxide, the other catalase and peroxidase enzymes. When these chemicals mix, they react explosively, reaching a temperature of around 100 °C as the hydrogen peroxide is broken down into water and oxygen and the hydroquinones are oxidized to quinones. The oxygen propels the noxious chemical spray as a jet that can be aimed accurately at predators.

Other defenses
Large ground-dwelling beetles such as Carabidae, the rhinoceros beetle and the longhorn beetles defend themselves using strong mandibles, or heavily sclerotised (armored) spines or horns to deter or fight off predators. Many species of weevil that feed out in the open on leaves of plants react to attack by employing a drop-off reflex. Some combine it with thanatosis, in which they close up their appendages and "play dead". The click beetles (Elateridae) can suddenly catapult themselves out of danger by releasing the energy stored by a click mechanism, which consists of a stout spine on the prosternum and a matching groove in the mesosternum. Some species startle an attacker by producing sounds through a process known as stridulation.

Parasitism
A few species of beetles are ectoparasitic on mammals. One such species, Platypsyllus castoris, parasitises beavers (Castor spp.). This beetle lives as a parasite both as a larva and as an adult, feeding on epidermal tissue and possibly on skin secretions and wound exudates. They are strikingly flattened dorsoventrally, no doubt as an adaptation for slipping between the beavers' hairs. They are wingless and eyeless, as are many other ectoparasites. Others are kleptoparasites of other invertebrates, such as the small hive beetle (Aethina tumida) that infests honey bee nests, while many species are parasitic inquilines or commensal in the nests of ants. A few groups of beetles are primary parasitoids of other insects, feeding on, and eventually killing, their hosts.

Pollination
Beetle-pollinated flowers are usually large, greenish or off-white in color, and heavily scented. Scents may be spicy, fruity, or similar to decaying organic material. Beetles were most likely the first insects to pollinate flowers. Most beetle-pollinated flowers are flattened or dish-shaped, with pollen easily accessible, although they may include traps to keep the beetle longer. The plants' ovaries are usually well protected from the biting mouthparts of their pollinators. The beetle families that habitually pollinate flowers are the Buprestidae, Cantharidae, Cerambycidae, Cleridae, Dermestidae, Lycidae, Melyridae, Mordellidae, Nitidulidae and Scarabaeidae. Beetles may be particularly important pollinators in some parts of the world, such as semiarid areas of southern Africa and southern California, and the montane grasslands of KwaZulu-Natal in South Africa.
Mutualism
Mutualism is well known in a few beetles, such as the ambrosia beetle, which partners with fungi to digest the wood of dead trees. The beetles excavate tunnels in dead trees in which they cultivate fungal gardens, their sole source of nutrition. After landing on a suitable tree, an ambrosia beetle excavates a tunnel in which it releases spores of its fungal symbiont. The fungus penetrates the plant's xylem tissue, digests it, and concentrates the nutrients on and near the surface of the beetle gallery, so the weevils and the fungus both benefit. The beetles cannot eat the wood due to toxins, and use their relationship with fungi to help overcome the defenses of the host tree in order to provide nutrition for their larvae. Chemically mediated by a bacterially produced polyunsaturated peroxide, this mutualistic relationship between the beetle and the fungus is coevolved.

Tolerance of extreme environments
About 90% of beetle species enter a period of adult diapause, a quiet phase with reduced metabolism to tide over unfavourable environmental conditions. Adult diapause is the most common form of diapause in Coleoptera. To endure the period without food (often lasting many months), adults prepare by accumulating reserves of lipids, glycogen, proteins and other substances needed for resistance to future hazardous changes of environmental conditions. This diapause is induced by signals heralding the arrival of the unfavourable season; usually the cue is photoperiodic. Short (decreasing) day length serves as a signal of approaching winter and induces winter diapause (hibernation). A study of hibernation in the Arctic beetle Pterostichus brevicornis showed that the body fat levels of adults were highest in autumn, with the alimentary canal filled with food, but empty by the end of January. This loss of body fat was a gradual process, occurring in combination with dehydration.

All insects are poikilothermic, so the ability of a few beetles to live in extreme environments depends on their resilience to unusually high or low temperatures. The bark beetle Pityogenes chalcographus can survive extreme cold whilst overwintering beneath tree bark; the Alaskan beetle Cucujus clavipes puniceus is able to withstand temperatures around −58 °C, and its larvae may survive around −100 °C. At these low temperatures, the formation of ice crystals in internal fluids is the biggest threat to survival to beetles, but this is prevented through the production of antifreeze proteins that stop water molecules from grouping together. The low temperatures experienced by Cucujus clavipes can be survived through their deliberate dehydration in conjunction with the antifreeze proteins. This concentrates the antifreezes several fold. The hemolymph of the mealworm beetle Tenebrio molitor contains several antifreeze proteins. The Alaskan beetle Upis ceramboides can survive −60 °C: its cryoprotectants are xylomannan, a molecule consisting of a sugar bound to a fatty acid, and the sugar-alcohol, threitol.

Conversely, desert-dwelling beetles are adapted to tolerate high temperatures; the tenebrionid beetle Onymacris rugatipennis, for example, can withstand temperatures around 50 °C. Tiger beetles in hot, sandy areas are often whitish (for example, Habroscelimorpha dorsalis), to reflect more heat than a darker color would. These beetles also exhibit behavioural adaptations to tolerate the heat: they are able to stand erect on their tarsi to hold their bodies away from the hot ground, seek shade, and turn to face the sun so that only the front parts of their heads are directly exposed.
The fogstand beetle of the Namib Desert, Stenocara gracilipes, is able to collect water from fog, as its elytra have a textured surface combining hydrophilic (water-loving) bumps and waxy, hydrophobic troughs. The beetle faces the early morning breeze, holding up its abdomen; droplets condense on the elytra and run along ridges towards their mouthparts. Similar adaptations are found in several other Namib desert beetles such as Onymacris unguicularis.

Some terrestrial beetles that exploit shoreline and floodplain habitats have physiological adaptations for surviving floods. In the event of flooding, adult beetles may be mobile enough to move away from flooding, but larvae and pupae often cannot. Adults of Cicindela togata are unable to survive immersion in water, but larvae are able to survive a prolonged period, up to 6 days, of anoxia during floods. Anoxia tolerance in the larvae may be sustained by switching to anaerobic metabolic pathways or by reducing the metabolic rate. Anoxia tolerance in the adult carabid beetle Pelophila borealis was tested in laboratory conditions, and it was found that they could survive a continuous period of up to 127 days in an atmosphere of 99.9% nitrogen at 0 °C.

Migration
Many beetle species undertake annual mass movements which are termed migrations. These include the pollen beetle Meligethes aeneus and many species of coccinellids. These mass movements may also be opportunistic, in search of food, rather than seasonal. A 2008 study of an unusually large outbreak of mountain pine beetle (Dendroctonus ponderosae) in British Columbia found that beetles were capable of flying 30–110 km per day in densities of up to 18,600 beetles per hectare.

Relationship to humans

In ancient cultures
Several species of dung beetle, especially the sacred scarab, Scarabaeus sacer, were revered in Ancient Egypt. The hieroglyphic image of the beetle may have had existential, fictional, or ontologic significance. Images of the scarab in bone, ivory, stone, Egyptian faience, and precious metals are known from the Sixth Dynasty and up to the period of Roman rule. The scarab was of prime significance in the funerary cult of ancient Egypt. The scarab was linked to Khepri, the god of the rising sun, from the supposed resemblance of the rolling of the dung ball by the beetle to the rolling of the sun by the god. Some of ancient Egypt's neighbors adopted the scarab motif for seals of varying types. The best-known of these are the Judean LMLK seals, where eight of 21 designs contained scarab beetles, which were used exclusively to stamp impressions on storage jars during the reign of Hezekiah.

Beetles are mentioned as a symbol of the sun, as in ancient Egypt, in Plutarch's 1st century Moralia. The Greek Magical Papyri of the 2nd century BC to the 5th century AD describe scarabs as an ingredient in a spell. Pliny the Elder discusses beetles in his Natural History, describing the stag beetle: "Some insects, for the preservation of their wings, are covered with (elytra)—the beetle, for instance, the wing of which is peculiarly fine and frail. To these insects a sting has been denied by Nature; but in one large kind we find horns of a remarkable length, two-pronged at the extremities, and forming pincers, which the animal closes when it is its intention to bite." The stag beetle is recorded in a Greek myth by Nicander and recalled by Antoninus Liberalis in which Cerambus is turned into a beetle: "He can be seen on trunks and has hook-teeth, ever moving his jaws together.
He is black, long and has hard wings like a great dung beetle". The story concludes with the comment that the beetles were used as toys by young boys, and that the head was removed and worn as a pendant.

As pests
About 75% of beetle species are phytophagous in both the larval and adult stages. Many feed on economically important plants and stored plant products, including trees, cereals, tobacco, and dried fruits. Some, such as the boll weevil, which feeds on cotton buds and flowers, can cause extremely serious damage to agriculture. The boll weevil crossed the Rio Grande near Brownsville, Texas, to enter the United States from Mexico around 1892, and had reached southeastern Alabama by 1915. By the mid-1920s, it had entered all cotton-growing regions in the US. It remains the most destructive cotton pest in North America. Mississippi State University has estimated that, since the boll weevil entered the United States, it has cost cotton producers about $13 billion, and in recent times about $300 million per year.

The bark beetle, elm leaf beetle and the Asian longhorned beetle (Anoplophora glabripennis) are among the species that attack elm trees. Bark beetles (Scolytidae) carry Dutch elm disease as they move from infected breeding sites to healthy trees. The disease has devastated elm trees across Europe and North America.

Some species of beetle have evolved immunity to insecticides. For example, the Colorado potato beetle, Leptinotarsa decemlineata, is a destructive pest of potato plants. Its hosts include other members of the Solanaceae, such as nightshade, tomato, eggplant and capsicum, as well as the potato. Different populations have between them developed resistance to all major classes of insecticide. The Colorado potato beetle was evaluated as a tool of entomological warfare during World War II, the idea being to use the beetle and its larvae to damage the crops of enemy nations. Germany tested its Colorado potato beetle weaponisation program south of Frankfurt, releasing 54,000 beetles.

The death watch beetle, Xestobium rufovillosum (Ptinidae), is a serious pest of older wooden buildings in Europe. It attacks hardwoods such as oak and chestnut, always where some fungal decay has taken or is taking place. The actual introduction of the pest into buildings is thought to take place at the time of construction.

Other pests include the coconut hispine beetle, Brontispa longissima, which feeds on young leaves, seedlings and mature coconut trees, causing serious economic damage in the Philippines. The mountain pine beetle is a destructive pest of mature or weakened lodgepole pine, sometimes affecting large areas of Canada.

As beneficial resources
Beetles can be beneficial to human economies by controlling the populations of pests. The larvae and adults of some species of lady beetles (Coccinellidae) feed on aphids that are pests. Other lady beetles feed on scale insects, whitefly and mealybugs. If normal food sources are scarce, they may feed on small caterpillars, young plant bugs, or honeydew and nectar. Ground beetles (Carabidae) are common predators of many insect pests, including fly eggs, caterpillars, and wireworms. Ground beetles can help to control weeds by eating their seeds in the soil, reducing the need for herbicides to protect crops. The effectiveness of some species in reducing certain plant populations has resulted in the deliberate introduction of beetles in order to control weeds.
For example, the genus Zygogramma is native to North America but has been used to control Parthenium hysterophorus in India and Ambrosia artemisiifolia in Russia.

Dung beetles (Scarabaeidae) have been successfully used to reduce the populations of pestilent flies, such as Musca vetustissima and Haematobia exigua, which are serious pests of cattle in Australia. The beetles make the dung unavailable to breeding pests by quickly rolling and burying it in the soil, with the added effect of improving soil fertility, tilth, and nutrient cycling. The Australian Dung Beetle Project (1965–1985) introduced species of dung beetle to Australia from South Africa and Europe to reduce populations of Musca vetustissima, following successful trials of this technique in Hawaii. The American Institute of Biological Sciences reports that dung beetles save the United States cattle industry an estimated US$380 million annually through burying above-ground livestock feces.

The Dermestidae are often used in taxidermy and in the preparation of scientific specimens, to clean soft tissue from bones. Larvae feed on and remove cartilage along with other soft tissue.

As food and medicine
Beetles are the most widely eaten insects, with about 344 species used as food, usually at the larval stage. The mealworm (the larva of the darkling beetle) and the rhinoceros beetle are among the species commonly eaten. A wide range of species is also used in folk medicine to treat those suffering from a variety of disorders and illnesses, though this is done without clinical studies supporting the efficacy of such treatments.

As biodiversity indicators
Due to their habitat specificity, many species of beetles have been suggested as suitable indicators, their presence, numbers, or absence providing a measure of habitat quality. Predatory beetles such as the tiger beetles (Cicindelidae) have found scientific use as an indicator taxon for measuring regional patterns of biodiversity. They are suitable for this as their taxonomy is stable; their life history is well described; they are large and simple to observe when visiting a site; they occur around the world in many habitats, with species specialised to particular habitats; and their occurrence by species accurately indicates other species, both vertebrate and invertebrate. According to the habitats, many other groups such as the rove beetles in human-modified habitats, dung beetles in savannas and saproxylic beetles in forests have been suggested as potential indicator species.

In art and adornment
Many beetles have durable elytra that have been used as material in art, with beetlewing the best example. Sometimes, they are incorporated into ritual objects for their religious significance. Whole beetles, either as-is or encased in clear plastic, are made into objects ranging from cheap souvenirs such as key chains to expensive fine-art jewellery. In parts of Mexico, beetles of the genus Zopherus are made into living brooches by attaching costume jewelry and golden chains, which is made possible by the incredibly hard elytra and sedentary habits of the genus.

In entertainment
Fighting beetles are used for entertainment and gambling. This sport exploits the territorial behavior and mating competition of certain species of large beetles. In the Chiang Mai district of northern Thailand, male Xylotrupes rhinoceros beetles are caught in the wild and trained for fighting. Females are held inside a log to stimulate the fighting males with their pheromones.
These fights may be competitive and involve gambling both money and property. In South Korea the Dytiscidae species Cybister tripunctatus is used in a roulette-like game.

Beetles are sometimes used as instruments: the Onabasulu of Papua New Guinea historically used the "hugu" weevil Rhynchophorus ferrugineus as a musical instrument by letting the human mouth serve as a variable resonance chamber for the wing vibrations of the live adult beetle.

As pets
Some species of beetle are kept as pets; for example, diving beetles (Dytiscidae) may be kept in a domestic fresh water tank. In Japan the practice of keeping horned rhinoceros beetles (Dynastinae) and stag beetles (Lucanidae) is particularly popular amongst young boys. Such is the popularity in Japan that vending machines dispensing live beetles were developed in 1999, each holding up to 100 stag beetles.

As things to collect
Beetle collecting became extremely popular in the Victorian era. The naturalist Alfred Russel Wallace collected (by his own count) a total of 83,200 beetles during the eight years described in his 1869 book The Malay Archipelago, including 2,000 species new to science.

As inspiration for technologies
Several coleopteran adaptations have attracted interest in biomimetics with possible commercial applications. The bombardier beetle's powerful repellent spray has inspired the development of a fine mist spray technology, claimed to have a low carbon impact compared to aerosol sprays. Moisture-harvesting behavior by the Namib desert beetle (Stenocara gracilipes) has inspired a self-filling water bottle which utilises hydrophilic and hydrophobic materials, to benefit people living in dry regions with no regular rainfall.

Living beetles have been used as cyborgs. A Defense Advanced Research Projects Agency-funded project implanted electrodes into Mecynorhina torquata beetles, allowing them to be remotely controlled via a radio receiver held on their backs, as proof of concept for surveillance work. Similar technology has been applied to enable a human operator to control the free-flight steering and walking gaits of Mecynorhina torquata, as well as graded turning and backward walking of Zophobas morio. Research published in 2020 sought to create a robotic camera backpack for beetles. Miniature cameras weighing 248 mg were attached to live beetles of the tenebrionid genera Asbolus and Eleodes. The cameras filmed over a 60° range for up to 6 hours.

In conservation
Since beetles form such a large part of the world's biodiversity, their conservation is important, and equally, loss of habitat and biodiversity is essentially certain to impact beetles. Many species of beetles have very specific habitats and long life cycles that make them vulnerable. Some species are highly threatened, while others are already feared extinct. Island species tend to be more susceptible, as in the case of Helictopleurus undatus of Madagascar, which is thought to have gone extinct during the late 20th century. Conservationists have attempted to arouse a liking for beetles with flagship species like the stag beetle, Lucanus cervus, and tiger beetles (Cicindelidae). In Japan the Genji firefly, Luciola cruciata, is extremely popular, and in South Africa the Addo elephant dung beetle offers promise for broadening ecotourism beyond the big five tourist mammal species. Popular dislike of pest beetles, too, can be turned into public interest in insects, as can unusual ecological adaptations of species like the fairy shrimp hunting beetle, Cicinis bruchi.
Cannon
A cannon is a large-caliber gun classified as a type of artillery, which usually launches a projectile using explosive chemical propellant. Gunpowder ("black powder") was the primary propellant before the invention of smokeless powder during the late 19th century. Cannons vary in gauge, effective range, mobility, rate of fire, angle of fire, and firepower; different forms of cannon combine and balance these attributes in varying degrees, depending on their intended use on the battlefield. The word cannon is derived from several languages, in which the original definition can usually be translated as tube, cane, or reed. In the modern era, the term cannon has fallen into decline, replaced by guns or artillery, if not a more specific term such as howitzer or mortar, except for high-caliber automatic weapons firing bigger rounds than machine guns, called autocannons.

The earliest known depiction of cannons appeared in Song dynasty China as early as the 12th century; however, solid archaeological and documentary evidence of cannons does not appear until the 13th century. In 1288 Yuan dynasty troops are recorded to have used hand cannon in combat, and the earliest extant cannon bearing a date of production comes from the same period. By the early 14th century, possible mentions of cannon had appeared in the Middle East, and a depiction of one had appeared in Europe by 1326. Recorded usage of cannon began appearing almost immediately after. They subsequently spread to India, their usage on the subcontinent being first attested to in 1366. By the end of the 14th century, cannons were widespread throughout Eurasia.

Cannons were used primarily as anti-infantry weapons until around 1374, when large cannons were recorded to have breached walls for the first time in Europe. Cannons featured prominently as siege weapons, and ever larger pieces appeared. In 1464 a cannon known as the Great Turkish Bombard was created in the Ottoman Empire. Cannons as field artillery became more important after 1453, with the introduction of the limber, which greatly improved cannon maneuverability and mobility. European cannons reached their longer, lighter, more accurate, and more efficient "classic form" around 1480. This classic European cannon design stayed relatively consistent in form with minor changes until the 1750s.

Etymology and terminology
The word cannon is derived from the Old Italian word cannone, meaning "large tube", which came from Latin canna, in turn originating from the Greek kanna, "reed", and then generalised to mean any hollow tube-like object; cognate with Akkadian qanu(m) and Hebrew qaneh, "tube, reed". The word has been used to refer to a gun since 1326 in Italy, and 1418 in England. Both of the plural forms cannons and cannon are correct.

History

East Asia
The cannon may have appeared as early as the 12th century in China, and was probably a parallel development or evolution of the fire-lance, a short-ranged anti-personnel weapon combining a gunpowder-filled tube and a polearm of some sort. Co-viative projectiles such as iron scraps or porcelain shards were placed in fire lance barrels at some point, and eventually, the paper and bamboo materials of fire lance barrels were replaced by metal. The earliest known depiction of a cannon is a sculpture from the Dazu Rock Carvings in Sichuan dated to 1128; however, the earliest archaeological samples and textual accounts do not appear until the 13th century.
The primary extant specimens of cannon from the 13th century are the Wuwei Bronze Cannon dated to 1227, the Heilongjiang hand cannon dated to 1288, and the Xanadu Gun dated to 1298. However, only the Xanadu Gun contains an inscription bearing a date of production, so it is considered the earliest confirmed extant cannon. The Xanadu Gun is 34.7 cm in length and weighs 6.2 kg. The other cannons are dated using contextual evidence. The Heilongjiang hand cannon is also considered by some to be the oldest firearm, since it was unearthed near the area where the History of Yuan reports a battle took place involving hand cannons. According to the History of Yuan, in 1288, a Jurchen commander by the name of Li Ting led troops armed with hand cannons into battle against the rebel prince Nayan.

Chen Bingying argues there were no guns before 1259, while Dang Shoushan believes the Wuwei gun and other Western Xia era samples point to the appearance of guns by 1220, and Stephen Haw goes even further by stating that guns were developed as early as 1200. Sinologist Joseph Needham and Renaissance siege expert Thomas Arnold provide a more conservative estimate of around 1280 for the appearance of the "true" cannon. Whether or not any of these are correct, it seems likely that the gun was born sometime during the 13th century.

References to cannons proliferated throughout China in the following centuries, and cannon featured in literary pieces. In 1341 Xian Zhang wrote a poem called The Iron Cannon Affair describing a cannonball fired from an eruptor which could "pierce the heart or belly when striking a man or horse, and even transfix several persons at once." By the 1350s the cannon was used extensively in Chinese warfare. In 1358 the Ming army failed to take a city due to its garrisons' usage of cannon; however, they themselves would later deploy cannon in the thousands, during the siege of Suzhou in 1366. The Mongol invasion of Java in 1293 brought gunpowder technology to the Nusantara archipelago in the form of cannon (Chinese: Pao).

During the Ming dynasty cannons were used in riverine warfare at the Battle of Lake Poyang. One shipwreck in Shandong had a cannon dated to 1377 and an anchor dated to 1372. From the 13th to 15th centuries cannon-armed Chinese ships also travelled throughout Southeast Asia. Cannon appeared in Đại Việt by 1390 at the latest.

The first western cannon to be introduced were breech-loaders in the early 16th century, which the Chinese began producing themselves by 1523 and improved by including composite metal construction in their making. Japan did not acquire cannon until 1510, when a monk brought one back from China, and did not produce any in appreciable numbers. During the 1593 Siege of Pyongyang, 40,000 Ming troops deployed a variety of cannons against Japanese troops. Despite their defensive advantage and the use of arquebus by Japanese soldiers, the Japanese were at a severe disadvantage due to their lack of cannon. Throughout the Japanese invasions of Korea (1592–1598), the Ming–Joseon coalition used artillery widely in land and naval battles, including on the turtle ships of Yi Sun-sin. According to Ivan Petlin, the first Russian envoy to Beijing, in September 1619 the city was armed with large cannon firing heavy cannonballs.
His general observation was that the Chinese were militarily capable and had firearms.

Western Europe
Outside of China, the earliest texts to mention gunpowder are Roger Bacon's Opus Majus (1267) and Opus Tertium, in what has been interpreted as references to firecrackers. In the early 20th century, a British artillery officer proposed that another work, tentatively attributed to Bacon and dated to 1247, contained an encrypted formula for gunpowder hidden in the text. These claims have been disputed by science historians. In any case, the formula itself is not useful for firearms or even firecrackers, burning slowly and producing mostly smoke.

There is a record of a gun in Europe dating to 1322 being discovered in the nineteenth century, but the artifact has since been lost. The earliest known European depiction of a gun appeared in 1326 in a manuscript by Walter de Milemete, although not necessarily drawn by him, known in translation as Concerning the Majesty, Wisdom, and Prudence of Kings; it displays a gun with a large arrow emerging from it and its user lowering a long stick to ignite the gun through the touch hole. In the same year, another similar illustration showed a darker gun being set off by a group of knights, which also featured in another work of de Milemete's. On 11 February of that same year, the Signoria of Florence appointed two officers to obtain cannon and ammunition for the town's defense. In the following year a document from the Turin area recorded that a certain amount was paid "for the making of a certain instrument or device made by Friar Marcello for the projection of pellets of lead". A reference from 1331 describes an attack mounted by two Germanic knights on Cividale del Friuli, using man-portable gunpowder weapons of some sort. The 1320s seem to have been the takeoff point for guns in Europe according to most modern military historians. Scholars suggest that the lack of gunpowder weapons in a well-traveled Venetian's catalogue for a new crusade in 1321 implies that guns were unknown in Europe up until this point, further solidifying the 1320 mark; however, more evidence in this area may be forthcoming in the future.

The oldest extant cannon in Europe is a small bronze example unearthed in Loshult, Scania in southern Sweden. It dates from the early-mid 14th century, and is currently in the Swedish History Museum in Stockholm. Early cannons in Europe often shot arrows and were known by an assortment of names, among them ribaldis. The ribaldis, which shot large arrows and simplistic grapeshot, were first mentioned in the English Privy Wardrobe accounts during preparations for the Battle of Crécy, between 1345 and 1346. The Florentine Giovanni Villani recounts their destructiveness, indicating that by the end of the battle, "the whole plain was covered by men struck down by arrows and cannon balls". Similar cannon were also used at the Siege of Calais (1346–47), although it was not until the 1380s that the ribaudekin clearly became mounted on wheels.

Early use
The Battle of Crécy, which pitted the English against the French in 1346, featured the early use of cannon, which helped the longbowmen repulse a large force of Genoese crossbowmen deployed by the French. The English originally intended to use the cannon against cavalry sent to attack their archers, thinking that the loud noises produced by their cannon would panic the advancing horses along with killing the knights atop them. Early cannons could also be used for more than simply killing men and scaring horses.
English cannon were used defensively in 1356 during the Siege of Breteuil to launch fire onto an advancing siege tower. In this way cannons could be used to burn down siege equipment before it reached the fortifications. Cannon-launched fire could also be used offensively, as in another battle where a castle was set ablaze by similar methods. The particular incendiary used in these projectiles was most likely a gunpowder mixture. This is one area where early Chinese and European cannons share a similarity, as both were possibly used to shoot fire.

Another aspect of early European cannons is that they were rather small, dwarfed by the bombards which would come later. In fact, it is possible that the cannons used at Crécy were capable of being moved rather quickly, as there is an anonymous chronicle that notes the guns being used to attack the French camp, indicating that they would have been mobile enough to press the attack. These smaller cannons would eventually give way to larger, wall-breaching guns by the end of the 1300s.

Islamic world
There is no clear consensus on when the cannon first appeared in the Islamic world, with dates ranging from 1260 to the mid-14th century. The cannon may have appeared in the Islamic world in the late 13th century, with Ibn Khaldun in the 14th century stating that cannons were used in the Maghreb region of North Africa in 1274, and other Arabic military treatises in the 14th century referring to the use of cannon by Mamluk forces in 1260 and 1303, and by Muslim forces at the 1324 Siege of Huesca in Spain. However, some scholars do not accept these early dates. While the date of its first appearance is not entirely clear, the general consensus among most historians is that there is no doubt the Mamluk forces were using cannon by 1342.

Other accounts may have also mentioned the use of cannon in the early 14th century. An Arabic text dating to 1320–1350 describes a type of gunpowder weapon called a midfa, which uses gunpowder to shoot projectiles out of a tube at the end of a stock. Some scholars consider this a hand cannon while others dispute this claim. The Nasrid army besieging Elche in 1331 made use of "iron pellets shot with fire".

According to historian Ahmad Y. al-Hassan, during the Battle of Ain Jalut in 1260, the Mamluks used cannon against the Mongols. He claims that this was "the first cannon in history" and used a gunpowder formula almost identical to the ideal composition for explosive gunpowder. He also argues that this was not known in China or Europe until much later. Al-Hassan further claims that the earliest textual evidence of cannon is from the Middle East, based on earlier originals which report hand-held cannons being used by the Mamluks at the Battle of Ain Jalut in 1260. Such an early date is not accepted by some historians, including David Ayalon, Iqtidar Alam Khan, Joseph Needham and Tonio Andrade. Khan argues that it was the Mongols who introduced gunpowder to the Islamic world, and believes cannon only reached Mamluk Egypt in the 1370s. Needham argued that the term midfa, dated to textual sources from 1342 to 1352, did not refer to true hand-guns or bombards, and that contemporary accounts of a metal-barrel cannon in the Islamic world did not occur until 1365. Similarly, Andrade dates the textual appearance of cannons in Middle Eastern sources to the 1360s.
Gábor Ágoston and David Ayalon note that the Mamluks had certainly used siege cannons by 1342 or the 1360s, respectively, but earlier uses of cannons in the Islamic world are vague, with a possible appearance in the Emirate of Granada by the 1320s and 1330s, though evidence is inconclusive.

Ibn Khaldun reported the use of cannon as siege machines by the Marinid sultan Abu Yaqub Yusuf at the siege of Sijilmasa in 1274: "[The Sultan] installed siege engines ... and gunpowder engines ..., which project small balls of iron. These balls are ejected from a chamber ... placed in front of a kindling fire of gunpowder; this happens by a strange property which attributes all actions to the power of the Creator." The source is not contemporary, having been written a century later, around 1382. Its interpretation has been rejected as anachronistic by some historians, among them Ágoston and Peter Purton, who urge caution regarding claims of Islamic firearms use in the 1204–1324 period, since late medieval Arabic texts used the same word, naft, for both gunpowder and an earlier incendiary, naphtha. Needham believes Ibn Khaldun was speaking of fire lances rather than hand cannon.

The Ottoman Empire made good use of cannon as siege artillery. Sixty-eight super-sized bombards were used by Mehmed the Conqueror to capture Constantinople in 1453. Jim Bradbury argues that Urban, a Hungarian cannon engineer, introduced this cannon from Central Europe to the Ottoman realm; according to Paul Hammer, however, it could have been introduced from other Islamic countries which had earlier used cannons. These cannon could fire heavy stone balls a mile, and the sound of their blast could reportedly be heard from a great distance. Shkodëran historian Marin Barleti discusses Turkish bombards at length in his book De obsidione Scodrensi (1504), describing the 1478–79 siege of Shkodra in which eleven bombards and two mortars were employed. The Ottomans also used cannon to control passage of ships through the Bosphorus strait. Ottoman cannons also proved effective at stopping crusaders at Varna in 1444 and Kosovo in 1448, despite the presence of European cannon in the former case.

The similar Dardanelles Guns (named for the location) were created by Munir Ali in 1464 and were still in use during the Anglo-Turkish War (1807–1809). These were cast in bronze in two parts: the chase (the barrel) and the breech, which combined weighed 18.4 tonnes. The two parts were screwed together using levers to facilitate moving the gun.

Fathullah Shirazi, a Persian inhabitant of India who worked for Akbar in the Mughal Empire, developed a volley gun in the 16th century.

While there is evidence of cannons in Iran as early as 1405, they were not widespread. This changed following the increased use of firearms by Shah Ismail I, and the Iranian army used 500 cannons by the 1620s, probably captured from the Ottomans or acquired by allies in Europe. By 1443, Iranians were also making some of their own cannon, as Mir Khawand wrote of a 1200 kg metal piece made by an Iranian, which was most likely a cannon. Due to the difficulties of transporting cannon in mountainous terrain, their use was less common than in Europe.
Eastern Europe
Documentary evidence of cannons in Russia does not appear until 1382, and they were used only in sieges, often by the defenders. It was not until 1475, when Ivan III established the first Russian cannon foundry in Moscow, that cannons were produced natively. The earliest surviving cannon from Russia dates to 1485.

Elsewhere in the region, large cannons known as bombards, ranging from three to five feet in length, were used by Dubrovnik and Kotor in defence during the later 14th century. The first bombards were made of iron, but bronze became more prevalent as it was recognized as more stable and capable of propelling heavier stones. Around the same period, the Byzantine Empire began to accumulate its own cannon to face the Ottoman Empire, starting with medium-sized cannon of 10 in calibre. The earliest reliable recorded use of artillery in the region was against the Ottoman siege of Constantinople in 1396, forcing the Ottomans to withdraw. The Ottomans acquired their own cannon and laid siege to the Byzantine capital again in 1422. By 1453, the Ottomans used 68 Hungarian-made cannon for the 55-day bombardment of the walls of Constantinople, "hurling the pieces everywhere and killing those who happened to be nearby". The largest of their cannons was the Great Turkish Bombard, which required an operating crew of 200 men and 70 oxen, and 10,000 men to transport it. Gunpowder made the formerly devastating Greek fire obsolete, and with the final fall of Constantinople—which was protected by what were once the strongest walls in Europe—on 29 May 1453, "it was the end of an era in more ways than one".

Southeast Asia
The Javanese Majapahit Empire was arguably able to encompass much of modern-day Indonesia due to its unique mastery of bronze-smithing and use of a central arsenal fed by a large number of cottage industries within the immediate region. Cannons were introduced to Majapahit when Kublai Khan's Chinese army under the leadership of Ike Mese sought to invade Java in 1293. The History of Yuan mentions that the Mongols used a weapon called p'ao against Daha forces. This weapon is interpreted differently by researchers: it may have been a trebuchet that threw thunderclap bombs, a firearm, a cannon, or a rocket. It is possible that the gunpowder weapons carried by the Mongol–Chinese troops amounted to more than one type.

Thomas Stamford Raffles wrote in The History of Java that in 1247 saka (1325 AD), cannons were widely used in Java, especially by the Majapahit. It is recorded that the small kingdoms in Java that sought the protection of Majapahit had to hand over their cannons to the Majapahit. Majapahit under Mahapatih (prime minister) Gajah Mada (in office 1331–1364) utilized gunpowder technology obtained from the Yuan dynasty for use in its naval fleet. One of the earliest references to cannon and artillerymen in Java is from the year 1346.

Mongol–Chinese gunpowder technology of the Yuan dynasty resulted in the eastern-style cetbang, which is similar to the Chinese cannon. Swivel guns, however, developed in the archipelago only because of the close maritime relations of the Nusantara archipelago with the territory of West India after 1460 AD, which brought new types of gunpowder weapons to the archipelago, likely through Arab intermediaries. These weapons seem to have been cannon and guns of Ottoman tradition, for example the prangi, which is a breech-loading swivel gun. A new type of cetbang, called the western-style cetbang, was derived from the Turkish prangi.
Just like the prangi, this cetbang is a breech-loading swivel gun made of bronze or iron, firing single rounds or scattershots (a large number of small bullets). Cannons derived from the western-style cetbang can be found in Nusantara, among them the lantaka and lela. Most lantakas were made of bronze, and the earliest ones were breech-loaded; there was a trend toward muzzle-loading weapons during colonial times. A pole gun (bedil tombak) was recorded as being used by Java in 1413.

Portuguese and Spanish invaders were unpleasantly surprised and even outgunned on occasion. Circa 1540, the Javanese, always alert to new weapons, found the newly arrived Portuguese weaponry superior to that of the locally made variants. Majapahit-era cannon were further improved and used in the Demak Sultanate period during the Demak invasion of Portuguese Malacca. During this period, the iron for manufacturing Javanese cannon was imported from Khorasan in northern Persia. The material was known by the Javanese as wesi kurasani (Khorasan iron). When the Portuguese came to the archipelago, they referred to the local swivel gun as berço, a term also used for any breech-loading swivel gun, while the Spaniards called it verso.

Duarte Barbosa c. 1514 said that the inhabitants of Java were great masters in casting artillery and very good artillerymen. They made many one-pounder cannon, long muskets, arquebuses, hand cannon, Greek fire, guns (cannon), and other fireworks. Every place was considered excellent in casting artillery, and in the knowledge of using it. In 1513, the Javanese fleet led by Pati Unus sailed to attack Portuguese Malacca "with much artillery made in Java, for the Javanese are skilled in founding and casting, and in all works in iron, over and above what they have in India". By the early 16th century, the Javanese were already producing large guns locally; some survive to the present day and are dubbed "sacred cannon" or "holy cannon". These cannons varied between 180- and 260-pounders, weighing anywhere between 3 and 8 tons.

Cannons were used by the Ayutthaya Kingdom in 1352 during its invasion of the Khmer Empire. Within a decade large quantities of gunpowder could be found in the Khmer Empire. By the end of the century firearms were also used by the Trần dynasty. Saltpeter harvesting was recorded by Dutch and German travelers as being common in even the smallest villages, and was collected from the decomposition process of large dung hills specifically piled for the purpose. The Dutch punishment for possession of non-permitted gunpowder appears to have been amputation. Ownership and manufacture of gunpowder was later prohibited by the colonial Dutch occupiers. According to Colonel McKenzie, quoted in Sir Thomas Stamford Raffles' The History of Java (1817), the purest sulfur was supplied from a crater of a mountain near the straits of Bali.

Africa
In Africa, the Adal Sultanate and the Abyssinian Empire both deployed cannons during the Adal-Abyssinian War. The Adalites, led by Ahmed ibn Ibrahim al-Ghazi and supplied from Arabia and the wider Islamic world, were the first African power to introduce cannon warfare to the African continent. Later, as the Portuguese Empire entered the war, it supplied the Abyssinians with cannons and trained them in their use, while the Ottoman Empire sent soldiers and cannon to back Adal. The conflict proved, through their use on both sides, the value of firearms such as the matchlock musket, cannon, and the arquebus over traditional weapons.
Offensive and defensive use
While previous smaller guns could burn down structures with fire, larger cannons were so effective that engineers were forced to develop stronger castle walls to prevent their keeps from falling. Nonetheless, cannons were used for purposes other than battering down walls, as fortifications began using cannons as defensive instruments; the fort of Raichur in India, for example, had gun ports built into its walls to accommodate the use of defensive cannons. In The Art of War, Niccolò Machiavelli opined that field artillery forced an army to take up a defensive posture, which conflicted with what he considered a more ideal offensive stance. Machiavelli's concerns can be seen in the criticisms of Portuguese mortars being used in India during the sixteenth century, as lack of mobility was one of the key problems with the design. In Russia the early cannons were again placed in forts as a defensive tool. Cannon were also difficult to move around in certain types of terrain, with mountains providing a great obstacle for them; for these reasons, offensives conducted with cannons were difficult to mount in places such as Iran.

Early modern period
By the 16th century, cannons were made in a great variety of lengths and bore diameters, but the general rule was that the longer the barrel, the longer the range. Some cannons made during this time had exceptionally long barrels and were extremely heavy. Consequently, large amounts of gunpowder were needed to allow them to fire stone balls several hundred yards. By mid-century, European monarchs began to classify cannons to reduce the confusion. Henry II of France opted for six sizes of cannon, but others settled for more; the Spanish used twelve sizes, and the English sixteen. They are, from largest to smallest: the cannon royal, cannon, cannon serpentine, bastard cannon, demicannon, pedrero, culverin, basilisk, demiculverin, bastard culverin, saker, minion, falcon, falconet, serpentine, and rabinet.

Better powder had been developed by this time as well. Instead of the finely ground powder used by the first bombards, a "corned" variety of coarse grains came into use. This coarse powder had pockets of air between grains, allowing fire to travel through and ignite the entire charge quickly and uniformly.

The end of the Middle Ages saw the construction of larger, more powerful cannon, as well as their spread throughout the world. As they were not effective at breaching the newer fortifications resulting from the development of cannon, siege engines—such as siege towers and trebuchets—became less widely used. However, wooden "battery-towers" took on a similar role as siege towers in the gunpowder age—such as that used at the Siege of Kazan in 1552, which could hold ten large-calibre cannon, in addition to 50 lighter pieces. Another notable effect of cannon on warfare during this period was the change in conventional fortifications. Niccolò Machiavelli wrote, "There is no wall, whatever its thickness that artillery will not destroy in only a few days." Although castles were not immediately made obsolete by cannon, their use and importance on the battlefield rapidly declined. Instead of majestic towers and merlons, the walls of new fortresses were thick, angled, and sloped, while towers became low and stout; increasing use was also made of earth and brick in breastworks and redoubts.
These new defences became known as bastion forts, after their characteristic shape, which attempted to force any advance towards the fort directly into the firing line of the guns. A few of these featured cannon batteries, such as the House of Tudor's Device Forts in England. Bastion forts soon replaced castles in Europe and, eventually, those in the Americas as well.

By the end of the 15th century, several technological advancements made cannons more mobile. Wheeled gun carriages and trunnions became common, and the invention of the limber further facilitated transportation. As a result, field artillery became more viable, and began to see more widespread use, often alongside the larger cannons intended for sieges. Better gunpowder, cast-iron projectiles (replacing stone), and the standardisation of calibres meant that even relatively light cannons could be deadly. In The Art of War, Niccolò Machiavelli observed that "It is true that the arquebuses and the small artillery do much more harm than the heavy artillery." This was the case at the Battle of Flodden, in 1513: the English field guns outfired the Scottish siege artillery, firing two or three times as many rounds. Despite the increased maneuverability, however, cannon were still the slowest component of the army: a heavy English cannon required 23 horses to transport, while a culverin needed nine. Even with this many animals pulling, they still moved at a walking pace. Due to their relatively slow speed, lack of organisation, and undeveloped tactics, the combination of pike and shot still dominated the battlefields of Europe.

Innovations continued, notably the German invention of the mortar, a thick-walled, short-barrelled gun that blasted shot upward at a steep angle. Mortars were useful for sieges, as they could hit targets behind walls or other defences. This type of cannon found more use with the Dutch, who learnt to shoot bombs filled with powder from them. Setting the bomb fuse was a problem. "Single firing" was first used to ignite the fuse, where the bomb was placed with the fuse down against the cannon's propellant. This often resulted in the fuse being blown into the bomb, causing it to blow up as it left the mortar. Because of this, "double firing" was tried, where the gunner lit the fuse and then the touch hole. This, however, required considerable skill and timing, and was especially dangerous if the gun misfired, leaving a lighted bomb in the barrel. Not until 1650 was it accidentally discovered that double-lighting was superfluous, as the heat of firing would light the fuse.

Gustavus Adolphus of Sweden emphasised the use of light cannon and mobility in his army, and created new formations and tactics that revolutionised artillery. He discontinued using all 12 pounder—or heavier—cannon as field artillery, preferring, instead, to use cannons that could be handled by only a few men. One obsolete type of gun, the "leatheren", was replaced by 4 pounder and 9 pounder demi-culverins. These could be operated by three men, and pulled by only two horses. Gustavus Adolphus's army was also the first to use a cartridge that contained both powder and shot, which sped up reloading and increased the rate of fire. Finally, against infantry he pioneered the use of canister shot—essentially a tin can filled with musket balls. Until then there was no more than one cannon for every thousand infantrymen on the battlefield, but Gustavus Adolphus increased the number of cannons sixfold.
Each regiment was assigned two pieces, though he often arranged them into batteries instead of distributing them piecemeal. He used these batteries to break his opponent's infantry line, while his cavalry would outflank their heavy guns. At the Battle of Breitenfeld, in 1631, Adolphus proved the effectiveness of the changes made to his army, by defeating Johann Tserclaes, Count of Tilly. Although severely outnumbered, the Swedes were able to fire between three and five times as many volleys of artillery, and their infantry's linear formations helped ensure they did not lose any ground. Battered by cannon fire, and low on morale, Tilly's men broke ranks and fled.

In England, cannons were used to besiege various fortified buildings during the English Civil War. Nathaniel Nye is recorded as testing a Birmingham cannon in 1643 and experimenting with a saker in 1645. From 1645 he was the master gunner to the Parliamentarian garrison at Evesham, and in 1646 he successfully directed the artillery at the Siege of Worcester, detailing his experiences in his 1647 book The Art of Gunnery. Believing that war was as much a science as an art, his explanations focused on triangulation, arithmetic, theoretical mathematics, and cartography as well as practical considerations such as the ideal specification for gunpowder or slow matches. His book acknowledged mathematicians such as Robert Recorde and Marcus Jordanus as well as earlier military writers on artillery such as Niccolò Fontana Tartaglia and Thomas (or Francis) Malthus (author of A Treatise on Artificial Fire-Works).

Around this time also came the idea of aiming the cannon to hit a target. Gunners controlled the range of their cannons by measuring the angle of elevation, using a "gunner's quadrant". Cannons did not have sights; therefore, even with measuring tools, aiming was still largely guesswork.

In the latter half of the 17th century, the French engineer Sébastien Le Prestre de Vauban introduced a more systematic and scientific approach to attacking gunpowder fortresses, in a time when many field commanders "were notorious dunces in siegecraft". Careful sapping forward, supported by enfilading ricochets, was a key feature of this system, and it even allowed Vauban to calculate the length of time a siege would take. He was also a prolific builder of bastion forts, and did much to popularize the idea of "depth in defence" in the face of cannon. These principles were followed into the mid-19th century, when changes in armaments necessitated greater depth defence than Vauban had provided for. It was only in the years prior to World War I that new works began to break radically away from his designs.

18th and 19th centuries
The lower tier of 17th-century English ships of the line were usually equipped with demi-cannons, heavy guns firing solid shot. Demi-cannons were capable of firing these heavy metal balls with such force that they could penetrate more than a metre of solid oak from a considerable distance, and could dismast even the largest ships at close range. Full cannon fired an even heavier shot, but were discontinued by the 18th century, as they were too unwieldy. By the end of the 18th century, principles long adopted in Europe specified the characteristics of the Royal Navy's cannon, as well as the acceptable defects, and their severity. The United States Navy tested guns by measuring them, firing them two or three times—termed "proof by powder"—and using pressurized water to detect leaks.
The carronade was adopted by the Royal Navy in 1779; the lower muzzle velocity of the round shot when fired from this cannon was intended to create more wooden splinters when hitting the structure of an enemy vessel, as they were believed to be more deadly than the ball by itself. The carronade was much shorter, and weighed between a third and a quarter as much as the equivalent long gun; for example, a 32-pounder carronade weighed less than a ton, compared with a 32-pounder long gun, which weighed over 3 tons. The guns were, therefore, easier to handle, and also required less than half as much gunpowder, allowing fewer men to crew them. Carronades were manufactured in the usual naval gun calibres, but were not counted in a ship of the line's rated number of guns. As a result, the classification of Royal Navy vessels in this period can be misleading, as they often carried more cannons than were listed.

Cannons were crucial in Napoleon's rise to power, and continued to play an important role in his army in later years. During the French Revolution, the unpopularity of the Directory led to riots and rebellions. When over 25,000 royalists led by General Danican assaulted Paris, Paul Barras was appointed to defend the capital; outnumbered five to one and disorganised, the Republicans were desperate. When Napoleon arrived, he reorganised the defences but realised that without cannons the city could not be held. He ordered Joachim Murat to bring the guns from the Sablons artillery park; the Major and his cavalry fought their way to the recently captured cannons, and brought them back to Napoleon. When Danican's poorly trained men attacked, on 13 Vendémiaire Year IV (5 October 1795 in the Gregorian calendar), Napoleon ordered his cannon to fire grapeshot into the mob, an act that became known as the "whiff of grapeshot". The slaughter effectively ended the threat to the new government, while, at the same time, making Bonaparte a famous—and popular—public figure. Among the first generals to recognise that artillery was not being used to its full potential, Napoleon often massed his cannon into batteries and introduced several changes into the French artillery, improving it significantly and making it among the finest in Europe. Such tactics were successfully used by the French, for example, at the Battle of Friedland, when 66 guns fired a total of 3,000 roundshot and 500 rounds of grapeshot, inflicting severe casualties on the Russian forces, whose losses numbered over 20,000 killed and wounded.

At the Battle of Waterloo—Napoleon's final battle—the French army had many more artillery pieces than either the British or Prussians. As the battlefield was muddy, recoil caused cannons to bury themselves into the ground after firing, resulting in slow rates of fire, as more effort was required to move them back into an adequate firing position; also, roundshot did not ricochet with as much force from the wet earth. Despite the drawbacks, sustained artillery fire proved deadly during the engagement, especially during the French cavalry attack. The British infantry, having formed infantry squares, took heavy losses from the French guns, while their own cannons fired at the cuirassiers and lancers when they fell back to regroup. Eventually, the French ceased their assault, after taking heavy losses from the British cannon and musket fire.

In the 1810s and 1820s, greater emphasis was placed on the accuracy of long-range gunfire, and less on the weight of a broadside.
Around 1822, George Marshall wrote Marshall's Practical Marine Gunnery. The book was used by cannon operators in the United States Navy throughout the 19th century, and listed the types of cannon then in use along with instructions for operating them. The carronade, although initially very successful and widely adopted, disappeared from the Royal Navy in the 1850s after the development of wrought-iron-jacketed steel cannon by William Armstrong and Joseph Whitworth. Nevertheless, carronades were used in the American Civil War.

Western cannons during the 19th century became larger, more destructive, more accurate, and could fire at longer range. One example is the American wrought-iron, muzzle-loading rifle, or Griffen gun (usually called the 3-inch Ordnance Rifle), used during the American Civil War, which had a long effective range. Another is the smoothbore 12-pounder Napoleon, which originated in France in 1853 and was widely used by both sides in the American Civil War. This cannon was renowned for its sturdiness, reliability, firepower, flexibility, relatively light weight, and range.

The practice of rifling—casting spiralling lines inside the cannon's barrel—was applied to artillery more frequently by 1855, as it gave cannon projectiles gyroscopic stability, which improved their accuracy. One of the earliest rifled cannons was the breech-loading Armstrong Gun—also invented by William Armstrong—which boasted significantly greater range, accuracy, and power than earlier weapons. The projectile fired from the Armstrong gun could reportedly pierce through a ship's side and explode inside the enemy vessel, causing increased damage and casualties. The British military adopted the Armstrong gun, and was impressed; the Duke of Cambridge even declared that it "could do everything but speak". Despite being significantly more advanced than its predecessors, the Armstrong gun was rejected soon after its integration, in favour of the muzzle-loading pieces that had been in use before. While both types of gun were effective against wooden ships, neither had the capability to pierce the armour of ironclads; due to reports of slight problems with the breeches of the Armstrong gun, and their higher cost, the older muzzle-loaders were selected to remain in service instead. Realising that iron was more difficult to pierce with breech-loaded cannons, Armstrong designed rifled muzzle-loading guns, which proved successful; The Times reported: "even the fondest believers in the invulnerability of our present ironclads were obliged to confess that against such artillery, at such ranges, their plates and sides were almost as penetrable as wooden ships."

The superior cannon of the Western world brought them tremendous advantages in warfare. For example, in the First Opium War in China, during the 19th century, British battleships bombarded the coastal areas and fortifications from afar, safe from the reach of the Chinese cannons. Similarly, the shortest war in recorded history, the Anglo-Zanzibar War of 1896, was brought to a swift conclusion by shelling from British cruisers. The cynical attitude towards recruited infantry in the face of ever more powerful field artillery is the source of the term cannon fodder, first used by François-René de Chateaubriand, in 1814; however, the concept of regarding soldiers as nothing more than "food for powder" was mentioned by William Shakespeare as early as 1598, in Henry IV, Part 1.
20th and 21st centuries
Cannons in the 20th and 21st centuries are usually divided into sub-categories and given separate names. Some of the most widely used types of modern cannon are howitzers, mortars, guns, and autocannon, although a few very large-calibre cannon, custom-designed, have also been constructed. Nuclear artillery was experimented with, but was abandoned as impractical. Modern artillery is used in a variety of roles, depending on its type. According to NATO, the general role of artillery is to provide fire support, which is defined as "the application of fire, coordinated with the manoeuvre of forces to destroy, neutralize, or suppress the enemy".

When referring to cannons, the term gun is often used incorrectly. In military usage, a gun is a cannon with a high muzzle velocity and a flat trajectory, useful for hitting the sides of targets such as walls, as opposed to howitzers or mortars, which have lower muzzle velocities, and fire indirectly, lobbing shells up and over obstacles to hit the target from above.

By the early 20th century, infantry weapons had become more powerful, forcing most artillery away from the front lines. Despite the change to indirect fire, cannons proved highly effective during World War I, directly or indirectly causing over 75% of casualties. The onset of trench warfare after the first few months of World War I greatly increased the demand for howitzers, as they were better suited to hitting targets in trenches. Furthermore, their shells carried more explosives than those of guns, and caused considerably less barrel wear. The German army had the advantage here, as they began the war with many more howitzers than the French. World War I also saw the use of the Paris Gun, the longest-ranged gun ever fired; it was used by the Germans against Paris and could hit targets at ranges far exceeding those of any earlier gun.

The Second World War sparked new developments in cannon technology. Among them were sabot rounds, hollow-charge projectiles, and proximity fuses, all of which increased the effectiveness of cannon against specific targets. The proximity fuse emerged on the battlefields of Europe in late December 1944. Used to great effect in anti-aircraft projectiles, proximity fuses were fielded in both the European and Pacific Theatres of Operations; they were particularly useful against V-1 flying bombs and kamikaze planes. Although widely used in naval warfare, and in anti-air guns, both the British and Americans feared that unexploded proximity fuses would be reverse engineered, leading them to limit their use in continental battles. During the Battle of the Bulge, however, the fuses became known as the American artillery's "Christmas present" for the German army because of their effectiveness against German personnel in the open, when they frequently dispersed attacks.

Anti-tank guns were also tremendously improved during the war: in 1939, the British used primarily 2 pounder and 6 pounder guns. By the end of the war, 17 pounders had proven much more effective against German tanks, and 32 pounders had entered development. Meanwhile, German tanks were continuously upgraded with better main guns, in addition to other improvements. For example, the Panzer III was originally designed with a 37 mm gun, but was mass-produced with a 50 mm cannon.
To counter the threat of the Russian T-34s, another, more powerful 50 mm gun was introduced, only to give way to a larger 75 mm cannon, which was in a fixed mount as the StuG III, the most-produced German World War II armoured fighting vehicle of any type. Despite the improved guns, production of the Panzer III was ended in 1943, as the tank still could not match the T-34, and was replaced by the Panzer IV and Panther tanks. In 1944, the 8.8 cm KwK 43, and many variations, entered service with the Wehrmacht; it was used both as a tank main gun and, as the PaK 43, as an anti-tank gun. One of the most powerful guns to see service in World War II, it was capable of destroying any Allied tank at very long ranges.

Despite being designed to fire at trajectories with a steep angle of descent, howitzers can be fired directly, as was done by the 11th Marine Regiment at the Battle of Chosin Reservoir, during the Korean War. Two field batteries fired directly upon a battalion of Chinese infantry; the Marines were forced to brace themselves against their howitzers, as they had no time to dig them in. The Chinese infantry took heavy casualties, and were forced to retreat.

The tendency to create larger calibre cannons during the World Wars has since reversed. The United States Army, for example, sought a lighter, more versatile howitzer to replace their ageing pieces. As it could be towed, the M198 was selected to be the successor to the World War II–era cannons used at the time, and entered service in 1979. Still in use today, the M198 is, in turn, being slowly replaced by the M777 Ultralightweight howitzer, which weighs nearly half as much and can be more easily moved. Although land-based artillery such as the M198 are powerful, long-ranged, and accurate, naval guns have not been neglected, despite being much smaller than in the past, and, in some cases, having been replaced by cruise missiles. However, the Zumwalt-class destroyer's planned armament included the Advanced Gun System (AGS), a pair of 155 mm guns, which fire the Long Range Land-Attack Projectile. The rocket-assisted warhead extended the effective range beyond even that of the Paris Gun, with a small circular error probable. The AGS's barrels would be water cooled, and would fire 10 rounds per minute per gun. The combined firepower from both turrets would give a Zumwalt-class destroyer firepower equivalent to 18 conventional M198 howitzers, as the throughput sketch below illustrates. The reason for the re-integration of cannons as a main armament in United States Navy ships was that satellite-guided munitions fired from a gun are less expensive than cruise missiles but offer similar guidance capability.
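The "18 howitzers" comparison above is essentially a rate-of-fire calculation. The following minimal Python sketch reproduces it under one loudly flagged assumption: that a towed 155 mm howitzer sustains roughly one round per minute, an illustrative figure chosen only to make the arithmetic visible, not a published M198 specification.

# Throughput sketch for the "18 howitzers" comparison above.
# The AGS figures (2 guns, 10 rounds/min each) are stated in the text;
# the howitzer's sustained rate is an ASSUMED illustrative figure,
# not a published M198 specification.

AGS_GUNS = 2
AGS_ROUNDS_PER_MIN = 10          # per gun, as stated above
HOWITZER_ROUNDS_PER_MIN = 1.1    # assumed sustained rate

ship_rounds_per_min = AGS_GUNS * AGS_ROUNDS_PER_MIN
howitzer_equivalents = ship_rounds_per_min / HOWITZER_ROUNDS_PER_MIN

print(f"ship throughput: {ship_rounds_per_min} rounds/min")
print(f"howitzer equivalents: {howitzer_equivalents:.0f}")  # ~18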
Autocannon
Autocannons have an automatic firing mode, similar to that of a machine gun. They have mechanisms to automatically load their ammunition, and therefore have a higher rate of fire than artillery, often approaching, or, in the case of rotary autocannons, even surpassing the firing rate of a machine gun. While there is no minimum bore for autocannons, they are generally larger than machine guns, typically 20 mm or greater since World War II, and are usually capable of using explosive ammunition, even if it is not always used. Machine guns, in contrast, are usually too small to use explosive ammunition; such ammunition is additionally banned in international conflict for the parties to the Saint Petersburg Declaration of 1868. Most nations use rapid-fire cannon on light vehicles, replacing a more powerful, but heavier, tank gun.

A typical autocannon is the 25 mm "Bushmaster" chain gun, mounted on the LAV-25 and M2 Bradley armoured vehicles. Autocannons may be capable of a very high rate of fire, but ammunition is heavy and bulky, limiting the amount carried. For this reason, both the 25 mm Bushmaster and the 30 mm RARDEN are deliberately designed with relatively low rates of fire. The typical rate of fire for a modern autocannon ranges from 90 to 1,800 rounds per minute. Systems with multiple barrels, such as a rotary autocannon, can have rates of fire of several thousand rounds per minute; the fastest of these is the GSh-6-23, which has a rate of fire of over 10,000 rounds per minute. Autocannons are often found in aircraft, where they replaced machine guns, and as shipboard anti-aircraft weapons, as they provide greater destructive power than machine guns.

Aircraft use
The first documented installation of a cannon on an aircraft was on the Voisin Canon in 1911, displayed at the Paris Exposition that year. By World War I, all of the major powers were experimenting with aircraft-mounted cannons; however, their low rate of fire and great size and weight precluded any of them from being anything other than experimental. The most successful (or least unsuccessful) was the SPAD 12 Ca.1 with a single 37mm Puteaux mounted to fire between the cylinder banks and through the propeller boss of the aircraft's Hispano-Suiza 8C. The pilot (by necessity an ace) had to manually reload each round.

The first autocannon were developed during World War I as anti-aircraft guns, and one of these, the Coventry Ordnance Works "COW 37 mm gun", was installed in an aircraft. However, the war ended before it could be given a field trial, and it never became standard equipment in a production aircraft. Later trials had it fixed at a steep angle upwards in both the Vickers Type 161 and the Westland C.O.W. Gun Fighter, an idea that would return later.

During this period autocannons became available, and several fighters of the German Luftwaffe and the Imperial Japanese Navy Air Service were fitted with 20 mm cannons. They continued to be installed as an adjunct to machine guns rather than as a replacement, as the rate of fire was still too low and the complete installation too heavy. There was some debate in the RAF as to whether the greater number of rounds fired from a machine gun or the smaller number of explosive rounds from a cannon was preferable. Improvements in rate of fire during the war allowed the cannon to displace the machine gun almost entirely. The cannon was more effective against armour, so cannons were increasingly used during the course of World War II; newer fighters such as the Hawker Tempest usually carried two or four, versus the six .50 Browning machine guns of US aircraft or the eight to twelve M1919 Browning machine guns of earlier British aircraft. The Hispano-Suiza HS.404, Oerlikon 20 mm cannon, MG FF, and their numerous variants became among the most widely used autocannon in the war. Cannons, as with machine guns, were generally fixed to fire forwards (mounted in the wings, in the nose or fuselage, or in a pannier under either); or were mounted in gun turrets on heavier aircraft. Both the Germans and Japanese mounted cannons to fire upwards and forwards for use against heavy bombers, with the Germans calling guns so installed Schräge Musik. The term derives from a German colloquialism for jazz music (schräg means "off-key").
Preceding the Vietnam War, the high speeds aircraft were attaining led to a move to remove the cannon, due to the mistaken belief that it would be useless in a dogfight; combat experience during the Vietnam War showed conclusively that, despite advances in missiles, there was still a need for cannon. Nearly all modern fighter aircraft are armed with an autocannon, and they are also commonly found on ground-attack aircraft. One of the most powerful examples is the 30mm GAU-8/A Avenger Gatling-type rotary cannon, mounted exclusively on the Fairchild Republic A-10 Thunderbolt II. The Lockheed AC-130 gunship (a converted transport) can carry a 105 mm howitzer as well as a variety of autocannons ranging up to 40 mm. Both are used in the close air support role.

Materials, parts, and terms
Cannons in general have the form of a truncated cone with an internal cylindrical bore for holding an explosive charge and a projectile. The thickest, strongest, and closed part of the cone is located near the explosive charge. As any explosive charge will dissipate in all directions equally, the thickest portion of the cannon is useful for containing and directing this force. The backward motion of the cannon as its projectile leaves the bore is termed its recoil, and the effectiveness of the cannon can be measured in terms of how much this response can be diminished, though obviously diminishing recoil through increasing the overall mass of the cannon means decreased mobility (a short worked sketch of this momentum balance follows the list of parts below).

Field artillery cannon in Europe and the Americas were initially made most often of bronze, though later forms were constructed of cast iron and eventually steel. Bronze has several characteristics that made it preferable as a construction material: although it is relatively expensive, does not always alloy well, and can result in a final product that is "spongy about the bore", bronze is more flexible than iron and therefore less prone to bursting when exposed to high pressure. Cast-iron cannon are less expensive and generally more durable than bronze, and withstand being fired more times without deteriorating. However, cast-iron cannon have a tendency to burst without having shown any previous weakness or wear, and this makes them more dangerous to operate.

The older and more-stable forms of cannon were muzzle-loading as opposed to breech-loading—to be used, they had to have their ordnance packed down the bore through the muzzle rather than inserted through the breech. The following terms refer to the components or aspects of a classical western cannon (c. 1850). In what follows, the words near, close, and behind will refer to those parts towards the thick, closed end of the piece, and far, front, in front of, and before to the thinner, open end.

Negative spaces
Bore: The hollow cylinder bored down the centre of the cannon, including the base of the bore or bottom of the bore, the nearest end of the bore into which the ordnance (wadding, shot, etc.) gets packed. The diameter of the bore represents the cannon's calibre.
Chamber: The cylindrical, conical, or spherical recess at the nearest end of the bottom of the bore into which the gunpowder is packed.
Vent: A thin tube on the near end of the cannon connecting the explosive charge inside with an ignition source outside, often filled with a length of fuse; always located near the breech. Sometimes called the fuse hole or the touch hole. On the top of the vent on the outside of the cannon is a flat circular space called the vent field, where the charge is lit. If the cannon is bronze, it will often have a vent piece made of copper screwed into the length of the vent.
Solid spaces
The main body of a cannon consists of three basic extensions: the foremost and longest is called the chase, the middle portion is the reinforce, and the closest and briefest portion is the cascabel or cascable.
The chase: the entire conical part of the cannon in front of the reinforce. It is the longest portion of the cannon, and includes the following elements:
The neck: the narrowest part of the chase, always located near the foremost end of the piece.
The muzzle: the portion of the chase forward of the neck. It includes the following:
The swell of the muzzle: the slight swell in the diameter of the piece at the very end of the chase. It is often chamfered on the inside to make loading the cannon easier. In some guns, this element is replaced with a wide ring and is called a muzzle band.
The face: the flat vertical plane at the foremost edge of the muzzle (and of the entire piece).
The muzzle mouldings: the tiered rings which connect the face with the rest of the muzzle, the first of which is called the lip and the second the fillet.
The muzzle astragal and fillets: a series of three narrow rings running around the outside of the chase just behind the neck; sometimes also collectively called the chase ring.
The chase astragal and fillets: a second series of such rings located at the near end of the chase.
The chase girdle: the brief length of the chase between the chase astragal and fillets and the reinforce.
The reinforce: this portion of the piece is frequently divided into a first reinforce and a second reinforce, but in any case is marked as separate from the chase by the presence of a narrow circular reinforce ring or band at its foremost end. The span of the reinforce also includes the following:
The trunnions: located at the foremost end of the reinforce, just behind the reinforce ring. They consist of two cylinders, perpendicular to and below the bore, which are used to mount the cannon on its carriage.
The rimbases: short broad rings located at the union of the trunnions and the cannon, which provide support to the carriage attachment.
The reinforce band: only present if the cannon has two reinforces; it divides the first reinforce from the second.
The breech: the mass of solid metal behind the bottom of the bore, extending to the base of the breech and including the base ring; it also generally refers to the end of the cannon opposite the muzzle, i.e., the location where the explosion of the gunpowder begins, as opposed to the opening through which the pressurized gas escapes.
The base ring: forms a ring at the widest part of the entire cannon, at the nearest end of the reinforce just before the cascabel.
The cascabel: the portion of the cannon behind the reinforce(s) and behind the base ring. It includes the following:
The knob: the small spherical terminus of the piece.
The neck: a short, narrow piece of metal holding out the knob.
The fillet: the tiered disk connecting the neck of the cascabel to the base of the breech.
The base of the breech: the metal disk that forms the foremost part of the cascabel and rests against the breech itself, right next to the base ring.
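As a rough illustration of the recoil trade-off noted above, the following minimal Python sketch applies conservation of momentum to a free-standing smoothbore gun. All the masses and the muzzle velocity are illustrative assumptions, not figures for any historical piece, and the model ignores the momentum of the powder gases.

# Recoil sketch: conservation of momentum, m_ball * v_ball = m_gun * v_recoil.
# All figures are ASSUMED for illustration, not data for any historical gun.

def recoil_velocity(ball_mass_kg, muzzle_velocity_ms, gun_mass_kg):
    """Backward speed of a free-standing gun, ignoring powder-gas momentum."""
    return ball_mass_kg * muzzle_velocity_ms / gun_mass_kg

BALL_MASS = 10.0         # kg, assumed
MUZZLE_VELOCITY = 400.0  # m/s, assumed

for gun_mass in (1500.0, 3000.0):
    v = recoil_velocity(BALL_MASS, MUZZLE_VELOCITY, gun_mass)
    print(f"{gun_mass:6.0f} kg gun recoils at {v:.2f} m/s")

# Doubling the gun's mass halves its recoil speed: the heavier piece
# is steadier when fired, but correspondingly harder to move.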
To fire the cannon, the fuse located in the vent is lit, quickly burning down to the gunpowder, which then explodes violently, propelling wadding and ball down the bore and out of the muzzle. A small portion of exploding gas also escapes through the vent, but this does not dramatically affect the total force exerted on the ball. Any large, smoothbore, muzzle-loading gun—used before the advent of breech-loading, rifled guns—may be referred to as a cannon, though once standardised names were assigned to different-sized cannon, the term specifically referred to a gun designed to fire a shot, as distinct from a demi-cannon, culverin, or demi-culverin. Gun specifically refers to a type of cannon that fires projectiles at high speeds, and usually at relatively low angles; they have been used in warships, and as field artillery. The term cannon is also used for autocannon, a modern repeating weapon firing explosive projectiles. Cannon have been used extensively in fighter aircraft since World War II. Operation In the 1770s, cannon operation worked as follows: each cannon would be manned by two gunners, six soldiers, and four officers of artillery. The right gunner was to prime the piece and load it with powder, and the left gunner would fetch the powder from the magazine and be ready to fire the cannon at the officer's command. On each side of the cannon, three soldiers stood, to ram and sponge the cannon, and hold the ladle. The second soldier on the left was tasked with providing 50 bullets. Before loading, the cannon would be cleaned with a wet sponge to extinguish any smouldering material from the last shot, since fresh powder could be set off prematurely by lingering ignition sources. The powder was added, followed by wadding of paper or hay, and the ball was placed in and rammed down. After ramming, the cannon would be aimed, with the elevation set using a quadrant and a plummet. At 45 degrees, the ball had the utmost range: about ten times the gun's level range (a worked example of the 45-degree rule follows this section). Any angle above a horizontal line was called random-shot. Wet sponges were used to cool the pieces every ten or twelve rounds. During the Napoleonic Wars, a British gun team consisted of five gunners: one to aim the piece, one to clean the bore with a damp sponge to quench any remaining embers before a fresh charge was introduced, and another to load the gun with a bag of powder and then the projectile. The fourth gunner pressed his thumb on the vent hole to prevent a draught that might fan a flame. The charge loaded, the fourth would prick the bagged charge through the vent hole and fill the vent with powder. On command, the fifth gunner would fire the piece with a slow match. Friction primers replaced slow match ignition by the mid-19th century. When a cannon had to be abandoned, such as in a retreat or surrender, its touch hole would be plugged flush with an iron spike, disabling the cannon (at least until metal boring tools could be used to remove the plug). This was called "spiking the cannon". A gun was said to be honeycombed when the surface of the bore had cavities, or holes in it, caused either by corrosion or casting defects. 
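The 45-degree rule is the idealized result from projectile motion without air resistance (a rough approximation for real cannonballs, whose best elevation in practice was somewhat lower because of drag). For a ball leaving the muzzle at speed $v_0$ and elevation $\theta$ over level ground,

$$R(\theta) = \frac{v_0^{2}}{g}\,\sin 2\theta,$$

which is greatest when $2\theta = 90^{\circ}$, that is at $\theta = 45^{\circ}$, giving $R_{\max} = v_0^{2}/g$.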
Legal considerations In the United States, muzzleloading cannons are not subject to any regulations at the federal level. According to the Bureau of Alcohol, Tobacco, and Firearms, muzzleloading cannons made before 1899 (and replicas) that are unable to fire fixed ammunition are considered antiques. They are not subject to the Gun Control Act (GCA) of 1968 or the National Firearms Act (NFA) of 1934. Muzzleloading cannons may, however, be subject to state or local rules in some jurisdictions. Deceptive use Historically, logs or poles have been used as decoys to mislead the enemy as to the strength of an emplacement. The "Quaker Gun trick" was used by Colonel William Washington's Continental Army during the American Revolutionary War; in 1780, approximately 100 Loyalists surrendered to them rather than face bombardment. During the American Civil War, Quaker guns were also used by the Confederates to compensate for their shortage of artillery. The decoy cannon were painted black at the "muzzle" and positioned behind fortifications to delay Union attacks on those positions. On occasion, real gun carriages were used to complete the deception. In popular culture Cannon sounds have sometimes been used in classical pieces with a military theme. One of the best known examples of such a piece is Pyotr Ilyich Tchaikovsky's 1812 Overture. The overture is to be performed using an artillery section together with the orchestra, resulting in noise levels high enough that musicians are required to wear ear protection. The cannon fire simulates the Russian artillery bombardments of the Battle of Borodino, a critical battle in Napoleon's invasion of Russia, whose defeat the piece celebrates. When the overture was first performed, the cannon were fired by an electric current triggered by the conductor. However, the overture was not recorded with real cannon fire until Mercury Records and conductor Antal Doráti's 1958 recording with the Minnesota Orchestra. Cannon fire is also frequently used annually in presentations of the 1812 Overture on American Independence Day, a tradition started by Arthur Fiedler of the Boston Pops in 1974. The hard rock band AC/DC also used cannon in their song "For Those About to Rock (We Salute You)", and in live shows replica Napoleonic cannon and pyrotechnics were used to perform the piece. A recording of that song has accompanied the firing of an authentic reproduction of an M1857 12-pounder Napoleon during Columbus Blue Jackets goal celebrations at Nationwide Arena since opening night of the 2007–08 season. The cannon is located behind the last row of section 111 and is the focal point of the team's alternate logo on its third jerseys. Cannons have been fired in touchdown celebrations by several American football teams, including the San Diego Chargers. The Pittsburgh Steelers used one only during the 1962 campaign, discontinuing it after Buddy Dial was startled by inadvertently running face-first into the cannon's smoky discharge in a 42–27 loss to the Dallas Cowboys at Forbes Field on October 21. Restoration Cannon recovered from the sea are often extensively damaged from exposure to salt water; because of this, electrolytic reduction treatment is required to forestall the process of corrosion. The cannon is then washed in deionized water to remove the electrolyte, and is treated in tannic acid, which prevents further rust and gives the metal a bluish-black colour. After this process, cannon on display may be protected from oxygen and moisture by a wax sealant. A coat of polyurethane may also be painted over the wax sealant, to prevent the wax-coated cannon from attracting dust in outdoor displays. 
In 2011, archaeologists announced that six cannon recovered from a river in Panama, which could have belonged to the legendary pirate Henry Morgan, were being studied and could eventually be displayed after going through a restoration process. External links Artillery Tactics and Combat during the Napoleonic Wars Handgonnes and Matchlocks – History of firearms to 1500 Patent for a casting ordnance – Cannon patent – Muzzle loading ordnance patent Historic Cannons Of San Francisco
https://en.wikipedia.org/wiki/Catapult
Catapult
A catapult is a ballistic device used to launch a projectile a great distance without the aid of gunpowder or other propellants – particularly various types of ancient and medieval siege engines. A catapult uses the sudden release of stored potential energy to propel its payload. Most convert tension or torsion energy that was more slowly and manually built up within the device before release, via springs, bows, twisted rope, elastic, or any of numerous other materials and mechanisms. In use since ancient times, the catapult has proven to be one of the most persistently effective mechanisms in warfare. In modern times the term can apply to devices ranging from a simple hand-held implement (also called a "slingshot") to a mechanism for launching aircraft from a ship. The earliest catapults date to at least the 7th century BC, with King Uzziah of Judah recorded as equipping the walls of Jerusalem with machines that shot "great stones". Catapults are mentioned in the Yajurveda under the name "Jyah" in chapter 30, verse 7. In the 5th century BC the mangonel, a type of traction trebuchet and catapult, appeared in ancient China. Early use was also attributed to Ajatashatru of Magadha in his 5th-century BC war against the Licchavis. Greek catapults were invented in the early 4th century BC, being attested by Diodorus Siculus as part of the equipment of a Greek army in 399 BC, and subsequently used at the siege of Motya in 397 BC. Etymology The word 'catapult' comes from the Latin 'catapulta', which in turn comes from the Greek (katapeltēs), itself from κατά (kata), "downwards", and πάλλω (pallō), "to toss, to hurl". Catapults were invented by the ancient Greeks and in ancient India, where they were used by the Magadhan emperor Ajatashatru around the early to mid 5th century BC. Greek and Roman catapults The catapult and crossbow in Greece are closely intertwined. Primitive catapults were essentially "the product of relatively straightforward attempts to increase the range and penetrating power of missiles by strengthening the bow which propelled them". The historian Diodorus Siculus (fl. 1st century BC) described the invention of a mechanical arrow-firing catapult (katapeltikon) by a Greek task force in 399 BC. The weapon was soon after employed against Motya (397 BC), a key Carthaginian stronghold in Sicily. Diodorus is assumed to have drawn his description from the highly rated history of Philistus, a contemporary of the events. The introduction of crossbows, however, can be dated further back: according to the inventor Hero of Alexandria (fl. 1st century AD), who referred to the now lost works of the 3rd-century BC engineer Ctesibius, this weapon was inspired by an earlier foot-held crossbow, called the gastraphetes, which could store more energy than the Greek bows. A detailed description of the gastraphetes, or the "belly-bow", along with a watercolor drawing, is found in Heron's technical treatise Belopoeica. A third Greek author, Biton (fl. 2nd century BC), whose reliability has been positively reevaluated by recent scholarship, described two advanced forms of the gastraphetes, which he credits to Zopyros, an engineer from southern Italy. Zopyrus has been plausibly equated with a Pythagorean of that name who seems to have flourished in the late 5th century BC. He probably designed his bow-machines on the occasion of the sieges of Cumae and Milet between 421 BC and 401 BC. The bows of these machines already featured a winched pull-back system and could apparently throw two missiles at once. Philo of Byzantium provides probably the most detailed account of the establishment of a theory of belopoietics (belos = "projectile"; poietike = "(art) of making") circa 200 BC. The central principle of this theory was that "all parts of a catapult, including the weight or length of the projectile, were proportional to the size of the torsion springs". 
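Modern historians of ancient artillery (notably E. W. Marsden) commonly render the Hellenistic calibration rule for stone-throwing engines in the form below; this is a modern reconstruction of the proportionality principle, not a formula quoted verbatim from Philo:

$$D = 1.1\,\sqrt[3]{100\,M}$$

where $D$ is the diameter of the torsion-spring hole in dactyls (a dactyl is roughly 19 mm) and $M$ is the mass of the stone shot in minas (roughly 0.44 kg); the other dimensions of the engine were then laid out as fixed multiples of $D$.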
This kind of innovation is indicative of the increasing rate at which geometry and physics were being assimilated into military enterprises. From the mid-4th century BC onwards, evidence of the Greek use of arrow-shooting machines becomes more dense and varied: arrow-firing machines (katapaltai) are briefly mentioned by Aeneas Tacticus in his treatise on siegecraft written around 350 BC. An extant inscription from the Athenian arsenal, dated between 338 and 326 BC, lists a number of stored catapults with shooting bolts of varying size and springs of sinews. The latter entry is particularly noteworthy as it constitutes the first clear evidence for the switch to torsion catapults, which are more powerful than the flexible crossbows and which came to dominate Greek and Roman artillery design thereafter. This move to torsion springs was likely spurred by the engineers of Philip II of Macedonia. Another Athenian inventory from 330 to 329 BC includes catapult bolts with heads and flights. As the use of catapults became more commonplace, so did the training required to operate them. Many Greek children were instructed in catapult usage, as evidenced by "a 3rd Century B.C. inscription from the island of Ceos in the Cyclades [regulating] catapult shooting competitions for the young". Arrow-firing machines in action are reported from Philip II's siege of Perinth (Thrace) in 340 BC. At the same time, Greek fortifications began to feature high towers with shuttered windows in the top, which could have been used to house anti-personnel arrow shooters, as in Aigosthena. Projectiles included both arrows and (later) stones that were sometimes lit on fire. Onomarchus of Phocis first used catapults on the battlefield against Philip II of Macedon. Philip's son, Alexander the Great, was the next commander in recorded history to make such use of catapults on the battlefield, as well as to use them during sieges. The Romans started to use catapults as arms for their wars against Syracuse, Macedon, Sparta and Aetolia (3rd and 2nd centuries BC). The Roman machine known as an arcuballista was similar to a large crossbow. Later the Romans used ballista catapults on their warships. Other ancient catapults In chronological order: 19th century BC, Egypt: walls of the fortress of Buhen appear to contain platforms for siege weapons. c. 750 BC, Judah: King Uzziah is documented as having overseen the construction of machines to "shoot great stones". Between 484 and 468 BC, India: Ajatashatru is recorded in Jaina texts as having used catapults in his campaign against the Licchavis. Between 500 and 300 BC, China: recorded use of mangonels. They were probably used by the Mohists as early as the 4th century BC, descriptions of which can be found in the Mojing (compiled in the 4th century BC). In Chapter 14 of the Mojing, the mangonel is described hurling hollowed-out logs filled with burning charcoal at enemy troops. 
The mangonel was carried westward by the Avars and appeared next in the eastern Mediterranean by the late 6th century AD, where it replaced torsion-powered siege engines such as the ballista and onager due to its simpler design and faster rate of fire. The Byzantines adopted the mangonel possibly as early as 587, the Persians in the early 7th century, and the Arabs in the second half of the 7th century. The Franks and Saxons adopted the weapon in the 8th century. Medieval catapults Castles and fortified walled cities were common during this period, and catapults were used as siege weapons against them. As well as their use in attempts to breach walls, incendiary missiles, diseased carcasses, or garbage could be catapulted over the walls. Defensive techniques in the Middle Ages progressed to a point that rendered catapults largely ineffective. The Viking siege of Paris (885–6 AD) "saw the employment by both sides of virtually every instrument of siege craft known to the classical world, including a variety of catapults", to little effect, resulting in failure. The most widely used catapults throughout the Middle Ages were as follows: Ballista Ballistae were similar to giant crossbows and were designed to work through torsion. The projectiles were large arrows or darts made from wood with an iron tip. These arrows were then shot "along a flat trajectory" at a target. Ballistae were accurate, but lacked firepower compared with that of a mangonel or trebuchet. Because of their immobility, most ballistae were constructed on site following a siege assessment by the commanding military officer. Springald The springald's design resembles that of the ballista, being a crossbow powered by tension. The springald's frame was more compact, allowing for use inside tighter confines, such as the inside of a castle or tower, but compromising its power. Mangonel This machine was designed to throw heavy projectiles from a "bowl-shaped bucket at the end of its arm". Mangonels were mostly used for "firing various missiles at fortresses, castles, and cities", with a range of up to 1300 feet. These missiles included anything from stones to excrement to rotting carcasses. Mangonels were relatively simple to construct, and eventually wheels were added to increase mobility. Onager Mangonels are also sometimes referred to as onagers. Onager catapults initially launched projectiles from a sling, which was later changed to a "bowl-shaped bucket". The word onager is derived from the Greek word onagros, "wild ass", referring to the "kicking motion and force" that were recreated in the mangonel's design. Historical records regarding onagers are scarce. The most detailed account of mangonel use is from "Eric Marsden's translation of a text written by Ammianus Marcellinus in the 4th Century AD" describing its construction and combat usage. Trebuchet Trebuchets were probably the most powerful catapult employed in the Middle Ages. The most commonly used ammunition were stones, but "darts and sharp wooden poles" could be substituted if necessary. The most effective kind of ammunition, though, involved fire, such as "firebrands, and deadly Greek Fire". Trebuchets came in two different designs: traction, which were powered by people, and counterpoise, where the people were replaced with "a weight on the short end". 
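In energy terms, a counterpoise trebuchet converts the fall of its counterweight into projectile motion. The sketch below is a minimal back-of-the-envelope model under stated assumptions, not a reconstruction of any historical engine: the masses, drop height, and efficiency figure are invented for illustration, and real machines lose much of the input energy to the arm, sling, and friction.

```python
import math

G = 9.81                   # gravitational acceleration, m/s^2

# Hypothetical parameters, chosen only to illustrate the calculation.
counterweight_kg = 6000.0  # mass of the counterweight
drop_m = 5.0               # vertical distance the counterweight falls
stone_kg = 100.0           # mass of the stone projectile
efficiency = 0.5           # assumed fraction of energy reaching the stone

# Energy released by the falling counterweight, and the share
# actually delivered to the projectile.
energy_in = counterweight_kg * G * drop_m
energy_out = efficiency * energy_in

# Launch speed from the kinetic-energy relation E = (1/2) m v^2.
launch_speed = math.sqrt(2.0 * energy_out / stone_kg)

# Ideal range for a 45-degree launch, ignoring air resistance.
ideal_range = launch_speed ** 2 / G

print(f"launch speed ~ {launch_speed:.1f} m/s")
print(f"ideal range  ~ {ideal_range:.0f} m")
```

With these invented figures the stone leaves at roughly 54 m/s and the ideal range comes out near 300 m, at least the right order of magnitude for a large medieval engine.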
The most famous historical account of trebuchet use dates back to the siege of Stirling Castle in 1304, when the army of Edward I constructed a giant trebuchet known as Warwolf, which then proceeded to "level a section of [castle] wall, successfully concluding the siege". Couillard A simplified trebuchet, where the trebuchet's single counterweight is split, swinging on either side of a central support post. Leonardo da Vinci's catapult Leonardo da Vinci sought to improve the efficiency and range of earlier designs. His design incorporated a large wooden leaf spring as an accumulator to power the catapult. Both ends of the bow are connected by a rope, similar to the design of a bow and arrow. The leaf spring was not used to pull the catapult armature directly; rather, the rope was wound around a drum. The catapult armature was attached to this drum, which would be turned until enough potential energy was stored in the deformation of the spring. The drum would then be disengaged from the winding mechanism, and the catapult arm would snap around. Though no records exist of this design being built during Leonardo's lifetime, contemporary enthusiasts have reconstructed it. 
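Leonardo's design stores its energy elastically rather than gravitationally, so an ideal-spring model gives a rough feel for the mechanism. No spring constant or dimensions survive from the drawings, so every number below is a placeholder assumed purely for illustration.

```python
import math

# Hypothetical leaf-spring catapult, modelled as an ideal linear spring.
spring_constant = 10000.0  # N/m (invented for illustration)
deflection_m = 0.8         # how far the spring is drawn via the drum (assumed)
projectile_kg = 2.0        # small stone or ball (assumed)
efficiency = 0.6           # assumed fraction of spring energy transferred

# Elastic energy stored in an ideal spring: E = (1/2) k x^2.
stored_energy = 0.5 * spring_constant * deflection_m ** 2

# Launch speed if a fixed fraction of that energy becomes kinetic energy.
launch_speed = math.sqrt(2.0 * efficiency * stored_energy / projectile_kg)

print(f"stored energy ~ {stored_energy:.0f} J")
print(f"launch speed  ~ {launch_speed:.1f} m/s")
```

The energy-balance reasoning is the same as in the trebuchet sketch above; only the energy store differs.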
Modern use Military The last large-scale military use of catapults was during the trench warfare of World War I. During the early stages of the war, catapults were used to throw hand grenades across no man's land into enemy trenches. They were eventually replaced by small mortars. In the 1840s, the invention of vulcanized rubber allowed the making of small hand-held catapults, either improvised from Y-shaped sticks or manufactured for sale; both were popular with children and teenagers. These devices were also known as slingshots in the USA. Special variants called aircraft catapults are used to launch planes from land bases and sea carriers when the takeoff runway is too short for a powered takeoff or simply impractical to extend. Ships also use them to launch torpedoes and deploy bombs against submarines. Small catapults, referred to as "traps", are still widely used to launch clay targets into the air in the sport of clay pigeon shooting. Entertainment In the 1990s and into the early 2000s, a powerful catapult, a trebuchet, was used by thrill-seekers, first on private property and in 2001–2002 at Middlemoor Water Park, Somerset, England, to experience being catapulted through the air. The practice was discontinued after a fatality at the water park; an injury had occurred earlier, when the trebuchet was in use on private property. In both cases the participants were hurt or killed when they failed to land on the safety net. The operators of the trebuchet were tried, but found not guilty of manslaughter, though the jury noted that the fatality might have been avoided had the operators "imposed stricter safety measures." Human cannonball circus acts use a catapult launch mechanism, rather than gunpowder, and are risky ventures for the human cannonballs. Early launched roller coasters used a catapult system powered by a diesel engine or a dropped weight to acquire their momentum, such as Shuttle Loop installations between 1977 and 1978. The catapult system for roller coasters has since been replaced by flywheels and later linear motors. Pumpkin chunking is another widely popularized use, in which people compete to see who can launch a pumpkin the farthest by mechanical means (although the world record is held by a pneumatic air cannon). Other In January 2011, a homemade catapult was discovered that had been used to smuggle cannabis into the United States from Mexico. The machine was found 20 feet from the border fence, with bales of cannabis ready to launch. See also Aircraft catapult List of siege engines Mangonel Mass driver National Catapult Contest Sling (weapon) Trebuchet
https://en.wikipedia.org/wiki/CID
CID
CID may refer to: Film C.I.D. (1955 film), an Indian Malayalam film C.I.D. (1956 film), an Indian Hindi film C. I. D. (1965 film), an Indian Telugu film C.I.D. (1990 film), an Indian Hindi film Television CID (Indian TV series) C.I.D. (Singaporean TV series) Organizations Police units Criminal Investigation Department (disambiguation) Criminal Investigation Division (disambiguation) United States Army Criminal Investigation Division Other organizations Center for International Development, at Harvard University Central Institute for the Deaf Committee of Imperial Defence, a former part of the government of Great Britain and the British Empire Conseil International de la Danse, an umbrella organization for all forms of dance in the world Council of Industrial Design, a UK body renamed the Design Council University of Colombo Centre for Instrument Development, in Sri Lanka Science and technology Biology and medicine Clinical Infectious Diseases, a medical journal Cytomegalic inclusion disease Chromosomal Interacting Domain, a self-interacting domain in bacteria Chemistry Collision-induced dissociation, a mass spectrometry mechanism Compound identification number, a field in the PubChem database Configuration interaction doubles, in quantum chemistry Computing and telecommunications Caller ID, a telephone service that transmits the caller's telephone number to the called party Card Identification Number, a security feature on credit cards Cell ID, used to identify cell phone towers Certified Interconnect Designer, a certification for printed circuit-board designers CID fonts, a font file format Other uses in science and technology Channel-iron deposits, one of the major sources of saleable iron ore Controlled Impact Demonstration, a project to improve aircraft crash survivability Cubic inch displacement, a measurement in internal combustion engines Other uses Centro Insular de Deportes, an indoor sports arena in Spain CID The Dummy, a video game Combat Identification, the accurate characterization of detected objects for military action Common-interest development, a form of housing Community improvement district, an area within which businesses are required to pay an additional tax The Eastern Iowa Airport (IATA code CID), serving Cedar Rapids, Iowa Corpus des inscriptions de Delphes, a compendium of ancient Greek inscriptions from Delphi See also Cid (disambiguation) SID (disambiguation)
https://en.wikipedia.org/wiki/CAMP
CAMP
CAMP, cAMP or camP may stand for: CAMP: Cathelicidin, or Cathelicidin antimicrobial peptide Campaign Against Marijuana Planting CAMP, Center for Architecture and Metropolitan Planning in Prague Central Atlantic magmatic province CAMP (company), an Italian manufacturer of climbing equipment cAMP: Cyclic adenosine monophosphate (cAMP) (+)-cis-2-Aminomethylcyclopropane carboxylic acid, a GABAA-ρ agonist camP: 2,5-diketocamphane 1,2-monooxygenase, an enzyme See also Camp (disambiguation) Camping (disambiguation)
https://en.wikipedia.org/wiki/CGMP
CGMP
CGMP is an initialism. It can refer to: cyclic guanosine monophosphate (cGMP) current good manufacturing practice (cGMP) CGMP, Cisco Group Management Protocol, the Cisco version of Internet Group Management Protocol snooping caseinoglycomacropeptide (CGMP) or caseinomacropeptide; see K-casein Competitive guaranteed maximum price
https://en.wikipedia.org/wiki/Cotton%20Mather
Cotton Mather
Cotton Mather (; February 12, 1663 – February 13, 1728) was a New England Puritan clergyman and writer. Educated at Harvard College, in 1685 he joined his father Increase as minister of the Congregationalist Old North Meeting House of Boston, where he continued to preach for the rest of his life. A major intellectual and public figure in English-speaking colonial America, Cotton Mather helped lead the successful revolt of 1689 against Sir Edmund Andros, the governor imposed on New England by King James II. Mather's subsequent involvement in the Salem witch trials of 1692–1693, which he defended in the book Wonders of the Invisible World (1693), attracted intense controversy in his own day and has negatively affected his historical reputation. As a historian of colonial New England, Mather is noted for his Magnalia Christi Americana (1702). Personally and intellectually committed to the waning social and religious orders in New England, Cotton Mather unsuccessfully sought the presidency of Harvard College, an office that had been held by his father Increase, another significant Puritan clergyman and intellectual. After 1702, Cotton Mather clashed with Joseph Dudley, the governor of the Province of Massachusetts Bay, whom Mather attempted unsuccessfully to drive out of power. Mather championed the new Yale College as an intellectual bulwark of Puritanism in New England. He corresponded extensively with European intellectuals and received an honorary Doctor of Divinity degree from the University of Glasgow in 1710. A promoter of the new experimental science in America, Cotton Mather carried out original research on plant hybridization and on the use of inoculation as a means of preventing smallpox contagion. He dispatched many reports on scientific matters to the Royal Society of London, which elected him as a fellow in 1713. Mather's promotion of inoculation against smallpox, which he had learned about from an African named Onesimus whom Mather held as a slave, caused violent controversy in Boston during the outbreak of 1721. Scientist and US founding father Benjamin Franklin, who as a young Bostonian had opposed the old Puritan order represented by Mather and participated in the anti-inoculation campaign, later described Mather's book Bonifacius, or Essays to Do Good (1710) as a major influence on his life. Early life and education Cotton Mather was born in 1663 in the city of Boston, the capital of the Massachusetts Bay Colony, to the Rev. Increase Mather and his wife Maria née Cotton. His grandfathers were Richard Mather and John Cotton, both of them prominent Puritan ministers who had played major roles in the establishment and growth of the Massachusetts colony. Richard Mather was a graduate of the University of Oxford and John Cotton a graduate of the University of Cambridge. Increase Mather was a graduate of Harvard College and the University of Dublin, and served as the minister of Boston's original North Church (not to be confused with the Anglican Old North Church of Paul Revere fame). This was one of the two principal Congregationalist churches in the city, the other being the First Church established by John Winthrop. Cotton Mather was therefore born into one of the most influential and intellectually distinguished families in colonial New England and seemed destined to follow his father and grandfathers into the Puritan clergy. Cotton entered Harvard College, in the neighboring town of Cambridge, in 1674. 
At only eleven and a half, he was the youngest student ever admitted to that institution. At around this time, Cotton began to be afflicted by stuttering, a speech disorder that he would struggle to overcome throughout the rest of his life. Bullied by the older students and fearing that his stutter would make him unsuitable as a preacher, Cotton withdrew temporarily from the College, continuing his education at home. He also took an interest in medicine and considered the possibility of pursuing a career as a physician rather than as a religious minister. Cotton eventually returned to Harvard and received his Bachelor of Arts degree in 1678, followed by a Master of Arts degree in 1681. After completing his education, Cotton joined his father's church as assistant pastor. In 1685, Cotton was ordained and assumed full responsibilities as co-pastor of the church. Father and son continued to share responsibility for the care of the congregation until the death of Increase in 1723. Cotton would die less than five years after his father, and therefore spent most of his career in the shadow of the respected and formidable Increase. Increase Mather eventually became president of Harvard and exercised considerable influence on the politics of the Massachusetts colony. Despite Cotton's efforts, he never became quite as influential as his father. One of the most public displays of their strained relationship emerged during the Salem witch trials, which Increase Mather reportedly did not support. Cotton did surpass his father's output as a writer, producing nearly 400 works. Cotton Mather married Abigail Phillips, daughter of Colonel John Phillips of Charlestown, on May 4, 1686, when Cotton was twenty-three and Abigail sixteen years old. Abigail, the couple's newborn twins, and a two-year-old daughter all succumbed to a measles epidemic in 1702. Revolt of 1689 On May 14, 1686, ten days after Cotton Mather's marriage to Abigail Phillips, Edward Randolph disembarked in Boston bearing letters patent from King James II of England that revoked the Charter of the Massachusetts Bay Company and commissioned Randolph to reorganize the colonial government. James's intention was to curb Massachusetts's religious separatism by incorporating the colony into a larger Dominion of New England, without an elected legislature and under a governor who would serve at the pleasure of the Crown. Later that year, the King appointed Sir Edmund Andros as governor of that new Dominion. This was a direct attack upon the Puritan religious and social orders that the Mathers represented, as well as upon the local autonomy of Massachusetts. The colonists were particularly outraged when Andros declared that all grants of land made in the name of the old Massachusetts Bay Company were invalid, forcing them to apply and pay for new royal patents on land that they already occupied or face eviction. In April 1687, Increase Mather sailed to London, where he remained for the next four years, pleading with the Court for what he regarded as the interests of the Massachusetts colony. The birth of a male heir to King James in June 1688, which could have cemented a Roman Catholic dynasty on the English throne, triggered the so-called Glorious Revolution, in which Parliament deposed James and gave the Crown jointly to his Protestant daughter Mary and her husband, the Dutch Prince William of Orange. 
News of the events in London greatly emboldened the opposition in Boston to Governor Andros, finally precipitating the 1689 Boston revolt. Cotton Mather, then aged twenty-six, was one of the Puritan ministers who guided resistance in Boston to Andros's regime. Early in 1689, Randolph had a warrant issued for Cotton Mather's arrest on a charge of "scandalous libel", but the warrant was overruled by Wait Winthrop. According to some sources, Cotton Mather escaped a second attempted arrest on April 18, 1689, the same day that the people of Boston took up arms against Andros. The young Mather may have authored, in whole or in part, the "Declaration of the Gentlemen, Merchants, and Inhabitants of Boston and the Country Adjacent", which justified that uprising by a list of grievances that the declaration attributed to the deposed officials. The authorship of that document is uncertain: it was not signed by Mather or any other clergymen, and Puritans frowned upon the clergy being seen to play too direct and personal a hand in political affairs. That day, Mather probably read the Declaration to a crowd gathered in front of the Boston Town House. In July, Andros, Randolph, Joseph Dudley, and other officials who had been deposed and arrested in the Boston revolt were summoned to London to answer the complaints against them. The administration of Massachusetts was temporarily assumed by Simon Bradstreet, whose rule proved weak and contentious. In 1691, the government of King William and Queen Mary issued a new Massachusetts Charter, which united the Massachusetts Bay Colony with Plymouth Colony into the new Province of Massachusetts Bay. Rather than restoring the old Puritan rule, the Charter of 1691 mandated religious toleration for all non-Catholics and established a government led by a Crown-appointed governor. The first governor under the new charter was Sir William Phips, who was a member of the Mathers' church in Boston. Salem witch trials of 1692, the Mather influence Pre-trials In 1689, Mather published Memorable Providences, detailing the supposed afflictions of several children in the Goodwin family in Boston. Mather had a prominent role in the witchcraft case against the Catholic washerwoman Goody Glover, which ultimately resulted in her conviction and execution. Besides praying for the children, along with fasting and meditation, he would also observe and record their activities. The children were subject to hysterical fits, which he detailed in Memorable Providences. In his book, Mather argued that since there are witches and devils, there are "immortal souls." He also claimed that witches appear spectrally as themselves. He opposed any natural explanations for the fits, believed that people who confessed to using witchcraft were sane, and warned against performing magic due to its connection with the devil. Robert Calef, a contemporary and critic of Mather, considered this book responsible for laying the groundwork for the Salem witch trials three years later. Nineteenth-century historian Charles Wentworth Upham shared the view that the afflicted in Salem were imitating the Goodwin children, but he put the blame on both Cotton and his father Increase Mather. Cambridge Association of ministers In 1690, Cotton Mather played a primary role in forming a new ministers' club called the Cambridge Association. Their first order of business was to respond to a letter from the pastor of Salem Village, Samuel Parris. 
A second meeting was planned a week later in the college library, and Parris was invited to travel down to Cambridge to meet with them, which he did. Throughout 1692, this association of powerful ministers was often queried for its opinion on Christian doctrine relative to witchcraft. The court of Oyer and Terminer In 1692, Cotton Mather claimed to have been industrious and influential in the direction of things at Salem from the beginning (see the September 2, 1692 letter to Stoughton below). But it remains unknown how much of a role he had in the formation or construction of the Court of Oyer and Terminer at the end of May, or what the original intent for this court may have been. Sir William Phips, governor of the newly chartered Province of Massachusetts Bay, signed an order forming the new court and allowed his lieutenant governor, William Stoughton, to become the court's chief justice. According to George Bancroft, Mather had been influential in gaining the politically unpopular Stoughton his appointment as lieutenant governor under Phips through the intervention of Mather's own politically powerful father, Increase: "Intercession had been made by Cotton Mather for the advancement of Stoughton, a man of cold affections, proud, self-willed and covetous of distinction." Apparently Mather saw in Stoughton, a bachelor who had never wed, an ally for church-related matters. Bancroft quotes Mather's reaction to Stoughton's appointment as follows: '"The time for a favor is come", exulted Cotton Mather; "Yea, the set time is come."' Just prior to the first session of the new court, Mather wrote a lengthy essay which was copied and distributed to the judges. One of Mather's recommendations, invasive bodily searches for witch-marks, took place for the first time only days later, on June 2, 1692. Mather claimed not to have personally attended any sessions of the Court of Oyer and Terminer (although his father attended the trial of George Burroughs); his contemporaries Calef and Thomas Brattle do place him at the executions (see below). Mather began to publicize and celebrate the trials well before they were put to an end: "If in the midst of the many Dissatisfaction among us, the publication of these Trials may promote such a pious Thankfulness unto God, for Justice being so far executed among us, I shall Re-joyce that God is Glorified." Mather called himself a historian rather than an advocate but, according to one modern writer, his writing largely presumes the guilt of the accused and includes such comments as calling Martha Carrier "a rampant hag". Mather referred to George Burroughs as a "very puny man" whose "tergiversations, contradictions, and falsehoods" made his testimony not "worth considering". The use of so-called "spectral evidence" The afflicted girls claimed that the semblance of a defendant, invisible to any but themselves, was tormenting them; it was greatly contested whether this should be considered evidence, but the Court of Oyer and Terminer decided to allow it, despite the defendants' denials and professions of strongly held Christian belief. In his May 31, 1692 essay to the judges, Mather expressed his support of the prosecutions but also included some words of caution: "do not lay more stress on pure spectral evidence than it will bear … It is very certain that the Devils have sometimes represented the shapes of persons not only innocent, but also very virtuous. 
Though I believe that the just God then ordinarily provides a way for the speedy vindication of the persons thus abused." Return of the Ministers An opinion on the trials was sought from the ministers of the area in mid-June. In an anonymous work written years later, Cotton Mather took credit for being the scribe: "drawn up at their desire, by Cotton Mather the younger, as I have been informed." The "Return of the Several Ministers" ambivalently discussed whether or not to allow spectral evidence. The original full version of the letter was reprinted in late 1692 in the final two pages of Increase Mather's Cases of Conscience. It is a curious document and remains a source of confusion and argument. Calef calls it "perfectly ambidexter, giving as great as greater encouragement to proceed in those dark methods, then cautions against them… indeed the Advice then given, looks most like a thing of his composing, as carrying both fire to increase and water to quench the conflagration." It seems likely that the "Several" ministers consulted did not agree, and thus Cotton Mather's careful construction and presentation of the advice, sent from Boston to Salem, could have been crucial to its interpretation. Thomas Hutchinson summarized the Return: "The two first and the last sections of this advice took away the force of all the others, and the prosecutions went on with more vigor than before." Reprinting the Return five years later in his anonymously published Life of Phips (1697), Cotton Mather omitted the fateful "two first and the last" sections, though they were the ones he had already given the most attention in his Wonders of the Invisible World, rushed into publication in the summer and early autumn of 1692. Pushing forward the August 19 executions On August 19, 1692, Mather attended the execution of George Burroughs (and of four others who were executed after Mather spoke), and Robert Calef presents him as playing a direct and influential role there. On September 2, 1692, after eleven of the accused had been executed, Cotton Mather wrote a letter to Chief Justice William Stoughton congratulating him on "extinguishing of as wonderful a piece of devilism as has been seen in the world" and claiming that "one half of my endeavors to serve you have not been told or seen." Regarding spectral evidence, Upham concludes that "Cotton Mather never in any public writing 'denounced the admission' of it, never advised its absolute exclusion; but on the contrary recognized it as a ground of 'presumption' … [and once admitted] nothing could stand against it. Character, reason, common sense, were swept away." In a 1692 letter to an English clergyman, Boston intellectual Thomas Brattle likewise criticized the judges' use of spectral evidence. Governor Phips's exclusion of spectral evidence from the trials began in January 1693, around the same time that the name of his own wife, Lady Mary Phips, coincidentally started being bandied about in connection with witchcraft. This immediately brought about a sharp decrease in convictions, and due to a reprieve by Phips there were no further executions. Phips's actions were vigorously opposed by William Stoughton. Bancroft notes that Mather considered witches "among the poor, and vile, and ragged beggars upon Earth", and asserts that Mather considered the people against the witch trials to be witch advocates. 
Post-trials Of the principal actors in the trials whose later lives are recorded, neither Mather nor Stoughton ever admitted strong misgivings. For several years after the trials, Cotton Mather continued to defend them and seemed to hold out a hope for their return. Wonders of the Invisible World contained a few of Mather's sermons, an account of the conditions of the colony, and a description of witch trials in Europe. He somewhat clarified the contradictory advice he had given in the Return of the Several Ministers by defending the use of spectral evidence. Wonders of the Invisible World appeared around the same time as Increase Mather's Cases of Conscience, and Cotton Mather did not initially sign his name in support of his father's book. The last major events in Mather's involvement with witchcraft were his interactions with Mercy Short in December 1692 and Margaret Rule in September 1693. The latter brought about a five-year campaign by Boston merchant Robert Calef against the influential and powerful Mathers. Calef's book More Wonders of the Invisible World was inspired by the fear that Mather would succeed in once again stirring up new witchcraft trials, and by the need to bear witness to the horrible experiences of 1692. He quotes the public apologies of the men on the jury and of one of the judges. Increase Mather was said to have publicly burned Calef's book in Harvard Yard, around the time he was removed from the head of the college and replaced by Samuel Willard. Poole vs. Upham Charles Wentworth Upham wrote Salem Witchcraft Volumes I and II With an Account of Salem Village and a History of Opinions on Witchcraft and Kindred Subjects, which runs to almost 1,000 pages. It came out in 1867 and cites numerous criticisms of Mather by Robert Calef. William Frederick Poole defended Mather from these criticisms. In 1869, Poole quoted from various school textbooks of the time to demonstrate that they were in agreement on Cotton Mather's role in the witch trials: "If anyone imagines that we are stating the case too strongly, let him try an experiment with the first bright boy he meets by asking,... 'Who got up Salem Witchcraft?'... he will reply, 'Cotton Mather'. Let him try another boy... 'Who was Cotton Mather?' and the answer will come, 'The man who was on horseback, and hung witches.'" Poole was a librarian and a lover of literature, including Mather's Magnalia "and other books and tracts, numbering nearly 400 [which] were never so prized by collectors as today." Poole announced his intention to redeem Mather's name, using as a springboard a harsh critique of Upham's book, in his own book Cotton Mather and Salem Witchcraft. A search of the name Mather in Upham's book (referring to either father, son, or their ancestors) shows that it occurs 96 times. Poole's critique runs to fewer than 70 pages, yet the name "Mather" occurs many more times there than in Upham's book, which is more than ten times as long. Upham shows a balanced and complicated view of Cotton Mather, as in this first mention: "One of Cotton Mather's most characteristic productions is the tribute to his venerated master. It flows from a heart warm with gratitude." Upham's book refers to Robert Calef no fewer than 25 times, the majority of these regarding documents compiled by Calef in the mid-1690s, and states: "Although zealously devoted to the work of exposing the enormities connected with the witchcraft prosecutions, there is no ground to dispute the veracity of Calef as to matters of fact." 
He goes on to say that Calef's collection of writings "gave a shock to Mather's influence, from which it never recovered." Calef produced only the one book; he is self-effacing and apologetic for his limitations, and on the title page he is listed not as author but as "collector". Poole, champion of literature, could not accept Calef, whose "faculties, as indicated by his writings appear to us to have been of an inferior order;…", and whose book, "in our opinion, has a reputation much beyond its merits." Poole refers to Calef as Mather's "personal enemy" and opens a line, "Without discussing the character and motives of Calef…", but does not follow up on this suggestive comment to discuss any actual or purported motive or reason to impugn Calef. Upham responded to Poole (referring to Poole as "the Reviewer") in a book running five times as long and sharing the same title, but with the clauses reversed: Salem Witchcraft and Cotton Mather. Many of Poole's arguments were addressed, but both authors emphasize the importance of Cotton Mather's difficult and contradictory view on spectral evidence, as copied in the final pages, called "The Return of Several Ministers", of Increase Mather's Cases of Conscience. The debate continues: Kittredge vs. Burr Judging by published opinion in the years that followed the Poole vs. Upham debate, Upham was considered the clear winner (see Sibley, G. H. Moore, W. C. Ford, and G. H. Burr below). In 1891, Harvard English professor Barrett Wendell wrote Cotton Mather, The Puritan Priest. His book often expresses agreement with Upham but also announces an intention to show Cotton Mather in a more positive light: "[Cotton Mather] gave utterance to many hasty things not always consistent with fact or with each other…" And some pages later: "[Robert] Calef's temper was that of the rational Eighteenth century; the Mathers belonged rather to the Sixteenth, the age of passionate religious enthusiasm." In 1907, George Lyman Kittredge published an essay that would become foundational to a major change in the 20th-century view of witchcraft, and of Mather's culpability therein. Kittredge is dismissive of Robert Calef and sarcastic toward Upham, but shows a fondness for Poole and a similar soft touch toward Cotton Mather. Responding to Kittredge in 1911, George Lincoln Burr, a historian at Cornell, published an essay that begins in a professional and friendly fashion toward both Poole and Kittredge, but quickly becomes a passionate and direct criticism, stating that Kittredge in the "zeal of his apology… reached results so startlingly new, so contradictory of what my own lifelong study in this field has seemed to teach, so unconfirmed by further research… and withal so much more generous to our ancestors than I can find it in my conscience to deem fair, that I should be less than honest did I not seize this earliest opportunity [to] share with you the reasons for my doubts…". (In referring to "ancestors", Burr primarily means the Mathers, as is made clear in the substance of the essay.) The final paragraph of Burr's 1911 essay pushes these men's debate into the realm of a progressive creed: "… I fear that they who begin by excusing their ancestors may end by excusing themselves." Perhaps as a continuation of his argument, in 1914 George Lincoln Burr published a large compilation, "Narratives". This book arguably continues to be the single most cited reference on the subject. 
Unlike Poole and Upham, Burr avoids forwarding his previous debate with Kittredge directly into his book and mentions Kittredge only once, briefly, in a footnote citing both of their essays from 1907 and 1911, but without further comment. But in addition to the viewpoint displayed by Burr's selections, he weighs in on the Poole vs. Upham debate at various points, including siding with Upham in a note on Thomas Brattle's letter: "The strange suggestion of W. F. Poole that Brattle here means Cotton Mather himself, is adequately answered by Upham…" Burr's "Narratives" reprints a lengthy but abridged portion of Calef's book; in introducing it, Burr digs deep into the historical record for information on Calef and concludes "…that he had else any grievance against the Mathers or their colleagues there is no reason to think." Burr finds that a comparison between Calef's work and original documents in the historical record collections "testify to the care and exactness…" 20th century revision: The Kittredge lineage at Harvard 1920–3 Kenneth B. Murdock wrote a doctoral dissertation on Increase Mather, advised by Chester Noyes Greenough and Kittredge. Murdock's father was a banker hired in 1920 to run the Harvard Press, and he published his son's dissertation as a handsome volume in 1925: Increase Mather, The Foremost American Puritan (Harvard University Press). Kittredge was right-hand man to the elder Murdock at the Press. This work focuses on Increase Mather and is more critical of the son, but the following year Murdock published a selection of Cotton Mather's writings with an introduction that claims Cotton Mather was "not less but more humane than his contemporaries. Scholars have demonstrated that his advice to the witch judges was always that they should be more cautious in accepting evidence" against the accused. Murdock's statement seems to claim a majority view, but one wonders whom Murdock would have meant by "scholars" at this time other than Poole, Kittredge, and T. J. Holmes (below); Murdock's obituary calls him a pioneer "in the reversal of a movement among historians of American culture to discredit the Puritan and colonial period…" 1924 Thomas J. Holmes was an Englishman with no college education; he apprenticed in bookbinding, emigrated to the U.S., and became the librarian at the William G. Mather Library in Ohio, where he likely met Murdock. In 1924, Holmes wrote an essay for the Bibliographical Society of America identifying himself as part of the Poole-Kittredge lineage and citing Kenneth B. Murdock's still unpublished dissertation. In 1932 Holmes published a bibliography of Increase Mather, followed by Cotton Mather, A Bibliography (1940). Holmes often cites Murdock and Kittredge and is highly knowledgeable about the construction of books. Holmes's work also includes Cotton Mather's October 20, 1692 letter to his uncle opposing an end to the trials. 1930 Samuel Eliot Morison published Builders of the Bay Colony. Morison chose not to include anyone with the surname Mather or Cotton in his collection of twelve "builders", and in the bibliography writes: "I have a higher opinion than most historians of Cotton Mather's Magnalia… Although Mather is inaccurate, pedantic, and not above suppressio veri, he does succeed in giving a living picture of the person he writes about." Whereas Kittredge and Murdock worked from the English department, Morison was from Harvard's history department. 
Morison's view seems to have evolved over the course of the 1930s, as can be seen in Harvard College in the Seventeenth Century (1936), published while Kittredge ran the Harvard press and in the year of the college's tercentenary: "Since the appearance of Professor Kittredge's work, it is not necessary to argue that a man of learning…" of that era should be judged on his view of witchcraft. In The Intellectual Life of Colonial New England (1956), Morison writes that Cotton Mather found balance and level thinking during the witchcraft trials. Like Poole, Morison suggests Calef had an agenda against Mather, without providing supporting evidence. 1953 Perry Miller published The New England Mind: From Colony to Province (Belknap Press of Harvard University Press). Miller worked from the Harvard English Department, and his expansive prose contains few citations, but the "Bibliographical Notes" for Chapter XIII, "The Judgement of the Witches", reference the bibliographies of T. J. Holmes (above), calling Holmes's portrayal of Cotton Mather's composition of Wonders "an epoch in the study of Salem Witchcraft." However, following the discovery of the authentic holograph of the September 2, 1692 letter, David Levin wrote in 1985 that the letter demonstrates that the timeline employed by T. J. Holmes and Perry Miller is off by "three weeks." Contrary to the evidence in the later-arriving letter, Miller portrays Phips and Stoughton as pressuring Cotton Mather to write the book (p. 201): "If ever there was a false book produced by a man whose heart was not in it, it is The Wonders….he was insecure, frightened, sick at heart…" The book "has ever since scarred his reputation," Perry Miller writes. Miller seems to imagine Cotton Mather as sensitive, tender, and a good vehicle for his jeremiad thesis: "His mind was bubbling with every sentence of the jeremiads, for he was heart and soul in the effort to reorganize them." 1969 Chadwick Hansen published Witchcraft at Salem. Hansen states a purpose to "set the record straight" and reverse the "traditional interpretation of what happened at Salem…", and names Poole and Kittredge as like-minded influences. (Hansen reluctantly keys his footnotes to Burr's anthology for the reader's convenience, "in spite of [Burr's] anti-Puritan bias…") Hansen presents Mather as a positive influence on the Salem trials and considers Mather's handling of the Goodwin children sane and temperate. Hansen posits that Mather was a moderating influence by opposing the death penalty for those who confessed—or feigned confession—such as Tituba and Dorcas Good, and that most negative impressions of him stem from his "defense" of the ongoing trials in Wonders of the Invisible World. Writing an introduction to a facsimile of Robert Calef's book in 1972, Hansen compares Robert Calef to Joseph Goebbels and also explains that, in Hansen's opinion, women "are more subject to hysteria than men." 1971 The Admirable Cotton Mather by James Playsted Wood, a young adult book. In the preface, Wood discusses the Harvard-based revision and writes that Kittredge and Murdock "added to a better understanding of a vital and courageous man…" 1985 David Hall writes, "With [Kittredge] one great phase of interpretation came to a dead end." Hall writes that whether the old interpretation favored by "antiquarians" had begun with the "malice of Robert Calef or deep hostility to Puritanism," either way "such notions are no longer… the concern of the historian." But David Hall notes "one minor exception. 
Tercentenary of the trials and ongoing scholarship

Toward the latter half of the twentieth century, a number of historians at universities far from New England seemed to find inspiration in the Kittredge lineage. In Selected Letters of Cotton Mather, Kenneth Silverman writes, "Actually, Mather had very little to do with the trials." Twelve pages later Silverman publishes, for the first time, a letter to chief judge William Stoughton on September 2, 1692, in which Cotton Mather writes "… I hope I can may say that one half of my endeavors to serve you have not been told or seen … I have labored to divert the thoughts of my readers with something of a designed contrivance…" Writing in the early 1980s, historian John Demos imputed to Mather a purportedly moderating influence on the trials. Coinciding with the tercentenary of the trials in 1992, there was a flurry of publications. Historian Larry Gregg highlights Mather's cloudy thinking and confusion between sympathy for the possessed and the boundlessness of spectral evidence, as when Mather stated, "the devil have sometimes represented the shapes of persons not only innocent, but also the very virtuous."

Historical and theological writings

Cotton Mather was an extremely prolific writer, producing 388 different books and pamphlets during his lifetime. His most widely distributed work was Magnalia Christi Americana (which may be translated as "The Glorious Works of Christ in America"), subtitled "The ecclesiastical history of New England, from its first planting in the year 1620 unto the year of Our Lord 1698. In seven books." Despite the Latin title, the work is written in English. Mather began working on it towards the end of 1693, and it was finally published in London in 1702. The work incorporates information that Mather put together from a variety of sources, such as letters, diaries, sermons, Harvard College records, personal conversations, and the manuscript histories composed by William Hubbard and William Bradford. The Magnalia includes about fifty biographies of eminent New Englanders (ranging from John Eliot, the first Puritan missionary to the Native Americans, to Sir William Phips, the incumbent governor of Massachusetts at the time that Mather began writing), plus dozens of brief biographical sketches, including those of Hannah Duston and Hannah Swarton. Kenneth Silverman, an expert on early American literature and Cotton Mather's biographer, argues that, although Mather glorifies New England's Puritan past, in the Magnalia he also attempts to transcend the religious separatism of the old Puritan settlers, reflecting Mather's more ecumenical and cosmopolitan embrace of a transatlantic Protestant Christianity that included not only Mather's own Congregationalists but also Presbyterians, Baptists, and low-church Anglicans.

In 1693 Mather also began work on a grand intellectual project that he titled Biblia Americana, which sought to provide a commentary and interpretation of the Christian Bible in light of "all of the Learning in the World". Mather, who continued to work on it for many years, sought to incorporate into his reading of Scripture the new scientific knowledge and theories, including geography, heliocentrism, atomism, and Newtonianism. According to Silverman, the project "looks forward to Mather's becoming probably the most influential spokesman in New England for a rationalized, scientized Christianity."
Mather could not find a publisher for the Biblia Americana, which remained in manuscript form during his lifetime. It is currently being edited in ten volumes, published by Mohr Siebeck under the direction of Reiner Smolinski and Jan Stievermann. As of 2019, six of the ten volumes have appeared in print.

Conflict with Governor Dudley

In Massachusetts at the start of the 18th century, Joseph Dudley was a highly controversial figure, as he had participated actively in the government of Sir Edmund Andros in 1686–1689. Dudley was among those arrested in the revolt of 1689 and was later called to London to answer the charges against him brought by a committee of the colonists. However, Dudley was able to pursue a successful political career in Britain. Upon the death in 1701 of acting governor William Stoughton, Dudley began enlisting support in London to procure appointment as the new governor of Massachusetts. Although the Mathers (to whom he was related by marriage) continued to resent Dudley's role in the Andros administration, they eventually came around to the view that Dudley would now be preferable as governor to the available alternatives, at a time when the English Parliament was threatening to repeal the Massachusetts Charter. With the Mathers' support, Dudley was appointed governor by the Crown and returned to Boston in 1702.

Contrary to the promises that he had made to the Mathers, Governor Dudley proved a divisive and high-handed executive, reserving his patronage for a small circle composed of transatlantic merchants, Anglicans, and religious liberals such as Thomas Brattle, Benjamin Colman, and John Leverett. In the context of Queen Anne's War (1702–1713), Cotton Mather preached and published against Governor Dudley, whom Mather accused of corruption and misgovernment. Mather sought unsuccessfully to have Dudley replaced by Sir Charles Hobby. Outmaneuvered by Dudley, Mather emerged from this political rivalry increasingly isolated, at a time when Massachusetts society was steadily moving away from the Puritan tradition that he represented.

Relationship with Harvard and Yale

Cotton Mather was a fellow of Harvard College from 1690 to 1702 and at various times sat on its Board of Overseers. His father Increase had succeeded John Rogers as president of Harvard in 1684, first as acting president (1684–1686), later with the title of "rector" (1686–1692, during much of which period he was away from Massachusetts, pleading the Puritans' case before the Royal Court in London), and finally with the full title of president (1692–1701). Increase was unwilling to move permanently to the Harvard campus in Cambridge, Massachusetts, since his congregation in Boston was much larger than the Harvard student body, which at the time counted only a few dozen. Instructed by a committee of the Massachusetts General Assembly that the president of Harvard had to reside in Cambridge and preach to the students in person, Increase resigned in 1701 and was replaced by the Rev. Samuel Willard as acting president. Cotton Mather sought the presidency of Harvard, but in 1708 the fellows instead appointed a layman, John Leverett, who had the support of Governor Dudley. The Mathers disapproved of the increasing independence and liberalism of the Harvard faculty, which they regarded as laxity. Cotton Mather came to see the Collegiate School, which had moved in 1716 from Saybrook to New Haven, Connecticut, as a better vehicle for preserving Puritan orthodoxy in New England.
In 1718, Cotton Mather convinced Boston-born British businessman Elihu Yale to make a charitable gift sufficient to ensure the school's survival. It was also Mather who suggested that the school change its name to Yale College after it accepted that donation. Cotton Mather sought the presidency of Harvard again after Leverett's death in 1724, but the fellows offered the position to the Rev. Joseph Sewall (son of Judge Samuel Sewall, who had repented publicly for his role in the Salem witch trials). When Sewall turned it down, Mather once again hoped that he might get the appointment. Instead, the fellows offered it to one of their own number, the Rev. Benjamin Colman, an old rival of Mather. When Colman refused it, the presidency went finally to the Rev. Benjamin Wadsworth.

Advocacy for smallpox inoculation

The practice of smallpox inoculation (as distinguished from the later practice of vaccination) was developed possibly in 8th-century India or 10th-century China, and by the 17th century had reached Turkey. It was also practiced in western Africa, but it is not known when it started there. Inoculation or, rather, variolation involved infecting a person via a cut in the skin with exudate from a patient with a relatively mild case of smallpox (variola), to bring about a manageable and recoverable infection that would provide later immunity. By the beginning of the 18th century, the Royal Society in England was discussing the practice of inoculation, and the smallpox epidemic of 1713 spurred further interest. It was not until 1721, however, that England recorded its first case of inoculation.

Early New England

Smallpox was a serious threat in colonial America, most devastating to Native Americans, but also to Anglo-American settlers. New England suffered smallpox epidemics in 1677, 1689–90, and 1702. It was highly contagious, and mortality could reach as high as 30 percent. Boston had been plagued by smallpox outbreaks in 1690 and 1702. During this era, public authorities in Massachusetts dealt with the threat primarily by means of quarantine. Incoming ships were quarantined in Boston Harbor, and any smallpox patients in town were held under guard or in a "pesthouse".

In 1716, Onesimus, one of Mather's slaves, explained to Mather how he had been inoculated as a child in Africa. Mather was fascinated by the idea. By July 1716, he had read an endorsement of inoculation by Dr Emanuel Timonius of Constantinople in the Philosophical Transactions. Mather then declared, in a letter to Dr John Woodward of Gresham College in London, that he planned to press Boston's doctors to adopt the practice of inoculation should smallpox reach the colony again.

By 1721, a whole generation of young Bostonians was vulnerable and memories of the last epidemic's horrors had by and large disappeared. Smallpox returned on April 22 of that year, when HMS Seahorse arrived from the West Indies carrying smallpox on board. Despite attempts to protect the town through quarantine, nine known cases of smallpox appeared in Boston by May 27, and by mid-June the disease was spreading at an alarming rate. As a new wave of smallpox hit the area and continued to spread, many residents fled to outlying rural settlements. The combination of exodus, quarantine, and outside traders' fears disrupted business in the capital of the Bay Colony for weeks. Guards were stationed at the House of Representatives to keep Bostonians from entering without special permission.
The death toll reached 101 in September, and the Selectmen, powerless to stop it, "severely limited the length of time funeral bells could toll." As one response, legislators allocated a thousand pounds from the treasury to help the people who, under these conditions, could no longer support their families.

On June 6, 1721, Mather sent an abstract of reports on inoculation by Timonius and Jacobus Pylarinus to local physicians, urging them to consult about the matter. He received no response. Next, Mather pleaded his case to Dr. Zabdiel Boylston, who tried the procedure on his youngest son and two slaves—one grown and one a boy. All recovered in about a week. Boylston inoculated seven more people by mid-July. The epidemic peaked in October 1721, with 411 deaths; by February 26, 1722, Boston was again free from smallpox. The total number of cases since April 1721 came to 5,889, with 844 deaths—more than three-quarters of all the deaths in Boston during 1721. Meanwhile, Boylston had inoculated 287 people, with six resulting deaths.

Inoculation debate

Boylston and Mather's inoculation crusade "raised a horrid Clamour" among the people of Boston. Both Boylston and Mather were "Object[s] of their Fury; their furious Obloquies and Invectives", which Mather acknowledges in his diary. Boston's Selectmen, consulting a doctor who claimed that the practice caused many deaths and only spread the infection, forbade Boylston from performing it again. The New-England Courant published writers who opposed the practice. The editorial stance was that the Boston populace feared that inoculation spread, rather than prevented, the disease; however, some historians, notably H. W. Brands, have argued that this position was a result of the contrarian positions of editor-in-chief James Franklin (a brother of Benjamin Franklin). Public discourse ranged in tone from organized arguments by John Williams of Boston, who posited "several arguments proving that inoculating the smallpox is not contained in the law of Physick, either natural or divine, and therefore unlawful", to those put forth in a pamphlet by Dr. William Douglass of Boston, entitled The Abuses and Scandals of Some Late Pamphlets in Favour of Inoculation of the Small Pox (1721), on the qualifications of inoculation's proponents. (Douglass was exceptional at the time for holding a medical degree from Europe.) At the extreme, in November 1721, someone hurled a lighted grenade into Mather's home.

Medical opposition

Several opponents of smallpox inoculation, among them John Williams, stated that there were only two laws of physick (medicine): sympathy and antipathy. In his estimation, inoculation was neither a sympathy toward a wound or a disease nor an antipathy toward one, but the creation of one. For this reason, its practice violated the natural laws of medicine, transforming health care practitioners into those who harm rather than heal. As with most colonists, Williams' Puritan beliefs were enmeshed in every aspect of his life, and he used the Bible to state his case. He quoted Matthew 9:12, when Jesus said: "It is not the healthy who need a doctor, but the sick."

William Douglass proposed a more secular argument against inoculation, stressing the importance of reason over passion and urging the public to be pragmatic in their choices. In addition, he demanded that ministers leave the practice of medicine to physicians and not meddle in areas where they lacked expertise.
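The epidemic statistics quoted above allow the competing mortality claims of the two camps to be checked directly. A minimal back-of-the-envelope computation (the percentages below are derived here, not stated in the source, and the two groups were of course not comparable by modern epidemiological standards):

```python
# Case-fatality rates from the 1721 Boston figures quoted above.
natural_cases, natural_deaths = 5889, 844        # infected "in the common way"
inoculated_cases, inoculated_deaths = 287, 6     # Boylston's inoculees

print(f"natural infection: {natural_deaths / natural_cases:.1%}")        # ~14.3%
print(f"after inoculation: {inoculated_deaths / inoculated_cases:.1%}")  # ~2.1%
```

The roughly sevenfold difference in case-fatality rate is the empirical core of Boylston and Mather's position in the debate that follows.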
According to Douglass, smallpox inoculation was "a medical experiment of consequence," one not to be undertaken lightly. He believed that not all learned individuals were qualified to doctor others, and while ministers had taken on several roles in the early years of the colony, including that of caring for the sick, they were now expected to stay out of state and civil affairs. Douglass felt that inoculation caused more deaths than it prevented. The only reason Mather had had success with it, he said, was that Mather had used it on children, who are naturally more resilient. Douglass vowed to always speak out against "the wickedness of spreading infection". Speak out he did: "The battle between these two prestigious adversaries [Douglass and Mather] lasted far longer than the epidemic itself, and the literature accompanying the controversy was both vast and venomous."

Puritan resistance

Generally, Puritan pastors favored the inoculation experiments. Increase Mather, Cotton's father, was joined by prominent pastors Benjamin Colman and William Cooper in openly propagating the use of inoculations. "One of the classic assumptions of the Puritan mind was that the will of God was to be discerned in nature as well as in revelation." Nevertheless, Williams questioned whether the smallpox "is not one of the strange works of God; and whether inoculation of it be not a fighting with the most High." He also asked his readers if the smallpox epidemic may have been given to them by God as "punishment for sin," and warned that attempting to shield themselves from God's fury (via inoculation) would only serve to "provoke him more".

Puritans found meaning in affliction, and they did not yet know why God was showing them disfavor through smallpox. Not to address their errant ways before attempting a cure could set them back in their "errand". Many Puritans believed that creating a wound and inserting poison was doing violence and therefore was antithetical to the healing art. They grappled with adhering to the Ten Commandments, with being proper church members and good caring neighbors. The apparent contradiction between harming or murdering a neighbor through inoculation and the Sixth Commandment—"thou shalt not kill"—seemed insoluble and hence stood as one of the main objections against the procedure. Williams maintained that because the subject of inoculation could not be found in the Bible, it was not the will of God, and therefore "unlawful." He explained that inoculation violated the Golden Rule, because if one neighbor voluntarily infected another with disease, he was not doing unto others as he would have done to him. With the Bible as the Puritans' source for all decision-making, the lack of scriptural evidence concerned many, and Williams vocally scorned Mather for not being able to reference an inoculation edict directly from the Bible.

Inoculation defended

With the smallpox epidemic gathering speed and racking up a staggering death toll, a solution to the crisis was becoming more urgently needed by the day. The use of quarantine and various other efforts, such as balancing the body's humors, did not slow the spread of the disease. As news rolled in from town to town and correspondence arrived from overseas, reports of horrific stories of suffering and loss due to smallpox stirred mass panic among the people. "By circa 1700, smallpox had become among the most devastating of epidemic diseases circulating in the Atlantic world."
Mather strongly challenged the perception that inoculation was against the will of God and argued that the procedure was not outside of Puritan principles. He wrote that "whether a Christian may not employ this Medicine (let the matter of it be what it will) and humbly give Thanks to God's good Providence in discovering of it to a miserable World; and humbly look up to His Good Providence (as we do in the use of any other Medicine) It may seem strange, that any wise Christian cannot answer it. And how strangely do Men that call themselves Physicians betray their Anatomy, and their Philosophy, as well as their Divinity in their invectives against this Practice?" The Puritan minister began to embrace the sentiment that smallpox was an inevitability for anyone, both the good and the wicked, yet God had provided them with the means to save themselves. Mather reported that, from his view, "none that have used it ever died of the Small Pox, tho at the same time, it were so malignant, that at least half the People died, that were infected With it in the Common way."

While Mather was experimenting with the procedure, prominent Puritan pastors Benjamin Colman and William Cooper expressed public and theological support for the practice. The practice of smallpox inoculation was eventually accepted by the general population due to first-hand experiences and personal relationships. Although many were initially wary of the concept, it was because people were able to witness the procedure's consistently positive results, within their own community of ordinary citizens, that it became widely utilized and supported. One important change in the practice after 1721 was the regulated quarantine of inoculees.

The aftermath

Although Mather and Boylston were able to demonstrate the efficacy of the practice, the debate over inoculation would continue even beyond the epidemic of 1721–22. After overcoming considerable difficulty and achieving notable success, Boylston traveled to London in 1725, where he published his results and was elected to the Royal Society in 1726, with Mather formally receiving the honor two years prior.

Other scientific work

In 1716, Mather used different varieties of maize ("Indian corn") to conduct one of the first recorded experiments on plant hybridization, describing the results in a letter to his friend James Petiver. In his Curiosa Americana (1712–1724) collection, Mather also announced that flowering plants reproduce sexually, an observation that later became the basis of the Linnaean system of plant classification. Mather may also have been the first to develop the concept of genetic dominance, which later would underpin Mendelian genetics.

In 1713, the Secretary of the Royal Society of London, naturalist Richard Waller, informed Mather that he had been elected as a fellow of the Society. Mather was the eighth colonial American to join that learned body, with the first having been John Winthrop the Younger in 1662. During the controversies surrounding Mather's smallpox inoculation campaign of 1721, his adversaries questioned that credential on the grounds that Mather's name did not figure in the published lists of the Society's members. At the time, the Society responded that those published lists included only members who had been inducted in person and who were therefore entitled to vote in the Society's yearly elections.
In May 1723, Mather's correspondent John Woodward discovered that, although Mather had been duly nominated in 1713, approved by the council, and informed by Waller of his election at that time, due to an oversight the nomination had not in fact been voted upon by the full assembly of fellows or the vote had not been recorded. After Woodward informed the Society of the situation, the members proceeded to elect Mather by a formal vote.

Mather's enthusiasm for experimental science was strongly influenced by his reading of Robert Boyle's work. Mather was a significant popularizer of the new scientific knowledge and promoted Copernican heliocentrism in some of his sermons. He also argued against the spontaneous generation of life and compiled a medical manual titled The Angel of Bethesda that he hoped would assist people who were unable to procure the services of a physician, but which went unpublished in Mather's lifetime. This was the only comprehensive medical work written in colonial English-speaking America. Although much of what Mather included in that manual were folk remedies now regarded as unscientific or superstitious, some of them are still valid, including smallpox inoculation and the use of citrus juice to treat scurvy. Mather also outlined an early form of germ theory and discussed psychogenic diseases, while recommending hygiene, physical exercise, temperate diet, and avoidance of tobacco smoking.

In his later years, Mather also promoted the professionalization of scientific research in America. He presented a Boston tradesman named Grafton Feveryear with the barometer that Feveryear used to make the first quantitative meteorological observations in New England, which he communicated to the Royal Society in 1727. Mather also sponsored Isaac Greenwood, a Harvard graduate and member of Mather's church, who travelled to London and collaborated with the Royal Society's curator of experiments, John Theophilus Desaguliers. Greenwood later became the first Hollis professor of mathematics and natural philosophy at Harvard, and may well have been the first American to practice science professionally.

Slavery and racial attitudes

Cotton Mather's household included both free servants and a number of slaves who performed domestic chores. Surviving records indicate that, over the course of his lifetime, Mather owned at least three, and probably more, slaves. Like the vast majority of Christians at the time, but unlike his political rival Judge Samuel Sewall, Mather was never an abolitionist, although he did publicly denounce what he regarded as the illegal and inhuman aspects of the burgeoning Atlantic slave trade. In his book The Negro Christianized (1706), Mather insisted that slaveholders should treat their black slaves humanely and instruct them in Christianity with a view to promoting their salvation. Mather received black members of his congregation in his home and he paid a schoolteacher to instruct local black people in reading. Mather consistently held that black Africans were "of one Blood" with the rest of mankind and that blacks and whites would meet as equals in Heaven. After a number of black people carried out arson attacks in Boston in 1723, Mather asked the outraged white Bostonians whether the black population had been "always treated according to the Rules of Humanity? Are they treated as those, that are of one Blood with us, and those who have Immortal Souls in them, and are not mere Beasts of Burden?"
Mather advocated the Christianization of black slaves both on religious grounds and as tending to make them more patient and faithful servants of their masters. In The Negro Christianized, Mather argued against the opinion of Richard Baxter that a Christian could not enslave another baptized Christian. The African slave Onesimus, from whom Mather first learned about smallpox inoculation, had been purchased for him as a gift by his congregation in 1706. Despite his efforts, Mather was unable to convert Onesimus to Christianity and finally manumitted him in 1716.

Sermons against pirates and piracy

Throughout his career Mather was also keen to minister to convicted pirates. He produced a number of pamphlets and sermons concerning piracy, including Faithful Warnings to prevent Fearful Judgments; Instructions to the Living, from the Condition of the Dead; The Converted Sinner ... A Sermon Preached in Boston, May 31, 1724, In the Hearing and at the Desire of certain Pirates; A Brief Discourse occasioned by a Tragical Spectacle of a Number of Miserables under Sentence of Death for Piracy; Useful Remarks. An Essay upon Remarkables in the Way of Wicked Men; and The Vial Poured Out Upon the Sea. His father Increase had preached at the trial of Dutch pirate Peter Roderigo; Cotton Mather in turn preached at the trials and sometimes executions of the pirate captains (or the crews of) William Fly, John Quelch, Samuel Bellamy, William Kidd, Charles Harris, and John Phillips. He also ministered to Thomas Hawkins, Thomas Pound, and William Coward; having been convicted of piracy, they were jailed alongside "Mary Glover the Irish Catholic witch," daughter of witch "Goody" Ann Glover at whose trial Mather had also preached. In his conversations with William Fly and his crew Mather scolded them: "You have something within you, that will compell you to confess, That the Things which you have done, are most Unreasonable and Abominable. The Robberies and Piracies, you have committed, you can say nothing to Justify them. … It is a most hideous Article in the Heap of Guilt lying on you, that an Horrible Murder is charged upon you; There is a cry of Blood going up to Heaven against you."

Death and place of burial

Cotton Mather was twice widowed, and only two of his 15 children survived him. He died on the day after his 65th birthday and was buried on Copp's Hill Burying Ground, in Boston's North End.

Works

Mather was a prolific writer and industrious in having his works printed, including a vast number of his sermons.

Major
Memorable Providences (1689), his first full book, on the subject of witchcraft
Wonders of the Invisible World (1692), his second major book, also on witchcraft, sent to London in October 1692
Pillars of Salt (1699)
Magnalia Christi Americana (1702)
The Negro Christianized (1706)
Corderius Americanus: A Discourse on the Good Education of Children (1708)
Bonifacius (1710)

Pillars of Salt

Mather's first published sermon, printed in 1686, concerned the execution of James Morgan, convicted of murder. Thirteen years later, Mather published the sermon in a compilation, along with other similar works, called Pillars of Salt.

Magnalia Christi Americana

Magnalia Christi Americana, considered Mather's greatest work, was published in 1702, when he was 39. The book includes several biographies of saints and describes the process of the New England settlement. In this context "saints" does not refer to the canonized saints of the Catholic church, but to those Puritan divines about whom Mather is writing.
It comprises seven total books, including Pietas in Patriam: The life of His Excellency Sir William Phips, originally published anonymously in London in 1697. Although it is one of Mather's best-known works, some have openly criticized it, labeling it hard to follow and understand, and poorly paced and organized. However, other critics have praised Mather's work, citing it as one of the best efforts at properly documenting the establishment of America and the growth of the people.

The Christian Philosopher

In 1721, Mather published The Christian Philosopher, the first systematic book on science published in America. Mather attempted to show how Newtonian science and religion were in harmony. It was in part based on Robert Boyle's The Christian Virtuoso (1690). Mather reportedly took inspiration from Hayy ibn Yaqdhan, by the 12th-century Islamic philosopher Abu Bakr Ibn Tufail. Despite condemning the "Mahometans" as infidels, Mather viewed the novel's protagonist, Hayy, as a model for his ideal Christian philosopher and monotheistic scientist. Mather viewed Hayy as a noble savage and applied this in the context of attempting to understand the Native Americans, in order to convert them to Puritan Christianity. Mather's short treatise on the Lord's Supper was later translated by his cousin Josiah Cotton.

In popular culture

The rock band Cotton Mather is named after Mather. The Handsome Family's 2006 album Last Days of Wonder is named in reference to Mather's 1693 book Wonders of the Invisible World, which lyricist Rennie Sparks found intriguing because of what she called its "madness brimming under the surface of things." One of the stories in Richard Brautigan's collection Revenge of the Lawn is called "1692 Cotton Mather Newsreel". Seth Gabel portrays Cotton Mather in the TV series Salem, which aired from 2014 to 2017.

See also
John Ratcliff

References
Notes

References

Further reading

External links
Salem Witchcraft and Cotton Mather by Charles Wentworth Upham at Project Gutenberg
Cotton Mather's writings
Mather's influential commentary, collegiateway.org
The Wonders of the Invisible World (1693 edition) (PDF format)
The Threefold Paradise of Cotton Mather: An Edition of "Triparadisus" (PDF format)
Cotton Mather's "~Resolved~", A Puritan Father's Lesson Plan, neprimer.com
Cotton Mather's "The Story of Margaret Rule", bartleby.com
Cordwainer Smith
Paul Myron Anthony Linebarger (July 11, 1913 – August 6, 1966), better known by his pen-name Cordwainer Smith, was an American author known for his science fiction works. Linebarger was a US Army officer, a noted East Asia scholar, and an expert in psychological warfare. Although his career as a writer was shortened by his death at the age of 53, he is considered one of science fiction's more influential and talented authors.

Early life and education

Linebarger's father, Paul Myron Wentworth Linebarger, was a lawyer, working as a judge in the Philippines. There he met Chinese nationalist Sun Yat-sen, to whom he became an advisor. Linebarger's father sent his wife to give birth in Milwaukee, Wisconsin, so that their child would be eligible to become president of the United States. Sun Yat-sen, who was considered the father of Chinese nationalism, became Linebarger's godfather.

His young life was unsettled as his father moved the family to a succession of places in Asia, Europe, and the United States. He was sometimes sent to boarding schools for safety. In all, Linebarger attended more than 30 schools. In 1919, while at a boarding school in Hawaii, he was blinded in his right eye, which was replaced by a glass eye; the vision in his remaining eye was impaired by infection.

Linebarger was familiar with English, German, and Chinese by adulthood. At the age of 23, he received a PhD in political science from Johns Hopkins University.

Career

From 1937 to 1946, Linebarger held a faculty appointment at Duke University, where he began producing highly regarded works on Far Eastern affairs. While retaining his professorship at Duke after the beginning of World War II, Linebarger began serving as a second lieutenant in the United States Army, where he was involved in the creation of the Office of War Information and the Operation Planning and Intelligence Board. He also helped organize the army's first psychological warfare section. In 1943, he was sent to China to coordinate military intelligence operations. When he later pursued his interest in China, Linebarger became a close confidant of Chiang Kai-shek. By the end of the war, he had risen to the rank of major.

In 1947, Linebarger moved to the Johns Hopkins University's School of Advanced International Studies in Washington, DC, where he served as Professor of Asiatic Studies. He used his experiences in the war to write the book Psychological Warfare (1948), regarded by many in the field as a classic text. He eventually rose to the rank of colonel in the reserves. He was recalled to advise the British forces in the Malayan Emergency and the U.S. Eighth Army in the Korean War. While he was known to call himself a "visitor to small wars", he refrained from becoming involved in the Vietnam War, but is known to have done work for the Central Intelligence Agency. In 1969 CIA officer Miles Copeland Jr. wrote that Linebarger was "perhaps the leading practitioner of 'black' and 'gray' propaganda in the Western world". According to Joseph Burkholder Smith, a former CIA operative, he conducted classes in psychological warfare for CIA agents at his home in Washington under cover of his position at the School of Advanced International Studies. He traveled extensively and became a member of the Foreign Policy Association, and was called upon to advise President John F. Kennedy.

Marriage and family

In 1936, Linebarger married Margaret Snow. They had a daughter in 1942 and another in 1947. They divorced in 1949.
In 1950, Linebarger married Genevieve Collins; they had no children. They remained married until his death from a heart attack in 1966, at Johns Hopkins University Medical Center in Baltimore, Maryland, at age 53. Linebarger had expressed a wish to retire to Australia, which he had visited in his travels. He is buried in Arlington National Cemetery, Section 35, Grave Number 4712. His widow, Genevieve Collins Linebarger, was interred with him on November 16, 1981.

Case history debate

Linebarger is long rumored to have been "Kirk Allen", the fantasy-haunted subject of "The Jet-Propelled Couch," a chapter in psychologist Robert M. Lindner's best-selling 1954 collection The Fifty-Minute Hour. According to Cordwainer Smith scholar Alan C. Elms, this speculation first reached print in Brian Aldiss's 1973 history of science fiction, Billion Year Spree; Aldiss, in turn, claimed to have received the information from science fiction fan and scholar Leon Stover. More recently, both Elms and librarian Lee Weinstein have gathered circumstantial evidence to support the case for Linebarger's being Allen, but both concede there is no direct proof that Linebarger was ever a patient of Lindner's or that he suffered from a disorder similar to that of Kirk Allen.

Science fiction style

According to Frederik Pohl, Linebarger's identity as "Cordwainer Smith" was kept secret until his death. ("Cordwainer" is an archaic word for "a worker in cordwain or cordovan leather; a shoemaker", and a "smith" is "one who works in iron or other metals; esp. a blacksmith or farrier": two kinds of skilled workers with traditional materials.) Linebarger also employed the literary pseudonyms "Carmichael Smith" (for his political thriller Atomsk), "Anthony Bearden" (for his poetry) and "Felix C. Forrest" (for the novels Ria and Carola). Some of Smith's stories are written in narrative styles closer to traditional Chinese stories than to most English-language fiction, as well as reminiscent of the Genji tales of Lady Murasaki.

The total volume of his science fiction output is relatively small, because of his time-consuming profession and his early death. Smith's works consist of one novel, originally published in two volumes in edited form as The Planet Buyer, also known as The Boy Who Bought Old Earth (1964) and The Underpeople (1968), and later restored to its original form as Norstrilia (1975); and 32 short stories (collected in The Rediscovery of Man (1993), including two versions of the short story "War No. 81-Q").

Linebarger's cultural links to China are partially expressed in the pseudonym "Felix C. Forrest", which he used in addition to "Cordwainer Smith": his godfather Sun Yat-sen suggested to Linebarger that he adopt the Chinese name "Lin Bai-lo" (), which may be roughly translated as "Forest of Incandescent Bliss". ("Felix" is Latin for "happy".) In his later years, Linebarger proudly wore a tie with the Chinese characters for this name embroidered on it.

As an expert in psychological warfare, Linebarger was very interested in the newly developing fields of psychology and psychiatry. He used many of their concepts in his fiction. His fiction often has religious overtones or motifs, particularly evident in characters who have no control over their actions. James B. Jordan argued for the importance of Anglicanism to Smith's works back to 1949. But Linebarger's daughter Rosana Hart has indicated that he did not become an Anglican until 1950, and was not strongly interested in religion until later still.
The introduction to the collection The Rediscovery of Man notes that from around 1960 Linebarger became more devout and expressed this in his writing. Linebarger's works are sometimes included in analyses of Christianity in fiction, along with the works of authors such as C. S. Lewis and J. R. R. Tolkien.

Most of Smith's stories are set in the far future, between 4,000 and 14,000 years from now. After the Ancient Wars devastate Earth, humans, ruled by the Instrumentality of Mankind, rebuild and expand to the stars in the Second Age of Space around 6000 AD. Over the next few thousand years, mankind spreads to thousands of worlds, and human life becomes safe but sterile as robots and the animal-derived Underpeople take over many human jobs and humans themselves are genetically programmed as embryos for specified duties. Towards the end of this period, the Instrumentality attempts to revive old cultures and languages in a process known as the Rediscovery of Man, in which humans emerge from their mundane utopia and Underpeople are freed from slavery.

For years, Linebarger kept a pocket notebook which he had filled with ideas about the Instrumentality and additional stories in the series. But while in a small boat in a lake or bay in the mid-1960s, he leaned over the side, and his notebook fell out of his breast pocket into the water, where it was lost forever. Another story claims that he accidentally left the notebook in a restaurant in Rhodes in 1965. With the book gone, he felt empty of ideas and decided to start a new series, which was an allegory of Mid-Eastern politics.

Smith's stories describe a long future history of Earth. The settings range from a postapocalyptic landscape with walled cities, defended by agents of the Instrumentality, to a state of sterile utopia, in which freedom can be found only deep below the surface, in long-forgotten and buried anthropogenic strata. These features may place Smith's works within the Dying Earth subgenre of science fiction, though they are ultimately more optimistic and distinctive.

Smith's most celebrated short story is his first-published, "Scanners Live in Vain", which led many of its earliest readers to assume that "Cordwainer Smith" was a new pen name for one of the established giants of the genre. It was selected as one of the best science fiction short stories of the pre-Nebula Award period by the Science Fiction and Fantasy Writers of America, appearing in The Science Fiction Hall of Fame Volume One, 1929–1964. "The Ballad of Lost C'Mell" was similarly honored, appearing in The Science Fiction Hall of Fame, Volume Two. After "Scanners Live in Vain", Smith's next story did not appear for several years, but from 1955 until his death in 1966 his stories appeared regularly, for the most part in Galaxy Science Fiction.

His universe featured creations such as:

The planet Norstrilia (Old North Australia), a semi-arid planet where an immortality drug called stroon is harvested from gigantic, virus-infected sheep each weighing more than 100 tons. Norstrilians are nominally the richest people in the galaxy and defend their immensely valuable stroon with sophisticated weapons (as shown in the story "Mother Hitton's Littul Kittons"). However, extremely high taxes ensure that everyone on the planet lives a frugal, rural life, like the farmers of old Australia, to keep the Norstrilians tough.
The punishment world Shayol (cf. Sheol), where criminals are punished by the regrowth and harvesting of their organs for transplanting.

Planoforming spacecraft, which are crewed by humans telepathically linked with cats to defend against the attacks of malevolent entities in space, which are perceived by the humans as dragons, and by the cats as gigantic rats, in "The Game of Rat and Dragon".

The Underpeople, animals modified into human form and intelligence to fulfill servile roles, and treated as property. Several stories feature clandestine efforts to liberate the Underpeople and grant them civil rights. They are seen everywhere throughout regions controlled by the Instrumentality. Names of Underpeople have a single-letter prefix based on their animal species. Thus C'Mell ("The Ballad of Lost C'Mell") is cat-derived; D'Joan ("The Dead Lady of Clown Town"), a Joan of Arc figure, is descended from dogs; and B'dikkat ("A Planet Named Shayol") has bovine ancestors.

Habermans and their supervisors, Scanners, who are essential for space travel, but at the cost of having their sensory nerves cut to block the "pain of space", and who perceive only by vision and various life-support implants. A technological breakthrough removes the need for the treatment, but resistance among the Scanners to their perceived loss of status ensues, forming the basis of the story "Scanners Live in Vain".

Early works in the timeline include neologisms which are not explained to any great extent, but serve to produce an atmosphere of strangeness. These words are usually derived from non-English words. For instance, manshonyagger derives from the German words "Menschen", meaning, in some senses, "men" or "mankind", and "Jäger", meaning a hunter, and refers to war machines that roam the wild lands between the walled cities and prey on men, except for those they can identify as Germans. Another example is "Meeya Meefla", the only city to have preserved its name from the pre-atomic era: evidently Miami, Florida, from its abbreviated form (as on road signs) "MIAMI FLA".

Character names in the stories often derive from words in languages other than English. Smith seemed particularly fond of using numbers for this purpose. For instance, the name "Lord Sto Odin" in the story "Under Old Earth" is derived from the Russian words for "one hundred and one", сто один; it also suggests the name of the Norse god Odin. Quite a few of the names mean "five-six" in different languages, including the robot Fisi (fi[ve]-si[x]), the dead Lady Panc Ashash (in Sanskrit "pañcha" [पञ्च] is "five" and "ṣaṣ" [षष्] is "six"), Limaono (lima-ono, Hawaiian and/or Fijian), Englok (ng5-luk6 [五-六], in Cantonese), Goroke (go-roku [五-六], Japanese) and Femtiosex ("fifty-six" in Swedish) in "The Dead Lady of Clown Town", as well as the main character in "Think Blue, Count Two", Veesey-koosey, which is an English transcription of the Finnish words "viisi" (five) and "kuusi" (six). Four of the characters in "Think Blue, Count Two" are called "Thirteen" in different languages: Tiga-belas (both in Indonesian and Malay), Trece (Spanish), Talatashar (based on an Arabic dialect form ثلاث عشر, thalāth ʿashar) and Sh'san (based on Mandarin 十三, shísān, where the "í" is never pronounced). Other names, notably that of Lord Jestocost (Russian Жестокость, Cruelty), are non-English but not numbers.

Remnants of modern culture accordingly appear as valued antiquities or sometimes just as unrecognized survivals, lending a rare feeling of nostalgia for the present to the stories.
Published non-fiction

The Political Doctrines of Sun Yat-Sen: An Exposition of the San Min Chu I (1937)
Government in Republican China (1938)
The China of Chiang K'ai-shek: A Political Study (1941)
Psychological Warfare (1948; revised second edition, 1954 – available online)
Foreign milieux (HBM 200/1) (1951)
Immediate improvement of theater-level psychological warfare in the Far East (1951)
Far Eastern Government and Politics: China and Japan (1954; with Djang Chu and Ardath W. Burks)
"Draft statement of a ten-year China and Indochina policy, 1956–1966" (1956)
Essays on military psychological operations (1966)

Unpublished novels

1939 (rewritten in 1947) General Death
1946 Journey in Search of a Destination
1947–1948 The Dead Can Bite (a.k.a. Sarmantia)

Published fiction

Short stories

Titles marked with an asterisk * are independent stories not related to the Instrumentality universe.

"War No. 81-Q" (original version, June 1928) *
"Scanners Live in Vain" (June 1950)
"The Game of Rat and Dragon" (October 1955)
"Mark Elf" (May 1957)
"The Burning of the Brain" (October 1958)
"Western Science Is So Wonderful" (December 1958) *
"No, No, Not Rogov!" (February 1959)
"Nancy" (March 1959) *
"When the People Fell" (April 1959)
"Golden the Ship Was—Oh! Oh! Oh!" (April 1959)
"Angerhelm" (June 1959) *
"The Fife of Bodhidharma" (June 1959) *
"The Lady Who Sailed The Soul" (April 1960)
"Alpha Ralpha Boulevard" (June 1961)
"Mother Hitton's Littul Kittons" (June 1961)
"A Planet Named Shayol" (October 1961)
"From Gustible's Planet" (July 1962)
"The Ballad of Lost C'Mell" (October 1962)
"Think Blue, Count Two" (February 1963)
The stories making up the collection Quest of the Three Worlds:
"On the Gem Planet" (October 1963)
"On the Storm Planet" (February 1965)
"On the Sand Planet" (December 1965)
"Three to a Given Star" (October 1965)
"Drunkboat" (October 1963)
"The Good Friends" (October 1963) *
"The Boy Who Bought Old Earth" (the first half of Norstrilia, April 1964, adapted into The Planet Buyer)
"The Store of Heart's Desire" (the second half of Norstrilia, May 1964, adapted into The Underpeople)
"The Crime and the Glory of Commander Suzdal" (May 1964)
"The Dead Lady of Clown Town" (August 1964)
"Under Old Earth" (February 1966)
"Down to a Sunless Sea" (October 1975) (with Genevieve Linebarger)
"The Queen of the Afternoon" (April 1978)
"The Colonel Came Back from the Nothing-at-All" (May 1979)
"Himself in Anachron" (1993) (completed by Genevieve Linebarger)
"War No. 81-Q" (rewritten version, 1993)

Book format

Ria (1947; writing as "Felix C. Forrest")
Forrest") Atomsk: A Novel of Suspense (1949; writing as "Carmichael Smith") You Will Never Be The Same (1963, collection of short science fiction stories) The Planet Buyer (1964; first half of Norstrilia, with some rearrangement) Space Lords (1965; short science fiction stories) Quest of the Three Worlds (1966; four related science fiction novellas) The Underpeople (1968; second half of Norstrilia, with some rearrangement) Under Old Earth and Other Explorations (1970; short science fiction stories) Stardreamer (1971; short science fiction stories) Norstrilia (1975; first complete publication in intended form) The Best of Cordwainer Smith (1975; short science fiction stories) The Instrumentality of Mankind (1979; short science fiction stories) The Rediscovery of Man (1993; definitive and complete compilation of short science fiction writings) Norstrilia (1994; corrected edition with variant texts) We the Underpeople (2006; collection of 5 Instrumentality of Mankind short stories & the novel Norstrilia) When the People Fell (2007; collection of many Instrumentality of Mankind short stories, including all of those previously collected in Quest of the Three Worlds) See also Cordwainer Smith Rediscovery Award References External links Official webite Arlington National Cemetery: Linebarger Paul Myron Anthony Linebarger Papers at the Hoover Institution Archives "Remembering Cordwainer Smith," Ted Gioia (The Atlantic Monthly) Past Masters: Forest of Incandescent Bliss by Bud Webster at Galactic Central Felix C. Forrest (3 records) and Carmichael Smith (no records) at LC Authorities 1913 births 1966 deaths 20th-century American novelists American Episcopalians American male novelists American military writers American science fiction writers American short story writers American sinologists Burials at Arlington National Cemetery Duke University faculty Johns Hopkins University faculty Johns Hopkins University alumni Writers from Milwaukee Psychological warfare theorists United States Army colonels Religion in science fiction American male short story writers Novelists from Wisconsin Novelists from Maryland American male non-fiction writers People of the United States Office of War Information United States Army personnel of World War II 20th-century American male writers 20th-century pseudonymous writers
Commutator
In mathematics, the commutator gives an indication of the extent to which a certain binary operation fails to be commutative. There are different definitions used in group theory and ring theory.

Group theory

The commutator of two elements, $g$ and $h$, of a group $G$, is the element

$[g, h] = g^{-1} h^{-1} g h$.

This element is equal to the group's identity if and only if $g$ and $h$ commute (from the definition, $[g, h] = g^{-1} h^{-1} g h$ is equal to the identity if and only if $gh = hg$). The set of all commutators of a group is not in general closed under the group operation, but the subgroup of $G$ generated by all commutators is closed and is called the derived group or the commutator subgroup of $G$. Commutators are used to define nilpotent and solvable groups and the largest abelian quotient group.

The definition of the commutator above is used throughout this article, but many other group theorists define the commutator as $[g, h] = g h g^{-1} h^{-1}$.

Identities (group theory)

Commutator identities are an important tool in group theory. The expression $a^x$ denotes the conjugate of $a$ by $x$, defined as $x^{-1} a x$.

1. $x^y = x [x, y]$
2. $[y, x] = [x, y]^{-1}$
3. $[x, zy] = [x, y] \cdot [x, z]^y$ and $[xz, y] = [x, y]^z \cdot [z, y]$
4. $[x, y^{-1}] = [y, x]^{y^{-1}}$ and $[x^{-1}, y] = [y, x]^{x^{-1}}$
5. $[[x, y^{-1}], z]^y \cdot [[y, z^{-1}], x]^z \cdot [[z, x^{-1}], y]^x = 1$

Identity (5) is also known as the Hall–Witt identity, after Philip Hall and Ernst Witt. It is a group-theoretic analogue of the Jacobi identity for the ring-theoretic commutator (see next section).

N.B., the above definition of the conjugate of $a$ by $x$ is used by some group theorists. Many other group theorists define the conjugate of $a$ by $x$ as $x a x^{-1}$. This is often written ${}^{x}a$. Similar identities hold for these conventions.

Many identities are used that are true modulo certain subgroups. These can be particularly useful in the study of solvable groups and nilpotent groups. For instance, in any group, second powers behave well:

$(xy)^2 = x^2 y^2 [y, x] [[y, x], y]$.

If the derived subgroup is central, then

$(xy)^n = x^n y^n [y, x]^{\binom{n}{2}}$.

Ring theory

Rings often do not support division. Thus, the commutator of two elements $a$ and $b$ of a ring (or any associative algebra) is defined differently by

$[a, b] = ab - ba$.

The commutator is zero if and only if $a$ and $b$ commute. In linear algebra, if two endomorphisms of a space are represented by commuting matrices in terms of one basis, then they are so represented in terms of every basis. By using the commutator as a Lie bracket, every associative algebra can be turned into a Lie algebra.

The anticommutator of two elements $a$ and $b$ of a ring or associative algebra is defined by

$\{a, b\} = ab + ba$.

Sometimes $[a, b]_+$ is used to denote the anticommutator, while $[a, b]_-$ is then used for the commutator. The anticommutator is used less often, but can be used to define Clifford algebras and Jordan algebras and in the derivation of the Dirac equation in particle physics.

The commutator of two operators acting on a Hilbert space is a central concept in quantum mechanics, since it quantifies how well the two observables described by these operators can be measured simultaneously. The uncertainty principle is ultimately a theorem about such commutators, by virtue of the Robertson–Schrödinger relation. In phase space, equivalent commutators of function star-products are called Moyal brackets and are completely isomorphic to the Hilbert space commutator structures mentioned.

Identities (ring theory)

The commutator has the following properties:

Lie-algebra identities

1. $[A + B, C] = [A, C] + [B, C]$
2. $[A, A] = 0$
3. $[A, B] = -[B, A]$
4. $[A, [B, C]] + [B, [C, A]] + [C, [A, B]] = 0$

Relation (3) is called anticommutativity, while (4) is the Jacobi identity.

Additional identities

1. $[A, BC] = [A, B]C + B[A, C]$
2. $[A, BCD] = [A, B]CD + B[A, C]D + BC[A, D]$
3. $[A, BCDE] = [A, B]CDE + B[A, C]DE + BC[A, D]E + BCD[A, E]$
4. $[AB, C] = A[B, C] + [A, C]B$
5. $[ABC, D] = AB[C, D] + A[B, D]C + [A, D]BC$
6. $[ABCD, E] = ABC[D, E] + AB[C, E]D + A[B, E]CD + [A, E]BCD$
7. $[A, B + C] = [A, B] + [A, C]$
8. $[A + B, C] = [A, C] + [B, C]$

If $A$ is a fixed element of a ring $R$, identity (1) can be interpreted as a Leibniz rule for the map $\operatorname{ad}_A \colon R \to R$ given by $\operatorname{ad}_A(B) = [A, B]$. In other words, the map $\operatorname{ad}_A$ defines a derivation on the ring $R$. Identities (2), (3) represent Leibniz rules for more than two factors, and are valid for any derivation. Identities (4)–(6) can also be interpreted as Leibniz rules.
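As a quick sanity check on these definitions, here is a minimal numerical sketch (not part of the original article; NumPy and the Pauli matrices are illustrative choices) verifying the group and ring commutator definitions, the Jacobi identity, and the Leibniz rule:

```python
import numpy as np

def comm(a, b):
    """Ring-theoretic commutator [a, b] = ab - ba."""
    return a @ b - b @ a

def group_comm(g, h):
    """Group-theoretic commutator [g, h] = g^-1 h^-1 g h (invertible matrices)."""
    return np.linalg.inv(g) @ np.linalg.inv(h) @ g @ h

# Pauli matrices: a standard example of non-commuting elements.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# [X, Y] = 2i Z, so X and Y do not commute.
assert np.allclose(comm(X, Y), 2j * Z)

# Anticommutator {X, Y} = XY + YX vanishes for distinct Pauli matrices.
assert np.allclose(X @ Y + Y @ X, 0)

# Jacobi identity: [X,[Y,Z]] + [Y,[Z,X]] + [Z,[X,Y]] = 0.
assert np.allclose(
    comm(X, comm(Y, Z)) + comm(Y, comm(Z, X)) + comm(Z, comm(X, Y)), 0)

# Leibniz rule (additional identity 1): [X, YZ] = [X, Y]Z + Y[X, Z].
assert np.allclose(comm(X, Y @ Z), comm(X, Y) @ Z + Y @ comm(X, Z))

# Group commutator of two commuting invertible matrices is the identity.
D1, D2 = np.diag([1.0, 2.0]), np.diag([3.0, 4.0])  # diagonal matrices commute
assert np.allclose(group_comm(D1, D2), np.eye(2))
```

The last check mirrors the statement above that $[g, h]$ equals the group identity exactly when $g$ and $h$ commute.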
Identities (7), (8) express Z-bilinearity.

Some of the above identities can be extended to the anticommutator using the above ± subscript notation. For example:

$[AB, C]_\pm = A[B, C]_- + [A, C]_\pm B$.

Exponential identities

Consider a ring or algebra in which the exponential $e^A = \exp(A) = 1 + A + \tfrac{1}{2!}A^2 + \cdots$ can be meaningfully defined, such as a Banach algebra or a ring of formal power series. In such a ring, Hadamard's lemma applied to nested commutators gives:

$e^A B e^{-A} = B + [A, B] + \frac{1}{2!}[A, [A, B]] + \frac{1}{3!}[A, [A, [A, B]]] + \cdots = e^{\operatorname{ad}_A}(B)$.

(For the last expression, see Adjoint derivation below.) This formula underlies the Baker–Campbell–Hausdorff expansion of $\log(\exp(A)\exp(B))$.

A similar expansion expresses the group commutator of expressions $e^A$ and $e^B$ (analogous to elements of a Lie group) in terms of a series of nested commutators (Lie brackets):

$e^A e^B e^{-A} e^{-B} = \exp\!\left([A, B] + \tfrac{1}{2}[A + B, [A, B]] + \cdots\right)$.

Graded rings and algebras

When dealing with graded algebras, the commutator is usually replaced by the graded commutator, defined in homogeneous components as

$[\omega, \eta]_{\mathrm{gr}} := \omega\eta - (-1)^{\deg\omega \, \deg\eta}\, \eta\omega$.

Adjoint derivation

Especially if one deals with multiple commutators in a ring $R$, another notation turns out to be useful. For an element $x \in R$, we define the adjoint mapping $\operatorname{ad}_x \colon R \to R$ by:

$\operatorname{ad}_x(y) = [x, y] = xy - yx$.

This mapping is a derivation on the ring $R$:

$\operatorname{ad}_x(yz) = \operatorname{ad}_x(y)\, z + y\, \operatorname{ad}_x(z)$.

By the Jacobi identity, it is also a derivation over the commutation operation:

$\operatorname{ad}_x([y, z]) = [\operatorname{ad}_x(y), z] + [y, \operatorname{ad}_x(z)]$.

Composing such mappings, we get for example

$\operatorname{ad}_x \operatorname{ad}_y(z) = [x, [y, z]]$

and

$\operatorname{ad}_x^2(z) = \operatorname{ad}_x(\operatorname{ad}_x(z)) = [x, [x, z]]$.

We may consider $\operatorname{ad}$ itself as a mapping, $\operatorname{ad} \colon R \to \operatorname{End}(R)$, where $\operatorname{End}(R)$ is the ring of mappings from $R$ to itself with composition as the multiplication operation. Then $\operatorname{ad}$ is a Lie algebra homomorphism, preserving the commutator:

$\operatorname{ad}_{[x, y]} = [\operatorname{ad}_x, \operatorname{ad}_y]$.

By contrast, it is not always a ring homomorphism: usually $\operatorname{ad}_{xy} \neq \operatorname{ad}_x \operatorname{ad}_y$.

General Leibniz rule

The general Leibniz rule, expanding repeated derivatives of a product, can be written abstractly using the adjoint representation:

$x^n y = \sum_{k=0}^{n} \binom{n}{k} \operatorname{ad}_x^k(y)\, x^{n-k}$.

Replacing $x$ by the differentiation operator $\partial$, and $y$ by the multiplication operator $m_f \colon g \mapsto fg$, we get $\operatorname{ad}(\partial)(m_f) = m_{\partial(f)}$, and applying both sides to a function $g$, the identity becomes the usual Leibniz rule for the $n$-th derivative $\partial^n(fg)$.

See also
Anticommutativity
Associator
Baker–Campbell–Hausdorff formula
Canonical commutation relation
Centralizer a.k.a. commutant
Derivation (abstract algebra)
Moyal bracket
Pincherle derivative
Poisson bracket
Ternary commutator
Three subgroups lemma

Notes

References

Further reading

External links
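The exponential and adjoint identities above lend themselves to direct numerical verification. A minimal sketch (not from the article; SciPy's matrix exponential and random test matrices are illustrative choices):

```python
import numpy as np
from scipy.linalg import expm  # matrix exponential

def comm(x, y):
    return x @ y - y @ x

rng = np.random.default_rng(0)
A = 0.1 * rng.standard_normal((3, 3))  # small norm: the series converges quickly
B = rng.standard_normal((3, 3))

# Hadamard's lemma: e^A B e^-A = B + [A,B] + [A,[A,B]]/2! + ... = e^(ad_A)(B)
lhs = expm(A) @ B @ expm(-A)
rhs = np.zeros_like(B)
term = B.copy()
for k in range(1, 20):
    rhs += term
    term = comm(A, term) / k  # builds ad_A^k(B) / k! incrementally
assert np.allclose(lhs, rhs)

# ad is a Lie algebra homomorphism: ad_{[x,y]} = [ad_x, ad_y],
# which is exactly the Jacobi identity in disguise.
x, y, z = (rng.standard_normal((3, 3)) for _ in range(3))
assert np.allclose(
    comm(comm(x, y), z),                       # ad_{[x,y]}(z)
    comm(x, comm(y, z)) - comm(y, comm(x, z))  # [ad_x, ad_y](z)
)
```

Truncating the nested-commutator series at twenty terms suffices here because the norm of $A$ is small; for large $A$ the same identity holds but more terms are needed.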
Commonwealth Heads of Government Meeting
The Commonwealth Heads of Government Meeting (CHOGM) is a biennial summit meeting of the governmental leaders from all Commonwealth nations. Despite the name, the head of state may be present in the meeting instead of the head of government, especially among semi-presidential states. Every two years the meeting is held in a different member state and is chaired by that nation's respective prime minister or president, who becomes the Commonwealth Chair-in-Office until the next meeting.

Queen Elizabeth II, who was the Head of the Commonwealth, attended every CHOGM beginning with Ottawa in 1973 until Perth in 2011, although her formal participation only began in 1997. She was represented by the Prince of Wales at the 2013 meeting as the 87-year-old monarch was curtailing long-distance travel. The Queen attended the 2015 summit in Malta and the 2018 summit (delayed by one year) in London, but was represented again by the Prince of Wales at the 2022 meeting (delayed by two years) in Rwanda.

The first CHOGM was held in 1971 in Singapore, and there have been 26 held in total: the most recent was held in Kigali, Rwanda. They are held once every two years, although this pattern has occasionally been interrupted. They are held around the Commonwealth, rotating by invitation amongst its members.

In the past, CHOGMs have attempted to orchestrate common policies on certain contentious issues and current events, with a special focus on issues affecting member nations. CHOGMs have discussed the continuation of apartheid rule in South Africa and how to end it, military coups in Pakistan and Fiji, and allegations of electoral fraud in Zimbabwe. Sometimes the member states agree on a common idea or solution and release a joint statement declaring their opinion. More recently, beginning at the 1997 CHOGM, the meeting has had an official theme, set by the host nation, on which the primary discussions have been focused.

History

The meetings originated with the leaders of the self-governing colonies of the British Empire. The First Colonial Conference in 1887 was followed by periodic meetings, known as Imperial Conferences from 1907, of government leaders of the Empire. The development of the independence of the dominions, and the creation of a number of new dominions, as well as the invitation of Southern Rhodesia (which also attended as a sui generis colony), changed the nature of the meetings. As the dominion leaders asserted themselves more and more at the meetings, it became clear that the time for 'imperial' conferences was over.

From the ashes of the Second World War, seventeen Commonwealth Prime Ministers' Conferences were held between 1944 and 1969. Of these, sixteen were held in London, reflecting then-prevailing views of the Commonwealth as the continuation of the Empire and the centralisation of power in the British Commonwealth Office (the one meeting outside London, in Lagos, was an extraordinary meeting held in January 1966 to co-ordinate policies towards Rhodesia). Two supplementary meetings were also held during this period: a Commonwealth Statesmen's meeting to discuss peace terms in April 1945, and a Commonwealth Economic Conference in 1952.

The 1960s saw an overhaul of the Commonwealth. The swift expansion of the Commonwealth after decolonisation saw the newly independent countries demand the creation of the Commonwealth Secretariat; the United Kingdom, in response, successfully founded the Commonwealth Foundation.
This decentralisation of power demanded a reformulation of the meetings. Instead of the meetings always being held in London, they would rotate across the membership, subject to countries' ability to host them, beginning with Singapore in 1971. They were also renamed the 'Commonwealth Heads of Government Meetings' to reflect the growing diversity of the constitutional structures in the Commonwealth.

Structure

The core of the CHOGM is the executive sessions, the formal gatherings of the heads of government to do business. However, the majority of the important decisions are made not in the main meetings themselves, but at the informal 'retreats': introduced at the second CHOGM, in Ottawa, by Prime Minister of Canada Pierre Trudeau, but reminiscent of the excursions to Chequers or Dorneywood in the days of the Prime Ministers' Conferences. Only the head of the delegation, their spouse, and one additional person attend the retreats. The additional person may serve in any capacity (personal, political, security, etc.) but has only occasional and intermittent access to the head of the delegation. It is usually at the retreat where, isolated from their advisers, the heads resolve the most intransigent issues: leading to the Gleneagles Agreement in 1977, the Lusaka Declaration in 1979, the Langkawi Declaration in 1989, the Millbrook Programme in 1995, the Aso Rock Declaration in 2003, and the Colombo Declaration on Sustainable, Inclusive and Equitable Development in 2013.

The 'fringe' of civil society organisations, including the Commonwealth Family and local groups, adds a cultural dimension to the event and brings the CHOGM a higher media profile and greater acceptance by the local population. First officially recognised at Limassol in 1993, these events, spanning a longer period than the meeting itself, have, to an extent, preserved the length of the CHOGM: but only in the cultural sphere. Other meetings, such as those of the Commonwealth Ministerial Action Group, the Commonwealth Business Council, and the respective foreign ministers, have also dealt with business away from the heads of government themselves.

As the scope of the CHOGM has expanded beyond the meetings of the heads of government themselves, the CHOGMs have become progressively shorter, and their business compacted into less time. The 1971 CHOGM lasted for nine days, and the 1977 and 1991 CHOGMs for seven days each. However, the epochal 1991 Harare CHOGM was the last to run a full week; the 1993 CHOGM lasted for five days, and the contentious 1995 CHOGM for only three and a half. The 2005 and subsequent conferences were held over two to two-and-a-half days. However, recent CHOGMs have also featured several days of pre-summit Commonwealth Forums on business, women, and youth, as well as the Commonwealth People's Forum and meetings of foreign ministers.

Issues

During the 1980s, CHOGMs were dominated by calls for the Commonwealth to impose sanctions on South Africa to pressure the country to end apartheid. The division between Britain under Margaret Thatcher's government, which resisted the calls for sanctions, and the African Commonwealth countries and the rest of the Commonwealth was intense at times and led to speculation that the organisation might collapse. According to one of Margaret Thatcher's former aides, Mrs. Thatcher, very privately, used to say that CHOGM stood for "Compulsory Handouts to Greedy Mendicants."
According to his daughter, Denis Thatcher also referred to CHOGM as standing for 'Coons Holidaying on Government Money'.

In 2011, British Prime Minister David Cameron informed the British House of Commons that his proposals to reform the rules governing royal succession, a change which would require the approval of all sixteen Commonwealth realms, had been approved at the 28–30 October CHOGM in Perth, subsequently referred to as the Perth Agreement.

Rwanda joined the Commonwealth in 2009 despite the Commonwealth Human Rights Initiative's (CHRI) finding that "the state of governance and human rights in Rwanda does not satisfy Commonwealth standards", and that it "does not therefore qualify for admission". Both the CHRI and Human Rights Watch have found that respect for democracy and human rights in Rwanda has declined since the country joined the Commonwealth. There have been calls for the Commonwealth to stand up for democracy and human rights in Rwanda at the 2022 CHOGM.

Agenda

Under the Millbrook Commonwealth Action Programme, each CHOGM is responsible for renewing the remit of the Commonwealth Ministerial Action Group, whose responsibility it is to uphold the Harare Declaration on the core political principles of the Commonwealth.

Incidents

A bomb exploded at the Sydney Hilton Hotel, the venue for the February 1978 Commonwealth Heads of Government Regional Meeting. Twelve foreign heads of government were staying in the hotel at the time. Most delegates were evacuated by Royal Australian Air Force helicopters, and the meeting was moved to Bowral, protected by 800 soldiers of the Australian Army.

As the convocation of heads of government and permanent Commonwealth staff and experts, CHOGMs are the highest institution of action in the Commonwealth, and rare occasions on which Commonwealth leaders all come together. CHOGMs have been the venues of many of the Commonwealth's most dramatic events. Robert Mugabe announced Zimbabwe's immediate withdrawal from the Commonwealth at the 2003 CHOGM, and Nigeria's execution of Ken Saro-Wiwa and eight others on the first day of the 1995 CHOGM led to that country's suspension. CHOGMs have also been the trigger of a number of events that have shaken participating countries domestically. The departure of Uganda's President Milton Obote to the 1971 CHOGM allowed Idi Amin to overthrow Obote's government. Similarly, President James Mancham's attendance at the 1977 CHOGM gave Prime Minister France-Albert René the opportunity to seize power in the Seychelles.

List of meetings

The 25th CHOGM was originally scheduled for Vanuatu in 2017, but the country rescinded its offer to host after Cyclone Pam devastated its infrastructure in March 2015. The meeting was rescheduled for the United Kingdom in the spring of 2018, which also resulted in the 26th CHOGM, originally scheduled for 2019, being rescheduled for 22–27 June 2020. However, due to the coronavirus pandemic, the 26th CHOGM was again postponed to 2022.

Notes

References

External links

Commonwealth Heads of Government Meeting page on the Commonwealth Secretariat web site
CHOGM 2007 official page (Kampala)
CHOGM 2011, Australian Government
CHOGM 2013 official website
https://en.wikipedia.org/wiki/Charles%20Messier
Charles Messier
Charles Messier (26 June 1730 – 12 April 1817) was a French astronomer. He published an astronomical catalogue consisting of 110 nebulae and star clusters, which came to be known as the Messier objects. Messier's purpose for the catalogue was to help astronomical observers distinguish between permanent and transient visually diffuse objects in the sky.

Biography

Messier was born in Badonviller in the Lorraine region of France, the tenth of twelve children of Françoise B. Grandblaise and Nicolas Messier, a court usher. Six of his brothers and sisters died while young, and his father died in 1741. Charles' interest in astronomy was stimulated by the appearance of the great six-tailed comet in 1744 and by an annular solar eclipse visible from his hometown on 25 July 1748.

In 1751, Messier entered the employ of Joseph Nicolas Delisle, the astronomer of the French Navy, who instructed him to keep careful records of his observations. Messier's first documented observation was that of the Mercury transit of 6 May 1753, followed by the observation journals he kept at the Hôtel de Cluny and at the French Navy observatories.

In 1764, Messier was made a fellow of the Royal Society; in 1769, he was elected a foreign member of the Royal Swedish Academy of Sciences; and on 30 June 1770, he was elected to the French Academy of Sciences. He was given the nickname "Ferret of Comets" by King Louis XV.

Messier discovered 13 comets:
C/1760 B1 (Messier)
C/1763 S1 (Messier)
C/1764 A1 (Messier)
C/1766 E1 (Messier)
C/1769 P1 (Messier)
D/1770 L1 (Lexell)
C/1771 G1 (Messier)
C/1773 T1 (Messier)
C/1780 U2 (Messier)
C/1788 W1 (Messier)
C/1793 S2 (Messier)
C/1798 G1 (Messier)
C/1785 A1 (Messier-Méchain)

He also co-discovered Comet C/1801 N1 (Comet Pons-Messier-Méchain-Bouvard), a discovery shared with several other observers including Pons, Méchain, and Bouvard.

Near the end of his life, Messier self-published a booklet connecting the great comet of 1769 to the birth of Napoleon, who was in power at the time of publishing.

Messier is buried in Père Lachaise Cemetery, Paris, in Section 11. The grave is faintly inscribed and is near the grave of Frédéric Chopin, slightly to the west and directly north, behind the small mausoleum of the horologist Abraham-Louis Breguet.

Messier catalogue

Messier's occupation as a comet hunter led him to continually come across fixed diffuse objects in the night sky which could be mistaken for comets. He compiled a list of them, in collaboration with his friend and assistant Pierre Méchain (who may have found at least 20 of the objects), to avoid wasting time sorting them out from the comets they were looking for. The entries are now known to be 39 galaxies, 4 planetary nebulae, 7 other types of nebulae, and 55 star clusters.

Messier did his observing with a 100 mm (four-inch) refracting telescope from the Hôtel de Cluny (now the Musée national du Moyen Âge), in downtown Paris, France. The list he compiled contains only objects found in the area of the sky Messier could observe, from the north celestial pole to a declination of about −35.7°. (A rough geometric check of this limit is sketched below.) They are not organized scientifically by object type or by location.

The first version of Messier's catalogue contained 45 objects and was published in 1774 in the journal of the French Academy of Sciences in Paris. In addition to his own discoveries, this version included objects previously observed by other astronomers, with only 17 of the 45 objects being discovered by Messier himself.
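The declination limit quoted above follows from simple spherical geometry: an object of declination δ culminates at altitude 90° − (φ − δ) for a northern-hemisphere observer at latitude φ, so the lowest declination that ever clears a given minimum altitude is roughly φ − 90° plus that altitude. The following is a minimal sketch of the arithmetic, assuming Paris's latitude of about 48.85° N and an invented minimum usable altitude to stand in for horizon obstruction and atmospheric extinction:

```python
# Southern declination limit for an observer at northern latitude phi.
# A southern object of declination delta culminates at altitude
# 90 - (phi - delta) degrees, so it clears a minimum altitude h_min
# exactly when delta >= phi - 90 + h_min.

def southern_declination_limit(latitude_deg: float, min_altitude_deg: float = 0.0) -> float:
    """Lowest declination (degrees) that reaches at least min_altitude_deg."""
    return latitude_deg - 90.0 + min_altitude_deg

paris_latitude = 48.85  # degrees north (assumed value, for illustration)

# Ideal mathematical horizon: objects down to about -41 degrees could rise.
print(southern_declination_limit(paris_latitude))        # -41.15

# With a few degrees of minimum altitude for rooftops and haze, the limit
# lands near the -35.7 degrees quoted for Messier's catalogue.
print(southern_declination_limit(paris_latitude, 5.5))   # -35.65
```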
By 1780 the catalogue had increased to 80 objects. The final version of the catalogue was published in 1781, in the 1784 issue of Connaissance des Temps. By then, the list of Messier objects had grown to 103. On several occasions between 1921 and 1966, astronomers and historians discovered evidence of another seven objects that were observed either by Messier or by Méchain shortly after the final version was published. These seven objects, M 104 through M 110, are accepted by astronomers as "official" Messier objects.

The objects' Messier designations, from M 1 to M 110, are still used by professional and amateur astronomers today, and their relative brightness makes them popular objects in the amateur astronomical community.

Legacy

The lunar crater Messier and the asteroid 7359 Messier were named in his honour.

See also

Deep-sky object
List of Messier objects
Messier object
Messier marathon
Caldwell catalogue

Notes

References

External links

A virtual exhibition of Charles Messier's manuscripts on the Paris Observatory digital library
https://en.wikipedia.org/wiki/Corrado%20Gini
Corrado Gini
Corrado Gini (23 May 1884 – 13 March 1965) was an Italian statistician, demographer and sociologist who developed the Gini coefficient, a measure of the income inequality in a society. Gini was a proponent of organicism and applied it to nations. Gini was a eugenicist, and prior to and during World War II, he was an advocate of Italian Fascism. Following the war, he founded the Italian Unionist Movement, which advocated for the annexation of Italy by the United States.

Career

Gini was born on 23 May 1884 in Motta di Livenza, near Treviso, into an old landed family. He entered the Faculty of Law at the University of Bologna, where in addition to law he studied mathematics, economics, and biology.

Gini's scientific work ran in two directions: towards the social sciences and towards statistics. His interests ranged well beyond the formal aspects of statistics, to the laws that govern biological and social phenomena. His first published work was Il sesso dal punto di vista statistico (1908). This work is a thorough review of the natal sex ratio, looking at past theories and at how new hypotheses fit the statistical data. In particular, it presents evidence that the tendency to produce one or the other sex of child is, to some extent, heritable.

He published the Gini coefficient in the 1912 paper Variability and Mutability (Variabilità e mutabilità). Also called the Gini index and the Gini ratio, it is a measure of statistical dispersion intended to represent the income inequality within a nation or other group (a short computational sketch follows at the end of this section).

In 1910, he acceded to the Chair of Statistics at the University of Cagliari and then at Padua in 1913. He founded the statistical journal Metron in 1920, directing it until his death; it only accepted articles with practical applications. He became a professor at the Sapienza University of Rome in 1925. At the university, he founded a lecture course on sociology, maintaining it until his retirement. He also set up the School of Statistics in 1928 and, in 1936, the Faculty of Statistical, Demographic and Actuarial Sciences.

Under fascism

In 1926, he was appointed President of the Central Institute of Statistics in Rome, which he organised as a single centre for Italian statistical services. He was a close intimate of Mussolini throughout the 1920s. He resigned from his position within the institute in 1932. In 1927 he published a treatise entitled The Scientific Basis of Fascism.

In 1929, Gini founded the Italian Committee for the Study of Population Problems (Comitato italiano per lo studio dei problemi della popolazione) which, two years later, organised the first Population Congress in Rome. A eugenicist as well as a demographer, Gini led an expedition to survey Polish populations, among them the Karaites. Gini was a supporter of fascism throughout the 1920s and expressed his hope that Nazi Germany and Fascist Italy would emerge as victors in the Second World War. However, he never supported any measure of exclusion of the Jews.

Milestones during the rest of his career include:
In 1933 – vice president of the International Sociological Institute.
In 1934 – president of the Italian Genetics and Eugenics Society.
In 1935 – president of the International Federation of Eugenics Societies in Latin-language Countries.
In 1937 – president of the Italian Sociological Society.
In 1941 – president of the Italian Statistical Society.
In 1957 – Gold Medal for outstanding service to the Italian School.
In 1962 – National Member of the Accademia dei Lincei.
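As a companion to the description of the Gini coefficient above, here is a minimal computational sketch. It uses the mean-absolute-difference form of the definition (the mean absolute difference between all pairs of incomes, divided by twice the mean income); the sample incomes are invented purely for illustration.

```python
# Minimal sketch of the Gini coefficient via the mean absolute difference:
# G = (mean of |x_i - x_j| over all ordered pairs) / (2 * mean income).
# 0 means perfect equality; values approaching 1 mean extreme inequality.

def gini(incomes: list[float]) -> float:
    n = len(incomes)
    mean = sum(incomes) / n
    # Sum of absolute differences over all ordered pairs (i, j).
    total_abs_diff = sum(abs(x - y) for x in incomes for y in incomes)
    mean_abs_diff = total_abs_diff / (n * n)
    return mean_abs_diff / (2 * mean)

# Invented example data, purely for illustration.
equal_society = [100, 100, 100, 100]
unequal_society = [10, 20, 30, 340]

print(gini(equal_society))    # 0.0
print(gini(unequal_society))  # 0.625
```

The quadratic pairwise loop keeps the definition visible; for large samples one would normally use the equivalent sorted-data formula instead.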
Italian Unionist Movement

On 12 October 1944, Gini joined with the Calabrian activist Santi Paladino and fellow statistician Ugo Damiani to found the Italian Unionist Movement, whose emblem was the Stars and Stripes, the Italian flag and a world map. According to the three men, the Government of the United States should annex all free and democratic nations worldwide, thereby transforming itself into a world government and allowing Washington, D.C. to maintain Earth in a perpetual condition of peace. The party existed until 1948, but it had little success and its aims were not supported by the United States.

Organicism and nations

Gini was a proponent of organicism and saw nations as organic in nature. He shared the view held by Oswald Spengler that populations go through a cycle of birth, growth, and decay. Gini claimed that nations at a primitive level have a high birth rate, but, as they evolve, the upper classes' birth rate drops while the lower classes' birth rate, though higher, would inevitably decline as their stronger members emigrated, died in war, or entered the upper classes. If a nation continued on this path without resistance, Gini claimed, it would enter a final decadent stage in which it would degenerate, as marked by a decreasing birth rate, decreasing cultural output, and a lack of imperial conquest. At this point, the decadent nation, with its aging population, could be overrun by a more youthful and vigorous nation. Gini's organicist theories of nations and natality are believed to have influenced policies of Italian Fascism.

Honours

The following honorary degrees were conferred upon him: Economics by the Catholic University of the Sacred Heart in Milan (1932), Sociology by the University of Geneva (1934), Sciences by Harvard University (1936), and Social Sciences by the University of Córdoba, Argentina (1963).

Partial bibliography

Il sesso dal punto di vista statistico: le leggi della produzione dei sessi (1908)
Sulla misura della concentrazione e della variabilità dei caratteri (1914)
Quelques considérations au sujet de la construction des nombres indices des prix et des questions analogues (1924)
Memorie di metodologia statistica. Vol. 1: Variabilità e Concentrazione (1955)
Memorie di metodologia statistica. Vol. 2: Transvariazione (1960)

References

External links

Biography of Corrado Gini at Metron, the statistics journal he founded
"Corrado Gini and Italian Statistics under Fascism" by Giovanni Favero, June 2002
A. Forcina and G. M. Giorgi, "Early Gini's Contributions to Inequality Measurement and Statistical Inference", JEHPS, March 2005
Another photograph
https://en.wikipedia.org/wiki/Crankshaft
Crankshaft
A crankshaft is a mechanical component used in a piston engine to convert reciprocating motion into rotational motion. The crankshaft is a rotating shaft containing one or more crankpins that are driven by the pistons via the connecting rods. The crankpins are also called rod bearing journals, and they rotate within the "big end" of the connecting rods.

Most modern crankshafts are located in the engine block. They are made from steel or cast iron, using either a forging, casting or machining process.

Design

The crankshaft is located within the engine block, held in place via main bearings which allow the crankshaft to rotate within the block. The up-down motion of each piston is transferred to the crankshaft via connecting rods. A flywheel is often attached to one end of the crankshaft, in order to smooth the power delivery and reduce vibration.

A crankshaft is subjected to enormous stresses from the combustion forces of each cylinder. Crankshafts for single-cylinder engines are usually a simpler design than for engines with multiple cylinders.

Bearings

The crankshaft is able to rotate in the engine block due to the 'main bearings'. Since the crankshaft is subject to large horizontal and torsional forces from each cylinder, these main bearings are located at various points along the crankshaft, rather than just one at each end. The number of main bearings is determined based on the overall load factor and the maximum engine speed. Crankshafts in diesel engines often use a main bearing between every cylinder and at both ends of the crankshaft, due to the high forces of combustion present.

Flexing of the crankshaft was a factor in V8 engines replacing straight-eight engines in the 1950s; the long crankshafts of the latter suffered from an unacceptable amount of flex when engine designers began using higher compression ratios and higher engine speeds (RPM).

Piston stroke

The distance between the axis of the crankpins and the axis of the crankshaft determines the stroke length of the engine: the stroke is twice this crank radius. Most modern car engines are classified as "over square" or short-stroke, wherein the stroke is less than the diameter of the cylinder bore. A common way to increase the low-RPM torque of an engine is to increase the stroke, sometimes known as "stroking" the engine. Historically, the trade-off for a long-stroke engine was a lower rev limit and increased vibration at high RPM, due to the increased piston velocity (a numerical sketch of this relationship appears below, after the 'Engine balance' subsection).

Cross-plane and flat-plane configurations

When designing an engine, the crankshaft configuration is closely related to the engine's firing order. Most production V8 engines (such as the Ford Modular engine and the General Motors LS engine) use a cross-plane crank whereby the crank throws are spaced 90° apart. However, some high-performance V8 engines (such as the Ferrari 488) instead use a flat-plane crank, whereby the throws are spaced 180° apart, which essentially results in two inline-four engines sharing a common crankcase. Flat-plane engines are usually able to operate at higher RPM; however, they have higher second-order vibrations, so they are better suited to racing car engines.

Engine balance

For some engines it is necessary to provide counterweights for the reciprocating mass of the pistons, conrods and crankshaft, in order to improve the engine balance. These counterweights are typically cast as part of the crankshaft but, occasionally, are bolt-on pieces.
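The link between stroke and piston velocity noted under 'Piston stroke' can be made concrete with the standard mean-piston-speed relation: each crank revolution carries the piston through two strokes, so the mean speed is 2 × stroke × revolutions per second. The sketch below uses invented stroke figures purely for illustration:

```python
# Mean piston speed: each crank revolution moves the piston down and up
# one full stroke, so v_mean = 2 * stroke * (rpm / 60).

def mean_piston_speed(stroke_m: float, rpm: float) -> float:
    """Mean piston speed in metres per second."""
    return 2.0 * stroke_m * (rpm / 60.0)

# Invented example strokes, for illustration only.
engines = {
    "short-stroke": 0.080,  # 80 mm stroke ("over square" if bore > 80 mm)
    "long-stroke": 0.100,   # 100 mm stroke
}

for name, stroke in engines.items():
    v = mean_piston_speed(stroke, 6000)
    print(f"{name}: {v:.1f} m/s at 6000 rpm")

# Output: 16.0 m/s vs 20.0 m/s. At the same RPM the long-stroke pistons
# move 25% faster, which is why long-stroke designs historically carried
# lower rev limits and more vibration at high engine speed.
```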
Flying arms

In some engines, the crankshaft contains direct links between adjacent crankpins, without the usual intermediate main bearing. These links are called flying arms. This arrangement is sometimes used in V6 and V8 engines, in order to maintain an even firing interval while using different V angles, and to reduce the number of main bearings required. The downside of flying arms is that the rigidity of the crankshaft is reduced, which can cause problems at high RPM or high power outputs.

Counter-rotating crankshafts

In most engines, each connecting rod is attached to a single crankshaft, which results in the angle of the connecting rod varying as the piston moves through its stroke. This variation in angle pushes the pistons against the cylinder wall, which causes friction between the piston and cylinder wall. To prevent this, some early engines - such as the 1900-1904 Lanchester Engine Company flat-twin engines - connected each piston to two crankshafts rotating in opposite directions. This arrangement cancels out the lateral forces and reduces the requirement for counterweights. This design is rarely used; however, a similar principle applies to balance shafts, which are occasionally used.

Construction

Forged crankshafts

Crankshafts can be created from a steel bar using roll forging. Today, manufacturers tend to favour forged crankshafts due to their lighter weight, more compact dimensions and better inherent damping. With forged crankshafts, vanadium micro-alloyed steels are mainly used, as these steels can be air-cooled after reaching high strengths without additional heat treatment, except for the surface hardening of the bearing surfaces. The low alloy content also makes the material cheaper than high-alloy steels. Carbon steels, by contrast, require additional heat treatment to reach the desired properties.

Cast crankshafts

Another construction method is to cast the crankshaft from ductile iron. Cast iron crankshafts are today mostly found in cheaper production engines where the loads are lower.

Machined crankshafts

Crankshafts can also be machined from billet, often a bar of high-quality vacuum-remelted steel. Though the fibre flow (local inhomogeneities of the material's chemical composition generated during casting) does not follow the shape of the crankshaft (which is undesirable), this is usually not a problem, since higher-quality steels, which normally are difficult to forge, can be used. Per unit, these crankshafts tend to be very expensive, due to the large amount of material that must be removed with lathes and milling machines, the high material cost, and the additional heat treatment required. However, since no expensive tooling is needed, this production method allows small production runs without high up-front costs.

History

China

The earliest hand-operated cranks appeared in China during the Han Dynasty (202 BC – 220 AD). They were used for silk-reeling, hemp-spinning, for the agricultural winnowing fan, in the water-powered flour-sifter, for hydraulic-powered metallurgic bellows, and in the well windlass. The rotary winnowing fan greatly increased the efficiency of separating grain from husks and stalks. However, the potential of the crank for converting circular motion into reciprocal motion never seems to have been fully realized in China, and the crank was typically absent from such machines until the turn of the 20th century.
Europe

A crank in the form of an eccentrically-mounted handle of the rotary handmill appeared in 5th century BC Celtiberian Spain and ultimately spread across the Roman Empire. A Roman iron crank dating to the 2nd century AD was excavated in Augusta Raurica, Switzerland. The crank-operated Roman mill is dated to the late 2nd century. Evidence for the crank combined with a connecting rod appears in the Hierapolis mill, dating to the 3rd century; they are also found in stone sawmills in Roman Syria and Ephesus dating to the 6th century. The pediment of the Hierapolis mill shows a waterwheel fed by a mill race powering, via a gear train, two frame saws which cut blocks by way of connecting rods and cranks. The crank and connecting rod mechanisms of the other two archaeologically-attested sawmills worked without a gear train. Water-powered marble saws in Germany were mentioned by the late 4th century poet Ausonius; about the same time, these mill types seem also to be indicated by Gregory of Nyssa from Anatolia.

A rotary grindstone operated by a crank handle is shown in the Carolingian manuscript Utrecht Psalter; the pen drawing of around 830 goes back to a late antique original. Cranks used to turn wheels are also depicted or described in various works dating from the tenth to thirteenth centuries.

The first depictions of the compound crank in the carpenter's brace appear between 1420 and 1430 in northern European artwork. The rapid adoption of the compound crank can be traced in the works of an unknown German engineer writing on the state of military technology during the Hussite Wars: first, the connecting-rod, applied to cranks, reappeared; second, double-compound cranks also began to be equipped with connecting-rods; and third, the flywheel was employed for these cranks to get them over the 'dead-spot'. The concept was much improved by the Italian engineer and writer Roberto Valturio in 1463, who devised a boat with five sets of parallel cranks, all joined to a single power source by one connecting-rod, an idea also taken up by his compatriot, the Italian painter Francesco di Giorgio.

The crank had become common in Europe by the early 15th century, as seen in the works of the military engineer Konrad Kyeser (1366 – after 1405). Devices depicted in Kyeser's Bellifortis include cranked windlasses for spanning siege crossbows, cranked chains of buckets for water-lifting, and cranks fitted to a wheel of bells. Kyeser also equipped the Archimedes' screws for water-raising with a crank handle, an innovation which subsequently replaced the ancient practice of working the pipe by treading. Pisanello painted a piston-pump driven by a water-wheel and operated by two simple cranks and two connecting-rods.

The 15th century also saw the introduction of cranked rack-and-pinion devices, called cranequins, which were fitted to the crossbow's stock as a means of exerting even more force while spanning the missile weapon. In the textile industry, cranked reels for winding skeins of yarn were introduced.

The Italian physician Guido da Vigevano (c. 1280–1349), planning for a new crusade, made illustrations for a paddle boat and war carriages that were propelled by manually turned compound cranks and gear wheels, identified as an early crankshaft prototype by Lynn Townsend White.
The Luttrell Psalter, dating to around 1340, depicts a grindstone rotated by two cranks, one at each end of its axle; the geared hand-mill, operated either with one or two cranks, appeared later in the 15th century. Around 1480, the early medieval rotary grindstone was improved with a treadle and crank mechanism. Cranks mounted on push-carts first appear in a German engraving of 1589.

Crankshafts were also described by Leonardo da Vinci (1452–1519) and by a Dutch farmer and windmill owner named Cornelis Corneliszoon van Uitgeest in 1592. Corneliszoon's wind-powered sawmill used a crankshaft to convert a windmill's circular motion into a back-and-forward motion powering the saw. He was granted a patent for his crankshaft in 1597.

From the 16th century onwards, evidence of cranks and connecting rods integrated into machine design becomes abundant in the technological treatises of the period: Agostino Ramelli's The Diverse and Artifactitious Machines of 1588 depicts eighteen examples, a number that rises to 45 different machines in the Theatrum Machinarum Novum by Georg Andreas Böckler.

Cranks were still common on some machines in the early 20th century; for example, almost all phonographs before the 1930s were powered by clockwork motors wound with cranks. Reciprocating piston engines use cranks to convert the linear piston motion into rotational motion. Internal combustion engines of early 20th century automobiles were usually started with hand cranks before electric starters came into general use.

Western Asia

The non-manual crank appears in several of the hydraulic devices described by the Banū Mūsā brothers in their 9th-century Book of Ingenious Devices. These automatically operated cranks appear in several devices, two of which contain an action which approximates to that of a crankshaft, anticipating Ismail al-Jazari's invention by several centuries and its first appearance in Europe by over five centuries. The automatic crank described by the Banū Mūsā would not have allowed a full rotation, but only a small modification was required to convert it to a crankshaft.

The Arab engineer Ismail al-Jazari (1136–1206), in the Artuqid Sultanate, described a crank and connecting rod system in a rotating machine in two of his water-raising machines. The author Sally Ganchy identified a crankshaft in his twin-cylinder pump mechanism, including both the crank and shaft mechanisms.

See also

Bicycle crankset
Brace (tool)
Cam
Cam engine
Camshaft
Crank (mechanism)
Crankcase
Crankshaft torsional vibration
Piston motion equations
Tunnel crankshaft
Scotch yoke
Swashplate

References

Sources

External links

Interactive crank animation: https://www.desmos.com/calculator/8l2kvyivqo
D & T Mechanisms - Interactive Tools for Teachers (applets): https://web.archive.org/web/20140714155346/http://www.content.networcs.net/tft/mechanisms.htm
https://en.wikipedia.org/wiki/Cell%20cycle
Cell cycle
The cell cycle, or cell-division cycle, is the series of events that take place in a cell that cause it to divide into two daughter cells. These events include the duplication of its DNA (DNA replication) and some of its organelles, and subsequently the partitioning of its cytoplasm, chromosomes and other components into two daughter cells in a process called cell division.

In cells with nuclei (eukaryotes, i.e., animal, plant, fungal, and protist cells), the cell cycle is divided into two main stages: interphase and the mitotic (M) phase (including mitosis and cytokinesis). During interphase, the cell grows, accumulating nutrients needed for mitosis, and replicates its DNA and some of its organelles. During the mitotic phase, the replicated chromosomes, organelles, and cytoplasm separate into two new daughter cells. To ensure the proper replication of cellular components and division, there are control mechanisms known as cell cycle checkpoints after each of the key steps of the cycle that determine if the cell can progress to the next phase.

In cells without nuclei (prokaryotes, i.e., bacteria and archaea), the cell cycle is divided into the B, C, and D periods. The B period extends from the end of cell division to the beginning of DNA replication. DNA replication occurs during the C period. The D period refers to the stage between the end of DNA replication and the splitting of the bacterial cell into two daughter cells.

In single-celled organisms, a single cell-division cycle is how the organism replicates itself. In multicellular organisms such as plants and animals, a series of cell-division cycles is how the organism develops from a single-celled fertilized egg into a mature organism, and is also the process by which hair, skin, blood cells, and some internal organs are regenerated and healed (with the possible exception of nerves; see nerve damage). After cell division, each of the daughter cells begins the interphase of a new cell cycle. Although the various stages of interphase are not usually morphologically distinguishable, each phase of the cell cycle has a distinct set of specialized biochemical processes that prepare the cell for initiation of cell division.

Phases

The eukaryotic cell cycle consists of four distinct phases: G1 phase, S phase (synthesis) and G2 phase (collectively known as interphase), and M phase (mitosis and cytokinesis). M phase is itself composed of two tightly coupled processes: mitosis, in which the cell's nucleus divides, and cytokinesis, in which the cell's cytoplasm divides, forming two daughter cells. Activation of each phase is dependent on the proper progression and completion of the previous one. Cells that have temporarily or reversibly stopped dividing are said to have entered a state of quiescence called G0 phase.

G0 phase (quiescence)

G0 is a resting phase in which the cell has left the cycle and stopped dividing. Non-proliferative (non-dividing) cells in multicellular eukaryotes generally enter the quiescent G0 state from G1 and may remain quiescent for long periods of time, possibly indefinitely (as is often the case for neurons). This is very common for cells that are fully differentiated.
Some cells enter the G0 phase semi-permanently and are considered post-mitotic, e.g., some liver, kidney, and stomach cells. Many cells do not enter G0 and continue to divide throughout an organism's life, e.g., epithelial cells.

The word "post-mitotic" is sometimes used to refer to both quiescent and senescent cells. Cellular senescence occurs in response to DNA damage and external stress and usually constitutes an arrest in G1. Cellular senescence may make a cell's progeny nonviable; it is often a biochemical alternative to the self-destruction of such a damaged cell by apoptosis.

Interphase

Interphase represents the phase between two successive M phases. It is a series of changes that takes place in a newly formed cell and its nucleus before it becomes capable of division again. It is also called the preparatory phase or intermitosis. Typically interphase lasts for at least 91% of the total time required for the cell cycle. Interphase proceeds in three stages, G1, S, and G2, followed by the cycle of mitosis and cytokinesis. The cell's nuclear DNA contents are duplicated during S phase.

G1 phase (first growth phase or post-mitotic gap phase)

The first phase within interphase, from the end of the previous M phase until the beginning of DNA synthesis, is called G1 (G indicating gap). It is also called the growth phase. During this phase, the biosynthetic activities of the cell, which are considerably slowed down during M phase, resume at a high rate. The duration of G1 is highly variable, even among different cells of the same species. In this phase, the cell increases its supply of proteins, increases the number of organelles (such as mitochondria and ribosomes), and grows in size.

In G1 phase, a cell has three options:
to continue the cell cycle and enter S phase;
to stop the cell cycle and enter G0 phase to undergo differentiation;
to become arrested in G1 phase, from which it may enter G0 phase or re-enter the cell cycle.

The deciding point is called the restriction point or START and is regulated by G1/S cyclins, which cause the transition from G1 to S phase. Passage through the restriction point commits the cell to division.

S phase (DNA replication)

The ensuing S phase starts when DNA synthesis commences; when it is complete, all of the chromosomes have been replicated, i.e., each chromosome consists of two sister chromatids. Thus, during this phase, the amount of DNA in the cell has doubled, though the ploidy and number of chromosomes are unchanged. Rates of RNA transcription and protein synthesis are very low during this phase. An exception to this is histone production, most of which occurs during the S phase.

G2 phase (growth)

G2 phase occurs after DNA replication and is a period of protein synthesis and rapid cell growth to prepare the cell for mitosis. During this phase microtubules begin to reorganize to form a spindle (preprophase). Before proceeding to the mitotic phase, cells must be checked at the G2 checkpoint for any DNA damage within the chromosomes. The G2 checkpoint is mainly regulated by the tumor protein p53. If the DNA is damaged, p53 will either repair the DNA or trigger the apoptosis of the cell. If p53 is dysfunctional or mutated, cells with damaged DNA may continue through the cell cycle, leading to the development of cancer.

Mitotic phase (chromosome separation)

The relatively brief M phase consists of nuclear division (karyokinesis). It is a relatively short period of the cell cycle. M phase is complex and highly regulated.
The sequence of events is divided into phases, corresponding to the completion of one set of activities and the start of the next. These phases are sequentially known as:
prophase
prometaphase
metaphase
anaphase
telophase

Mitosis is the process by which a eukaryotic cell separates the chromosomes in its cell nucleus into two identical sets in two nuclei. During the process of mitosis the pairs of chromosomes condense and attach to microtubules that pull the sister chromatids to opposite sides of the cell. Mitosis occurs exclusively in eukaryotic cells, but occurs in different ways in different species. For example, animal cells undergo an "open" mitosis, where the nuclear envelope breaks down before the chromosomes separate, while fungi such as Aspergillus nidulans and Saccharomyces cerevisiae (yeast) undergo a "closed" mitosis, where chromosomes divide within an intact cell nucleus.

Cytokinesis phase (separation of all cell components)

Mitosis is immediately followed by cytokinesis, which divides the nuclei, cytoplasm, organelles and cell membrane into two cells containing roughly equal shares of these cellular components. Mitosis and cytokinesis together define the division of the mother cell into two daughter cells, genetically identical to each other and to their parent cell. This accounts for approximately 10% of the cell cycle. Because cytokinesis usually occurs in conjunction with mitosis, "mitosis" is often used interchangeably with "M phase". However, there are many cells where mitosis and cytokinesis occur separately, forming single cells with multiple nuclei. This occurs most notably among the fungi and slime molds, but is found in various groups. Even in animals, cytokinesis and mitosis may occur independently, for instance during certain stages of fruit fly embryonic development. Errors in mitosis can result in cell death through apoptosis or cause mutations that may lead to cancer.

Regulation of the eukaryotic cell cycle

Regulation of the cell cycle involves processes crucial to the survival of a cell, including the detection and repair of genetic damage as well as the prevention of uncontrolled cell division. The molecular events that control the cell cycle are ordered and directional; that is, each process occurs in a sequential fashion and it is impossible to "reverse" the cycle.

Role of cyclins and CDKs

Two key classes of regulatory molecules, cyclins and cyclin-dependent kinases (CDKs), determine a cell's progress through the cell cycle. Leland H. Hartwell, R. Timothy Hunt, and Paul M. Nurse won the 2001 Nobel Prize in Physiology or Medicine for their discovery of these central molecules. Many of the genes encoding cyclins and CDKs are conserved among all eukaryotes, but in general, more complex organisms have more elaborate cell cycle control systems that incorporate more individual components. Many of the relevant genes were first identified by studying yeast, especially Saccharomyces cerevisiae; genetic nomenclature in yeast dubs many of these genes cdc (for "cell division cycle") followed by an identifying number, e.g. cdc25 or cdc20.

Cyclins form the regulatory subunits and CDKs the catalytic subunits of an activated heterodimer; cyclins have no catalytic activity and CDKs are inactive in the absence of a partner cyclin. When activated by a bound cyclin, CDKs perform a common biochemical reaction called phosphorylation that activates or inactivates target proteins to orchestrate coordinated entry into the next phase of the cell cycle (a minimal mathematical sketch of this cyclin-CDK feedback loop is given below).
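To make the cyclin-CDK feedback concrete, the following is a minimal sketch of Goldbeter's classic three-variable "minimal mitotic oscillator" (1991): cyclin accumulation activates a CDK, active CDK activates a cyclin protease, and the protease degrades cyclin, closing a negative-feedback loop that oscillates. The equations and parameter values are adapted from that published model (quoted from memory) and are intended as illustrative only, not as this article's own content.

```python
# Minimal cyclin-CDK oscillator, adapted from Goldbeter (1991, PNAS):
# cyclin (C) accumulates and activates a CDK (M, fraction active), and
# active CDK activates a cyclin protease (X) that degrades cyclin.
import numpy as np
from scipy.integrate import solve_ivp

# Parameter values adapted from the published model (illustrative only;
# concentrations in uM, time in minutes).
vi, vd, Kd, kd = 0.025, 0.25, 0.02, 0.01
VM1, V2, VM3, V4 = 3.0, 1.5, 1.0, 0.5
Kc = 0.5
K1 = K2 = K3 = K4 = 0.005

def cascade(t, y):
    C, M, X = y
    V1 = VM1 * C / (Kc + C)   # cyclin drives CDK activation
    V3 = VM3 * M              # active CDK drives protease activation
    dC = vi - vd * X * C / (Kd + C) - kd * C
    dM = V1 * (1 - M) / (K1 + 1 - M) - V2 * M / (K2 + M)
    dX = V3 * (1 - X) / (K3 + 1 - X) - V4 * X / (K4 + X)
    return [dC, dM, dX]

sol = solve_ivp(cascade, (0, 200), [0.01, 0.01, 0.01], max_step=0.5)

# Count local maxima of active CDK: each peak caricatures one M-phase entry.
m = sol.y[1]
peaks = int(((m[1:-1] > m[:-2]) & (m[1:-1] > m[2:])).sum())
print(f"active-CDK peaks in 200 min: {peaks}")
```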
Different cyclin-CDK combinations determine the downstream proteins targeted. CDKs are constitutively expressed in cells, whereas cyclins are synthesised at specific stages of the cell cycle in response to various molecular signals.

General mechanism of cyclin-CDK interaction

Upon receiving a pro-mitotic extracellular signal, G1 cyclin-CDK complexes become active to prepare the cell for S phase, promoting the expression of transcription factors that in turn promote the expression of S cyclins and of enzymes required for DNA replication. The G1 cyclin-CDK complexes also promote the degradation of molecules that function as S phase inhibitors by targeting them for ubiquitination. Once a protein has been ubiquitinated, it is targeted for proteolytic degradation by the proteasome. However, results from a recent study of E2F transcriptional dynamics at the single-cell level argue that the role of G1 cyclin-CDK activities, in particular cyclin D-CDK4/6, is to tune the timing rather than the commitment of cell cycle entry.

Active S cyclin-CDK complexes phosphorylate proteins that make up the pre-replication complexes assembled during G1 phase on DNA replication origins. The phosphorylation serves two purposes: to activate each already-assembled pre-replication complex, and to prevent new complexes from forming. This ensures that every portion of the cell's genome will be replicated once and only once. The reason for prevention of gaps in replication is fairly clear, because daughter cells that are missing all or part of crucial genes will die. However, for reasons related to gene copy number effects, possession of extra copies of certain genes is also deleterious to the daughter cells.

Mitotic cyclin-CDK complexes, which are synthesized but inactivated during S and G2 phases, promote the initiation of mitosis by stimulating downstream proteins involved in chromosome condensation and mitotic spindle assembly. A critical complex activated during this process is a ubiquitin ligase known as the anaphase-promoting complex (APC), which promotes degradation of structural proteins associated with the chromosomal kinetochore. APC also targets the mitotic cyclins for degradation, ensuring that telophase and cytokinesis can proceed.

Specific action of cyclin-CDK complexes

Cyclin D is the first cyclin produced in cells that enter the cell cycle, in response to extracellular signals (e.g. growth factors). Cyclin D levels stay low in resting cells that are not proliferating. Additionally, CDK4/6 and CDK2 are inactive in these cells, because CDK4/6 are bound by INK4 family members (e.g., p16), limiting kinase activity, while CDK2 complexes are inhibited by the CIP/KIP proteins such as p21 and p27. When it is time for a cell to enter the cell cycle, triggered by a mitogenic stimulus, levels of cyclin D increase. In response to this trigger, cyclin D binds to existing CDK4/6, forming the active cyclin D-CDK4/6 complex. Cyclin D-CDK4/6 complexes in turn mono-phosphorylate the retinoblastoma susceptibility protein (Rb) to pRb. The un-phosphorylated Rb tumor suppressor functions in inducing cell cycle exit and maintaining G0 arrest (senescence).

In the last few decades, a model has been widely accepted whereby pRb proteins are inactivated by cyclin D-CDK4/6-mediated phosphorylation. Rb has more than 14 potential phosphorylation sites.
Cyclin D-CDK4/6 progressively phosphorylates Rb to the hyperphosphorylated state, which triggers dissociation of pRb–E2F complexes, thereby inducing G1/S cell cycle gene expression and progression into S phase. However, observations from a recent study show that Rb is present in three types of isoforms: (1) un-phosphorylated Rb in the G0 state; (2) mono-phosphorylated Rb, also referred to as "hypo-phosphorylated" or "partially" phosphorylated Rb, in the early G1 state; and (3) inactive hyper-phosphorylated Rb in the late G1 state. In early G1 cells, mono-phosphorylated Rb exists as 14 different isoforms, each of which has a distinct E2F binding affinity. Rb has been found to associate with hundreds of different proteins, and the idea that different mono-phosphorylated Rb isoforms have different protein partners was very appealing. A recent report confirmed that mono-phosphorylation controls Rb's association with other proteins and generates functionally distinct forms of Rb. All of the different mono-phosphorylated Rb isoforms inhibit the E2F transcriptional program and are able to arrest cells in G1 phase. Importantly, different mono-phosphorylated forms of Rb have distinct transcriptional outputs that extend beyond E2F regulation.

In general, the binding of pRb to E2F inhibits the E2F target gene expression of certain G1/S and S transition genes, including E-type cyclins. The partial phosphorylation of Rb de-represses this Rb-mediated suppression of E2F target gene expression and begins the expression of cyclin E. The molecular mechanism that switches the cell to cyclin E activation is currently not known, but as cyclin E levels rise, the active cyclin E-CDK2 complex is formed, which inactivates Rb by hyper-phosphorylation. Hyperphosphorylated Rb is completely dissociated from E2F, enabling the further expression of the wide range of E2F target genes required for driving cells to proceed into S phase.

Recently, it has been identified that cyclin D-CDK4/6 binds to a C-terminal alpha-helix region of Rb that is recognised only by cyclin D, not by cyclins E, A and B. This observation, based on the structural analysis of Rb phosphorylation, supports the view that Rb is phosphorylated to different levels by multiple cyclin-CDK complexes. It also makes feasible the current model of a simultaneous switch-like inactivation of all mono-phosphorylated Rb isoforms through one type of Rb hyper-phosphorylation mechanism. In addition, mutational analysis of the cyclin D-CDK4/6-specific Rb C-terminal helix shows that disrupting cyclin D-CDK4/6 binding to Rb prevents Rb phosphorylation, arrests cells in G1, and bolsters Rb's function as a tumor suppressor. This cyclin-CDK-driven cell cycle transitional mechanism governs a cell committed to the cell cycle, allowing cell proliferation. Cancerous cell growth is often accompanied by deregulation of cyclin D-CDK4/6 activity.

The hyperphosphorylated Rb dissociates from the E2F/DP1/Rb complex (which was bound to the E2F-responsive genes, effectively "blocking" them from transcription), activating E2F. Activation of E2F results in the transcription of various genes such as cyclin E, cyclin A, DNA polymerase and thymidine kinase. Cyclin E thus produced binds to CDK2, forming the cyclin E-CDK2 complex, which pushes the cell from G1 to S phase (the G1/S transition).
Cyclin B-CDK1 complex activation causes breakdown of the nuclear envelope and initiation of prophase; subsequently, its deactivation causes the cell to exit mitosis. A quantitative study of E2F transcriptional dynamics at the single-cell level, using engineered fluorescent reporter cells, provided a quantitative framework for understanding the control logic of cell cycle entry, challenging the canonical textbook model. Genes that regulate the amplitude of E2F accumulation, such as Myc, determine the commitment to the cell cycle and S phase entry. G1 cyclin-CDK activities are not the driver of cell cycle entry. Instead, they primarily tune the timing of the E2F increase, thereby modulating the pace of cell cycle progression.

Inhibitors

Endogenous

Two families of genes, the cip/kip (CDK interacting protein/Kinase inhibitory protein) family and the INK4a/ARF (Inhibitor of Kinase 4/Alternative Reading Frame) family, prevent the progression of the cell cycle. Because these genes are instrumental in the prevention of tumor formation, they are known as tumor suppressors.

The cip/kip family includes the genes p21, p27 and p57. They halt the cell cycle in G1 phase by binding to and inactivating cyclin-CDK complexes. p21 is activated by p53 (which, in turn, is triggered by DNA damage, e.g. due to radiation). p27 is activated by Transforming Growth Factor β (TGF β), a growth inhibitor.

The INK4a/ARF family includes p16INK4a, which binds to CDK4 and arrests the cell cycle in G1 phase, and p14ARF, which prevents p53 degradation.

Synthetic

Synthetic inhibitors of Cdc25 could also be useful for the arrest of the cell cycle and could therefore be useful as antineoplastic and anticancer agents. Many human cancers possess hyper-activated CDK4/6 activities. Given the observations of cyclin D-CDK4/6 functions, inhibition of CDK4/6 should prevent a malignant tumor from proliferating. Consequently, scientists have tried to develop synthetic CDK4/6 inhibitors, as CDK4/6 has been characterized as a therapeutic target for anti-tumor effectiveness. Three CDK4/6 inhibitors - palbociclib, ribociclib, and abemaciclib - have received FDA approval for clinical use to treat advanced-stage or metastatic, hormone-receptor-positive (HR-positive, HR+), HER2-negative (HER2-) breast cancer. For example, palbociclib is an orally active CDK4/6 inhibitor which has demonstrated improved outcomes for ER-positive/HER2-negative advanced breast cancer. The main side effect is neutropenia, which can be managed by dose reduction. CDK4/6-targeted therapy will only treat cancer types where Rb is expressed; cancer cells with loss of Rb have primary resistance to CDK4/6 inhibitors.

Transcriptional regulatory network

Current evidence suggests that a semi-autonomous transcriptional network acts in concert with the CDK-cyclin machinery to regulate the cell cycle. Several gene expression studies in Saccharomyces cerevisiae have identified 800–1200 genes that change expression over the course of the cell cycle. They are transcribed at high levels at specific points in the cell cycle, and remain at lower levels throughout the rest of the cycle. While the set of identified genes differs between studies due to the computational methods and criteria used to identify them, each study indicates that a large portion of yeast genes are temporally regulated. Many periodically expressed genes are driven by transcription factors that are also periodically expressed.
One screen of single-gene knockouts identified 48 transcription factors (about 20% of all non-essential transcription factors) that show cell cycle progression defects. Genome-wide studies using high-throughput technologies have identified the transcription factors that bind to the promoters of yeast genes, and correlating these findings with temporal expression patterns has allowed the identification of transcription factors that drive phase-specific gene expression. The expression profiles of these transcription factors are driven by the transcription factors that peak in the prior phase, and computational models have shown that a CDK-autonomous network of these transcription factors is sufficient to produce steady-state oscillations in gene expression.

Experimental evidence also suggests that gene expression can oscillate with the period seen in dividing wild-type cells independently of the CDK machinery. Orlando et al. used microarrays to measure the expression of a set of 1,271 genes that they identified as periodic in both wild-type cells and cells lacking all S-phase and mitotic cyclins (clb1,2,3,4,5,6). Of the 1,271 genes assayed, 882 continued to be expressed in the cyclin-deficient cells at the same time as in the wild-type cells, despite the fact that the cyclin-deficient cells arrest at the border between G1 and S phase. However, 833 of the genes assayed changed behavior between the wild-type and mutant cells, indicating that these genes are likely directly or indirectly regulated by the CDK-cyclin machinery. Some genes that continued to be expressed on time in the mutant cells were also expressed at different levels in the mutant and wild-type cells. These findings suggest that while the transcriptional network may oscillate independently of the CDK-cyclin oscillator, the two are coupled in a manner that requires both to ensure the proper timing of cell cycle events. Other work indicates that phosphorylation, a post-translational modification, of cell cycle transcription factors by Cdk1 may alter the localization or activity of the transcription factors in order to tightly control the timing of target genes.

While oscillatory transcription plays a key role in the progression of the yeast cell cycle, the CDK-cyclin machinery operates independently in the early embryonic cell cycle. Before the midblastula transition, zygotic transcription does not occur and all needed proteins, such as the B-type cyclins, are translated from maternally loaded mRNA.

DNA replication and DNA replication origin activity

Analyses of synchronized cultures of Saccharomyces cerevisiae under conditions that prevent DNA replication initiation without delaying cell cycle progression showed that origin licensing decreases the expression of genes with origins near their 3' ends, revealing that downstream origins can regulate the expression of upstream genes. This confirms previous predictions from mathematical modeling of a global causal coordination between DNA replication origin activity and mRNA expression, and shows that mathematical modeling of DNA microarray data can be used to correctly predict previously unknown biological modes of regulation.

Checkpoints

Cell cycle checkpoints are used by the cell to monitor and regulate the progress of the cell cycle. Checkpoints prevent cell cycle progression at specific points, allowing verification of necessary phase processes and repair of DNA damage. The cell cannot proceed to the next phase until checkpoint requirements have been met.
Checkpoints typically consist of a network of regulatory proteins that monitor and dictate the progression of the cell through the different stages of the cell cycle. It is estimated that in normal human cells about 1% of single-strand DNA damages are converted to about 50 endogenous DNA double-strand breaks per cell per cell cycle. Although such double-strand breaks are usually repaired with high fidelity, errors in their repair are considered to contribute significantly to the rate of cancer in humans.

There are several checkpoints to ensure that damaged or incomplete DNA is not passed on to daughter cells. Three main checkpoints exist: the G1/S checkpoint, the G2/M checkpoint, and the metaphase (mitotic) checkpoint. Another checkpoint is the G0 checkpoint, in which cells are checked for maturity; cells that fail this checkpoint because they are not yet ready do not proceed to division.

The G1/S transition is a rate-limiting step in the cell cycle and is also known as the restriction point. Here the cell checks whether it has enough raw materials to fully replicate its DNA (nucleotide bases, DNA polymerase, chromatin, etc.). An unhealthy or malnourished cell will get stuck at this checkpoint.

The G2/M checkpoint is where the cell ensures that it has enough cytoplasm and phospholipids for two daughter cells. Sometimes more importantly, it checks to see if it is the right time to replicate. There are some situations where many cells need to replicate simultaneously (for example, a growing embryo should have a symmetric cell distribution until it reaches the mid-blastula transition). This is done by controlling the G2/M checkpoint.

The metaphase checkpoint is a fairly minor checkpoint, in that once a cell is in metaphase, it has committed to undergoing mitosis. That is not to say it is unimportant: at this checkpoint, the cell checks to ensure that the spindle has formed and that all of the chromosomes are aligned at the spindle equator before anaphase begins.

While these are the three "main" checkpoints, not all cells have to pass through each of them in this order to replicate. Many types of cancer are caused by mutations that allow the cells to speed through the various checkpoints or even skip them altogether, going from S to M to S phase almost consecutively. Because these cells have lost their checkpoints, any DNA mutations that may have occurred are disregarded and passed on to the daughter cells. This is one reason why cancer cells have a tendency to accrue mutations exponentially. Aside from cancer cells, many fully differentiated cell types no longer replicate, so they leave the cell cycle and stay in G0 until their death, removing the need for cellular checkpoints. An alternative model of the cell cycle response to DNA damage has also been proposed, known as the postreplication checkpoint.

Checkpoint regulation plays an important role in an organism's development. In sexual reproduction, when a sperm binds to an egg at fertilization, it releases signalling factors that notify the egg that it has been fertilized. Among other things, this induces the now-fertilized oocyte to return from its previously dormant G0 state back into the cell cycle and on to mitotic replication and division. p53 plays an important role in triggering the control mechanisms at both the G1/S and G2/M checkpoints. In addition to p53, checkpoint regulators are being heavily researched for their roles in cancer growth and proliferation.
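The gated progression described in this section can be caricatured as a state machine in which each phase transition is guarded by a predicate. The sketch below is purely an illustrative abstraction of the checkpoint logic, not a biological model; the predicate names and cell fields are invented.

```python
# Toy state machine caricaturing checkpoint-gated cell cycle progression:
# the cell advances to the next phase only if the guarding predicate
# (an invented stand-in for the real molecular checkpoint) returns True.

PHASES = ["G1", "S", "G2", "M"]

# Invented checkpoint predicates, keyed by the transition they guard.
CHECKPOINTS = {
    ("G1", "S"): lambda cell: cell["raw_materials"],   # G1/S restriction point
    ("G2", "M"): lambda cell: not cell["dna_damage"],  # G2/M damage check
}

def advance(cell: dict) -> dict:
    """Move the cell one phase forward if the guarding checkpoint passes."""
    i = PHASES.index(cell["phase"])
    nxt = PHASES[(i + 1) % len(PHASES)]
    gate = CHECKPOINTS.get((cell["phase"], nxt))
    if gate is None or gate(cell):
        return {**cell, "phase": nxt}
    # Otherwise the cell arrests in its current phase (cf. p53-mediated arrest).
    return cell

cell = {"phase": "G1", "raw_materials": True, "dna_damage": True}
for _ in range(6):
    cell = advance(cell)
    print(cell["phase"])   # progresses G1 -> S -> G2, then arrests at G2
```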
Fluorescence imaging of the cell cycle

Pioneering work by Atsushi Miyawaki and coworkers developed the fluorescent ubiquitination-based cell cycle indicator (FUCCI), which enables fluorescence imaging of the cell cycle. Originally, a green fluorescent protein, mAG, was fused to hGem(1/110) and an orange fluorescent protein (mKO2) was fused to hCdt1(30/120). Note that these fusions are fragments that contain a nuclear localization signal and ubiquitination sites for degradation, but are not functional proteins. The green fluorescent protein is made during the S, G2, or M phase and degraded during the G0 or G1 phase, while the orange fluorescent protein is made during the G0 or G1 phase and destroyed during the S, G2, or M phase. A far-red and near-infrared FUCCI was also developed using a cyanobacteria-derived fluorescent protein (smURFP) and a bacteriophytochrome-derived fluorescent protein.

Role in tumor formation

A dysregulation of the cell cycle components may lead to tumor formation. As mentioned above, when genes such as the cell cycle inhibitors RB and p53 mutate, they may cause the cell to multiply uncontrollably, forming a tumor. Although the duration of the cell cycle in tumor cells is equal to or longer than that of the normal cell cycle, the proportion of cells that are in active cell division (versus quiescent cells in G0 phase) in tumors is much higher than that in normal tissue. Thus there is a net increase in cell number, as the number of cells that die by apoptosis or senescence remains the same. Cells which are actively undergoing the cell cycle are targeted in cancer therapy, as the DNA is relatively exposed during cell division and hence susceptible to damage by drugs or radiation. This is exploited in cancer treatment: by a process known as debulking, a significant mass of the tumor is removed, which pushes a significant number of the remaining tumor cells from G0 to G1 phase (due to increased availability of nutrients, oxygen, growth factors etc.). Radiation or chemotherapy following the debulking procedure kills these cells which have newly entered the cell cycle. The fastest-cycling mammalian cells in culture, crypt cells in the intestinal epithelium, have a cycle time as short as 9 to 10 hours. Stem cells in resting mouse skin may have a cycle time of more than 200 hours. Most of this difference is due to the varying length of G1, the most variable phase of the cycle; the durations of M and S phase do not vary much. In general, cells are most radiosensitive in late M and G2 phases and most resistant in late S phase. For cells with a longer cell cycle time and a significantly long G1 phase, there is a second peak of resistance late in G1. The pattern of resistance and sensitivity correlates with the level of sulfhydryl compounds in the cell. Sulfhydryls are natural substances that protect cells from radiation damage and tend to be at their highest levels in S and at their lowest near mitosis. Homologous recombination (HR) is an accurate process for repairing DNA double-strand breaks. HR is nearly absent in G1 phase, is most active in S phase, and declines in G2/M. Non-homologous end joining (NHEJ), a less accurate and more mutagenic process for repairing double-strand breaks, is active throughout the cell cycle.
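The phase relationships above lend themselves to a small lookup sketch. This Python fragment is illustrative only and encodes two statements from the text: that orange (mKO2-hCdt1) fluorescence marks G0/G1 while green (mAG-hGem) marks S/G2/M, and that HR dominates double-strand-break repair in S phase while NHEJ is available throughout; the threshold value is an invented placeholder.

def fucci_phase(green, orange, threshold=0.5):
    # Crude classification of cell-cycle stage from FUCCI signals.
    if orange > threshold and green <= threshold:
        return "G0/G1"
    if green > threshold and orange <= threshold:
        return "S/G2/M"
    return "transition/unknown"  # e.g., both signals low near the G1/S handoff

DSB_REPAIR_PATHWAYS = {
    "G1":   ["NHEJ"],        # HR nearly absent in G1
    "S":    ["HR", "NHEJ"],  # HR most active in S phase
    "G2/M": ["NHEJ", "HR"],  # HR declining, NHEJ still available
}

Real FUCCI analyses fit intensity distributions rather than applying a single fixed threshold, so the classifier above should be read as a schematic of the readout, not as an analysis protocol.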
See also
Cellular model
Eukaryotic DNA replication
Mitotic catastrophe
Origin recognition complex
Retinoblastoma protein
Synchronous culture – synchronization of cell cultures
Wee1

References

Further reading

External links
David Morgan's Seminar: Controlling the Cell Cycle
The cell cycle & Cell death
Transcriptional program of the cell cycle: high-resolution timing
Cell cycle and metabolic cycle regulated transcription in yeast
Cell Cycle Animation 1Lec.com
Cell Cycle Fucci: Using GFP to visualize the cell-cycle
Science Creative Quarterly's overview of the cell cycle
KEGG – Human Cell Cycle

Cellular senescence
3,317
7,255
https://en.wikipedia.org/wiki/Connection%20%28dance%29
Connection (dance)
In partner dancing, connection is a term that refers to physical, non-verbal communication between dancers to facilitate synchronized or coordinated dance movements. Some forms of connection involve "lead/follow", in which one dancer (the "lead") directs the movements of the other dancer (the "follower") by means of non-verbal directions conveyed through a physical connection between the dancers. In other forms, connection involves multiple dancers (more than two) without a distinct leader or follower (e.g. contact improvisation). Connection refers to a host of different techniques in many types of partner dancing, especially (but not exclusively) those that feature significant physical contact between the dancers, including the Argentine Tango, Lindy Hop, Balboa, East Coast Swing, West Coast Swing, Salsa, and other ballroom dances. Other forms of communication, such as visual cues or spoken cues, sometimes aid in connecting with one's partner, but are often used in specific circumstances (e.g., practicing figures, or figures which are purposely danced without physical connection). Connection can be used to transmit power and energy as well as information and signals; some dance forms (and some dancers) primarily emphasize power or signaling, but most are probably a mixture of both. Philosopher of dance Ilya Vidrin argues that connection between partners involves norm-based communication that includes “a physical exchange of information on the basis of ethically-bound conditions” (proximity, orientation, and points of contact) which constrain agency and predictability.

Lead/Follow

Following and leading in a partner dance is accomplished by maintaining a physical connection called the frame that allows the leader to transmit body movement to the follower, and the follower to suggest ideas to the leader. A frame is a stable structural combination of both bodies maintained through the dancers' arms and/or legs. Connection occurs in both open and closed dance positions (also called "open frame" and "closed frame"). In closed position with body contact, connection is achieved by maintaining the frame. The follower moves to match the leader, maintaining the pressure between the two bodies as well as the position. When creating frame, tension is the primary means of establishing communication. Changes in tension are made to create rhythmic variations in moves and movements, and are communicated through points of contact. In an open position or a closed position without body contact, the hands and arms alone provide the connection, which may take one of three forms: tension, compression or neutral. During tension or leverage connection, the dancers are pulling away from each other with an equal and opposite force. The arms do not originate this force alone: they are often assisted by tension in trunk musculature, through body weight or by momentum. During compression connection, the dancers are pushing towards each other. In a neutral position, the hands do not impart any force other than the touch of the follower's hands in the leader's. In swing dances, tension and compression may be maintained for a significant period of time. In other dances, such as Latin, tension and compression may be used as indications of upcoming movement. However, in both styles, tension and compression do not signal immediate movement: the follower must be careful not to move prior to actual movement by the leader. Until then, the dancers must match pressures without moving their hands.
In some styles of Lindy Hop, the tension may become quite high without initiating movement. The general rule for open connections is that movements of the leader's hands back, forth, left or right originate from movements of the entire body. Accordingly, for the follower, a movement of the connected hand is immediately transformed into a corresponding movement of the body. Tensing the muscles and locking the arm achieves this effect but is neither comfortable nor correct. Such tension eliminates the subtler communication in the connection, and eliminates free movement up and down, such as is required to initiate many turns. Instead of just tensing the arms, connection is achieved by engaging the shoulder, upper body and torso muscles. Movement originates in the body's core. A leader leads by moving himself and maintaining frame and connection. Different forms of dance and different movements within each dance may call for differences in the connection. In some dances the separation distance between the partners remains fairly constant. In others, e.g. Modern Jive, moving closer together and further apart is fundamental to the dance, requiring flexion and extension of the arms, alternating compression and tension. The connection between two partners has a different feel in every dance and with every partner. Good social dancers adapt to the conventions of the dance and the responses of their partners.

See also
Frame
Dance move
Lead and follow
Musicality

References

Partner dance technique
3,319
7,257
https://en.wikipedia.org/wiki/Caste
Caste
Caste is a form of social stratification characterised by endogamy, hereditary transmission of a style of life which often includes an occupation, ritual status in a hierarchy, and customary social interaction and exclusion based on cultural notions of purity and pollution. Its paradigmatic ethnographic example is the division of India's Hindu society into rigid social groups, with roots in south Asia's ancient history and persisting to the present time. However, the economic significance of the caste system in India has been declining as a result of urbanisation and affirmative action programs. A subject of much scholarship by sociologists and anthropologists, the Hindu caste system is sometimes used as an analogical basis for the study of caste-like social divisions existing outside Hinduism and India. The term "caste" is also applied to morphological groupings in eusocial insects such as ants, bees, and termites.

Etymology

The English word caste derives from the Spanish and Portuguese casta, which, according to John Minsheu's Spanish dictionary (1599), means "race, lineage, tribe or breed". When the Spanish colonised the New World, they used the word to mean a 'clan or lineage'. It was, however, the Portuguese who first employed casta in the primary modern sense of the English word 'caste' when they applied it to the thousands of endogamous, hereditary Indian social groups they encountered upon their arrival in India in 1498. The use of the spelling caste, with this latter meaning, is first attested in English in 1613. In the Latin American context, the term caste is sometimes used to describe the casta system of racial classification, based on whether a person was of pure European, Indigenous or African descent, or some mix thereof, with the different groups being placed in a racial hierarchy; however, despite the etymological connection between the Latin American casta system and South Asian caste systems (the former giving its name to the latter), it is controversial to what extent the two phenomena are really comparable.

In South Asia

India

Modern India's caste system is based on the artificial modern superimposition of an old four-fold theoretical classification called the Varna on the natural social groupings called the Jāti. Varna conceptualised a society as consisting of four varnas or categories: Brahmin, Kshatriya, Vaishya and Shudra, according to the nature of work of its members. Varna was not an inherited category, and a person's occupation determined their varna. A person's Jāti, by contrast, is determined at birth, but it did not bind members to that Jāti's traditional occupation: members could and did change their occupation based on personal strengths and economic, social and political factors. Thus both Jāti and Varna were fluid categories, subject to change based on occupation. Based on DNA analysis, endogamous (i.e. non-intermarrying) Jātis originated during the Gupta Empire. From 1901 onwards, for the purposes of the Decennial Census, the British colonial authorities arbitrarily and incorrectly forced all Jātis into the four Varna categories as described in ancient texts. Herbert Hope Risley, the Census Commissioner, noted that "The principle suggested as a basis was that of classification by social precedence as recognized by native public opinion at the present day, and manifesting itself in the facts that particular castes are supposed to be the modern representatives of one or other of the castes of the theoretical Indian system."
Varna, as mentioned in ancient Hindu texts, describes society as divided into four categories: Brahmins (scholars and yajna priests), Kshatriyas (rulers and warriors), Vaishyas (farmers, merchants and artisans) and Shudras (workmen/service providers). The texts do not mention any hierarchy or a separate, untouchable category in Varna classifications. Scholars believe that the Varnas system was never truly operational in society, and there is no evidence of it ever being a reality in Indian history. The practical division of the society had always been in terms of Jātis (birth groups), which are not based on any specific religious principle, but could vary from ethnic origins to occupations to geographic areas. The Jātis have been endogamous social groups without any fixed hierarchy, but subject to vague notions of rank articulated over time based on lifestyle and social, political or economic status. Many of India's major empires and dynasties, like the Mauryas, Shalivahanas, Chalukyas and Kakatiyas among many others, were founded by people who would have been classified as Shudras under the Varnas system as interpreted by the British rulers. It is well established that by the 9th century, kings from all the four Varnas, including Brahmins and Vaishyas, had occupied the highest seat in the monarchical system in Hindu India, contrary to the Varna theory. In many instances, as in Bengal, the kings and rulers had historically been called upon, when required, to mediate on the ranks of Jātis, which might number in the thousands all over the subcontinent and vary by region. In practice, the jātis may or may not fit into the Varna classes, and many prominent Jātis, for example the Jats and Yadavs, straddled two Varnas, i.e. Kshatriyas and Vaishyas, and the Varna status of Jātis itself was subject to articulation over time. Starting with the 1901 Census of India led by colonial administrator Herbert Hope Risley, all the jātis were grouped under the theoretical varna categories. According to political scientist Lloyd Rudolph, Risley believed that varna, however ancient, could be applied to all the modern castes found in India, and "[he] meant to identify and place several hundred million Indians within it." The terms varna (conceptual classification based on occupation) and jāti (groups) are two distinct concepts: while varna is a theoretical four-part division, jāti (community) refers to the thousands of actual endogamous social groups prevalent across the subcontinent. The classical authors scarcely speak of anything other than the varnas, as these provided a convenient shorthand, but a problem arises when colonial Indologists sometimes confuse the two. Upon independence from Britain, the Indian Constitution listed 1,108 Jātis across the country as Scheduled Castes in 1950, for positive discrimination. The constitution also banned discrimination on the basis of caste, though the practice in India remained intact. The Untouchable communities are sometimes called Scheduled Castes, Dalit or Harijan in contemporary literature. In 2001, Dalits were 16.2% of India's population. Most of the 15 million bonded child workers are from the lowest castes. Independent India has witnessed caste-related violence. In 2005, the government recorded approximately 110,000 cases of reported violent acts, including rape and murder, against Dalits. For 2012, the government recorded 651 murders, 3,855 injuries, 1,576 rapes, 490 kidnappings, and 214 cases of arson.
The socio-economic limitations of the caste system have been reduced by urbanisation and affirmative action. Nevertheless, the caste system still exists in endogamy and patrimony, and thrives in the politics of democracy, where caste provides ready-made constituencies to politicians. Globalisation and economic opportunities from foreign businesses have influenced the growth of India's middle-class population. Some members of the Chhattisgarh Potter Caste Community (CPCC) are middle-class urban professionals and no longer potters, unlike the remaining majority of traditional rural potter members. Caste persists in Indian politics: caste associations have evolved into caste-based political parties, and political parties and the state perceive caste as an important factor for mobilisation of people and policy development. Studies by Bhatt and Beteille have shown changes in status, openness, and mobility in the social aspects of Indian society. As a result of modern socio-economic changes in the country, India is experiencing significant changes in the dynamics and the economics of its social sphere. While arranged marriages are still the most common practice in India, the internet has provided a network for younger Indians to take control of their relationships through the use of dating apps. This remains largely informal, as marriages are rarely concluded through these apps. Hypergamy is still a common practice in India and Hindu culture. Men are expected to marry within their caste, or one below, with no social repercussions. If a woman marries into a higher caste, then her children will take the status of their father. If she marries down, her family is reduced to the social status of their son-in-law. In this case, the women are the bearers of the egalitarian principle of the marriage: there would be no benefit in marrying into a higher caste if the terms of the marriage did not imply equality. However, men are systematically shielded from the negative implications of the agreement. Geographical factors also determine adherence to the caste system. Many northern villages are more likely to participate in exogamous marriage, due to a lack of eligible suitors within the same caste. Women in North India have been found to be less likely to leave or divorce their husbands, since they hold a relatively lower caste standing and face higher restrictions on their freedoms. On the other hand, Pahari women, of the northern mountains, have much more freedom to leave their husbands without stigma; this often leads to better treatment by husbands, as their actions are not protected by social expectations. Chief among the factors influencing the rise of exogamy is the rapid urbanisation in India experienced over the last century. It is well known that urban centers tend to be less reliant on agriculture and are more progressive as a whole. As India's cities boomed in population, the job market grew to keep pace. Prosperity and stability were now more easily attained by an individual, and the anxiety to marry quickly and effectively was reduced. Thus, younger, more progressive generations of urban Indians are less likely than ever to participate in the antiquated system of arranged endogamy. India has also implemented a form of affirmative action, locally known as "reservation groups". Quota-system jobs, as well as placements in publicly funded colleges, hold spots for the 8% of India's minority and underprivileged groups.
As a result, in states such as Tamil Nadu or those in the north-east, where underprivileged populations predominate, over 80% of government jobs are set aside in quotas. In education, colleges lower the marks necessary for Dalits to enter.

Nepal

The Nepali caste system resembles in some respects the Indian jāti system, with numerous jāti divisions with a varna system superimposed. Inscriptions attest the beginnings of a caste system during the Licchavi period. Jayasthiti Malla (1382–1395) categorised Newars into 64 castes (Gellner 2001). A similar exercise was made during the reign of Mahindra Malla (1506–1575). The Hindu social code was later set up in the Gorkha Kingdom by Ram Shah (1603–1636).

Pakistan

McKim Marriott claims that a social stratification that is hierarchical, closed, endogamous and hereditary is widely prevalent, particularly in western parts of Pakistan. Frederik Barth, in his review of this system of social stratification in Pakistan, suggested that these are castes.

Sri Lanka

The caste system in Sri Lanka is a division of society into strata, influenced by the textbook varnas and jāti system found in India. Ancient Sri Lankan texts such as the Pujavaliya, Sadharmaratnavaliya and Yogaratnakaraya and inscriptional evidence show that the above hierarchy prevailed throughout the feudal period. The repetition of the same caste hierarchy even as recently as the 18th century, in the Kandyan-period Kadayimpoth – Boundary books, also indicates the continuation of the tradition right up to the end of Sri Lanka's monarchy.

Outside South Asia

Southeast Asia

Indonesia

Balinese caste structure has been described as being based either on three categories (the noble triwangsa or "thrice born", the middle class of dwijāti or "twice born", and the lower class of ekajāti or "once born") or on four castes:
Brahminas – priest
Satrias – knighthood
Wesias – commerce
Sudras – servitude
The Brahmana caste was further subdivided by Dutch ethnographers into two: Siwa and Buda. The Siwa caste was subdivided into five: Kemenuh, Keniten, Mas, Manuba and Petapan. This classification was to accommodate the observed marriage between higher-caste Brahmana men and lower-caste women. The other castes were similarly further sub-classified by 19th-century and early-20th-century ethnographers based on numerous criteria ranging from profession, endogamy or exogamy or polygamy, and a host of other factors, in a manner similar to castas in Spanish colonies such as Mexico, and caste system studies in British colonies such as India.

Philippines

In the Philippines, pre-colonial societies did not have a single social structure. The class structures can be roughly categorised into four types:
Classless societies - egalitarian societies with no class structure. Examples include the Mangyan and the Kalanguya peoples.
Warrior societies - societies where a distinct warrior class exists, and whose membership depends on martial prowess. Examples include the Mandaya, Bagobo, Tagakaulo, and B'laan peoples, who had warriors called the bagani or magani. Similarly, in the Cordillera highlands of Luzon, the Isneg and Kalinga peoples refer to their warriors as mengal or maingal. This society is typical for head-hunting ethnic groups or ethnic groups which had seasonal raids (mangayaw) into enemy territory.
Petty plutocracies - societies which have a wealthy class based on property and the hosting of periodic prestige feasts.
In some groups, it was an actual caste whose members had specialised leadership roles, married only within the same caste, and wore specialised clothing. These include the kadangyan of the Ifugao, Bontoc, and Kankanaey peoples, as well as the baknang of the Ibaloi people. In others, though wealth may give one prestige and leadership qualifications, it was not a caste per se.
Principalities - societies with an actual ruling class and caste systems determined by birthright. Most of these societies are either Indianized or Islamized to a degree. They include the larger coastal ethnic groups like the Tagalog, Kapampangan, Visayan, and Moro societies. Most of them were usually divided into four to five caste systems with different names under different ethnic groups that roughly correspond to each other. The system was more or less feudalistic, with the datu ultimately having control of all the lands of the community. The land was subdivided among the enfranchised classes, the sakop or sa-op (vassals, lit. "those under the power of another"). The castes were hereditary, though they were not rigid; they were more accurately a reflection of interpersonal political relationships: a person is always the follower of another. People could move up the caste system by marriage, by wealth, or by doing something extraordinary; and conversely they could be demoted, usually as criminal punishment or as a result of debt. Shamans are the exception, as they were either volunteers, chosen by the ranking shamans, or born into the role by innate propensity for it. The castes are enumerated below from the highest rank to the lowest:
Royalty (Visayan: kadatoan) - the datu and immediate descendants. They are often further categorised according to purity of lineage. The power of the datu is dependent on the willingness of their followers to render him respect and obedience. Most roles of the datu were judicial and military. In case of an unfit datu, support may be withdrawn by his followers. Datu were almost always male, though in some ethnic groups like the Banwaon people, the female shaman (babaiyon) co-rules as the female counterpart of the datu.
Nobility (Visayan: tumao; Tagalog: maginoo; Kapampangan: ginu; Tausug: bangsa mataas) - the ruling class, either inclusive of or exclusive of the royal family. Most are descendants of the royal line or gained their status through wealth or bravery in battle. They owned lands and subjects, from whom they collected taxes.
Shamans (Visayan: babaylan; Tagalog: katalonan) - the spirit mediums, usually female or feminised men. While they were not technically a caste, they commanded the same respect and status as nobility.
Warriors (Visayan: timawa; Tagalog: maharlika) - the martial class. They could own land and subjects like the higher ranks, but were required to fight for the datu in times of war. In some Filipino ethnic groups, they were often tattooed extensively to record feats in battle and as protection against harm. They were sometimes further subdivided into different classes, depending on their relationship with the datu. They traditionally went on seasonal raids on enemy settlements.
Commoners and slaves (Visayan, Maguindanao: ulipon; Tagalog: alipin; Tausug: kiapangdilihan; Maranao: kakatamokan) - the lowest class, composed of the rest of the community who were not part of the enfranchised classes.
They were further subdivided into the commoner class who had their own houses, the servants who lived in the houses of others, and the slaves who were usually captives from raids, criminals, or debtors. Most members of this class were equivalent to the European serf class, who paid taxes and could be conscripted for communal tasks, but were more or less free to do as they pleased.

East Asia

China and Mongolia

During the Yuan dynasty, ruler Kublai Khan enforced a Four Class System, which was a legal caste system. The four classes, in descending order, were:
Mongolian
Semu people
Han people (in the northern areas of China)
Southerners (people of the former Southern Song dynasty)
Today, the Hukou system is argued by various Western sources to be the current caste system of China.

Tibet

There is significant controversy over the social classes of Tibet, especially with regards to the serfdom in Tibet controversy. Some scholars have put forth the argument that pre-1950s Tibetan society was functionally a caste system, in contrast to previous scholars who defined the Tibetan social class system as similar to European feudal serfdom, as well as non-scholarly western accounts which seek to romanticise a supposedly 'egalitarian' ancient Tibetan society.

Japan

In Japan's history, social strata based on inherited position rather than personal merit were rigid and highly formalised in a system called mibunsei (身分制). At the top were the Emperor and Court nobles (kuge), together with the Shōgun and daimyō. Below them, the population was divided into four classes: samurai, peasants, craftsmen and merchants. Only samurai were allowed to bear arms. A samurai had a right to kill any peasant, craftsman or merchant whom he felt was disrespectful. Merchants were the lowest caste because they did not produce any products. The castes were further sub-divided; for example, peasants were labelled as furiuri, tanagari, mizunomi-byakusho, among others. As in Europe, the castes and sub-classes were of the same race, religion and culture. Howell, in his review of Japanese society, notes that if a Western power had colonised Japan in the 19th century, it would have discovered and imposed a rigid four-caste hierarchy in Japan. De Vos and Wagatsuma observe that Japanese society had a systematic and extensive caste system. They discuss how alleged caste impurity and alleged racial inferiority, concepts often assumed to be different, are superficial terms, and are due to identical inner psychological processes, which expressed themselves in Japan and elsewhere. Endogamy was common because marriage across caste lines was socially unacceptable. Japan had its own untouchable caste, shunned and ostracised, historically referred to by the insulting term eta, now called burakumin. While modern law has officially abolished the class hierarchy, there are reports of discrimination against the buraku or burakumin underclasses. The burakumin are regarded as "ostracised". The burakumin are one of the main minority groups in Japan, along with the Ainu of Hokkaidō and those of Korean or Chinese descent.

Korea

The baekjeong (백정) were an "untouchable" outcaste of Korea. The meaning today is that of butcher. The group originates in the Khitan invasion of Korea in the 11th century. The defeated Khitans who surrendered were settled in isolated communities throughout Goryeo to forestall rebellion. They were valued for their skills in hunting, herding, butchering, and making of leather, common skill sets among nomads.
Over time, their ethnic origin was forgotten, and they formed the bottom layer of Korean society. In 1392, with the foundation of the Confucian Joseon dynasty, Korea systemised its own native class system. At the top were the two official classes, the Yangban, which literally means "two classes". It was composed of scholars (munban) and warriors (muban). Scholars had a significant social advantage over the warriors. Below were the jung-in (중인-中人; literally "middle people"), a small class of specialised professions such as medicine, accounting, translators, regional bureaucrats, etc. Below that were the sangmin (상민-常民; literally 'commoner'), farmers working their own fields. Korea also had a serf population known as the nobi. The nobi population could fluctuate up to about one third of the population, but on average the nobi made up about 10% of the total population. In 1801, the vast majority of government nobi were emancipated, and by 1858 the nobi population stood at about 1.5% of the total population of Korea. The hereditary nobi system was officially abolished around 1886–87, and the rest of the nobi system was abolished with the Gabo Reform of 1894, but traces remained until 1930. The opening of Korea to foreign Christian missionary activity in the late 19th century saw some improvement in the status of the baekjeong. However, not everyone was equal under the Christian congregation, and protests erupted when missionaries tried to integrate baekjeong into worship, with non-baekjeong finding this attempt insensitive to traditional notions of hierarchical advantage. Around the same time, the baekjeong began to resist open social discrimination. They focused on social and economic injustices affecting them, hoping to create an egalitarian Korean society. Their efforts included attacking social discrimination by the upper class, authorities, and "commoners", and the use of degrading language against children in public schools. With the Gabo reform of 1896, the class system of Korea was officially abolished. Following the collapse of the Gabo government, the new cabinet, which became the Gwangmu government after the establishment of the Korean Empire, introduced systematic measures for abolishing the traditional class system. One measure was the new household registration system, reflecting the goals of formal social equality, which was implemented by the loyalists' cabinet. Whereas the old registration system signified household members according to their hierarchical social status, the new system called for an occupation. While most Koreans by then had surnames and even bongwan, a substantial number of cheonmin, mostly consisting of serfs, slaves, and untouchables, did not. According to the new system, they were then required to fill in the blanks for surname in order to be registered as constituting separate households. Instead of creating their own family name, some cheonmin appropriated their masters' surname, while others simply took the most common surname and its bongwan in the local area. Along with this example, activists within and outside the Korean government based their visions of a new relationship between the government and people on the concept of citizenship, employing the term inmin ("people") and later, kungmin ("citizen").
North Korea

The Committee for Human Rights in North Korea reported that "Every North Korean citizen is assigned a heredity-based class and socio-political rank over which the individual exercises no control but which determines all aspects of his or her life." Barbara Demick describes this "class structure", called Songbun, as an updating of the hereditary "caste system", a combination of Confucianism and Stalinism. It originated in 1946, was entrenched by the 1960s, and consisted of 53 categories ranging across three classes: loyal, wavering, and impure. The privileged "loyal" class included members of the Korean Workers' Party and Korean People's Army officers' corps, the wavering class included peasants, and the impure class included Axis collaborators and landowners. She claims that a bad family background is called "tainted blood", and that by law this "tainted blood" lasts three generations.

West Asia

Kurdistan

Yazidis

There are three hereditary groups, often called castes, in Yazidism. Membership in the Yazidi society and a caste is conferred by birth. Pîrs and Sheikhs are the priestly castes, which are represented by many sacred lineages. Sheikhs are in charge of both religious and administrative functions and are divided into three endogamous houses, Şemsanî, Adanî and Qatanî, who are in turn divided into lineages. The Pîrs are in charge of purely religious functions and traditionally consist of 40 lineages or clans, but approximately 90 appellations of Pîr lineages have been found, which may have been a result of new sub-lineages arising and the number of clans increasing over time through division as Yazidis settled in different places and countries. Division could occur within one family: if there were several brothers in one clan, each of them could become the founder of his own Pîr sub-clan. Mirîds are the lay caste and are divided into tribes, each affiliated to a Pîr and a Sheikh priestly lineage assigned to the tribe.

Iran

Pre-Islamic Sassanid society was immensely complex, with separate systems of social organisation governing numerous different groups within the empire. Historians believe society comprised four social classes, which linguistic analysis indicates may have been referred to collectively as "pistras". The classes, from highest to lowest status, were priests, warriors, secretaries, and commoners.

Yemen

In Yemen there exists a hereditary caste, the African-descended Al-Akhdam, who are kept as perennial manual workers. Estimates put their number at over 3.5 million residents who are discriminated against, out of a total Yemeni population of around 22 million.

Africa

Various sociologists have reported caste systems in Africa. The specifics of the caste systems have varied across ethnically and culturally diverse Africa; however, the following features are common: it has been a closed system of social stratification, the social status is inherited, the castes are hierarchical, and certain castes are shunned while others are merely endogamous and exclusionary. In some cases, concepts of purity and impurity by birth have been prevalent in Africa. In other cases, such as the Nupe of Nigeria, the Beni Amer of East Africa, and the Tira of Sudan, the exclusionary principle has been driven by evolving social factors.

West Africa

Among the Igbo of Nigeria – especially Enugu, Anambra, Imo, Abia, Ebonyi, Edo and Delta states of the country – scholar Elijah Obinna finds that the Osu caste system has been and continues to be a major social issue.
Membership in the Osu caste is determined by one's birth into a particular family, irrespective of the religion practised by the individual. Once born into the Osu caste, a person is an outcast, shunned and ostracised, with limited opportunities or acceptance, regardless of his or her ability or merit. Obinna discusses how this caste system-related identity and power is deployed within government, Church and indigenous communities. The osu class systems of eastern Nigeria and southern Cameroon are derived from indigenous religious beliefs and discriminate against the "Osus" as people "owned by deities" and outcasts. The Songhai economy was based on a caste system. The most common castes were metalworkers, fishermen, and carpenters. Lower-caste participants consisted mostly of non-farm-working immigrants, who at times were provided special privileges and held high positions in society. At the top were noblemen and direct descendants of the original Songhai people, followed by freemen and traders. In a review of social stratification systems in Africa, Richter reports that the term caste has been applied by French and American scholars to many groups of West African artisans. These groups have been described as inferior, deprived of all political power, having a specific occupation, hereditary, and sometimes despised by others. Richter illustrates the caste system in Ivory Coast with six sub-caste categories. Unlike other parts of the world, mobility is sometimes possible within sub-castes, but not across caste lines. Farmers and artisans have been, claims Richter, distinct castes. Certain sub-castes are shunned more than others. For example, exogamy is rare for women born into families of woodcarvers. Similarly, the Mandé societies in Gambia, Ghana, Guinea, Ivory Coast, Liberia, Senegal and Sierra Leone have social stratification systems that divide society by ethnic ties. The Mande class system regards the jonow slaves as inferior. Similarly, the Wolof in Senegal are divided into three main groups, the geer (freeborn/nobles), jaam (slaves and slave descendants) and the underclass neeno. In various parts of West Africa, Fulani societies also have class divisions. Other castes include Griots, Forgerons, and Cordonniers. Tamari has described endogamous castes of over fifteen West African peoples, including the Tukulor, Songhay, Dogon, Senufo, Minianka, Moors, Manding, Soninke, Wolof, Serer, Fulani, and Tuareg. Castes appeared among the Malinke people no later than the 14th century, and were present among the Wolof and Soninke, as well as some Songhay and Fulani populations, no later than the 16th century. Tamari claims that wars, such as the Sosso-Malinke war described in the Sunjata epic, led to the formation of blacksmith and bard castes among the people that ultimately became the Mali empire. As West Africa evolved over time, sub-castes emerged that acquired secondary specialisations or changed occupations. Endogamy was prevalent within a caste or among a limited number of castes, yet castes did not form demographic isolates according to Tamari. Social status according to caste was inherited by offspring automatically, but this inheritance was paternal; that is, children of higher-caste men and lower-caste or slave concubines would have the caste status of the father.

Central Africa

Ethel M. Albert in 1960 claimed that the societies in Central Africa had caste-like social stratification systems. Similarly, in 1961, Maquet noted that the society in Rwanda and Burundi can best be described as castes.
The Tutsi, noted Maquet, considered themselves superior, with the more numerous Hutu and the least numerous Twa regarded, by birth, as respectively second and third in the hierarchy of Rwandese society. These groups were largely endogamous, exclusionary and with limited mobility.

Horn of Africa

In a review published in 1977, Todd reports that numerous scholars have described systems of social stratification in different parts of Africa that resemble some or all aspects of a caste system. Examples of such caste systems, he claims, are to be found in Ethiopia in communities such as the Gurage and Konso. He then presents the Dime of Southwestern Ethiopia, amongst whom there operates a system which Todd claims can be unequivocally labelled as a caste system. The Dime have seven castes whose size varies considerably. Each broad caste level is a hierarchical order that is based on notions of purity, non-purity and impurity. It uses the concepts of defilement to limit contacts between caste categories and to preserve the purity of the upper castes. These caste categories have been exclusionary, endogamous and the social identity inherited. Alula Pankhurst has published a study of caste groups in SW Ethiopia. Among the Kafa, there were also traditionally groups labelled as castes. "Based on research done before the Derg regime, these studies generally presume the existence of a social hierarchy similar to the caste system. At the top of this hierarchy were the Kafa, followed by occupational groups including blacksmiths (Qemmo), weavers (Shammano), bards (Shatto), potters, and tanners (Manno). In this hierarchy, the Manjo were commonly referred to as hunters, given the lowest status equal only to slaves." The Borana Oromo of southern Ethiopia in the Horn of Africa also have a class system, wherein the Wata, an acculturated hunter-gatherer group, represent the lowest class. Though the Wata today speak the Oromo language, they have traditions of having previously spoken another language before adopting Oromo. The traditionally nomadic Somali people are divided into clans, wherein the Rahanweyn agro-pastoral clans and the occupational clans such as the Madhiban were traditionally sometimes treated as outcasts. As Gabboye, the Madhiban, along with the Yibir and Tumaal (collectively referred to as sab), have since obtained political representation within Somalia, and their general social status has improved with the expansion of urban centers.

Europe

European feudalism, with its rigid aristocracy, can also be considered a caste system.

Basque region

For centuries, through modern times, the majority regarded the Cagots, who lived primarily in the Basque region of France and Spain, as an inferior caste of untouchables. While they had the same skin color and religion as the majority, in the churches they had to use segregated doors, drink from segregated fonts, and receive communion on the end of long wooden spoons. It was a closed social system. The socially isolated Cagots were endogamous, and chances of social mobility were non-existent.

United Kingdom

In July 2013, the UK government announced its intention to amend the Equality Act 2010, to "introduce legislation on caste, including any necessary exceptions to the caste provisions, within the framework of domestic discrimination law". Section 9(5) of the Equality Act 2010 provides that "a Minister may by order amend the statutory definition of race to include caste and may provide for exceptions in the Act to apply or not to apply to caste".
From September 2013 to February 2014, Meena Dhanda led a project on "Caste in Britain" for the UK Equality and Human Rights Commission (EHRC).

Americas

United States

A survey on caste discrimination conducted by Equality Labs found 67% of Indian Dalits living in the US reporting that they faced caste-based harassment at the workplace, and 27% reporting verbal or physical assault based on their caste. In 2023, Seattle became the first city in the United States to ban discrimination based on caste. In the opinion of W. Lloyd Warner, discrimination in the Southern United States in the 1930s against Blacks was similar to Indian castes in such features as residential segregation and marriage restrictions. In her 2020 book Caste: The Origins of Our Discontents, journalist Isabel Wilkerson used caste as an analogy to understand racial discrimination in the United States. Gerald D. Berreman contrasted the differences between discrimination in the United States and India. In India, there are complex religious features which make up the system, whereas in the United States race and color are the basis for differentiation. The caste systems in India and the United States have higher groups which desire to retain their positions for themselves and thus perpetuate the two systems. The process of creating a homogenized society by social engineering in both India and the Southern US has created other institutions that have made class distinctions among different groups evident. Anthropologist James C. Scott elaborates on how "global capitalism is perhaps the most powerful force for homogenization, whereas the state may be the defender of local difference and variety in some instances". The caste system, a relic of feudalistic economic systems, emphasizes differences between socio-economic classes that are obviated by openly free-market capitalistic economic systems, which reward individual initiative, enterprise, merit, and thrift, thereby creating a path for social mobility. When the feudalistic slave economy of the southern United States was dismantled, even Jim Crow laws did not prevent the economic success of many industrious African Americans, including millionaire women like Maggie Walker, Annie Malone, and Madam C. J. Walker. Parts of the United States are sometimes divided by race and class status despite the national narrative of integration.

Caste in sociology and entomology

The initial observational studies of the division of labour in ant colonies attempted to demonstrate that ants specialized in tasks that were best suited to their size when they emerged from the pupal stage into the adult stage. A large proportion of the experimental work was done in species that showed strong variation in size. As the size of an adult was fixed for life, workers of a specific size range came to be called a "caste", evoking the traditional caste system in India in which a human's standing in society was decided at birth. The notion of caste encouraged a link between scholarship in entomology and sociology because it served as an example of a division of labour in which the participants seemed to be uncompromisingly adapted to special functions and sometimes even unique environments. To bolster the concept of caste, entomologists and sociologists referred to the complementary social or natural parallel and thereby appeared to generalize the concept and give it an appearance of familiarity.
In the late 19th and early 20th centuries, the perceived similarities between the Indian caste system and caste polymorphism in insects were used to create a correspondence or parallelism for the purpose of explaining or clarifying racial stratification in human societies; the explanations came particularly to be employed in the United States. Ideas from heredity and natural selection influenced some sociologists who believed that some groups were predetermined to belong to a lower social or occupational status. Chiefly through the work of W. Lloyd Warner at the University of Chicago, a group of sociologists sharing similar principles coalesced around the creed of caste in the 1930s and 1940s. The ecologically-oriented sociologist Robert E. Park, although attributing more weight to environmental explanations than to biological ones, nonetheless believed that there were obstacles to the assimilation of blacks into American society and that an "accommodation stage" in a biracially organized caste system was required before full assimilation. He disavowed his position in 1937, suggesting that blacks were a minority and not a caste. The Indian sociologist Radhakamal Mukerjee was influenced by Robert E. Park and adopted the concept of "caste" to describe race relations in the US. According to anthropologist Diane Rodgers, Mukerjee "proceeded to suggest that a caste system should be correctly instituted in the (US) South to ease race relations." Mukerjee often employed both entomological and sociological data and clues to describe caste systems. He wrote "while the fundamental industries of man are dispersed throughout the insect world, the same kind of polymorphism appears again and again in different species of social insects which have reacted in the same manner as man, under the influence of the same environment, to ensure the supply and provision of subsistence." Comparing the caste system in India to caste polymorphism in insects, he noted, "where we find the organization of social insects developed to perfection, there also has been seen among human associations a minute and even rigid specialization of functions, along with ant- and bee-like societal integrity and cohesiveness." He considered the "resemblances between insect associations and caste-ridden societies" to be striking enough to be "amusing".

See also
Estates of the realm
Inter-caste marriages in India
Job
Kamaiya
Priestly caste
Propiska
Social exclusion
Warrior caste

Notes

References

Sources
Oxford English Dictionary. Quote: caste, n. 2a. spec. One of the several hereditary classes into which society in India has from time immemorial been divided; ... This is now the leading sense, which influences all others.

Further reading
Spectres of Agrarian Territory by David Ludden, 11 December 2001
"Early Evidence for Caste in South India", pp. 467–492, in Dimensions of Social Life: Essays in honour of David G. Mandelbaum, edited by Paul Hockings. Mouton de Gruyter, Berlin, New York, Amsterdam, 1987.

External links
Casteless
Auguste Comte on why and how castes developed across the world – in The Positive Philosophy, Volume 3 (see page 55 onwards)
Robert Merton on Caste and The Sociology of Science
Caste, Society and Politics in India from the Eighteenth Century to the Modern Age – Susan Bayly
Class In Yemen by Marguerite Abadjian (Archive of the Baltimore Sun)
International Dalit Solidarity Network: An international advocacy group for Dalits

Social status
3,320
7,287
https://en.wikipedia.org/wiki/Castello
Castello
Castello may refer to: Places Castello, Venice, the largest of the six sestieri of Venice Castello, the old town center of the Giudicato of Cagliari in Sardinia Castello, a neighbourhood in Florence Castello, Hong Kong, a private housing estate in Hong Kong A locality in the town of Monteggio in Switzerland Cittadella (Gozo), a citadel in Gozo, Malta Short name of Castellón de la Plana, a city in the Valencian Community, Spain Other Roman Catholic Diocese of Castello, a former diocese based in Venice Castello (surname) Castello cheeses See also Città di Castello, a town in Umbria, Italy Castell (disambiguation) Castella (disambiguation) Castelli (disambiguation) Castellón (disambiguation) Castells (disambiguation)
3,333
7,293
https://en.wikipedia.org/wiki/Commodore%2064
Commodore 64
The Commodore 64, also known as the C64, is an 8-bit home computer introduced in January 1982 by Commodore International (first shown at the Consumer Electronics Show, January 7–10, 1982, in Las Vegas). It has been listed in the Guinness World Records as the highest-selling single computer model of all time, with independent estimates placing the number sold between 12.5 and 17 million units. Volume production started in early 1982, with marketing beginning in August at a list price of US$595. Preceded by the VIC-20 and Commodore PET, the C64 took its name from its 64 kilobytes of RAM. With support for multicolor sprites and a custom chip for waveform generation, the C64 could create superior visuals and audio compared to systems without such custom hardware. The C64 dominated the low-end computer market (except in the UK and Japan, lasting only about six months in Japan) for most of the later years of the 1980s. For a substantial period (1983–1986), the C64 had between 30% and 40% share of the US market and two million units sold per year, outselling IBM PC compatibles, Apple computers, and the Atari 8-bit family of computers. Sam Tramiel, a later Atari president and the son of Commodore's founder, said in a 1989 interview, "When I was at Commodore we were building 400,000 C64s a month for a couple of years." In the UK market, the C64 faced competition from the BBC Micro, the ZX Spectrum, and later the Amstrad CPC 464, but the C64 was still the second most popular computer in the UK after the ZX Spectrum. The Commodore 64 failed to make any impact in Japan. The Japanese market was dominated by Japanese computers, such as the NEC PC-8801, Sharp X1, Fujitsu FM-7, and MSX. Part of the Commodore 64's success was its sale in regular retail stores instead of only electronics or computer hobbyist specialty stores. Commodore produced many of its parts in-house to control costs, including custom integrated circuit chips from MOS Technology. In the United States, it has been compared to the Ford Model T automobile for its role in bringing a new technology to middle-class households via creative and affordable mass-production. Approximately 10,000 commercial software titles have been made for the Commodore 64, including development tools, office productivity applications, and video games. C64 emulators allow anyone with a modern computer, or a compatible video game console, to run these programs today. The C64 is also credited with popularizing the computer demoscene and is still used today by some computer hobbyists. In 2011, 17 years after it was taken off the market, research showed that brand recognition for the model was still at 87%.

History

In January 1981, MOS Technology, Inc., Commodore's integrated circuit design subsidiary, initiated a project to design the graphic and audio chips for a next-generation video game console. Design work for the chips, named MOS Technology VIC-II (Video Integrated Circuit for graphics) and MOS Technology SID (Sound Interface Device for audio), was completed in November 1981. Commodore then began a game console project that would use the new chips—called the Ultimax or the Commodore MAX Machine, engineered by Yash Terakura from Commodore Japan. This project was eventually cancelled after just a few machines were manufactured for the Japanese market. At the same time, Robert "Bob" Russell (system programmer and architect on the VIC-20) and Robert "Bob" Yannes (engineer of the SID) were critical of the current product line-up at Commodore, which was a continuation of the Commodore PET line aimed at business users.
With the support of Al Charpentier (engineer of the VIC-II) and Charles Winterble (manager of MOS Technology), they proposed to Commodore CEO Jack Tramiel a low-cost sequel to the VIC-20. Tramiel dictated that the machine should have 64 KB of random-access memory (RAM). Although 64-Kbit dynamic random-access memory (DRAM) chips cost over US$100 at the time, he knew that 64K DRAM prices were falling and would drop to an acceptable level before full production was reached. The team was able to quickly design the computer because, unlike most other home-computer companies, Commodore had its own semiconductor fab to produce test chips; because the fab was not running at full capacity, development costs were part of existing corporate overhead. The chips were complete by November, by which time Charpentier, Winterble, and Tramiel had decided to proceed with the new computer; the latter set a final deadline for the first weekend of January, to coincide with the 1982 Consumer Electronics Show (CES). The product was code-named the VIC-40 as the successor to the popular VIC-20. The team that constructed it consisted of Yash Terakura, Shiraz Shivji, Bob Russell, Bob Yannes, and David A. Ziembicki. The design, prototypes, and some sample software were finished in time for the show, after the team had worked tirelessly over both Thanksgiving and Christmas weekends. The machine used the same case, same-sized motherboard, and same Commodore BASIC 2.0 in ROM as the VIC-20. BASIC also served as the user interface shell and was available immediately on startup at the READY prompt. When the product was to be presented, the VIC-40 was renamed the C64. The C64 made an impressive debut at the January 1982 Consumer Electronics Show, as recalled by Production Engineer David A. Ziembicki: "All we saw at our booth were Atari people with their mouths dropping open, saying, 'How can you do that for $595?'" The answer was vertical integration; due to Commodore's ownership of MOS Technology's semiconductor fabrication facilities, each C64 had an estimated production cost of US$135.

Reception

In July 1983, BYTE magazine stated that "the 64 retails for $595. At that price it promises to be one of the hottest contenders in the under-$1000 personal computer market." It described the SID as "a true music synthesizer ... the quality of the sound has to be heard to be believed", while criticizing the use of Commodore BASIC 2.0, the floppy disk performance, which is "even slower than the Atari 810 drive", and Commodore's quality control. BYTE gave more details, saying the C64 had an "inadequate Commodore BASIC 2.0. An 8K-byte interpreted BASIC", which they assumed was because "Obviously, Commodore feels that most home users will be running prepackaged software - there is no provision for using graphics (or sound as mentioned above) from within a BASIC program except by means of POKE commands." This was one of very few warnings about C64 BASIC published in any computer magazine. Creative Computing said in December 1984 that the 64 was "the overwhelming winner" in the category of home computers under $500. Despite criticizing its "slow disk drive, only two cursor directional keys, zero manufacturer support, non-standard interfaces, etc.", the magazine said that at the 64's price of less than $200 "you can't get another system with the same features: 64K, color, sprite graphics, and barrels of available software". The Tandy/Radio Shack Color Computer was the runner-up.
However, this was only one of twelve categories voted on, divided by price and by what people wanted to do with a computer. The same article also said, "Although there was no single best all-around system, we noted that one system stood out because it was mentioned in so many categories. Although many systems were mentioned in two categories, just two systems were mentioned in three categories, and only one in four categories—the Apple Macintosh." Apart from this, the Apple II won the higher-priced home computer category, the one in which the Commodore 64 had competed when it was first released at US$595.
Market war: 1982–1983
Commodore had a reputation for announcing products that never appeared, so it sought to ship the C64 quickly. Production began in spring 1982 and volume shipments began in August. The C64 faced a wide range of competing home computers, but with a lower price and more flexible hardware, it quickly outsold many of its competitors. In the United States, the greatest competitors were the Atari 400, the Atari 800, and the Apple II. The Atari 400 and 800 had been designed to accommodate previously stringent FCC emissions requirements and so were expensive to manufacture. Though similar in specifications, the C64 and Apple II represented differing design philosophies; as an open-architecture system, upgrade capability for the Apple II was granted by internal expansion slots, whereas the C64's comparatively closed architecture had only a single external ROM cartridge port for bus expansion. However, the Apple II used its expansion slots for interfacing to common peripherals like disk drives, printers, and modems; the C64 had a variety of ports integrated into its motherboard which were used for these purposes, usually leaving the cartridge port free. Commodore's machine was not a completely closed system, however; the company had published detailed specifications for most of its models since the Commodore PET and VIC-20 days, and the C64 was no exception.
C64 sales were nonetheless relatively slow at first, due to a lack of software, reliability issues with early production models (particularly high failure rates of the PLA chip, which used a new production process), and a shortage of 1541 disk drives, which also suffered rather severe reliability issues. During 1983, however, a trickle of software turned into a flood and sales began rapidly climbing, especially after the list price was cut from US$595 to as low as US$300. Commodore sold the C64 not only through its network of authorized dealers, but also through department stores, discount stores, toy stores, and college bookstores. The C64 had a built-in RF modulator and thus could be plugged into any television set. This allowed it (like its predecessor, the VIC-20) to compete directly against video game consoles such as the Atari 2600.
Like the Apple IIe, the C64 could also output a composite video signal, avoiding the RF modulator altogether. This allowed the C64 to be plugged into a specialized monitor for a sharper picture. Unlike the IIe, the C64's NTSC output capability also included separate luminance/chroma signal output equivalent to (and electrically compatible with) S-Video, for connection to the Commodore 1702 monitor, providing even better video quality than a composite signal.
Aggressive pricing of the C64 is considered to have been a major catalyst in the video game crash of 1983.
In January 1983, Commodore offered a $100 rebate in the United States on the purchase of a C64 to anyone who traded in another video game console or computer. To take advantage of this rebate, some mail-order dealers and retailers offered a Timex Sinclair 1000 (TS1000) for a token price with the purchase of a C64. This deal meant that the consumer could send the TS1000 to Commodore, collect the rebate, and pocket the difference; Timex Corporation departed the computer market within a year. Commodore's tactics soon led to a price war with the major home computer manufacturers. The success of the VIC-20 and C64 contributed significantly to the exit of Texas Instruments and other smaller competitors from the field. The price war with Texas Instruments was seen as a personal battle for Commodore president Jack Tramiel. Commodore cut the C64's list price sharply within two months of its release. In June 1983 the company lowered the price to $300, and some stores sold the computer for even less. At one point, the company was selling as many C64s as all computers sold by the rest of the industry combined. Meanwhile, TI lost money on every TI-99/4A it sold. TI's subsequent demise in the home computer industry in October 1983 was seen as revenge for TI's tactics in the electronic calculator market in the mid-1970s, when Commodore was almost bankrupted by TI.
All four machines had similar memory configurations, which were standard in 1982–83: 48 KB for the Apple II+ (upgraded within months of the C64's release to 64 KB with the Apple IIe) and 48 KB for the Atari 800. The Apple II was roughly twice as expensive as the C64, while the Atari 800 cost $899. One key to the C64's success was Commodore's aggressive marketing, and the company was quick to exploit the price/performance gap between the C64 and its competitors with a series of television commercials after the C64's launch in late 1982. The company also published detailed documentation to help developers, while Atari initially kept technical information secret. Although many early C64 games were inferior Atari 8-bit ports, by late 1983 the growing installed base caused developers to create new software with better graphics and sound. It was the only non-discontinued, widely available home computer by then, with more than 500,000 sold during the Christmas season; because of production problems in Atari's supply chain, by the start of 1984 "the Commodore 64 largely has [the low-end] market to itself right now", The Washington Post reported.
1984–1987
With sales booming and the early reliability issues with the hardware addressed, software for the C64 began to grow in size and ambition during 1984, and the machine became the primary focus of most US game developers. The two holdouts were Sierra, who largely skipped over the C64 in favor of Apple and PC-compatible machines, and Broderbund, who were heavily invested in educational software and developed primarily around the Apple II. In the North American market, the disk format had become nearly universal while cassette and cartridge-based software all but disappeared, so most US-developed games by this point grew large enough to require multi-loading. At a mid-1984 conference of game developers and experts at Origins Game Fair, Dan Bunten, Sid Meier, and a representative of Avalon Hill said that they were developing games for the C64 first, as the most promising market. By 1985, games were an estimated 60 to 70% of Commodore 64 software.
Computer Gaming World stated in January 1985 that companies such as Epyx that survived the video game crash did so because they "jumped on the Commodore bandwagon early". Over 35% of SSI's 1986 sales were for the C64, ten points higher than for the Apple II. The C64 was even more important for other companies, which often found that more than half the sales for a title ported to six platforms came from the C64 version. That year, Computer Gaming World published a survey of ten game publishers that found that they planned to release forty-three Commodore 64 games that year, compared to nineteen for Atari and forty-eight for Apple II, and Alan Miller stated that Accolade developed first for the C64 because "it will sell the most on that system".
In Europe, the primary competitors to the C64 were British-built computers: the Sinclair ZX Spectrum, the BBC Micro, and the Amstrad CPC 464. In the UK, the 48K Spectrum had not only been released a few months ahead of the C64's early 1983 debut, but it was also selling for less than half the C64's price. The Spectrum quickly became the market leader, and Commodore had an uphill struggle against it in the marketplace. The C64 did, however, go on to rival the Spectrum in popularity in the latter half of the 1980s. Adjusted for population, the Commodore 64's popularity was highest in Finland, at roughly three units per 100 inhabitants; there it was marketed as "the Computer of the Republic".
Rumors spread in late 1983 that Commodore would discontinue the C64. By early 1985 the C64's price was US$149; with an estimated production cost of US$35–50, its profitability was still within the industry-standard markup of two to three times. Commodore sold about one million C64s in 1985 and a total of 3.5 million by mid-1986. Although the company reportedly attempted to discontinue the C64 more than once in favor of more expensive computers such as the Commodore 128, demand remained strong. In 1986, Commodore introduced the 64C, a redesigned 64, which Compute! saw as evidence that—contrary to C64 owners' fears that the company would abandon them in favor of the Amiga and 128—"the 64 refuses to die". Its introduction also meant that Commodore raised the price of the C64 for the first time, which the magazine cited as the end of the home-computer price war. Software sales also remained strong; MicroProse, for example, in 1987 cited the Commodore and IBM PC markets as its top priorities.
1988–1994
By 1988, PC compatibles were the largest and fastest-growing segment of the home and entertainment software market, displacing former leader Commodore. Commodore 64 software sales were almost unchanged in the third quarter of 1988 year over year while the overall market grew 42%, but the company was still selling 1 to 1.5 million units worldwide each year of what Computer Chronicles that year called "the Model T of personal computers". Epyx CEO David Shannon Morse cautioned that "there are no new 64 buyers, or very few. It's a consistent group that's not growing... it's going to shrink as part of our business." One computer gaming executive stated that the Nintendo Entertainment System's enormous popularity (seven million sold in 1988, almost as many as the number of C64s sold in the machine's first five years) had stopped the C64's growth. Trip Hawkins reinforced that sentiment, stating that Nintendo was "the last hurrah of the 8-bit world". SSI exited the Commodore 64 market in 1991, after most of its competitors had already left.
Ultima VI, released in 1991, was the last major C64 game release from a North American developer, and The Simpsons, published by Ultra Games, was the last arcade conversion. The latter was a somewhat uncommon example of a US-developed arcade port: after the C64's early years, most arcade conversions were produced by UK developers and converted to NTSC and disk format for the US market, while American developers focused on more computer-centered game genres such as RPGs and simulations. In the European market, disk software was rarer and cassettes were the most common distribution method; this led to a higher prevalence of arcade titles and smaller, lower-budget games that could fit entirely in the computer's memory without requiring multiloads. European programmers also tended to exploit advanced features of the C64's hardware more than their US counterparts.
In the United States, demand for 8-bit computers all but ceased as the 1990s began, and PC compatibles completely dominated the computer market. However, the C64 continued to be popular in the UK and other European countries. The machine's eventual demise was due not to lack of demand or to the cost of the C64 itself (still sold profitably at a retail price between £44 and £50), but rather to the cost of producing the disk drive. In March 1994, at CeBIT in Hanover, Germany, Commodore announced that the C64 would finally be discontinued in 1995, noting that the Commodore 1541 disk drive cost more to produce than the C64 itself. However, only one month later, in April 1994, the company filed for bankruptcy. When Commodore went bankrupt, all production of its inventory, including the C64, ceased, ending the C64's eleven-and-a-half-year production run.
Claims of 17, 22, and even 30 million C64s sold worldwide have been made. Company sales records, however, indicate that the total number was about 12.5 million. Based on that figure, the Commodore 64 remained the third best-selling computing platform into the 21st century, until the Raspberry Pi family overtook it in 2017. While 360,000 C64s were sold in 1982, about 1.3 million were sold in 1983, followed by a large spike in 1984 when 2.6 million were sold. After that, sales held steady at between 1.3 and 1.6 million a year for the remainder of the decade and then dropped off after 1989. North American sales peaked between 1983 and 1985 and gradually tapered off afterward, while European sales remained quite strong into the early 1990s.
The computer's designers claimed that "The freedom that allowed us to do the C-64 project will probably never exist again in that environment"; by spring 1983 most had left to found Ensoniq.
C64 family
Commodore MAX
In 1982, Commodore released the Commodore MAX Machine in Japan. It was called the Ultimax in the United States and the VC-10 in Germany. The MAX was intended to be a game console with limited computing capability and was based on a cut-down version of the hardware family later used in the C64. The MAX was discontinued months after its introduction because of poor sales in Japan.
Commodore Educator 64
In 1983, Commodore attempted to compete with the Apple II's hold on the US education market with the Educator 64, essentially a C64 and "greenscale" monochrome monitor in a PET case. Schools preferred the all-in-one metal construction of the PET over the standard C64's separate components, which could be easily damaged, vandalized, or stolen.
Schools nonetheless preferred the Apple IIe for its wide range of software and hardware options, and the Educator 64 was produced in limited quantities.
SX-64
Also in 1983, Commodore released the SX-64, a portable version of the C64. The SX-64 has the distinction of being the first commercial full-color portable computer. While earlier computers using this form factor incorporated only monochrome ("green screen") displays, the base SX-64 unit features a color cathode-ray tube (CRT) and one integrated 1541 floppy disk drive. Even though Commodore claimed in advertisements that it would have dual 1541 drives, only one drive was included when the SX-64 was released, and the space for the second became a floppy disk storage slot. Also, unlike most other C64s, the SX-64 does not have a Datasette connector, so an external cassette was not an option.
Commodore 128
Two designers at Commodore, Fred Bowen and Bil Herd, were determined to rectify the problems of the Plus/4. They intended the eventual successors to the C64, the Commodore 128 and 128D computers (1985), to build upon the C64 while avoiding the Plus/4's flaws. The successors had many improvements, such as a BASIC with graphics and sound commands (like almost all home computers not made by Commodore), an 80-column display, and full CP/M compatibility. The decision to make the Commodore 128 plug-compatible with the C64 was made quietly by Bowen and Herd, software and hardware designers respectively, without the knowledge or approval of management in the post-Jack Tramiel era. The designers were careful not to reveal their decision until the project was too far along to be challenged or changed and could still make the impending Consumer Electronics Show (CES) in Las Vegas. Upon learning that the C128 was designed to be compatible with the C64, Commodore's marketing department independently announced that the C128 would be 100% compatible with the C64, thereby raising the bar for C64 support. To meet that promise, the 128 design was altered to include a separate "64 mode" running a complete C64 environment, to try to ensure total compatibility.
Commodore 64C
The C64's designers intended the computer to have a new, wedge-shaped case within a year of release, but the change did not occur. In 1986, Commodore released the 64C computer, which is functionally identical to the original. The exterior design was remodeled in the sleeker style of the Commodore 128, and newer versions of the SID, VIC-II, and I/O chips were deployed. Models with the C64E board had the graphic symbols printed on the top of the keys, instead of the normal location on the front. The sound chip (SID) was changed to the MOS 8580, with the core voltage reduced from 12V to 9V. The most significant changes involve different behavior in the filters and in the volume control, which results in some music and sound effects sounding different than intended, and in digitally-sampled audio being almost inaudible, respectively (though both can mostly be corrected for in software). The 64 KB of RAM went from eight chips to two, and BASIC and the KERNAL went from two separate chips into one 16 KB ROM chip. The PLA chip and some TTL chips were integrated into a 64-pin DIL chip; the "252535-01" PLA also integrated the color RAM into the same chip. The smaller physical space made it impossible to fit some internal expansions, such as floppy-speeders.
In the United States, the 64C was often bundled with the third-party GEOS graphical user interface (GUI)-based operating system, as well as the software needed to access Quantum Link. The 1541 drive received a matching face-lift, resulting in the 1541C. Later, a smaller, sleeker 1541-II model was introduced, along with the 3.5-inch microfloppy 1581.
Commodore 64 Games System
In 1990, the C64 was repackaged in the form of a game console, called the C64 Games System (C64GS), with most external connectivity removed. A simple modification to the 64C's motherboard allowed cartridges to be inserted from above, and a modified ROM replaced the BASIC interpreter with a boot screen instructing the user to insert a cartridge. Designed to compete with the Nintendo Entertainment System and the Sega Master System, it suffered from very low sales compared to its rivals. It was another commercial failure for Commodore, and it was never released outside Europe. The C64GS lacked a keyboard, so any software that required one could not be used.
Commodore 65
In 1990, an advanced successor to the C64, the Commodore 65 (also known as the "C64DX"), was prototyped, but the project was canceled by Commodore's chairman Irving Gould in 1991. The C65's specifications were impressive for an 8-bit computer, comparable to those of the 16-bit Apple IIGS. For example, it could display 256 colors on the screen, while OCS-based Amigas could only display 64 in HalfBrite mode (32 colors and half-bright transformations). Although no specific reason was given for the C65's cancellation, it would have competed in the marketplace with Commodore's lower-end Amigas and the Commodore CDTV.
Software
In 1982, the C64's graphics and sound capabilities were rivaled only by the Atari 8-bit family and appeared exceptional when compared with the widely publicized Atari VCS and Apple II. The C64 is often credited with starting the computer subculture known as the demoscene (see Commodore 64 demos). It is still actively used in the demoscene, especially for music (its SID sound chip even being used in special sound cards for PCs and in the Elektron SidStation synthesizer). Even though other computers quickly caught up with it, the C64 remained a strong competitor to the later video game consoles, the Nintendo Entertainment System (NES) and the Sega Master System, thanks in part to its by-then established software base, especially outside North America, where it comprehensively outsold the NES.
Because of lower incomes and the domination of the Sinclair Spectrum in the UK, almost all British C64 software used cassette tapes. Few cassette C64 programs were released in the US after 1983, and in North America the diskette was the principal method of software distribution. The cartridge slot on the C64 was mainly used during the computer's first two years on the US market and rapidly became obsolete once the price and reliability of 1541 drives improved. A handful of PAL-region games used bank-switched cartridges to get around the 16 KB memory limit.
BASIC
As was common for home computers of the early 1980s, the C64 comes with a BASIC interpreter in ROM. KERNAL, I/O, and tape/disk drive operations are accessed via custom BASIC language commands. The disk drive has its own interfacing microprocessor and ROM (firmware) I/O routines, much like the earlier CBM/PET systems and the Atari 400 and Atari 800.
This means that no memory space is dedicated to running a disk operating system, as was the case with earlier systems such as the Apple II and TRS-80.
Commodore BASIC 2.0 is used instead of the more advanced BASIC 4.0 from the PET series, since C64 users were not expected to need the disk-oriented enhancements of BASIC 4.0. The company did not expect many to buy a disk drive, and using BASIC 2.0 simplified VIC-20 owners' transition to the 64. "The choice of BASIC 2.0 instead of 4.0 was made with some soul-searching, not just at random. The typical user of a C64 is not expected to need the direct disk commands as much as other extensions, and the amount of memory to be committed to BASIC were to be limited. We chose to leave expansion space for color and sound extensions instead of the disk features. As a result, you will have to handle the disk in the more cumbersome manner of the 'old days'."
The C64's version of Microsoft BASIC is not very comprehensive and does not include specific commands for sound or graphics manipulation, instead requiring users to use the PEEK and POKE commands to access the graphics and sound chip registers directly. To provide extended commands, including for graphics and sound, Commodore produced two different cartridge-based extensions to BASIC 2.0: Simons' BASIC and Super Expander 64. Other languages available for the C64 include Pascal, C, Logo, Forth, and FORTRAN. Compilers for BASIC 2.0, such as Petspeed 2 (from Commodore), Blitz (from Jason Ranheim), and Turbo Lightning (from Ocean Software), were also produced. Most commercial C64 software was written in assembly language, either cross-developed on a larger computer or directly on the C64 using a machine code monitor or an assembler. This maximized speed and minimized memory use. Some games, particularly adventures, used high-level scripting languages, and some mixed BASIC and machine language.
Alternative operating systems
Many third-party operating systems have been developed for the C64. As well as the original GEOS, two third-party GEOS-compatible systems have been written: Wheels and GEOS megapatch. Both of these require hardware upgrades to the original C64. Several other operating systems are or have been available, including WiNGS OS; the Unix-like LUnix, operated from a command line; and the embedded-systems OS Contiki, with a full GUI. Other less well-known OSes include ACE, Asterix, DOS/65, and GeckOS. A version of CP/M was released, but it requires the addition of an external Z80 processor to the expansion bus. Furthermore, the Z80 processor is underclocked to be compatible with the C64's memory bus, so performance is poor compared to other CP/M implementations. C64 CP/M and C128 CP/M both suffer from a lack of software; although most commercial CP/M software can run on these systems, software media is incompatible between platforms. The low usage of CP/M on Commodores meant that software houses saw no need to invest in mastering versions for the Commodore disk format. The C64 CP/M cartridge is also not compatible with anything except the early 326298 motherboards.
Networking software
During the 1980s, the Commodore 64 was used to run bulletin board systems using software packages such as Punter BBS, Bizarre 64, Blue Board, C-Net, Color 64, CMBBS, C-Base, DMBBS, Image BBS, EBBS, and The Deadlock Deluxe BBS Construction Kit, often with sysop-made modifications. These boards were sometimes used to distribute cracked software.
As late as December 2013, there were 25 such bulletin board systems in operation, reachable via the Telnet protocol. There were also major commercial online services, such as Compunet (UK), CompuServe (US; later bought by America Online), The Source (US), and Minitel (France), among many others. These services usually required custom software, which was often bundled with a modem and included free online time, as they were billed by the minute.
Quantum Link (or Q-Link) was a US and Canadian online service for Commodore 64 and 128 personal computers that operated from November 5, 1985, to November 1, 1994. It was operated by Quantum Computer Services of Vienna, Virginia, which in October 1991 changed its name to America Online and continued to operate its AOL service for the IBM PC compatible and Apple Macintosh. Q-Link was a modified version of the PlayNET system, which Control Video Corporation (CVC, later renamed Quantum Computer Services) licensed.
Online gaming
The first graphical, character-based interactive environment was Club Caribe. First released as Habitat in 1988, Club Caribe was introduced by LucasArts for Q-Link customers on their Commodore 64 computers. Users could interact with one another, chat, and exchange items. Although the game's open world was very basic, its use of online avatars and its combination of chat and graphics was revolutionary. Online graphics in the late 1980s were severely restricted by the need to support modem data transfer rates as low as 300 bits per second. Habitat's graphics were stored locally on floppy disk, eliminating the need for network transfer.
Hardware
CPU and memory
The C64 uses an 8-bit MOS Technology 6510 microprocessor. It is almost identical to the 6502 but with three-state buses, a different pinout, slightly different clock signals, and other minor changes for this specific application. It also has six I/O lines on otherwise unused legs of the 40-pin IC package. These are used for two purposes in the C64: to bank-switch the machine's read-only memory (ROM) in and out of the processor's address space, and to operate the Datasette tape recorder.
The C64 has 64 KB of 8-bit-wide dynamic RAM and 1 KB of 4-bit-wide static color RAM for text mode; 38 KB are available to the built-in Commodore BASIC 2.0 on startup. There is 20 KB of ROM, made up of the BASIC interpreter, the KERNAL, and the character ROM. As the processor could only address 64 KB at a time, the ROM was mapped into the same address space as the RAM, and only 38 KB of RAM (plus a 4 KB block in between the ROMs) were available at startup.
Most "breadbin" Commodore 64s used 4164 DRAM, with eight chips totaling 64 KB of system RAM. Later models, featuring Assy 250466 and Assy 250469 motherboards, used 41464 DRAM (64K×4) chips, which store 32 KB per chip, so only two were required. Since 4164 DRAMs are 64K×1, eight chips are needed to make an entire byte, and the computer will not function without all of them present. Thus, the first chip contains bit 0 for the entire memory space, the second chip contains bit 1, and so forth. This also makes detecting faulty RAM easy, as a bad chip will display random characters on the screen, and the character displayed can be used to determine the faulty chip. The C64 performs a RAM test on power-up, and if a RAM error is detected, the amount of free BASIC memory will be lower than the normal 38911 figure. If the faulty chip is in lower memory, then an ?OUT OF MEMORY IN 0 error is displayed rather than the usual BASIC startup banner. The color RAM at $D800 uses a separate 2114 SRAM chip and is gated directly to the VIC-II.
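The effect of this layout can be checked directly from BASIC. The following one-liner is a minimal sketch (not taken from Commodore's manuals) that prints the free BASIC memory; because the C64's FRE function returns a signed 16-bit value, a correction is needed for results above 32767. On a healthy machine the figure will be just under the 38911 bytes reported by the startup banner, since the program itself occupies a few dozen bytes:
10 REM REPORT FREE BASIC MEMORY
20 F=FRE(0): IF F<0 THEN F=F+65536: REM CORRECT THE SIGNED 16-BIT RESULT
30 PRINT F;"BASIC BYTES FREE"
A machine with a faulty DRAM chip, as described above, would report a markedly lower figure.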
The C64 uses a somewhat complicated memory banking scheme; the normal power-on default is to have the BASIC ROM mapped in at $A000–$BFFF and the screen editor/KERNAL ROM at $E000–$FFFF. RAM underneath the system ROMs can be written to, but not read back without swapping out the ROMs. Memory location $0001 (the 6510's on-chip I/O port) contains a register with control bits for enabling/disabling the system ROMs as well as the I/O area at $D000–$DFFF. If the KERNAL ROM is swapped out, BASIC will be removed at the same time; it is not possible to have BASIC active without the KERNAL, as BASIC often calls KERNAL routines and part of the ROM code for BASIC is in fact located in the KERNAL ROM. The character ROM is normally not visible to the CPU. It has two mirrors at $1000 and $9000, but only the VIC-II can see them; the CPU sees RAM in those locations. The character ROM may be mapped into $D000–$DFFF, where it is then visible to the CPU. Since doing so necessitates swapping out the I/O registers, interrupts must be disabled first. Graphics memory and data cannot be placed at $1000 or $9000, as the VIC-II will see the character ROM there instead. By removing I/O from the memory map, $D000–$DFFF becomes free RAM. The color RAM at $D800 is swapped out along with the I/O registers, and this area can be used for static graphics data such as character sets, since the VIC-II cannot see the I/O registers (or color RAM via the CPU mapping). If all ROMs and the I/O area are swapped out, the entire 64 KB RAM space is available aside from locations $0000/$0001. The block at $C000–$CFFF is free RAM, not used by BASIC or KERNAL routines; because of this, it is an ideal location to store short machine language programs that can be accessed from BASIC. The cassette buffer at $033C–$03FB can also be used to store short machine language routines, provided that a Datasette is not used, which would overwrite the buffer.
C64 cartridges map into assigned ranges in the CPU's address space, and the most common cartridge auto-start scheme requires the presence of the special string "CBM80" at $8004, preceded at $8000 by a vector giving the address where program execution begins. A few early C64 cartridges released in 1982 use Ultimax mode (or MAX mode), a leftover feature of the failed MAX Machine. These cartridges map into $E000–$FFFF and displace the KERNAL ROM. If Ultimax mode is used, the programmer must provide code for handling system interrupts. The cartridge port has 16 address lines, which grants access to the entire address space of the computer if needed.
Disk and tape software normally loads at the start of BASIC memory ($0801) and uses a small BASIC stub (e.g., 10 SYS(2064)) to jump to the start of the program. Although no Commodore 8-bit machine except the C128 can automatically boot from a floppy disk, some software intentionally overwrites certain BASIC vectors in the process of loading so that execution begins automatically, rather than requiring the user to type RUN at the BASIC prompt after loading. Around 300 cartridges were released for the C64, mostly in the machine's first years on the market, after which most software outgrew the cartridge limit. In the final years of the C64, larger software companies such as Ocean Software began releasing games on bank-switched cartridges to overcome this limit.
Commodore did not include a reset button on any of its computers until the CBM-II line, but third-party cartridges with a reset button were available. It is possible to trigger a soft reset by jumping to the CPU reset routine at $FCE2 (64738). A few programs use this as an "exit" feature, although it does not clear memory.
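The banking rules above are illustrated by the widely published technique for copying the character generator ROM into RAM so that a custom character set can be edited. The following is a sketch of the common approach, with addresses per the standard memory map: 56334 ($DC0E) is the CIA #1 control register that drives the system IRQ, and clearing bit 2 of location 1 banks the character ROM into $D000 in place of the I/O registers:
10 POKE 56334,PEEK(56334) AND 254 : REM STOP CIA TIMER IRQ (KEYBOARD SCAN)
20 POKE 1,PEEK(1) AND 251 : REM SWAP OUT I/O, SWAP IN CHARACTER ROM
30 FOR I=0 TO 2047 : REM COPY THE FIRST (UPPERCASE) 2 KB CHARACTER SET
40 POKE 12288+I,PEEK(53248+I)
50 NEXT I
60 POKE 1,PEEK(1) OR 4 : REM RESTORE THE I/O REGISTERS
70 POKE 56334,PEEK(56334) OR 1 : REM RE-ENABLE THE IRQ
Disabling the interrupt first, as the text explains, is essential: if the KERNAL's keyboard-scanning IRQ fired while the I/O registers were swapped out, the interrupt could never be acknowledged and the machine would hang.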
The KERNAL ROM went through three separate revisions, mostly designed to fix bugs. The initial version is only found on 326298 motherboards, used in the first production models, and cannot detect whether an NTSC or PAL VIC-II is present. The second revision is found on all C64s made from late 1982 through 1985. The third and last KERNAL ROM revision was introduced on the 250466 motherboard (late breadbin models with 41464 RAM) and is found in all C64Cs.
The 6510 CPU is clocked at 1.023 MHz (NTSC) or 0.985 MHz (PAL), lower than some competing systems (for example, the Atari 800 is clocked at 1.79 MHz). A small performance boost can be gained by disabling the VIC-II's video output via a register write. This feature is often used by tape and disk fastloaders, as well as the KERNAL cassette routine, to keep CPU cycle timing constant, unaffected by the VIC-II's sharing of the bus.
The Restore key is gated directly to the CPU's NMI line and will generate an NMI if pressed. The KERNAL handler for the NMI checks whether Run/Stop is also pressed; if not, it ignores the NMI and simply exits. Run/Stop-Restore normally functions as a soft reset in BASIC that restores all I/O registers to their power-on default state, but it does not clear memory or reset pointers, so any BASIC program in memory is left untouched. Machine language software usually disables Run/Stop-Restore by remapping the NMI vector to a dummy RTI instruction. The NMI can also be used by programs as an extra interrupt thread, but this runs the risk of a system lockup or undesirable side effects if the Restore key is accidentally pressed, as this triggers an inadvertent activation of the NMI thread.
Joysticks, mice, and paddles
The C64 retained the DE-9 Atari joystick port from the VIC-20 and added a second one; any Atari-specification game controller can be used on a C64. The joysticks are read from the registers at $DC00 and $DC01, and most software is designed to use a joystick in port 2 for control rather than port 1, as the bits of $DC01 are shared with the keyboard and an I/O conflict can result. Although it is possible to use Sega game pads on a C64, it is not recommended, as the slightly different signal generated by them can damage the CIA chip. The SID chip's POT registers are used to read the paddles, which are analog inputs. Atari paddles are electrically compatible with the C64 but have different resistance values than Commodore's paddles, which means most software will not work properly with them. However, only a handful of games, mostly ones released early in the computer's life cycle, can use paddles. In 1986, Commodore released two mice for the C64 and C128, the 1350 and 1351. The 1350 is a digital device, read from the joystick registers (and usable with any program supporting joystick input), while the 1351 is a true, analog, potentiometer-based mouse, read with the SID's analog-to-digital converter.
Graphics
The graphics chip, the VIC-II, features 16 colors, eight hardware sprites per scanline (enabling up to 112 sprites per PAL screen), scrolling capabilities, and two bitmap graphics modes.
Text modes
The standard text mode features 40 columns, like most Commodore PET models; the built-in character encoding is not standard ASCII but PETSCII, an extended form of ASCII-1963. The KERNAL ROM sets the VIC-II to a dark blue background on power-up, with light blue text and border. Unlike the PET and VIC-20, the C64 uses "fat" double-width text, as some early VIC-IIs had poor video quality that resulted in a fuzzy picture.
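Because the VIC-II's registers are memory-mapped, the power-on colour scheme just described can be changed with single POKEs from BASIC, and a joystick can be polled the same way. A small sketch combining the two (53280 and 53281 are the border and background colour registers at $D020/$D021; 56320 is the port 2 joystick register at $DC00, with active-low bits):
10 POKE 53280,0: POKE 53281,0 : REM BLACK BORDER AND BACKGROUND
20 PRINT CHR$(147);"MOVE THE JOYSTICK IN PORT 2" : REM CHR$(147) = CLEAR SCREEN
30 J=PEEK(56320) : REM BITS: 0=UP 1=DOWN 2=LEFT 3=RIGHT 4=FIRE
40 IF (J AND 1)=0 THEN PRINT "UP"
50 IF (J AND 16)=0 THEN PRINT "FIRE"
60 GOTO 30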
Screenshots of the C64 typically show a coloured border around the screen area, a feature of the VIC-II chip. By using raster interrupts to rewrite various hardware registers with precise timing, it was possible to place graphics within the borders and thus use the full screen.
The C64 has a resolution of 320×200 pixels, consisting of a 40×25 grid of 8×8 character blocks. It has 256 predefined PETSCII character blocks; the character set can be copied into RAM and altered by a programmer. There are two colour modes: high resolution, with two colours available per character block (one foreground and one background), and multicolour, with four colours per character block (three foreground and one background). In multicolour mode, attributes are shared between pixel pairs, so the effective visible resolution is 160×200 pixels. This is necessary since only 16 KB of memory is available to the VIC-II video processor. As the C64 has a bitmapped screen, it is possible to draw each pixel individually. This is, however, very slow. Most programmers used techniques developed for earlier non-bitmapped systems, like the Commodore PET and TRS-80: the programmer redraws the character set, and the video processor fills the screen block by block from the top left corner to the bottom right corner. Two different types of animation are used: character block animation and hardware sprites.
Character block animation
The user draws a series of characters of, say, a person walking: two frames in the middle of the block, and another two walking into and out of the block. The user then sequences them so the character walks into the block and out again; drawing a series of these gives a person walking across the screen. By timing the redraw to occur while the television blanks out to restart drawing the screen, there will be no flicker; for this to happen, the user programs the VIC-II to generate a raster interrupt when the video flyback occurs. This is the technique used in the classic Space Invaders arcade game. Horizontal and vertical pixelwise scrolling of up to one character block is supported by two hardware scroll registers. Depending on timing, hardware scrolling affects the entire screen or just selected lines of character blocks. On a non-emulated C64, scrolling is glasslike and blur-free.
Hardware sprites
A sprite is a movable object which moves over an area of the screen, drawing over the background and restoring it after moving away. Note this is very different from character block animation, where the user is just flipping character blocks. On the C64, the VIC-II video processor handles most of the legwork of displaying sprites; the programmer simply defines the sprite and where it should go. The C64 has two types of sprites, following the same colour-mode limitations: hi-res sprites have one colour (one background and one foreground) and multicolour sprites three (one background and three foreground). Colour modes can be split or windowed on a single screen. Sprites can be doubled in size vertically and horizontally (making them up to four times their original area), but the pixel attributes stay the same – the pixels simply become "fatter". There are 8 sprites in total, and all 8 can be shown on each horizontal line concurrently. Sprites can move with glassy smoothness in front of and behind screen characters and other sprites. The hardware sprites of a C64 can be displayed on either a bitmapped (high-resolution) screen or, alternatively, on a text-mode screen in conjunction with fast and smooth character block animation.
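In BASIC terms, the division of labour described above means a sprite needs only a shape definition, a data pointer, and a few register POKEs; the VIC-II does the rest. The following sketch uses the conventional choice of the cassette buffer (sprite data block 13, at address 832, assuming the default screen setup) for the sprite shape, and sweeps a solid white square across the screen:
10 V=53248 : REM BASE ADDRESS OF THE VIC-II REGISTERS
20 FOR I=0 TO 62: POKE 832+I,255: NEXT : REM SOLID 24X21 SPRITE SHAPE
30 POKE 2040,13 : REM SPRITE 0 DATA POINTER = BLOCK 13 (832/64)
40 POKE V+39,1 : REM SPRITE 0 COLOUR: WHITE
50 POKE V+21,1 : REM ENABLE SPRITE 0
60 FOR X=0 TO 255 : REM SWEEP ACROSS THE SCREEN
70 POKE V,X: POKE V+1,100 : REM SET X AND Y POSITION
80 NEXT
Note that the program never touches the background: the restore-after-move bookkeeping that software sprites require is handled entirely by the chip.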
In contrast, software-emulated sprites, found on systems without hardware sprite support such as the Apple II and ZX Spectrum, required a bitmapped screen. Sprite-sprite and sprite-background collisions are detected in hardware, and the VIC-II can be programmed to trigger an interrupt accordingly.
Sound
The SID chip has three channels, each with its own ADSR envelope generator and filter capabilities. Ring modulation makes use of channel 3 to work with the other two channels. Bob Yannes developed the SID chip and later co-founded the synthesizer company Ensoniq. Yannes criticized other contemporary computer sound chips as "primitive, obviously ... designed by people who knew nothing about music". Game music often became a hit in its own right among C64 users. Well-known composers and programmers of game music on the C64 include Rob Hubbard, Jeroen Tel, Tim Follin, David Whittaker, Chris Hülsbeck, Ben Daglish, Martin Galway, Kjell Nordbø, and David Dunn, among many others. Due to the chip's three channels, chords are often played as arpeggios, which defined the C64's characteristic lively sound. It was also possible to continuously update the master volume with sampled data to enable the playback of 4-bit digitized audio. By 2008, it had become possible to play four-channel 8-bit audio samples plus 2 SID channels while still using filtering.
There are two versions of the SID chip: the 6581 and the 8580. The MOS Technology 6581 was used in the original ("breadbin") C64s, the early versions of the 64C, and the Commodore 128. The 6581 was replaced with the MOS Technology 8580 in 1987. While the 6581's sound quality is a little crisper and many Commodore 64 fans say they prefer its sound, it lacks some of the versatility available in the 8580 – for example, the 8580 can mix all available waveforms on each channel, whereas the 6581 can only mix waveforms in a channel in a much more limited fashion. The main difference between the 6581 and the 8580 is the supply voltage: the 6581 uses a 12 V supply, the 8580 a 9 V supply. A modification can be made to use the 6581 in a newer 64C board (which is designed for the 9 V chip).
The SID chip's distinctive sound has allowed it to retain a following long after its host computer was discontinued. A number of audio enthusiasts and companies have designed SID-based products as add-ons for the C64, x86 PCs, and standalone or Musical Instrument Digital Interface (MIDI) music devices such as the Elektron SidStation. These devices use chips taken from excess stock or removed from used computers. In 2007, Timbaland's extensive use of the SidStation led to a plagiarism controversy over "Block Party" and "Do It" (written for Nelly Furtado).
In 1986, the Sound Expander was released for the Commodore 64. It was a sound module that contained a Yamaha YM3526 sound chip capable of FM synthesis. It was primarily intended for professional music production.
Hardware revisions
Commodore made many changes to the C64's hardware during its lifetime, sometimes causing compatibility issues. The computer's rapid development, and Commodore and Tramiel's focus on cost cutting instead of product testing, resulted in several defects that caused developers like Epyx to complain and required many revisions to fix; Charpentier said that "not coming a little close to quality" was one of the company's mistakes. Cost reduction was the reason for most of the revisions; reducing manufacturing costs was vitally important to Commodore's survival during the price war and the leaner years of the 16-bit era.
The C64's original (NMOS-based) motherboard went through two major redesigns and numerous sub-revisions, exchanging the positions of the VIC-II, SID, and PLA chips. Initially, a large portion of the cost was eliminated by reducing the number of discrete components, such as diodes and resistors, which enabled the use of a smaller printed circuit board. There were 16 C64 motherboard revisions in total, aimed at simplifying the design and reducing manufacturing costs. Some board revisions were exclusive to PAL regions. All C64 motherboards were manufactured in Hong Kong. IC locations changed frequently with each motherboard revision, as did the presence or absence of the metal RF shield around the VIC-II. PAL boards often had aluminized cardboard instead of a metal shield. The SID and VIC-II are socketed on all boards; the other ICs may be either socketed or soldered.
The first production C64s, made from 1982 to early 1983, are known as "silver label" models, the case sporting a silver-colored "Commodore" logo. The power LED had a separate silver badge around it reading "64". These machines also have only a 5-pin video connector and cannot output S-video. In late 1982, Commodore introduced the familiar "rainbow badge" case, but many machines produced into early 1983 also used silver-label cases until the existing stock was used up. In the spring of 1983, the original 326298 board was replaced by the 250407 motherboard, which sported an 8-pin video connector and added S-video support for the first time. This case design was used until the C64C appeared in 1986. All ICs switched to plastic packages, whereas the silver-label C64s had some ceramic ICs, notably the VIC-II. The case is made from ABS plastic, which may become brown with time; this can be reversed using a process known as "retrobright".
ICs
The VIC-II was manufactured in 5-micrometer NMOS technology and was clocked from a crystal running at about 17.73 MHz (PAL) or 14.31 MHz (NTSC). Internally, the clock was divided down to generate the dot clock (about 8 MHz) and the two-phase system clocks (about 1 MHz; the exact pixel and system clock speeds differ slightly between NTSC and PAL machines). At such high clock rates, the chip generated a lot of heat, forcing MOS Technology to use a ceramic dual in-line package called a "CERDIP". The ceramic package was more expensive, but it dissipated heat more effectively than plastic.
After a redesign in 1983, the VIC-II was encased in a plastic dual in-line package, which reduced costs substantially but did not totally eliminate the heat problem. Without a ceramic package, the VIC-II required the use of a heat sink. To avoid extra cost, the metal RF shielding doubled as the heat sink for the VIC, although not all units shipped with this type of shielding. Most C64s in Europe shipped with a cardboard RF shield coated with a layer of metal foil. The effectiveness of the cardboard was highly questionable and, worse still, it acted as an insulator, blocking airflow and trapping the heat generated by the SID, VIC, and PLA chips.
The SID was originally manufactured in a 7-micrometer NMOS process (6 micrometers in some areas). The prototype SID and some very early production models featured a ceramic dual in-line package but, unlike the VIC-II, these are extremely rare, as the SID was encased in plastic when production started in early 1982.
Motherboard
In 1986, Commodore released the last revision to the classic C64 motherboard.
It was otherwise identical to the 1984 design, except for the two 64 kilobit × 4 bit DRAM chips that replaced the original eight 64 kilobit × 1 bit ICs.
After the release of the Commodore 64C, MOS Technology began to reconfigure the original C64's chipset to use HMOS production technology. The main benefit of HMOS was that it required less voltage to drive the ICs, which consequently generated less heat and enhanced the overall reliability of the SID and VIC-II. The new chipset was renumbered to 85xx to reflect the change to HMOS.
In 1987, Commodore released a 64C variant with a heavily redesigned motherboard, commonly known as a "short board". The new board used the new HMOS chipset, featuring a new 64-pin PLA chip. The new "SuperPLA", as it was dubbed, integrated many discrete components and transistor–transistor logic (TTL) chips. In the last revision of the 64C motherboard, the 2114 4-bit-wide color RAM was integrated into the SuperPLA.
Power supply
The C64 used an external power supply, a conventional transformer with multiple tappings (as opposed to the switch-mode type now used in PC power supplies). It was encased in an epoxy resin gel, which discouraged tampering but tended to increase the heat level during use. The design saved space within the computer's case and allowed international versions to be manufactured more easily. The 1541-II and 1581 disk drives, along with various third-party clones, also came with their own external power supply "bricks", as did most peripherals, leading to a "spaghetti" of cables and the use of numerous double adapters by users.
Commodore power supplies often failed sooner than expected. The computer reportedly had a 30% return rate in late 1983, compared to the 5–7% the industry considered acceptable; Creative Computing reported receiving only four working computers out of seven C64s. Malfunctioning power bricks were particularly notorious for damaging the RAM chips, which, due to their higher density and single +5V supply, had less tolerance for an overvoltage condition. The usual failure, the voltage regulator, could be remedied by piggy-backing a new regulator onto the board and fitting a heat sink on top.
The original PSU included with early 1982–83 machines had a 5-pin connector that could accidentally be plugged into the video output of the computer. To prevent users from making this damaging mistake, Commodore changed the power plug on 250407 motherboards to a 7-pin DIN connector in 1984. Commodore later changed the design yet again, omitting the resin gel in order to reduce costs. The follow-on model, the Commodore 128, used a larger, improved power supply that included a fuse. The power supply that came with the Commodore REU was similar to the Commodore 128's unit, providing an upgrade for customers who purchased that accessory.
Specifications
Internal hardware
Microprocessor (CPU): MOS Technology 6510/8500 (the 6510/8500 is a modified 6502 with an integrated 6-bit I/O port)
Clock speed: 1.023 MHz (NTSC) or 0.985 MHz (PAL)
Video: MOS Technology VIC-II 6567/8562 (NTSC), 6569/8565 (PAL)
16 colors
Text mode: 40×25 characters; 256 user-defined chars (8×8 pixels, or 4×8 in multicolor mode); or extended background color mode: 64 user-defined chars with 4 background colors, with the 4-bit color RAM defining the foreground color
Bitmap modes: 320×200 (2 unique colors in each 8×8 pixel block), 160×200 (3 unique colors + 1 common color in each 4×8 block)
8 hardware sprites of 24×21 pixels (12×21 in multicolor mode)
Smooth scrolling, raster interrupts
Sound: MOS Technology 6581/8580 SID
3-channel synthesizer with programmable ADSR envelope
8 octaves
4 waveforms per audio channel: triangle, sawtooth, variable pulse, noise
Oscillator synchronization, ring modulation
Programmable filter: high pass, low pass, band pass, notch
Input/Output: Two 6526 Complex Interface Adapters
16-bit parallel I/O
8-bit serial I/O
24-hour (AM/PM) time-of-day clock (TOD), with programmable alarm
16-bit interval timers
RAM: 64 KB, of which 38 KB were available for BASIC programs, plus 1024 nybbles of color RAM (memory allocated for screen color data storage). Expandable to 320 KB with the Commodore 1764 256 KB RAM Expansion Unit (REU), although only 64 KB are directly accessible; the REU was used mostly with GEOS. REUs of 128 KB and 512 KB, originally designed for the C128, were also available, but required the user to buy a stronger power supply from a third-party supplier; with the 1764, one was included. Creative Micro Designs also produced a 2 MB REU for the C64 and C128, called the 1750 XL. The technology actually supported up to 16 MB, but 2 MB was the largest officially made. Expansions of up to 16 MB were also possible via the CMD SuperCPU.
ROM: 20 KB (Commodore BASIC 2.0; KERNAL; character generator, providing two character sets)
Input/output (I/O) ports and power supply
I/O ports:
ROM cartridge expansion slot (44-pin slot for edge connector with 6510 CPU address/data bus lines and control signals, as well as GND and voltage pins; used for program modules and memory expansions, among others)
Integrated RF modulator television antenna output via an RCA connector. The channel used could be adjusted around channel 36 with the potentiometer to the left.
8-pin DIN connector containing composite video output, separate Y/C outputs, and sound input/output. This is a 262° horseshoe version of the plug, rather than the 270° circular version. Early C64 units (with motherboard Assy 326298) use a 5-pin DIN connector that carries composite video and luminance signals but lacks a chroma signal.
Serial bus (proprietary serial version of IEEE-488, 6-pin DIN plug) for CBM printers and disk drives
PET-type Commodore Datasette 300 baud tape interface (edge connector with digital cassette motor/read/write/key-sense signals), ground, and +5V DC lines. The cassette motor is controlled by a +5V DC signal from the 6510 CPU; the 9V AC input is transformed into unregulated 6.36V DC, which actually powers the cassette motor.
User port (edge connector with TTL-level signals, for modems and so on; byte-parallel signals which can be used to drive third-party parallel printers, among other things; 17 logic signals, 7 ground and voltage pins, including 9V AC)
2 × screwless DE9M game controller ports (compatible with Atari 2600 controllers), each supporting five digital inputs and two analog inputs.
Available peripherals included digital joysticks, analog paddles, a light pen, the Commodore 1351 mouse, and graphics tablets such as the KoalaPad.
Power supply: 5V DC and 9V AC from an external "power brick", attached to a 7-pin female DIN connector on the computer. The 9V AC is used to supply power via a charge pump to the SID sound generator chip, to provide rectified DC for the cassette motor, to deliver a "0" pulse for every positive half-wave to the time-of-day (TOD) input on the CIA chips, and to feed the user port directly. Thus, as a minimum, a 9V square wave is required, although a sine wave is preferred.
Memory map
Note that even though an I/O chip like the VIC-II only uses 64 positions in the memory address space, it occupies 1,024 addresses because some address bits are left undecoded.
Peripherals
Manufacturing cost
Vertical integration was the key to keeping Commodore 64 production costs low. At its introduction in 1982, the production cost was US$135 and the retail price US$595. In 1985, the retail price went down to US$149 and the production costs were believed to be somewhere between US$35 and US$50; Commodore would not confirm this cost figure. Dougherty of Berkeley Softworks estimated the costs of the Commodore 64 parts based on his experience at Mattel and Imagic. To lower costs, TTL chips were replaced with less expensive custom chips, and ways were found to increase the yields on the sound and graphics chips. The 6567 video chip had its ceramic package replaced with plastic, but heat dissipation demanded a redesign of the chip and the development of a plastic package that could dissipate heat as well as ceramic.
Clones
Clones are computers that imitate C64 functions. In the middle of 2004, after an absence from the marketplace of more than 10 years, PC manufacturer Tulip Computers BV (owner of the Commodore brand since 1997) announced the C64 Direct-to-TV (C64DTV), a joystick-based TV game based on the C64 with 30 video games built into ROM. Designed by Jeri Ellsworth, a self-taught computer designer who had earlier designed the modern C-One C64 implementation, the C64DTV was similar in concept to other mini-consoles based on the Atari 2600 and Intellivision, which had gained modest success earlier in the decade. The product was advertised on QVC in the United States for the 2004 holiday season. By modifying the circuit board, it is possible to attach C1541 floppy disk drives, a second joystick, and PS/2 keyboards to these units, which gives the DTV devices nearly all the capabilities of a full Commodore 64. The DTV hardware is also used in the Hummer mini-console, sold at RadioShack in mid-2005.
In 2015, a Commodore 64 compatible motherboard was produced by Individual Computers. Dubbed the "C64 Reloaded", it is a modern redesign of the Commodore 64 motherboard revision 250466 with a few new features. The motherboard itself is designed to be placed in an empty C64 or C64C case already owned by the user. Produced in limited quantities, models of this Commodore 64 "clone" sport either machined or ZIF sockets in which the custom C64 chips are placed. The board also contains jumpers to accept different revisions of the VIC-II and SID chips, as well as the ability to switch between the analogue video system modes PAL and NTSC. The motherboard contains several innovations, including selection of multiple KERNAL and character ROMs via the RESTORE key, a built-in reset toggle on the power switch, and an S-video socket to replace the original TV modulator.
The motherboard is powered by a DC-to-DC converter fed by a single DC input from a mains adapter, rather than by the original and failure-prone Commodore 64 power supply brick.
Newer compatible hardware
As of 2008, C64 enthusiasts were still developing new hardware, including Ethernet cards, specially adapted hard disks, and flash card interfaces (sd2iec). In 2022, a product called A-SID was introduced that turns the C64 into a wah effect unit.
Brand reuse
In 1998, the C64 brand was reused for the "Web.it Internet Computer", a low-powered Internet-oriented all-in-one x86 PC running MS-DOS and Windows 3.1. It uses an AMD Élan SC400 SoC with 16 MB of RAM, a 3.5-inch floppy disk drive, a 56k modem, and a PCMCIA slot. Despite its "Commodore 64" nameplate, the "C64 Web.it" is not directly compatible with the original (except via included emulation software), nor does it share its appearance.
PC clones branded as C64x, sold by Commodore USA, LLC, a company licensing the Commodore trademark, began shipping in June 2011. The C64x has a case resembling the original C64 computer but – as with the "Web.it" – it is based on an x86 architecture and is not compatible with the Commodore 64 at either the hardware or software level.
Virtual Console
Several Commodore 64 games were released on the Nintendo Wii's Virtual Console service, in Europe and North America only. The games were delisted from the service in August 2013 for unknown reasons.
THEC64 and THEC64 Mini
THEC64 Mini is an unofficial Linux-based console that emulates the Commodore 64, released in 2018 by UK-based Retro Games. The console takes the form of a decorative half-scale Commodore 64 with two USB ports and one HDMI port, plus a mini-USB connection to power the system. The console's decorative keyboard is non-functional – the system is controlled via the included THEC64 joystick or a separate USB keyboard. It is possible to load new software ROMs into the console, which uses the x64 emulator (part of VICE) to run software and has a built-in graphical operating system. The full-size THEC64 was released in 2019 in Europe and Australia and was scheduled for release in November 2020 in the North American market. The console is built to the scale of the original Commodore 64 and includes a functional keyboard. Enhancements include VIC-20 emulation, four USB ports, and an upgraded joystick. Neither product features any of Commodore's trademarks – the Commodore key on the original keyboard is replaced with a THEC64 key, and Retro Games can call neither product a "C64" – although the system ROMs are licensed from Cloanto Corporation. The consoles can be switched between "carousel mode", for accessing the built-in game library, and "classic mode", in which the machine operates like a traditional Commodore 64. USB storage can be used to hold disk, cartridge, and tape images for use with the machine.
Emulators
Commodore 64 emulators include the open-source VICE, Hoxs64, and CCS64. An iPhone app was also released with a compilation of C64 ports.
See also
List of Commodore 64 games
History of personal computers
IDE64 – P-ATA interface cartridge for the C64
SuperCPU – CPU upgrade for C64 and C128
Footnotes
References
Sources
Bagnall, Brian (2005). On the Edge: the Spectacular Rise and Fall of Commodore. Variant Press. See especially pp. 224–260.
Tomczyk, Michael (1984). The Home Computer Wars: An Insider's Account of Commodore and Jack Tramiel. COMPUTE! Publications, Inc.
Jeffries, Ron. "A best buy for '83: Commodore 64".
Creative Computing, January 1983. Amiga Format News Special. "Commodore at CeBIT '94". Amiga Format, Issue 59, May 1994. Computer Chronicles; "Commodore 64 – Interview with Commodore president Max Toy", 1988. The C-64 Scene Database; "Kjell Nordbø artist page (bio/release history) at CSDb". External links Commodore 64 history, manuals, and photos C64-Wiki (wiki-based encyclopaedia) Extensive collection of information on C64 programming A History of Gaming Platforms: The Commodore 64 from October 2007 A Commodore 64 Web Server Using Contiki v2.3 Design case history: the Commodore 64, IEEE Spectrum, March 1985 Comparing different unit sales analyses
https://en.wikipedia.org/wiki/Colonialism
Colonialism
Colonialism is a practice or policy of control by one people or power over other people or areas, often by establishing colonies and generally with the aim of economic dominance. In the process of colonisation, colonisers may impose their religion, language, economics, and other cultural practices. The foreign administrators rule the territory in pursuit of their interests, seeking to benefit from the colonised region's people and resources. It is associated with but distinct from imperialism. Though colonialism has existed since ancient times, the concept is most strongly associated with the European colonial period starting in the 15th century, when some European states established colonising empires. At first, European colonising countries followed policies of mercantilism, aiming to strengthen the home-country economy, so agreements usually restricted the colony to trading only with the metropole (mother country). By the mid-19th century, the British Empire gave up mercantilism and trade restrictions and adopted the principle of free trade, with few restrictions or tariffs. Christian missionaries were active in practically all of the European-controlled colonies because the metropoles were Christian. Historian Philip Hoffman calculated that by 1800, before the Industrial Revolution, Europeans already controlled at least 35% of the globe, and by 1914, they had gained control of 84% of the globe. In the aftermath of World War II, colonial powers retreated between 1945 and 1975, during which time nearly all colonies gained independence, entering into changed, so-called postcolonial and neocolonial relations. Postcolonialism and neocolonialism have continued or shifted relations and ideologies of colonialism, justifying its continuation with concepts such as development and new frontiers, as in exploring outer space for colonisation. Definitions Collins English Dictionary defines colonialism as "the practice by which a powerful country directly controls less powerful countries and uses their resources to increase its own power and wealth". Webster's Encyclopedic Dictionary defines colonialism as "the system or policy of a nation seeking to extend or retain its authority over other people or territories". The Merriam-Webster Dictionary offers four definitions, including "something characteristic of a colony" and "control by one power over a dependent area or people". Etymologically, the word "colony" comes from the Latin colōnia—"a place for agriculture". The Stanford Encyclopedia of Philosophy uses the term "to describe the process of European settlement and political control over the rest of the world, including the Americas, Australia, and parts of Africa and Asia". It discusses the distinction between colonialism, imperialism and conquest and states that "[t]he difficulty of defining colonialism stems from the fact that the term is often used as a synonym for imperialism. Both colonialism and imperialism were forms of conquest that were expected to benefit Europe economically and strategically," and continues "given the difficulty of consistently distinguishing between the two terms, this entry will use colonialism broadly to refer to the project of European political domination from the sixteenth to the twentieth centuries that ended with the national liberation movements of the 1960s". 
In his preface to Jürgen Osterhammel's Colonialism: A Theoretical Overview, Roger Tignor says "For Osterhammel, the essence of colonialism is the existence of colonies, which are by definition governed differently from other territories such as protectorates or informal spheres of influence." In the book, Osterhammel asks, "How can 'colonialism' be defined independently from 'colony?'" He settles on a three-sentence definition. Types The Times once quipped that there were three types of colonial empire: "The English, which consists in making colonies with colonists; the German, which collects colonists without colonies; the French, which sets up colonies without colonists." Modern studies of colonialism have often distinguished between various overlapping categories of colonialism, broadly classified into four types: settler colonialism, exploitation colonialism, surrogate colonialism, and internal colonialism. Some historians have identified other forms of colonialism, including national and trade forms. Settler colonialism involves large-scale immigration by settlers to colonies, often motivated by religious, political, or economic reasons. This form of colonialism aims largely to supplant pre-existing populations with a settler one, and involves large numbers of settlers emigrating to colonies for the purpose of settling down and establishing settlements. Argentina, Australia, Brazil, Canada, Chile, New Zealand, Russia, South Africa, the United States, Uruguay (and, to a more controversial extent, Israel) are examples of nations created or expanded in their contemporary form by settler colonisation. Exploitation colonialism involves fewer colonists and focuses on the exploitation of natural resources or labour to the benefit of the metropole. This form consists of trading posts as well as larger colonies where colonists would constitute much of the political and economic administration. The European colonisation of Africa and Asia was largely conducted under the auspices of exploitation colonialism. Surrogate colonialism involves a settlement project supported by a colonial power, in which most of the settlers do not come from the same ethnic group as the ruling power. Internal colonialism is a notion of uneven structural power between areas of a state. The source of exploitation comes from within the state. This is demonstrated in the way control and exploitation may pass from people from the colonising country to an immigrant population within a newly independent country. National colonialism is a process involving elements of both settler and internal colonialism, in which nation-building and colonisation are symbiotically connected, with the colonial regime seeking to remake the colonised peoples into its own cultural and political image. The goal is to integrate them into the state, but only as reflections of the state's preferred culture. The Republic of China in Taiwan is the archetypal example of a national-colonialist society. Trade colonialism involves the undertaking of colonialist ventures in support of trade opportunities for merchants. This form of colonialism was most prominent in 19th-century Asia, where previously isolationist states were forced to open their ports to Western powers. Examples of this include the Opium Wars and the opening of Japan. Socio-cultural evolution As colonialism often played out in already-populated areas, sociocultural evolution included the formation of various ethnically hybrid populations. 
Colonialism gave rise to culturally and ethnically mixed populations such as the mestizos of the Americas, as well as racially divided populations such as those found in French Algeria or in Southern Rhodesia. In fact, everywhere where colonial powers established a consistent and continued presence, hybrid communities existed. Notable examples in Asia include the Anglo-Burmese, Anglo-Indian, Burgher, Eurasian Singaporean, Filipino mestizo, Kristang, and Macanese peoples. In the Dutch East Indies (later Indonesia) the vast majority of "Dutch" settlers were in fact Eurasians known as Indo-Europeans, formally belonging to the European legal class in the colony (see also Indos in pre-colonial history and Indos in colonial history). History Antiquity Activity that could be called colonialism has a long history, starting at least as early as the ancient Egyptians. Phoenicians, Greeks, and Romans founded colonies in antiquity. Phoenicia had an enterprising maritime trading culture that spread across the Mediterranean from 1550 BC to 300 BC; later the Persian Empire and various Greek city-states continued this line of setting up colonies. The Romans would soon follow, setting up coloniae throughout the Mediterranean, in North Africa, and in Western Asia. Beginning in the 7th century, Arabs colonised a substantial portion of the Middle East, North Africa, and parts of Asia and Europe. From the 9th century Vikings (Norsemen) established colonies in Britain, Ireland, Iceland, Greenland, North America, present-day Russia and Ukraine, France (Normandy) and Sicily. In the 9th century a new wave of Mediterranean colonisation began, with competitors such as the Venetians, Genoese and Amalfians infiltrating the wealthy, previously Byzantine or Eastern Roman, islands and lands. European Crusaders set up colonial regimes in Outremer (in the Levant, 1097–1291) and in the Baltic littoral (12th century onwards). Venice began to dominate Dalmatia and reached its greatest nominal colonial extent at the conclusion of the Fourth Crusade in 1204, with the declaration of the acquisition of three-eighths of the Byzantine Empire. Modernity More than a century before the Jamestown, Virginia settlement led by Captain Christopher Newport, modern colonialism started with the Portuguese Prince Henry the Navigator (1394–1460), initiating the Age of Discovery and establishing African trading posts (1445 onwards). Spain (initially the Crown of Castile) and soon after Portugal encountered the Americas (1492 onwards) through sea travel and built trading posts or conquered large extents of land. For some people, it is this building of colonies across oceans that differentiates colonialism from other types of expansionism. Madrid and Lisbon divided the areas of these "new" lands between the Spanish and Portuguese Empires in 1494; other would-be colonial powers paid little heed to the theoretical demarcation. The 17th century saw the birth of the Dutch and French colonial empires, as well as the English overseas possessions, which later became the British Empire. It also saw the establishment of some Danish and Swedish overseas colonies. A first wave of independence movements started with the American Revolutionary War (1775–1783), initiating a new phase for the British Empire. The Spanish Empire largely collapsed in the Americas with the Spanish American wars of independence (1808 onwards). Empire-builders established several new colonies after this time, including in the German and Belgian colonial empires. 
In the late 19th century, many European powers became involved in the Scramble for Africa. The Austrian, Russian, and Ottoman Empires existed at the same time as the above empires but did not expand over oceans. Rather, these empires expanded through the more traditional route of the conquest of neighbouring territories. There was, though, some Russian colonisation of North America across the Bering Strait. From the 1860s, the Empire of Japan modelled itself on European colonial empires and expanded its territories in the Pacific and on the Asian mainland. Argentina and the Empire of Brazil fought for hegemony in South America. The United States gained overseas territories after the 1898 Spanish–American War, hence the coining of the term "American Empire". 20th century The world's colonial population at the outbreak of the First World War (1914) – a high point for colonialism – totalled about 560 million people, of whom 70% lived in British possessions, 10% in French possessions, 9% in Dutch possessions, 4% in Japanese possessions, 2% in German possessions, 2% in American possessions, 3% in Portuguese possessions, 1% in Belgian possessions and 0.5% in Italian possessions. The domestic domains of the colonial powers had a total population of about 370 million people. Outside Europe, few areas had remained without coming under formal colonial tutorship – and even Siam, China, Japan, Nepal, Afghanistan, Persia, and Abyssinia had felt varying degrees of Western colonial-style influence – concessions, unequal treaties, extraterritoriality and the like. Asking whether colonies paid, economic historian Grover Clark (1891–1938) argues an emphatic "No!" He reports that in every case the support cost, especially of the military system necessary to support and defend colonies, outran the total trade they produced. Apart from the British Empire, colonies did not provide favoured destinations for the immigration of surplus metropole populations. The question of whether colonies paid is a complicated one when recognising the multiplicity of interests involved. In some cases colonial powers paid a lot in military costs while private investors pocketed the benefits. In other cases the colonial powers managed to move the burden of administrative costs to the colonies themselves by imposing taxes. After World War I (1914–1918), the victorious Allies divided up the German colonial empire and much of the Ottoman Empire between themselves as League of Nations mandates, grouping these territories into three classes according to how quickly it was deemed that they could prepare for independence. The empires of Russia and Austria collapsed in 1917–1918. Nazi Germany set up short-lived colonial systems (Reichskommissariate, Generalgouvernement) in Eastern Europe in the early 1940s. After World War II (1939–1945), decolonisation progressed rapidly, for a number of reasons. First, the Japanese victories in the Pacific War of 1941–1945 had shown Indians and other subject peoples that the colonial powers were not invincible. Second, World War II had significantly weakened all the overseas colonial powers economically. The word "neocolonialism" originated with Jean-Paul Sartre in 1956, to refer to a variety of contexts since the decolonisation that took place after World War II. Generally it does not refer to a type of direct colonisation – rather to colonialism or colonial-style exploitation by other means. 
Specifically, neocolonialism may refer to the theory that former or existing economic relationships, such as the General Agreement on Tariffs and Trade and the Central American Free Trade Agreement, or the operations of companies (such as Royal Dutch Shell in Nigeria and Brunei) fostered by former colonial powers were or are used to maintain control of former colonies and dependencies after the colonial independence movements of the post–World War II period. The term "neocolonialism" became popular in ex-colonies in the late 20th century. List of colonies British Aden Afghanistan Anglo-Egyptian Sudan Ascension Island Australia New South Wales Victoria Tasmania Queensland South Australia Western Australia Bahamas Barbados Basutoland Bechuanaland British Borneo Brunei Labuan North Borneo Sarawak British East Africa British Guiana British Honduras British Hong Kong British Leeward Islands Anguilla Antigua Barbuda British Virgin Islands Dominica Montserrat Nevis Saint Kitts British Malaya Federated Malay States Straits Settlements Unfederated Malay States British Somaliland British Western Pacific Territories British Solomon Islands Fiji Gilbert and Ellice Islands Phoenix Islands Pitcairn Islands New Hebrides (condominium with France) Tonga Union Islands British Windward Islands Barbados Dominica Grenada Saint Lucia Saint Vincent and the Grenadines Myanmar Canada Ceylon Christmas Island Cocos (Keeling) Islands Cyprus (including Akrotiri and Dhekelia) Egypt Falkland Islands Falkland Islands Dependencies Graham Land South Georgia South Orkney Islands South Shetland Islands South Sandwich Islands Victoria Land Gambia Gibraltar Gold Coast India (including what is today Pakistan, Bangladesh, and Myanmar) Heard Island and McDonald Islands Ireland Jamaica Kenya Maldives Malta Mandatory Palestine Emirate of Transjordan Mandatory Iraq Mauritius Muscat and Oman Norfolk Island Nigeria Northern Rhodesia Nyasaland Seychelles Sierra Leone Shanghai International Settlement South Africa Cape Colony Natal Transvaal Colony Orange River Colony Southern Rhodesia St Helena Swaziland Trinidad and Tobago Tristan da Cunha Trucial States Uganda Tonga French Acadia Algeria Canada Clipperton Island Comoros Islands (including Mayotte) French Guiana French Equatorial Africa Chad Oubangui-Chari French Congo Gabon French India (Pondichéry, Chandernagor, Karikal, Mahé and Yanaon) French Indochina Annam Tonkin Cochinchina Cambodia Laos French Polynesia French Somaliland French Southern and Antarctic Lands French West Africa Ivory Coast Dahomey Guinea French Sudan Mauritania Niger Senegal Upper Volta Guadeloupe Saint Barthélemy Saint Martin La Réunion Louisiana Madagascar Martinique French Morocco French Mandate for Syria and Lebanon New Caledonia Saint-Pierre-et-Miquelon Saint-Domingue Shanghai French Concession (similar concessions in Kouang-Tchéou-Wan, Tientsin, Hankéou) Tunisia New Hebrides (condominium with Britain) Wallis-et-Futuna American American Concession in Tianjin (1869–1902) American Concession in Shanghai (1848–1863) American Concession in Beihai (1876–1943) American Concession in Harbin (1898–1943) American Samoa Beijing Legation Quarter (1861–1945) Corn Islands (1914–1971) Canton and Enderbury Islands Caroline Islands Cuba (Platt Amendment turned Cuba into a protectorate – until Cuban Revolution) Falkland Islands (1832) Guantánamo Bay Guam Gulangyu Island (1903–1945) Haiti (1915–1934) Indian Territory (1834–1907) Isle of Pines (1899–1925) Liberia (Independent since 1847, US protectorate until post-WW2) 
Marshall Islands Midway Nicaragua (1912–1933) Northern Mariana Islands Palau Palmyra Atoll Panama (Hay–Bunau-Varilla Treaty turned Panama into a protectorate, protectorate until post-WW2) Panama Canal Zone (1903–1979) Philippines (1898–1946) Puerto Rico Quita Sueño Bank (1869–1981) Roncador Bank (1856–1981) Ryukyu Islands (1945–1972) Shanghai International Settlement (1863–1945) Sultanate of Sulu (1903–1915) Swan Islands, Honduras (1914–1972) Treaty Ports of China, Korea and Japan United States Virgin Islands Wake Island Wilkes Land Russian Emirate of Bukhara (1873–1917) Grand Duchy of Finland (1809–1917) Khiva Khanate (1873–1917) Kauai (Hawaii) (1816–1817) Russian America (Alaska) (1733–1867) Fort Ross (California) Russian Dalian (1898-1905) German Bismarck Archipelago Kamerun Caroline Islands German New Guinea German Samoa German Solomon Islands German East Africa German South-West Africa Gilbert Islands Jiaozhou Bay Mariana Islands Marshall Islands Nauru Palau Togoland Tianjin Italian Italian Aegean Islands Italian Albania (1918–1920) Italian Albania (1939–1943) Italian concession of Tientsin Italian governorate of Dalmatia Italian governorate of Montenegro Hellenic State Italian Eritrea Italian Somaliland Italian Trans-Juba (briefly; annexed) Libya Italian Tripolitania Italian Cyrenaica Italian Libya Italian East Africa Italian occupation of Majorca (1936-1939) Dutch Dutch Brazil Dutch Ceylon Dutch Formosa Dutch Cape Colony Aruba Bonaire Curaçao Saba Sint Eustatius Sint Maarten Surinam (Dutch colony) Dutch East Indies Dutch New Guinea Portuguese Portuguese Africa Cabinda Ceuta Madeira Portuguese Angola Portuguese Cape Verde Portuguese Guinea Portuguese Mozambique Portuguese São Tomé and Príncipe Fort of São João Baptista de Ajudá Portuguese Asia Portuguese Ceylon Portuguese India Goa Daman Diu Portuguese Macau Portuguese Oceania Flores Portuguese Timor Solor Portuguese South America Colonial Brazil Cisplatina Misiones Orientales Portuguese North America Azores Newfoundland and Labrador Spanish Canary Islands Cape Juby Captaincy General of Cuba Spanish Florida Spanish Louisiana Captaincy General of the Philippines Caroline Islands Mariana Islands Marshall Islands Palau Islands Ifni Río de Oro Saguia el-Hamra Spanish Morocco Spanish Netherlands Spanish Sahara Spanish Sardinia Spanish Sicily Viceroyalty of Peru Captaincy General of Chile Viceroyalty of the Río de la Plata Spanish Guinea Annobón Fernando Po Río Muni Viceroyalty of New Granada Captaincy General of Venezuela Viceroyalty of New Spain Captaincy General of Guatemala Captaincy General of Yucatán Captaincy General of Santo Domingo Captaincy General of Puerto Rico Spanish Formosa Austrian and Austro-Hungarian Bosnia and Herzegovina 1878–1918. Tianjin, China, 1902–1917. 
Austrian Netherlands, 1714–1797 Nicobar Islands, 1778–1783 North Borneo, 1876–1879 Danish Andaman and Nicobar Islands Danish West Indies (now United States Virgin Islands) Danish Norway Faroe Islands Greenland Iceland Serampore Danish Gold Coast Danish India Belgian Belgian Congo Ruanda-Urundi Tianjin Swedish Guadeloupe New Sweden Saint Barthélemy Swedish Gold Coast Dominions of Sweden in continental Europe Norwegian Svalbard Jan Mayen Bouvet Island Queen Maud Land Peter I Island Ottoman Rumelia Ottoman North Africa Ottoman Arabia Australian Papua New Guinea Christmas Island Cocos Islands Coral Sea Islands Heard Island and McDonald Islands Norfolk Island Nauru Australian Antarctic Territory New Zealand Cook Islands Nauru Niue Ross Dependency Balleny Islands Ross Island Scott Island Roosevelt Island Japanese Bonin Islands Karafuto Korea Kuril Islands Kwantung Leased Territory Nanyo Caroline Islands Marshall Islands Northern Mariana Islands Palau Islands Penghu Islands Ryukyu Domain Taiwan Volcano Islands Chinese Dzungaria (Xinjiang) from 1758–present Kashgaria (East Turkistan) from 1884 – 1933, 1934–1944, 1949–present Guangxi (Tusi) Hainan Nansha Islands Xisha Islands Manchuria Inner Mongolia Outer Mongolia (Mongolia & Tuva) during the late Qing dynasty Taiwan Tibet (Kashag) Yunnan (Tusi) Vietnam during the Han, Sui, and Tang dynasties Omani Omani Empire Swahili coast Zanzibar Qatar Bahrain Somalia Socotra Mexican The Californias Texas Central America Clipperton Island Revillagigedo Islands Chiapas Ecuadorian Galápagos Islands Colombian Panama Ecuador Venezuela Archipelago of San Andrés, Providencia and Santa Catalina Argentine Protectorate of Peru (1820–1822) Gobierno del Cerrito (1843–1851) Chile (1817–1818) Paraguay (1810–1811, 1873) Uruguay (1810–1813) Bolivia (1810–1822) Tierra del Fuego Patagonia Falkland Islands and Dependencies (1829–1831, 1832–1833, 1982) Argentine Antarctica Misiones Formosa Puna de Atacama (1839– ) Argentina expedition to California (1818) Equatorial Guinea (1810–1815) Paraguayan colonies Mato Grosso do Sul Formosa Bolivian Puna de Atacama (1825–1839 ceded to Argentina) (1825–1879 ceded to Chile) Acre Ethiopian Eritrea Moroccan Western Sahara Indian Gilgit Baltistan Indonesian East Timor Thai/Siamese Kingdom of Vientiane (1778–1828) Kingdom of Luang Prabang (1778–1893) Kingdom of Champasak (1778–1893) Kingdom of Cambodia (1771–1867) Kedah (1821–1826) Perlis (1821–1836) Ancient Egyptian Canaan Nubia Khedivate Egyptian Anglo-Egyptian Sudan Habesh Eyalet Sidon Eyalet Damascus Eyalet Impact The impacts of colonisation are immense and pervasive. Various effects, both immediate and protracted, include the spread of virulent diseases, unequal social relations, detribalization, exploitation, enslavement, medical advances, the creation of new institutions, abolitionism, improved infrastructure, and technological progress. Colonial practices also spur the spread of colonist languages, literature and cultural institutions, while endangering or obliterating those of native peoples. The native cultures of the colonised peoples can also have a powerful influence on the imperial country. Economy, trade and commerce Economic expansion, sometimes described as the colonial surplus, has accompanied imperial expansion since ancient times. Greek trade networks spread throughout the Mediterranean region while Roman trade expanded with the primary goal of directing tribute from the colonised areas towards the Roman metropole. 
According to Strabo, by the time of emperor Augustus, up to 120 Roman ships would set sail every year from Myos Hormos in Roman Egypt to India. Trade routes later developed under the Ottoman Empire, while in the Americas the Aztec civilisation grew into an extensive empire that, much like the Roman Empire, had the goal of exacting tribute from the conquered colonial areas. For the Aztecs, a significant tribute was the acquisition of sacrificial victims for their religious rituals. On the other hand, European colonial empires sometimes attempted to channel, restrict and impede trade involving their colonies, funnelling activity through the metropole and taxing accordingly. Despite the general trend of economic expansion, the economic performance of former European colonies varies significantly. In "Institutions as a Fundamental Cause of Long-run Growth", economists Daron Acemoglu, Simon Johnson and James A. Robinson compare the economic influence of European colonists on different colonies and study what could explain the huge discrepancies between former European colonies, for example between West African colonies like Sierra Leone on the one hand and Hong Kong and Singapore on the other. According to the paper, economic institutions are the determinant of colonial success because they determine financial performance and the distribution of resources. At the same time, these institutions are also consequences of political institutions – especially of how de facto and de jure political power is allocated. To explain the different colonial cases, we thus need to look first into the political institutions that shaped the economic institutions. For example, one interesting observation is "the Reversal of Fortune": regions that were less developed in 1500, like North America, Australia, and New Zealand, are now much richer than countries that were home to prosperous civilisations before the colonists came, like the Mughals in India and the Incas in the Americas. One explanation offered by the paper focuses on the political institutions of the various colonies: it was less likely for European colonists to introduce new economic institutions where they could benefit quickly from the extraction of resources in the area. Therefore, given a more developed civilisation and denser population, European colonists would rather keep the existing economic systems than introduce an entirely new system; while in places with little to extract, European colonists would rather establish new economic institutions to protect their interests. Political institutions thus gave rise to different types of economic systems, which determined colonial economic performance. European colonisation and development also changed gendered systems of power already in place around the world. In many pre-colonial areas, women maintained power, prestige, or authority through reproductive or agricultural control. For example, in certain parts of sub-Saharan Africa women maintained farmland in which they had usage rights. While men would make political and communal decisions for a community, the women would control the village's food supply or their individual family's land. This allowed women to achieve power and autonomy, even in patrilineal and patriarchal societies. Through the rise of European colonialism came a large push for the development and industrialisation of most economic systems. When working to improve productivity, Europeans focused mostly on male workers. 
Foreign aid arrived in the form of loans, land, credit, and tools to speed up development, but it was allocated only to men. In a more European fashion, women were expected to serve on a more domestic level. The result was a technological, economic, and class-based gender gap that widened over time. Within a colony, the presence of extractive colonial institutions in a given area has been found to have effects on the modern-day economic development, institutions and infrastructure of these areas. Slavery and indentured servitude European nations entered their imperial projects with the goal of enriching the European metropoles. Exploitation of non-Europeans and of other Europeans to support imperial goals was acceptable to the colonisers. Two outgrowths of this imperial agenda were the extension of slavery and of indentured servitude. In the 17th century, nearly two-thirds of English settlers came to North America as indentured servants. European slave traders brought large numbers of African slaves to the Americas by sail. Spain and Portugal had brought African slaves to work in African colonies such as Cape Verde and São Tomé and Príncipe, and then in Latin America, by the 16th century. The British, French and Dutch joined in the slave trade in subsequent centuries. The European colonial system took approximately 11 million Africans to the Caribbean and to North and South America as slaves. Abolitionists in Europe and the Americas protested the inhumane treatment of African slaves, which led to the elimination of the slave trade (and later, of most forms of slavery) by the late 19th century. One (disputed) school of thought points to the role of abolitionism in the American Revolution: while the British colonial metropole started to move towards outlawing slavery, slave-owning elites in the Thirteen Colonies saw this as one of the reasons to fight for their post-colonial independence and for the right to develop and continue a largely slave-based economy. British colonising activity in New Zealand from the early 19th century played a part in ending slave-taking and slave-keeping among the indigenous Māori. On the other hand, British colonial administration in Southern Africa, when it officially abolished slavery in the 1830s, caused rifts in society which arguably perpetuated slavery in the Boer Republics and fed into the philosophy of apartheid. The labour shortages that resulted from abolition inspired European colonisers in Queensland, British Guiana and Fiji (for example) to develop new sources of labour, re-adopting a system of indentured servitude. Indentured servants consented to a contract with the European colonisers. Under their contract, the servant would work for an employer for a term of at least a year, while the employer agreed to pay for the servant's voyage to the colony, possibly pay for the return to the country of origin, and pay the employee a wage as well. The employees became "indentured" to the employer because they owed a debt back to the employer for their travel expense to the colony, which they were expected to pay through their wages. In practice, indentured servants were exploited through terrible working conditions and burdensome debts imposed by their employers, with whom the servants had no means of negotiating the debt once they arrived in the colony. India and China were the largest sources of indentured servants during the colonial era. 
Indentured servants from India travelled to British colonies in Asia, Africa and the Caribbean, and also to French and Portuguese colonies, while Chinese servants travelled to British and Dutch colonies. Between 1830 and 1930, around 30 million indentured servants migrated from India, and 24 million returned to India. China sent more indentured servants to European colonies, and around the same proportion returned to China. Following the Scramble for Africa, an early but secondary focus for most colonial regimes was the suppression of slavery and the slave trade. By the end of the colonial period they were mostly successful in this aim, though slavery persists in Africa and in the world at large with much the same practices of de facto servility despite legislative prohibition. Military innovation Conquering forces have throughout history applied innovation in order to gain an advantage over the armies of the people they aim to conquer. The Greeks developed the phalanx system, which enabled their military units to present themselves to their enemies as a wall, with foot soldiers using shields to cover one another during their advance on the battlefield. Under Philip II of Macedon, they were able to organise thousands of soldiers into a formidable battle force, bringing together carefully trained infantry and cavalry regiments. Alexander the Great exploited this military foundation further during his conquests. The Spanish Empire held a major advantage over Mesoamerican warriors through the use of weapons made of stronger metal, predominantly iron, which was able to shatter the blades of the axes used by the Aztec civilisation and others. The use of gunpowder weapons cemented the European military advantage over the peoples they sought to subjugate in the Americas and elsewhere. End of empire The populations of some colonial territories, such as Canada, enjoyed relative peace and prosperity as part of a European power, at least among the majority. Minority populations such as First Nations peoples and French-Canadians experienced marginalisation and resented colonial practices. Francophone residents of Quebec, for example, were vocal in opposing conscription into the armed services to fight on behalf of Britain during World War I, resulting in the Conscription Crisis of 1917. Other European colonies had much more pronounced conflict between European settlers and the local population. Rebellions broke out in the later decades of the imperial era, such as India's Sepoy Rebellion of 1857. The territorial boundaries imposed by European colonisers, notably in central Africa and South Asia, defied the existing boundaries of native populations that had previously interacted little with one another. European colonisers disregarded native political and cultural animosities, imposing peace upon people under their military control. Native populations were often relocated at the will of the colonial administrators. The Partition of British India in August 1947 led to the independence of India and the creation of Pakistan, and caused much bloodshed during the accompanying migrations: Muslims migrated from India to Pakistan, and Hindus and Sikhs from Pakistan to India, each group moving to the country it had sought independence for. Post-independence population movement In a reversal of the migration patterns experienced during the modern colonial era, post-independence era migration followed a route back towards the imperial country. 
In some cases, this was a movement of settlers of European origin returning to the land of their birth, or to an ancestral birthplace. 900,000 French colonists (known as the Pieds-Noirs) resettled in France following Algeria's independence in 1962. A significant number of these migrants were also of Algerian descent. 800,000 people of Portuguese origin migrated to Portugal after the independence of former colonies in Africa between 1974 and 1979; 300,000 settlers of Dutch origin migrated to the Netherlands from the Dutch West Indies after Dutch military control of the colony ended. After WWII, 300,000 Dutchmen from the Dutch East Indies, of whom the majority were people of Eurasian descent called Indo-Europeans, repatriated to the Netherlands. A significant number later migrated to the US, Canada, Australia and New Zealand. Global travel and migration in general developed at an increasingly brisk pace throughout the era of European colonial expansion. Citizens of the former colonies of European countries may have a privileged status in some respects with regard to immigration rights when settling in the former European imperial nation. For example, rights to dual citizenship may be generous, or larger immigrant quotas may be extended to former colonies. In some cases, the former European imperial nations continue to foster close political and economic ties with former colonies. The Commonwealth of Nations is an organisation that promotes cooperation between and among Britain and its former colonies, the Commonwealth members. A similar organisation exists for former colonies of France, the Francophonie; the Community of Portuguese Language Countries plays a similar role for former Portuguese colonies, and the Dutch Language Union is the equivalent for former colonies of the Netherlands. Migration from former colonies has proven to be problematic for European countries, where the majority population may express hostility to ethnic minorities who have immigrated from former colonies. Cultural and religious conflict has often erupted in France in recent decades, between immigrants from the Maghreb countries of north Africa and the majority population of France. Nonetheless, immigration has changed the ethnic composition of France; by the 1980s, 25% of the total population of "inner Paris" and 14% of the metropolitan region were of foreign origin, mainly Algerian. On colonisers In his 1955 essay Discourse on Colonialism (French: Discours sur le colonialisme), French poet Aimé Césaire evaluates the effects of racist, sexist, and capitalist attitudes and motivations on the civilisations that attempted to colonise other civilisations. In explaining his position, he says, "I admit that it is a good thing to place different civilisations in contact with each other; that it is an excellent thing to blend different worlds; that whatever its own particular genius may be, a civilisation that withdraws into itself atrophies; that for civilisations, exchange is oxygen." To illustrate his point, he explains that colonisation relies on racist and xenophobic frameworks that dehumanise the targets of colonisation and justify their extreme and brutal mistreatment. Every time an immoral act perpetrated by colonisers onto the colonised is justified by racist, sexist, otherwise xenophobic, or capitalist motivations to subjugate a group of people, the colonising civilisation "acquires another dead weight, a universal regression takes place, a gangrene sets in, a centre of infection begins to spread." 
Césaire argues the result of this process is that "a poison [is] instilled into the veins of Europe and, slowly but surely, the continent proceeds toward savagery." Introduced diseases Encounters between explorers and populations in the rest of the world often introduced new diseases, which sometimes caused local epidemics of extraordinary virulence. For example, smallpox, measles, malaria, yellow fever, and others were unknown in pre-Columbian America. Half the native population of Hispaniola in 1518 was killed by smallpox. Smallpox also ravaged Mexico in the 1520s, killing 150,000 in Tenochtitlan alone, including the emperor, and Peru in the 1530s, aiding the European conquerors. Measles killed a further two million Mexican natives in the 17th century. In 1618–1619, smallpox wiped out 90% of the Massachusetts Bay Native Americans. Smallpox epidemics in 1780–1782 and 1837–1838 brought devastation and drastic depopulation among the Plains Indians. Some believe that the death of up to 95% of the Native American population of the New World was caused by Old World diseases. Over the centuries, the Europeans had developed high degrees of immunity to these diseases, while the indigenous peoples had no time to build such immunity. Smallpox decimated the native population of Australia, killing around 50% of indigenous Australians in the early years of British colonisation. It also killed many New Zealand Māori. As late as 1848–49, as many as 40,000 out of 150,000 Hawaiians are estimated to have died of measles, whooping cough and influenza. Introduced diseases, notably smallpox, nearly wiped out the native population of Easter Island. In 1875, measles killed over 40,000 Fijians, approximately one-third of the population. The Ainu population decreased drastically in the 19th century, due in large part to infectious diseases brought by Japanese settlers pouring into Hokkaido. Conversely, researchers have hypothesised that a precursor to syphilis may have been carried from the New World to Europe after Columbus's voyages. The findings suggested Europeans could have carried the nonvenereal tropical bacteria home, where the organisms may have mutated into a more deadly form in the different conditions of Europe. The disease was more frequently fatal than it is today; syphilis was a major killer in Europe during the Renaissance. The first cholera pandemic began in Bengal, then spread across India by 1820. Ten thousand British troops and countless Indians died during this pandemic. Between 1736 and 1834, only some 10% of the East India Company's officers survived to take the final voyage home. Waldemar Haffkine, who mainly worked in India and developed and used vaccines against cholera and bubonic plague in the 1890s, is considered the first microbiologist. According to a 2021 study by Jörg Baten and Laura Maravall on the anthropometric influence of colonialism on Africans, the average height of Africans decreased by 1.1 centimetres upon colonisation and later recovered and increased overall during colonial rule. The authors attributed the decrease to diseases, such as malaria and sleeping sickness, forced labour during the early decades of colonial rule, conflicts, land grabbing, and widespread cattle deaths from the rinderpest viral disease. Countering disease As early as 1803, the Spanish Crown organised a mission (the Balmis expedition) to transport the smallpox vaccine to the Spanish colonies, and establish mass vaccination programs there. 
By 1832, the federal government of the United States established a smallpox vaccination program for Native Americans. Under the direction of Mountstuart Elphinstone, a program was launched to propagate smallpox vaccination in India. From the beginning of the 20th century onwards, the elimination or control of disease in tropical countries became a driving force for all colonial powers. The sleeping sickness epidemic in Africa was arrested by mobile teams systematically screening millions of people at risk. In the 20th century, the world saw the biggest increase in its population in human history, due to the lessening of mortality rates in many countries brought about by medical advances. The world population has grown from 1.6 billion in 1900 to over seven billion today. Botany Colonial botany refers to the body of works concerning the study, cultivation, marketing and naming of the new plants that were acquired or traded during the age of European colonialism. Notable examples of these plants included sugar, nutmeg, tobacco, cloves, cinnamon, Peruvian bark, peppers and tea. This work was a large part of securing financing for colonial ambitions, supporting European expansion and ensuring the profitability of such endeavours. Vasco da Gama and Christopher Columbus were seeking to establish routes to trade spices, dyes and silk from the Moluccas, India and China by sea that would be independent of the established routes controlled by Venetian and Middle Eastern merchants. Naturalists like Hendrik van Rheede, Georg Eberhard Rumphius, and Jacobus Bontius compiled data about eastern plants on behalf of the Europeans. Though Sweden did not possess an extensive colonial network, botanical research based on the work of Carl Linnaeus identified and developed techniques to grow cinnamon, tea and rice locally as an alternative to costly imports. Geography Settlers acted as the link between indigenous populations and the imperial hegemony, thus bridging the geographical, ideological and commercial gap between the colonisers and the colonised. While the extent to which geography as an academic study is implicated in colonialism is contentious, geographical tools such as cartography, shipbuilding, navigation, mining and agricultural productivity were instrumental in European colonial expansion. Colonisers' awareness of the Earth's surface and their abundance of practical skills provided them with knowledge that, in turn, created power. Anne Godlewska and Neil Smith argue that "empire was 'quintessentially a geographical project'". Historical geographical theories such as environmental determinism legitimised colonialism by positing the view that some parts of the world were underdeveloped, which created notions of skewed evolution. Geographers such as Ellen Churchill Semple and Ellsworth Huntington put forward the notion that northern climates bred vigour and intelligence as opposed to those indigenous to tropical climates (see The Tropics), combining environmental determinism and Social Darwinism in their approach. Political geographers also maintain that colonial behaviour was reinforced by the physical mapping of the world, therefore creating a visual separation between "them" and "us". Geographers are primarily focused on the spaces of colonialism and imperialism; more specifically, the material and symbolic appropriation of space enabling colonialism. 
Maps played an extensive role in colonialism; as Bassett put it, "by providing geographical information in a convenient and standardised format, cartographers helped open West Africa to European conquest, commerce, and colonisation". Because the relationship between colonialism and geography was not scientifically objective, cartography was often manipulated during the colonial era. Social norms and values had an effect on the construction of maps. During colonialism, map-makers used rhetoric in their formation of boundaries and in their art. The rhetoric favoured the view of the conquering Europeans; this is evident in the fact that any map created by a non-European was instantly regarded as inaccurate. Furthermore, European cartographers were required to follow a set of rules which led to ethnocentrism: portraying one's own ethnicity at the centre of the map. As J.B. Harley put it, "The steps in making a map – selection, omission, simplification, classification, the creation of hierarchies, and 'symbolisation' – are all inherently rhetorical." A common practice by the European cartographers of the time was to map unexplored areas as "blank spaces". This influenced the colonial powers, as it sparked competition amongst them to explore and colonise these regions. Imperialists aggressively and passionately looked forward to filling these spaces for the glory of their respective countries. The Dictionary of Human Geography notes that cartography was used to empty 'undiscovered' lands of their Indigenous meaning and bring them into spatial existence via the imposition of "Western place-names and borders, [therefore] priming 'virgin' (putatively empty land, 'wilderness') for colonisation (thus sexualising colonial landscapes as domains of male penetration), reconfiguring alien space as absolute, quantifiable and separable (as property)." David Livingstone stresses "that geography has meant different things at different times and in different places" and that we should keep an open mind in regard to the relationship between geography and colonialism instead of identifying boundaries. Geography as a discipline was not and is not an objective science, Painter and Jeffrey argue; rather, it is based on assumptions about the physical world. Comparison of exogeographical representations of ostensibly tropical environments in science fiction art supports this conjecture, finding the notion of the tropics to be an artificial collection of ideas and beliefs that are independent of geography. Versus imperialism Marxism Marxism views colonialism as a form of capitalism, enforcing exploitation and social change. Marx thought that, working within the global capitalist system, colonialism is closely associated with uneven development. It is an "instrument of wholesale destruction, dependency and systematic exploitation producing distorted economies, socio-psychological disorientation, massive poverty and neocolonial dependency". Colonies are constructed into modes of production. The search for raw materials and the current search for new investment opportunities are a result of inter-capitalist rivalry for capital accumulation. Lenin regarded colonialism as the root cause of imperialism, as imperialism was distinguished by monopoly capitalism via colonialism and as Lyal S. 
Sunga explains: "Vladimir Lenin advocated forcefully the principle of self-determination of peoples in his "Theses on the Socialist Revolution and the Right of Nations to Self-Determination" as an integral plank in the programme of socialist internationalism" and he quotes Lenin who contended that "The right of nations to self-determination implies exclusively the right to independence in the political sense, the right to free political separation from the oppressor nation. Specifically, this demand for political democracy implies complete freedom to agitate for secession and for a referendum on secession by the seceding nation." Non-Russian Marxists within the RSFSR and later the USSR, like Sultan Galiev and Vasyl Shakhrai, meanwhile, between 1918 and 1923 and then after 1929, considered the Soviet regime a renewed version of Russian imperialism and colonialism. In his critique of colonialism in Africa, the Guyanese historian and political activist Walter Rodney states: "The decisiveness of the short period of colonialism and its negative consequences for Africa spring mainly from the fact that Africa lost power. Power is the ultimate determinant in human society, being basic to the relations within any group and between groups. It implies the ability to defend one's interests and if necessary to impose one's will by any means available ... When one society finds itself forced to relinquish power entirely to another society that in itself is a form of underdevelopment ... During the centuries of pre-colonial trade, some control over social political and economic life was retained in Africa, in spite of the disadvantageous commerce with Europeans. That little control over internal matters disappeared under colonialism. Colonialism went much further than trade. It meant a tendency towards direct appropriation by Europeans of the social institutions within Africa. Africans ceased to set indigenous cultural goals and standards, and lost full command of training young members of the society. Those were undoubtedly major steps backwards ... Colonialism was not merely a system of exploitation, but one whose essential purpose was to repatriate the profits to the so-called 'mother country'. From an African view-point, that amounted to consistent expatriation of surplus produced by African labour out of African resources. It meant the development of Europe as part of the same dialectical process in which Africa was underdeveloped. Colonial Africa fell within that part of the international capitalist economy from which surplus was drawn to feed the metropolitan sector. As seen earlier, exploitation of land and labour is essential for human social advance, but only on the assumption that the product is made available within the area where the exploitation takes place." According to Lenin, the new imperialism emphasised the transition of capitalism from free trade to a stage of monopoly capitalism to finance capital. He states it is "connected with the intensification of the struggle for the partition of the world". As free trade thrives on exports of commodities, monopoly capitalism thrived on the export of capital amassed by profits from banks and industry. This, to Lenin, was the highest stage of capitalism. He goes on to state that this form of capitalism was doomed to war between the capitalists and the exploited nations, with the former inevitably losing. War is stated to be the consequence of imperialism. As a continuation of this thought G.N. 
Uzoigwe states, "But it is now clear from more serious investigations of African history in this period that imperialism was essentially economic in its fundamental impulses." Liberalism and capitalism Classical liberals were generally in abstract opposition to colonialism and imperialism, including Adam Smith, Frédéric Bastiat, Richard Cobden, John Bright, Henry Richard, Herbert Spencer, H.R. Fox Bourne, Edward Morel, Josephine Butler, W.J. Fox and William Ewart Gladstone. Their philosophies found the colonial enterprise, particularly mercantilism, in opposition to the principles of free trade and liberal policies. Adam Smith wrote in The Wealth of Nations that Britain should grant independence to all of its colonies, and also argued that it would be economically beneficial for British people on average, although the merchants holding mercantilist privileges would lose out. Race and gender During the colonial era, the global process of colonisation served to spread and synthesise the social and political belief systems of the "mother-countries", which often included a belief in a certain natural racial superiority of the race of the mother-country. Colonialism also acted to reinforce these same racial belief systems within the "mother-countries" themselves. Usually also included within the colonial belief systems was a certain belief in the inherent superiority of male over female. This particular belief was often pre-existing amongst the pre-colonial societies, prior to their colonisation. Popular political practices of the time reinforced colonial rule by legitimising European (and/or Japanese) male authority, and also legitimising female and non-mother-country racial inferiority through studies of craniology, comparative anatomy, and phrenology. Biologists, naturalists, anthropologists, and ethnologists of the 19th century were focused on the study of colonised indigenous women, as in the case of Georges Cuvier's study of Sarah Baartman. Such cases embraced a natural superiority and inferiority relationship between the races based on the observations of naturalists from the mother-countries. European studies along these lines gave rise to the perception that African women's anatomy, and especially genitalia, resembled those of mandrills, baboons, and monkeys, thus differentiating colonised Africans from what were viewed as the features of the evolutionarily superior, and thus rightfully authoritarian, European woman. In addition to what would now be viewed as pseudo-scientific studies of race, which tended to reinforce a belief in an inherent mother-country racial superiority, a new supposedly "science-based" ideology concerning gender roles also emerged as an adjunct to the general body of beliefs of inherent superiority of the colonial era. Female inferiority across all cultures was emerging as an idea supposedly supported by craniology, which led scientists to argue that the typical brain size of the female human was, on average, slightly smaller than that of the male, and to infer that female humans must therefore be less developed and less evolutionarily advanced than males. This finding of relative cranial size difference was later attributed to the general difference in size between the typical male and the typical female human body. Within the former European colonies, non-Europeans and women sometimes faced invasive studies by the colonial powers in the interest of the then-prevailing pro-colonial scientific ideology. 
Such seemingly flawed studies of race and gender coincided with the era of colonialism and the initial introduction of foreign cultures, appearances, and gender roles into the gradually widening world-views of the scholars of the mother-countries. Othering Othering is the process of casting persons or groups as separate entities, labelled as different or non-normal through the repeated attribution of certain characteristics. It is the means by which those who discriminate distinguish, label, and categorise those who do not fit the societal norm. Several scholars in recent decades developed the notion of the "other" as an epistemological concept in social theory. For example, postcolonial scholars believed that colonising powers constructed an "other" to be dominated and civilised, and whose resources could be extracted through the colonisation of land. Political geographers explain how colonial/imperial powers "othered" places they wanted to dominate to legalise their exploitation of the land. During and after the rise of colonialism the Western powers perceived the East as the "other", being different and separate from their societal norm. This viewpoint and separation of culture divided Eastern and Western cultures, creating a dominant/subordinate dynamic, each being the "other" to the other. Post-colonialism Post-colonialism (or post-colonial theory) can refer to a set of theories in philosophy and literature that grapple with the legacy of colonial rule. In this sense, one can regard post-colonial literature as a branch of postmodern literature concerned with the political and cultural independence of peoples formerly subjugated in colonial empires. Many practitioners take Edward Saïd's book Orientalism (1978) as the theory's founding work (although French theorists such as Aimé Césaire (1913–2008) and Frantz Fanon (1925–1961) made similar claims decades before Saïd). Saïd analysed the works of Balzac, Baudelaire and Lautréamont, arguing that they helped to shape a societal fantasy of European racial superiority. Writers of post-colonial fiction interact with the traditional colonial discourse, but modify or subvert it; for instance by retelling a familiar story from the perspective of an oppressed minor character in the story. Gayatri Chakravorty Spivak's Can the Subaltern Speak? (1988) gave its name to Subaltern Studies. In A Critique of Postcolonial Reason (1999), Spivak argued that major works of European metaphysics (such as those of Kant and Hegel) not only tend to exclude the subaltern from their discussions, but actively prevent non-Europeans from occupying positions as fully human subjects. Hegel's Phenomenology of Spirit (1807), famous for its explicit ethnocentrism, considers Western civilisation as the most accomplished of all, while Kant also had some traces of racialism in his work. Colonistics The field of colonistics studies colonialism from such viewpoints as those of economics, sociology and psychology. British public opinion about the British Empire A 2014 YouGov survey found that British people are mostly proud of colonialism and the British Empire. Migrations Nations and regions outside Europe with significant populations of European ancestry Africa (see Europeans in Africa) South Africa (European South African): 5.8% of the population Namibia (European Namibians): 6.5% of the population, of which most are Afrikaans-speaking, in addition to a German-speaking minority. Réunion: estimated to be approx. 
25% of the population Zimbabwe (Europeans in Zimbabwe) Algeria (Pied-noir) Botswana: 3% of the population Kenya (Europeans in Kenya) Mauritius (Franco-Mauritian) Morocco (European Moroccans) Ivory Coast (French people) Senegal Canary Islands (Spaniards), known as Canarians. Seychelles (Franco-Seychellois) Somalia (Italian Somalis) Eritrea (Italian Eritreans) Saint Helena (UK) including Tristan da Cunha (UK): predominantly European. Eswatini: 3% of the population Tunisia (European Tunisians) Asia Siberia (Russians, Germans and Ukrainians) Kazakhstan (Russians in Kazakhstan, Germans of Kazakhstan): 30% of the population Uzbekistan (Russians and other Slavs): 6% of the population Kyrgyzstan (Russians and other Slavs): 14% of the population Turkmenistan (Russians and other Slavs): 4% of the population Tajikistan (Russians and other Slavs): 1% of the population Hong Kong Philippines (Spanish Ancestry): 3% of the population China (Russians in China) Indian subcontinent (Anglo-Indians) Latin America (see White Latin American) Argentina (European Immigration to Argentina): 97% european and mestizo of the population Bolivia: 15% of the population Brazil (White Brazilian): 47% of the population Chile (White Chilean): 60–70% of the population. Colombia (White Colombian): 37% of the population Costa Rica: 83% of the population Cuba (White Cuban): 65% of the population Dominican Republic: 16% of the population Ecuador: 7% of the population Honduras: 1% of the population El Salvador: 12% of the population Mexico (White Mexican): 9% or ~17% of the population. and 70–80% more as Mestizos. Nicaragua: 17% of the population Panama: 10% of the population Puerto Rico: approx. 80% of the population Peru (European Peruvian): 15% of the population Paraguay: approx. 20% of the population Uruguay (White Uruguayan): 88% of the population Venezuela (White Venezuelan): 42% of the population Rest of the Americas Bahamas: 12% of the population Barbados (White Barbadian): 4% of the population Bermuda: 34% of the population Canada (European Canadians): 80% of the population Falkland Islands: mostly of British descent. French Guiana: 12% of the population Greenland: 12% of the population Martinique: 5% of the population Saint Barthélemy Trinidad and Tobago: 1% of the population United States (European American): 72% of the population, including Hispanic and Non-Hispanic Whites. Oceania (see Europeans in Oceania) Australia (European Australians): 90% of the population New Zealand (European New Zealanders): 78% of the population New Caledonia (Caldoche): 35% of the population French Polynesia: (Zoreilles) 10% of the population Hawaii: 25% of the population Christmas Island: approx. 20% of the population. Guam: 7% of the population Norfolk Island: 9→5% of the population Numbers of European settlers in the colonies (1500–1914) By 1914, Europeans had migrated to the colonies in the millions. Some intended to remain in the colonies as temporary settlers, mainly as military personnel or on business. Others went to the colonies as immigrants. British people were by far the most numerous population to migrate to the colonies: 2.5 million settled in Canada; 1.5 million in Australia; 750,000 in New Zealand; 450,000 in the Union of South Africa; and 200,000 in India. French citizens also migrated in large numbers, mainly to the colonies in the north African Maghreb region: 1.3 million settled in Algeria; 200,000 in Morocco; 100,000 in Tunisia; while only 20,000 migrated to French Indochina. 
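These British and French figures lend themselves to a quick tally before turning to the smaller Dutch, German, Portuguese and Spanish flows below. The following Python sketch is purely illustrative: the dictionary layout and variable names are not from any source, and it uses only the numbers quoted in this section.

# Settler counts by destination, as quoted above (to 1914).
british_settlers = {
    "Canada": 2_500_000,
    "Australia": 1_500_000,
    "New Zealand": 750_000,
    "Union of South Africa": 450_000,
    "India": 200_000,
}
french_settlers = {
    "Algeria": 1_300_000,
    "Morocco": 200_000,
    "Tunisia": 100_000,
    "French Indochina": 20_000,
}

# Roll the per-destination counts up into per-empire totals.
for empire, counts in [("British", british_settlers), ("French", french_settlers)]:
    print(f"{empire} settlers: {sum(counts.values()):,}")

# Expected output:
# British settlers: 5,400,000
# French settlers: 1,620,000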
Dutch and German colonies saw relatively little European migration, since Dutch and German colonial expansion focused on commercial goals rather than settlement. Portugal sent 150,000 settlers to Angola, 80,000 to Mozambique, and 20,000 to Goa. During the Spanish Empire, approximately 550,000 Spanish settlers migrated to Latin America.

See also

African independence movements
Age of Discovery
Anti-imperialism
Chartered company
Chinese imperialism
Christianity and colonialism
Civilising mission
Colonial empire
Colonialism and the Olympic Games
Coloniality of power
Colonial war
Cultural colonialism
Decoloniality
Decolonization of the Americas
Developmentalism
Direct colonial rule
Empire of Liberty
European colonization of Africa
European colonization of the Americas
European colonization of Micronesia
European colonisation of Southeast Asia
French law on colonialism
German eastward expansion
Global empire
Historiography of the British Empire
Impact of Western European colonialism and colonisation
International relations of the Great Powers (1814–1919)
Muslim conquests
Orientalism
Pluricontinental
Protectorate
Satellite state
Soviet Empire
Stranger king (concept)
Western imperialism in Asia

References

Further reading

Albertini, Rudolf von. European Colonial Rule, 1880–1940: The Impact of the West on India, Southeast Asia, and Africa (1982), 581 pp.
Benjamin, Thomas, ed. Encyclopedia of Western Colonialism Since 1450 (2006).
Cooper, Frederick. Colonialism in Question: Theory, Knowledge, History (2005).
Cotterell, Arthur. Western Power in Asia: Its Slow Rise and Swift Fall, 1415–1999 (2009), popular history.
Getz, Trevor R. and Heather Streets-Salter, eds. Modern Imperialism and Colonialism: A Global Perspective (2010).
LeCour Grandmaison, Olivier. Coloniser, Exterminer – Sur la guerre et l'Etat colonial [To Colonise, to Exterminate – On War and the Colonial State]. Fayard, 2005.
Lindqvist, Sven. Exterminate All the Brutes (1992); New Press reprint edition, June 1997.
Morris, Richard B. and Graham W. Irwin, eds. Harper Encyclopedia of the Modern World: A Concise Reference History from 1760 to the Present (1970).
Ness, Immanuel and Zak Cope, eds. The Palgrave Encyclopedia of Imperialism and Anti-Imperialism (2 vol., 2015), 1456 pp.
Nuzzo, Luigi. "Colonial Law", European History Online, Mainz: Institute of European History, 2010, retrieved December 17, 2012.
Osterhammel, Jürgen. Colonialism: A Theoretical Overview. Princeton, NJ: M. Wiener, 1997.
Page, Melvin E. et al., eds. Colonialism: An International Social, Cultural, and Political Encyclopedia (3 vol., 2003).
Petringa, Maria. Brazza, A Life for Africa (2006).
Prashad, Vijay. The Darker Nations: A People's History of the Third World. The New Press, 2007. ISBN 978-1-56584-785-9.
Rönnbäck, K. & Broberg, O. Capital and Colonialism: The Return on British Investments in Africa 1869–1969 (Palgrave Studies in Economic History, 2019).
Schill, Pierre. Réveiller l'archive d'une guerre coloniale. Photographies et écrits de Gaston Chérau, correspondant de guerre lors du conflit italo-turc pour la Libye (1911–1912) [Awakening the archive of a colonial war: photographs and writings of a French war correspondent during the Italo-Turkish war in Libya (1911–1912)]. Créaphis, 480 pp., 2018. With contributions from art historian Caroline Recher, critic Smaranda Olcèse, writer Mathieu Larnaudie and historian Quentin Deluermoz.
Stuchtey, Benedikt. "Colonialism and Imperialism, 1450–1950", European History Online, Mainz: Institute of European History, 2011, retrieved July 13, 2011.
Townsend, Mary Evelyn. European Colonial Expansion since 1871 (1941).
U.S. Tariff Commission. Colonial Tariff Policies (1922), a worldwide survey, 922 pp.
Ab Imperio
Wendt, Reinhard. "European Overseas Rule", European History Online, Mainz: Institute of European History, 2011, retrieved June 13, 2012.

Primary sources

Conrad, Joseph. Heart of Darkness, 1899.
Fanon, Frantz. The Wretched of the Earth. Preface by Jean-Paul Sartre; translated by Constance Farrington. London: Penguin Books, 2001.
Las Casas, Bartolomé de. A Short Account of the Destruction of the Indies (1542, published in 1552).
https://en.wikipedia.org/wiki/Colonial
Colonial
Colonial or The Colonial may refer to:

Colonial, of, relating to, or characteristic of a colony or a colony (biology)

Architecture
American colonial architecture
French Colonial
Spanish Colonial architecture

Automobiles
Colonial (1920 automobile), the first American automobile with four-wheel brakes
Colonial (Shaw automobile), a rebranded Shaw sold from 1921 until 1922
Colonial (1921 automobile), a car from Boston which was sold from 1921 until 1922

Commerce
Colonial Pipeline, the largest oil pipeline network in the U.S.
Inmobiliaria Colonial, a Spanish corporation, which includes companies in the domains of real estate

Places
The Colonial (Indianapolis, Indiana)
The Colonial (Mansfield, Ohio), a National Register of Historic Places listing in Richland County, Ohio
Ciudad Colonial (Santo Domingo), a historic central neighborhood of Santo Domingo
Colonial Country Club (Memphis), a golf course in Tennessee
Colonial Country Club (Fort Worth), a golf course in Texas
Fort Worth Invitational or The Colonial, a PGA golf tournament

Trains
Colonial (PRR train), a Pennsylvania Railroad run between Washington, DC and New York City, last operated in 1973 by Amtrak
Colonial (Amtrak train), an Amtrak train that ran between Newport News, Virginia and Boston from 1976 to 1992, and between Richmond, Virginia and New York City from 1997 to 1999

See also
Colonial history of the United States, the period of American history from the 17th century to 1776, under the rule of Great Britain, France and Spain
Colonial Hotel (disambiguation)
Colonial Revival architecture
Colonial Theatre (disambiguation)
Colonial troops, any of various military units recruited from, or used as garrison troops in, colonial territories
Colonialism, the extension of political control to new areas
Colonials (disambiguation)
Colonist, a person who has migrated to an area and established a permanent residence there to colonize the area
History of Australia
Spanish colonization of the Americas, the period of history of Spanish rule over most of the Americas, from the 15th century through the late 19th century
https://en.wikipedia.org/wiki/ColecoVision
ColecoVision
ColecoVision is a second-generation home video-game console developed by Coleco and launched in North America in August 1982. It was released a year later in Europe by CBS Electronics as the CBS ColecoVision. The console offered an experience closer to more powerful arcade video games than competitors such as the Atari 2600 and Intellivision. The initial catalog of twelve games on ROM cartridge included the first home version of Nintendo's Donkey Kong as the pack-in game. Approximately 136 games were published between 1982 and 1984, including Sega's Zaxxon and some ports of lesser-known arcade games that found a larger audience on the console, such as Lady Bug, Cosmic Avenger, and Venture.

Coleco released a series of hardware add-ons and special controllers to expand the capabilities of the console. "Expansion Module #1" allows the system to play Atari 2600 cartridges. A later module converts the ColecoVision into the Coleco Adam home computer. The ColecoVision was discontinued in 1985 when Coleco withdrew from the video game market, having already contemplated shifting focus to its Cabbage Patch Kids success after the costly failure of the Coleco Adam computer.

Development

Coleco entered the video game market in 1976, during the dedicated-game home console period, with its line of Telstar consoles. When that market became oversaturated over the next few years, the company nearly went bankrupt, but it found a successful product in handheld electronic games, with products that beat out those of the then market leader, Mattel. The company also developed a line of miniaturized tabletop arcade video games under license from arcade game makers including Sega, Bally, Midway, and Nintendo. Coleco was able to survive on sales of its electronic games through to 1982, but that market itself began to wane, and company president Arnold Greenberg was still interested in producing a home video game console.

According to Eric Bromley, who led the engineering of the ColecoVision, Greenberg had wanted to enter the programmable home console market with arcade-quality games, but the cost of components had been a limiting factor. As early as 1979, Bromley had drawn up specifications for a system using a Texas Instruments video chip and a General Instrument audio chip, but he could not get the go-ahead due to the cost of RAM. Around 1981, Bromley saw an article in The Wall Street Journal asserting that the price of RAM had fallen; after working the numbers, he found that the system now fell within their cost margins. Within ten minutes of reporting this to Greenberg, they had established the working name "ColecoVision" as they began a more thorough design, a name the marketing department was never able to improve upon.

Coleco recognized that licensed conversions of arcade games had worked for Atari in selling the Atari VCS, so around 1981 it approached Nintendo for potential access to their arcade titles. Bromley described a tense set of meetings, conducted under typical Japanese customs, with Nintendo's president Hiroshi Yamauchi, in which he sought to negotiate for game rights, though Yamauchi offered only seemingly obscure titles. After a meal with Yamauchi one day, Bromley excused himself to the restroom and happened upon one of the first Donkey Kong cabinets, which had yet to be released to Western countries.
Knowing this game would likely be a hit, Bromley arranged a meeting with Yamauchi for the following day and requested the exclusive rights to Donkey Kong; Yamauchi offered the rights on the condition that Coleco make an upfront payment that same day, plus a royalty on each unit sold. Greenberg agreed, though, in keeping with Japanese custom, Bromley did not have a formal contract from Nintendo on his return. By the time of that year's Consumer Electronics Show, which Yamauchi was attending, Bromley learned from Yamauchi's daughter and translator that Yamauchi had apparently given the rights to Atari. With the daughter's help, Bromley was able to get Yamauchi to sign a formal contract affirming the rights to Coleco. Coleco's announcement that it would bundle Donkey Kong with the console was initially met with surprise and skepticism, with journalists and retailers questioning why the company would give away its most anticipated home video game with the console.

Release

The ColecoVision was released in August 1982. By Christmas 1982, Coleco had sold more than 500,000 units, in part on the strength of Donkey Kong as the bundled game. The ColecoVision's main competitor was the less commercially successful Atari 5200. Sales quickly passed 1 million in early 1983. The ColecoVision was distributed by CBS Electronics outside of North America, branded as the CBS ColecoVision. In Europe the console was released in July 1983, nearly one year after the North American release.

By the beginning of 1984, quarterly sales of the ColecoVision had dramatically decreased. In January 1985, Coleco discontinued the Adam, the home computer expansion for the ColecoVision. By mid-1985, Coleco planned to withdraw from the video game market, and the ColecoVision was officially discontinued by October. Total sales are uncertain but ultimately exceeded 2 million consoles, with the console continuing to sell modestly up until its discontinuation.

In 1983, Spectravideo announced the SV-603 ColecoVision Video Game Adapter for its SV-318 computer. The company stated that the $70 product allowed users to "enjoy the entire library of exciting ColecoVision video-game cartridges".

Hardware

The ColecoVision is based around the Zilog Z80 CPU and a variant of the Texas Instruments TMS9918 video chip, which had been introduced in 1979. On NTSC ColecoVision consoles, all first-party cartridges and most third-party software titles feature a 12.7-second pause before presenting the game select screen. CBS Electronics reduced this pause in the BIOS to 3.3 seconds for its PAL and SECAM ColecoVision consoles.

Expansion Modules and accessories

From its introduction, Coleco touted the ColecoVision's hardware expandability by highlighting the Expansion Module Interface on the front of the unit. These hardware expansion modules and accessories were sold separately.

Atari 2600 expansion

Expansion Module #1 makes the ColecoVision compatible with Atari 2600 cartridges and controllers. It leveraged the fact that the 2600 used largely off-the-shelf components, and it was effectively a complete set of 2600 electronics, including a reverse-engineered equivalent of the 2600's sole custom chip, the TIA. The ColecoVision console did not do any translation or processing of the game code on 2600 cartridges; it only provided power and clock input to, and audio/video output from, the expansion module, which was otherwise entirely self-contained and could be thought of as the first Atari 2600 clone console.
Functionally, this gave the ColecoVision the largest software library of any console of its day. The expansion module prompted legal action from Atari; Coleco and Atari settled out of court, with Coleco becoming licensed under Atari's patents. The royalty-based license also applied to Coleco's Gemini game system, a stand-alone clone of the 2600.

Driving controller

Expansion Module #2 is a driving controller (steering wheel / gas pedal) that came packaged with the cartridge Turbo. The gas pedal is merely a simple on/off switch. Although Coleco called the driving controller an expansion module, it actually plugs into the controller port, not the Expansion Module Interface. The driving controller is also compatible with the cartridges Destructor, Bump 'n' Jump, Pitstop, and The Dukes of Hazzard.

Adam computer expansion

Expansion Module #3 converts the ColecoVision into the Adam computer, complete with keyboard, digital data pack (DDP) cassette drive, 64 KB RAM, and printer.

Roller Controller

The Roller Controller is a trackball that came packaged with the cartridge Slither, a conversion of the arcade game. The Roller Controller uses a special power connector that is not compatible with Expansion Module #3 (the Adam computer); Coleco mailed an adapter to owners of both units who complained. The other cartridge programmed to use the Roller Controller is Victory. A joystick mode switch on the Roller Controller allows it to be used with all cartridges, including WarGames, Omega Race, and Atarisoft's Centipede.

Super Action Controller

The Super Action Controller Set, available in September 1983, is a set of two handheld joystick controllers that came packaged with the cartridge Super Action Baseball. Each controller has a ball-top joystick, four finger-triggered action buttons, a 12-button numeric keypad, and a "speed roller". The cartridges Super Action Football, Rocky Super Action Boxing, and a conversion of the arcade game Front Line are also designed to be used with the Super Action Controller.

Unreleased

Expansion Module #3 was originally the Super Game Module. It was advertised for an August 1983 release but was ultimately cancelled and replaced with the Adam computer expansion. The Super Game Module added a tape drive known as the Exatron Stringy Floppy, with 128 KB capacity, and additional RAM, said to be 30 KB, to load and execute programs from tape. Games could be distributed on tiny tapes, called wafers, and be much larger than the 16 KB or 32 KB ROM cartridges of the day. Super Donkey Kong (with all screens and animations), Super Donkey Kong Jr., and Super Smurf Rescue were demonstrated with the Super Game Module. The Adam computer expansion, with its 256 KB tape drive and 64 KB RAM, fulfilled the specifications promised by the Super Game Module.

Games

Legacy

Masayuki Uemura, head of Famicom development, stated that the ColecoVision set the bar that influenced how he approached the creation of the Famicom. During the creation of the Nintendo Entertainment System, Takao Sawano, chief manager of the project, brought a ColecoVision home to his family, who were impressed by the system's capability to produce smooth graphics, which contrasted with the flickering commonly seen on Atari 2600 games. In 1986, Bit Corporation produced a ColecoVision clone called the Dina, which was sold in the United States by Telegames as the Telegames Personal Arcade.
IGN named the ColecoVision their 12th-best video-game console out of their list of 25, citing "its incredible accuracy in bringing current-generation arcade hits home." In 1996, the first homebrew ColecoVision game was released: a Tetris clone titled Kevtris. In 1997, Telegames released Personal Arcade Vol. 1, a collection of ColecoVision games for Microsoft Windows, followed in 1998 by Colecovision Hits Volume One.

In 2012, Opcode Games released their own Super Game Module expansion, which increases RAM from 16 KB to 32 KB and adds four additional sound channels. This expansion brings the ColecoVision close to the MSX architecture standard, allowing MSX software to be more easily ported. In 2014, AtGames began producing the ColecoVision Flashback console, which includes 60 games, but not the original pack-in game, Donkey Kong.

References

External links

The History of ColecoVision Game System
ColecoVision Zone – comprehensive archive of photos and documents
https://en.wikipedia.org/wiki/Creation%20myth
Creation myth
A creation myth (or cosmogonic myth) is a symbolic narrative of how the world began and how people first came to inhabit it. While in popular usage the term myth often refers to false or fanciful stories, members of cultures often ascribe varying degrees of truth to their creation myths. In the society in which it is told, a creation myth is usually regarded as conveying profound truths – metaphorically, symbolically, historically, or literally. They are commonly, although not always, considered cosmogonical myths – that is, they describe the ordering of the cosmos from a state of chaos or amorphousness.

Creation myths often share several features. They are often considered sacred accounts and can be found in nearly all known religious traditions. They are all stories with a plot and characters who are either deities, human-like figures, or animals, who often speak and transform easily. They are often set in a dim and nonspecific past that historian of religion Mircea Eliade termed in illo tempore ('at that time'). Creation myths address questions deeply meaningful to the society that shares them, revealing their central worldview and the framework for the self-identity of the culture and individual in a universal context. Creation myths develop in oral traditions and therefore typically have multiple versions; found throughout human culture, they are the most common form of myth.

Definitions

Creation myth definitions from modern references:

A "symbolic narrative of the beginning of the world as understood in a particular tradition and community. Creation myths are of central importance for the valuation of the world, for the orientation of humans in the universe, and for the basic patterns of life and culture."

"Creation myths tell us how things began. All cultures have creation myths; they are our primary myths, the first stage in what might be called the psychic life of the species. As cultures, we identify ourselves through the collective dreams we call creation myths, or cosmogonies. … Creation myths explain in metaphorical terms our sense of who we are in the context of the world, and in so doing they reveal our real priorities, as well as our real prejudices. Our images of creation say a great deal about who we are."

A "philosophical and theological elaboration of the primal myth of creation within a religious community. The term myth here refers to the imaginative expression in narrative form of what is experienced or apprehended as basic reality … The term creation refers to the beginning of things, whether by the will and act of a transcendent being, by emanation from some ultimate source, or in any other way."

Religion professor Mircea Eliade defined the word myth in terms of creation: Myth narrates a sacred history; it relates an event that took place in primordial Time, the fabled time of the "beginnings." In other words, myth tells how, through the deeds of Supernatural Beings, a reality came into existence, be it the whole of reality, the Cosmos, or only a fragment of reality – an island, a species of plant, a particular kind of human behavior, an institution.

Meaning and function

All creation myths are in one sense etiological because they attempt to explain how the world formed and where humanity came from. Myths attempt to explain the unknown and sometimes teach a lesson.
Ethnologists and anthropologists who study origin myths say that in the modern context theologians try to discern humanity's meaning from revealed truths and scientists investigate cosmology with the tools of empiricism and rationality, but creation myths define human reality in very different terms. In the past, historians of religion and other students of myth thought of such stories as forms of primitive or early-stage science or religion and analyzed them in a literal or logical sense. Today, however, they are seen as symbolic narratives which must be understood in terms of their own cultural context. Charles Long writes: "The beings referred to in the myth – gods, animals, plants – are forms of power grasped existentially. The myths should not be understood as attempts to work out a rational explanation of deity."

While creation myths are not literal explications, they do serve to define an orientation of humanity in the world in terms of a birth story. They provide the basis of a worldview that reaffirms and guides how people relate to the natural world, to any assumed spiritual world, and to each other. A creation myth acts as a cornerstone for distinguishing primary reality from relative reality, the origin and nature of being from non-being. In this sense cosmogonic myths serve as a philosophy of life – but one expressed and conveyed through symbol rather than through systematic reason. And in this sense they go beyond etiological myths (which explain specific features in religious rites, natural phenomena or cultural life). Creation myths also help to orient human beings in the world, giving them a sense of their place in the world and the regard that they must have for humans and nature.

Historian David Christian has summarised issues common to multiple creation myths: Each beginning seems to presuppose an earlier beginning. ... Instead of meeting a single starting point, we encounter an infinity of them, each of which poses the same problem. ... There are no entirely satisfactory solutions to this dilemma. What we have to find is not a solution but some way of dealing with the mystery .... And we have to do so using words. The words we reach for, from God to gravity, are inadequate to the task. So we have to use language poetically or symbolically; and such language, whether used by a scientist, a poet, or a shaman, can easily be misunderstood.

Classification

Mythologists have applied various schemes to classify creation myths found throughout human cultures. Eliade and his colleague Charles Long developed a classification based on some common motifs that reappear in stories the world over. The classification identifies five basic types (a small schematic encoding of the typology follows the list):

Creation ex nihilo, in which the creation is through the thought, word, dream or bodily secretions of a divine being.
Earth-diver creation, in which a diver, usually a bird or amphibian sent by a creator, plunges to the seabed through a primordial ocean to bring up sand or mud which develops into a terrestrial world.
Emergence myths, in which progenitors pass through a series of worlds and metamorphoses until reaching the present world.
Creation by the dismemberment of a primordial being.
Creation by the splitting or ordering of a primordial unity, such as the cracking of a cosmic egg or the bringing of order from chaos.
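As a compact illustration of how the Eliade–Long typology can be treated as a simple classification scheme, the Python sketch below encodes the five types and tags a few myths mentioned elsewhere in this article. The enum names and the example assignments are illustrative only – a reading of this article's own examples, not a scholarly classification.

from enum import Enum, auto

class CreationMythType(Enum):
    """The five basic types in the Eliade–Long classification."""
    EX_NIHILO = auto()           # thought, word, dream or bodily secretion of a divine being
    EARTH_DIVER = auto()         # a diver brings up sand or mud from primordial waters
    EMERGENCE = auto()           # progenitors pass through prior worlds into this one
    DISMEMBERMENT = auto()       # world made from the body of a primordial being
    SPLITTING_ORDERING = auto()  # cosmic egg cracked, or order brought from chaos

# Examples drawn from later sections of this article (assignments are illustrative).
examples = {
    "Rig Veda accounts": CreationMythType.EX_NIHILO,
    "Iroquois sky-woman narrative": CreationMythType.EARTH_DIVER,
    "Genesis 1 (creation-from-chaos reading)": CreationMythType.SPLITTING_ORDERING,
    "Völuspá (Norse)": CreationMythType.DISMEMBERMENT,
}

for myth, kind in examples.items():
    print(f"{myth}: {kind.name}")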
Marta Weigle further developed and refined this typology to highlight nine themes, adding elements such as deus faber (a creation crafted by a deity), creation from the work of two creators working together or against each other, creation from sacrifice, and creation from division/conjugation, accretion/conjunction, or secretion.

An alternative system, based on six recurring narrative themes, was designed by Raymond Van Over:

Primeval abyss, an infinite expanse of waters or space.
An originator deity which is awakened, or an eternal entity within the abyss.
An originator deity poised above the abyss.
A cosmic egg or embryo.
An originator deity creating life through sound or word.
Life generating from the corpse or dismembered parts of an originator deity.

Ex nihilo

The myth that God created the world out of nothing – ex nihilo – is central today to Judaism, Christianity and Islam, and the medieval Jewish philosopher Maimonides felt it was the only concept that the three religions shared. Nonetheless, the concept is not found anywhere in the Hebrew Bible. The authors of Genesis 1 were concerned not with the origins of matter (the material which God formed into the habitable cosmos), but with assigning roles so that the cosmos should function. In the early 2nd century CE, early Christian scholars were beginning to see a tension between the idea of world-formation and the omnipotence of God, and by the beginning of the 3rd century creation ex nihilo had become a fundamental tenet of Christian theology.

Ex nihilo creation is found in creation stories from ancient Egypt, the Rig Veda, and many animistic cultures in Africa, Asia, Oceania and North America. In most of these stories, the world is brought into being by the speech, dream, breath, or pure thought of a creator, but creation ex nihilo may also take place through a creator's bodily secretions. The literal translation of the phrase ex nihilo is "from nothing", but in many creation myths the line is blurred as to whether the creative act would be better classified as creation ex nihilo or creation from chaos. In ex nihilo creation myths, the potential and the substance of creation spring from within the creator. Such a creator may or may not exist within physical surroundings such as darkness or water, but does not create the world from them, whereas in creation from chaos the substance used for creation is pre-existing within the unformed void.

Creation from chaos

In creation from chaos myths, initially there is nothing but a formless, shapeless expanse. In these stories the word "chaos" means "disorder", and this formless expanse, which is also sometimes called a void or an abyss, contains the material with which the created world will be made. Chaos may be described as having the consistency of vapor or water, dimensionless, and sometimes salty or muddy. These myths associate chaos with evil and oblivion, in contrast to "order" (cosmos), which is the good. The act of creation is the bringing of order from disorder, and in many of these cultures it is believed that at some point the forces preserving order and form will weaken and the world will once again be engulfed by the abyss. One example is the Genesis creation narrative from the first chapter of the Book of Genesis.

World parent

There are two types of world parent myths, both describing a separation or splitting of a primeval entity, the world parent or parents. One form describes the primeval state as an eternal union of two parents, and the creation takes place when the two are pulled apart.
The two parents are commonly identified as Sky (usually male) and Earth (usually female), who were so tightly bound to each other in the primeval state that no offspring could emerge. These myths often depict creation as the result of a sexual union and serve as a genealogical record of the deities born from it. In the second form of world parent myths, creation itself springs from dismembered parts of the body of the primeval being. Often, in these stories, the limbs, hair, blood, bones, or organs of the primeval being are somehow severed or sacrificed to transform into sky, earth, animal or plant life, and other worldly features. These myths tend to emphasize creative forces as animistic in nature rather than sexual, and depict the sacred as the elemental and integral component of the natural world. One example of this is the Norse creation myth described in Völuspá, the first poem of the Poetic Edda.

Emergence

In emergence myths, humanity emerges from another world into the one they currently inhabit. The previous world is often considered the womb of the earth mother, and the process of emergence is likened to the act of giving birth. The role of midwife is usually played by a female deity, like the Spider Woman of several mythologies of Indigenous peoples in the Americas. Male characters rarely figure into these stories, and scholars often consider them in counterpoint to male-oriented creation myths, like those of the ex nihilo variety.

Emergence myths commonly describe the creation of people and/or supernatural beings as a staged ascent or metamorphosis from nascent forms through a series of subterranean worlds to arrive at their current place and form. Often the passage from one world or stage to the next is impelled by inner forces, a process of germination or gestation from earlier, embryonic forms. The genre is most commonly found in Native American cultures, where the myths frequently link the final emergence of people from a hole opening to the underworld to stories about their subsequent migrations and eventual settlement in their current homelands.

Earth-diver

The earth-diver is a common character in various traditional creation myths. In these stories a supreme being usually sends an animal (most often a type of bird, but in some narratives also crustaceans, insects and fishes) into the primal waters to find bits of sand or mud with which to build habitable land. Some scholars interpret these myths psychologically, while others interpret them cosmogonically. In both cases emphasis is placed on beginnings emanating from the depths.

Motif distribution

According to Gudmund Hatt and Tristram P. Coffin, earth-diver myths are common in Native American folklore, among the following populations: Shoshone, Fox people, Blackfoot, Chipewyan, Newettee, Yokuts of California, Mandan, Hidatsa, Cheyenne, Arapaho, Ojibwe, Yuchi and Cherokee. American anthropologist Gladys Reichard located the distribution of the motif across "all parts of North America", save for "the extreme north, northeast, and southwest". In a 1977 study, anthropologist Victor Barnouw surmised that the earth-diver motif appeared in "hunting-gathering societies", mainly among northerly groups such as the Hare, Dogrib, Kaska, Beaver, Carrier, Chipewyan, Sarsi, Cree and Montagnais. Similar tales are also found among the Chukchi and Yukaghir, the Tatars and many Finnic traditions, as well as among the Buryat and the Samoyed.
In addition, the earth-diver motif also exists in narratives from Eastern Europe, namely in Romani, Romanian, Slavic (Bulgarian, Polish, Ukrainian, and Belarusian) and Lithuanian mythological traditions. The pattern of distribution of these stories suggests that they have a common origin in the eastern Asiatic coastal region, spreading as peoples migrated west into Siberia and east to the North American continent. However, there are examples of this mytheme found well outside of this boreal distribution pattern, for example the West African Yoruba creation myth of Obatala and Oduduwa.

Native American narrative

Characteristic of many Native American myths, earth-diver creation stories begin as beings and potential forms linger asleep or suspended in the primordial realm. The earth-diver is among the first of them to awaken and lay the necessary groundwork by building suitable lands where the coming creation will be able to live. In many cases, these stories describe a series of failed attempts to make land before the solution is found.

Among the indigenous peoples of the Americas, the earth-diver cosmogony is attested in Iroquois mythology: a female sky deity falls from the heavens, and certain animals – the beaver, the otter, the duck and the muskrat – dive into the waters to fetch mud to construct an island.

In a similar story from the Seneca, people lived in a sky realm. One day, the chief's daughter was afflicted with a mysterious illness, and the only cure recommended for her (revealed in a dream) was for her to lie beside a tree that was then to be dug up. The people did so, but a man complained that the tree was their livelihood, and kicked the girl through the hole. She fell from the sky to a world of only water, but was rescued by waterfowl. A turtle offered to bear her on its shell, but asked where a permanent dwelling place for her might be found. The animals decided to create land, and the toad dove into the depths of the primal sea to fetch pieces of soil. The toad put them on the turtle's back, which grew larger with every deposit of soil.

In another version, from the Wyandot, the Wyandot lived in heaven. The daughter of the Big Chief (or Mighty Ruler) was sick, so the medicine man recommended digging up the wild apple tree that stood next to the Lodge of the Mighty Ruler, because the remedy would be found on its roots. However, once the tree had been dug up, the ground began to sink away, and the falling treetop caught the sick daughter and carried her down with it. As the girl fell from the skies, two swans rescued her on their backs. The birds decided to summon all the Swimmers and the Water Tribes. Many volunteered to dive into the Great Water to fetch bits of earth from the bottom of the sea, but only the toad (female, in this story) was successful.

See also

Anthropology of religion
Australian Aboriginal religion and mythology
Big Bang
Creationism
Creator deity
Evolutionary origin of religions
Mother goddess
Origin myth
Origin-of-death myth
Poles in mythology
Religious cosmology
Theism
Xirang
Young Earth creationism

References

Bibliography

Further reading

On the earth-diver motif:

Berezkin, Yuri (2007). ""Earth-diver" and "emergence from under the earth": Cosmogonic tales as evidence in favor of the heterogenic origins of the American Indians". In: Archaeology, Ethnology and Anthropology of Eurasia 32: 110–123. doi:10.1134/S156301100704010X.
Dundes, Alan. "Earth-Diver: Creation of the Mythopoeic Male". In: American Anthropologist, New Series, 64, no. 5 (1962): 1032–1051. Accessed August 20, 2021.
Kirtley, Bacil F. "A Bohol Version of the Earth-Diver Myth". In: The Journal of American Folklore 70, no. 278 (1957): 362–363. Accessed August 20, 2021.
Köngäs, Elli Kaija. "The Earth-Diver (Th. A 812)". In: Ethnohistory 7, no. 2 (1960): 151–180. Accessed August 21, 2021.
Lianshan, Chen. "Gun and Yu: Revisiting the Chinese "Earth-Diver" Hypothesis". In: China's Creation and Origin Myths. Leiden, The Netherlands: Brill, 2011. pp. 153–162.
Mátéffy, Attila (2014). "The Earth-Diver: Hungarian Variants of the Myth of the Dualistic Creation of the World – Pearls in the Primeval Sea of World Creation". In: Sociology Study 4. pp. 423–437.
Nagy, Ilona. "The Earth-Diver Myth (Mot. 812) and the Apocryphal Legend of the Tiberian Sea". In: Acta Ethnographica Hungarica 51, 3–4 (2006): 281–326. Accessed August 20, 2021.
Napolskikh, Vladimir. "The Earth-Diver Myth (А812) in Northern Eurasia and North America: Twenty Years Later". In: Frog; Siikala, Anna-Leena; Stepanova, Eila (eds.) (2012). Mythic Discourses: Studies in Uralic Traditions. Finnish Literature Society. pp. 120–140.

External links

Creation myth – Encyclopædia Britannica
Japanese Creation Myth
Mayan Creation Myth
Egyptian Creation Myth
Norse Creation Myth
Indo-European Creation Myth